I know right? This kind of enthusiasm for journalism is sorely missed in the gaming community. And the fact that he took time out of his own free time just to travel to Italy and Morocco for a work trip instead of going on a holiday? Truly a soon-to-be Pulitzer-award-recipient.
"For the most part, the shadows actually looked better in real life than in Source 2, though I couldn't help but notice this particular shadow was still somewhat blocky around the edges. God pls fix."
You kinda feel sorry for the texture artists tbh, when the lit but untextured map looks so beautiful. I can see why SUPERHOT went down this route of graphics...
It's so refreshing to see a youtuber doing a deep analysis of what source 2 brings to the table. All the other youtubers are like, ooh it's shiny, ooh the movement feels weird, ...
7:00 Albedo is one of the 3 base textures used in a PBR workflow. It is basically the unlit solid color of the texture, and along with roughness and metalness, it can create a physically accurate material.
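To make that concrete, here's a toy sketch of how an albedo feeds into simple diffuse shading (illustrative names only, not Source 2 shader code): the albedo is just the per-channel reflectance that gets scaled by incoming light.

```python
def shade_lambert(albedo, normal, light_dir, light_color=(1.0, 1.0, 1.0)):
    """Diffuse-only shading: the albedo scales how much incoming light bounces back."""
    # Lambert's cosine law: surfaces facing the light receive more of it
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# A red surface lit head-on reflects its full albedo...
print(shade_lambert((0.8, 0.1, 0.1), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
# ...and a surface facing away from the light reflects nothing.
print(shade_lambert((0.8, 0.1, 0.1), (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)))
```

Roughness and metalness then modify how that reflected light is distributed, which is what makes the material "physically based" rather than just a colored surface.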
In physics, albedo is the fraction of incident light a surface reflects. So you can think of it as the base color: "this is what the material will reflect if hit with white light."
@@quintonlee4107 Not really - think of a diffuse map (in the Source Engine sense) as an albedo with more baked-in information, such as shading and whatnot. It worked for years, but the drive for realism has left these mostly outdated, as you can only really add on normal maps for faux depth, and specularity to make it more/less shiny. The goal of albedo maps is to be as flat as possible; essentially they're just the base colors that you'd see if a white light was shined at the object. You then add on extra information such as roughness, metalness, and normal maps, which light will interact with to create a physically based material that looks far more accurate. These materials can adapt entirely to different environments and lighting conditions, while diffuse maps are much more static.
I used to think making games couldn't be very hard. Not that I had ever made a game, but in my mind it was "how hard can it be, if there are so many people doing it?" So I decided to learn programming and make a game of my own, and... I quickly found out that making games, especially alone, is very hard.
Hey Phillip! The shading you see in halfbright is called Matcap or Studio lighting. Studio lighting is where a file containing light info is applied to the faces in the scene, resulting in the shaded effect you see. Matcap is based on a 1:1 (square) high-resolution image containing only a perfect sphere with the desired face shading applied; that image is then used to shade the scene. Clever people can use a fixed camera angle and plot materials onto a Matcap, resulting in a very fast render at the cost of production time, so it's useful for re-renders. If you found this interesting, please look further into these quick methods of approximate lighting.
Why wouldn't they just perform a per-vertex lighting calculation using the dot product of the surface normal and the camera direction? It looks to me like the classic fake lighting used in early 3D games like Star Fox. Is there something I am missing? Edit: OP literally just made this up. Blender has viewport shading options called "Studio Lighting" and "MatCap". The author of this comment seems to have extrapolated that the similar-looking lighting in Blender means these Source 2 options use those techniques. The use of matcaps is largely limited to sculpting software, and Studio Lighting is Blender-specific, so it doesn't make any sense to say that Source 2 uses Studio Lighting for its "halfbright".
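For what it's worth, the per-vertex trick described above is easy to sketch (a toy illustration of the technique, not a claim about what Source 2 actually does):

```python
import math

def halfbright_shade(normal, view_dir):
    """Fake shading: brightness from how directly a face points at the camera.
    No light sources involved - the cheap trick used in early 3D games."""
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    # Normalize both vectors so the dot product is a pure cosine
    nlen = math.sqrt(nx*nx + ny*ny + nz*nz)
    vlen = math.sqrt(vx*vx + vy*vy + vz*vz)
    cos_angle = (nx*vx + ny*vy + nz*vz) / (nlen * vlen)
    # Clamp: faces pointing away from the camera get no brightness
    return max(0.0, cos_angle)

print(halfbright_shade((0, 0, 1), (0, 0, 1)))   # face-on: full brightness (1.0)
print(halfbright_shade((1, 0, 0), (0, 0, 1)))   # edge-on: dark (0.0)
```

Because the "light" follows the camera, geometry always reads clearly from any angle, which is exactly why sculpting and editor viewports favor it.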
6:55 a really cool trick that Valve could do is to have an algorithm that populates the world with light probes based on where the player is likely to be: more probes closer to the ground, fewer in areas you can't reach or in the sky. This could improve lighting detail a lot where it matters while cutting down the cost of computing and storing the probes.
You can probably interpolate between probes, so I think implementing your idea would be worse (same result in dense regions, worse results elsewhere). Plus, the fact that there are so many probes everywhere suggests they don't come with much of a performance / map size cost. The only question left is compile time - and if you're willing to increase the density locally, you're going to end up with at least the same overall complexity, I'm guessing.
Using non-grid-based probes requires a separate table of probe positions, used to find the closest probe to each dynamic object's position, instead of just rounding the position to a grid and indexing. This introduces huge performance costs, because searching through a large, irregular table takes much, much longer than indexing.

To avoid this (while staying grid-based), more likely locations for dynamic objects could get grids at increased fidelity. This is a decent solution for performance because it avoids long search times, but it comes at the cost of more file size: a separate table is required to denote where these increased-fidelity grids are, and finding light data requires a few extra steps at runtime, especially in areas players commonly traverse - which introduces extra lag in possibly the worst place in the game world, where players spend most of their time. The data structure also naturally contains some redundant data where grids overlap, especially in high-density areas (the most important ones). To avoid wasting that disk space, either the data will be irregularly spaced (introducing runtime lag again), or the data structure will be indexed in a way that avoids duplicates by writing over unnecessary bits, which is complex from a memory-management perspective as well as a customizability perspective.

Similar algorithms (that actually prevent this redundancy problem) do exist, but optimizing one for CS2's purposes may be difficult (it's hard to say, really), and creating custom maps or adding objects to the scene forces lighting recalculations. So it's technically an option, but it makes custom projects in Source 2 much more complicated than using a regularly spaced grid to determine probe locations. Unless someone is using a powerful computer with lots of disk space, this is impractical.
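To make the indexing-vs-searching contrast concrete, here's a toy sketch (hypothetical helper names, nothing from Source 2): the grid lookup is constant-time arithmetic, while the irregular lookup has to scan every probe.

```python
def probe_index_grid(pos, origin, spacing):
    """Grid lookup: O(1) - round the position to the nearest grid cell."""
    return tuple(round((p - o) / spacing) for p, o in zip(pos, origin))

def probe_index_search(pos, probe_positions):
    """Irregular lookup: O(n) - scan every probe for the closest one."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(probe_positions)),
               key=lambda i: dist_sq(pos, probe_positions[i]))

# Grid: a player at (5.2, 1.9, -3.1) with 2-unit spacing maps straight to a cell
print(probe_index_grid((5.2, 1.9, -3.1), (0.0, 0.0, 0.0), 2.0))
```

Real engines would use a spatial acceleration structure rather than a linear scan, but that only softens the cost; it doesn't make irregular lookup as cheap as plain grid arithmetic.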
Also, somebody would actually have to create the Source 2 mod and a 3D heatmap editor for this to be implemented, because it certainly isn't a viable option for the masses. I do appreciate the good thoughts though! As someone who likes to create games in my spare time, it's always nice to see other people trying to solve these kinds of problems too, because eventually the best solutions will become commonplace standards among graphics software which nobody even really thinks twice about. If you also enjoy these (rather in-depth) software problem-solving situations, I recommend Nemean's video on the Fast Inverse Square Root as popularized by Quake. TL;DR I don't think this would work at all practically, especially not for the small improvements that this could make, but I enjoy the discussion nonetheless.
Low grav or noclipping players still need proper lighting applied to them. Weapons and grenades can also get launched into the sky and need to keep looking good.
there's something mesmerizing about going around looking at the cubemaps in these maps. i'm really not sure why, i guess it's just really satisfying seeing them line up so nicely
Albedo from the official ValveSoftware page: "An albedo, often interchangeably referred to as diffuse map, is a term used to describe reflection coefficient of a material surface containing view-independent Base Color information without any additional lighting or shadow information. Often stored into a Image Texture. It's used to tell which color tint and initially add basic details for a Diffuse surface. And defines what we normally think of a texture as being before enhanced by other "treatment" textures like bump maps or specular masks." It is a diffuse map. Unity also uses the term albedo for the same.
I feel as though textureless visuals look timeless, or always beautiful, especially combined with lighting. You can see the resolution of a texture, even subconsciously, but if everything is white, the resolution is technically infinite, and the better lighting gets, the harder that effect sells.
Albedo means color; it's the color/texture and nothing else. As for the shadows, GTA V has a feature where faraway objects cast blurry shadows, thanks to Nvidia's PCSS. Easiest to see on a lamp pole.
For anyone that still wants to do this for themselves and is getting some kind of error while launching the game: launch CS2 normally, then go to Settings, then Game, and select 'Yes' on Install Counter-Strike Workshop Tools. After doing this, exit the game and wait for an update to install, put -tools in the launch options, and voilà, the game should now launch with an additional window, the Workshop Tools. Ignore these and press the developer console button (usually ~, the tilde key, under Esc) and you've got your new Source 2 VConsole2 developer console, which should be its own window. - Also note you won't be allowed to play multiplayer with this on; just remove -tools from the launch options.
Just a guess on the balls: perhaps all these balls outside of the playable area are needed for the method used to calculate the lighting value of each ball. It could be that each ball relies on the balls next to it, and a couple of iterative calculations are used, so the balls outside the playable domain could act like undisturbed boundary values. Since the balls are precalculated, it wouldn't really hurt performance.
Probably more that it's easier to just tell the mapping tool to calculate everywhere than it is to specify exactly where is playable and where isn't. I'd imagine if they have so many of them, they shouldn't take up that much space in the map file.
Or maybe they use fewer probes normally, but when you enable this tool, it generates these balls at an unreasonably dense spacing even far away, and their only purpose is to show you what objects would look like underneath them. But yeah, it could also just be auto-generated. Storage and VRAM are so inexpensive nowadays.
The real answer is that the mappers do specify exactly what areas to generate these probes for, but lighting will be BADLY broken for any dynamically lit object that exists in an area not covered by them. This means that if you can potentially throw a grenade somewhere (even outside the map), it needs to be covered. Some other objects, like antennas and satellite dishes in the skybox, also rely on this non-baked lighting to be lit properly. There are settings on the volumes that generate the probes, however, that let you generate them less densely, which appears to have been done here: more probes densely packed into actual playable space vs outside the level.
Being able to see all these cubemaps stacked in place right next to each other is just so trippy... I'll never get over how easily we can fool our brains with things that look approximately right.
Philip digging into every possible feature of Source 2 mapping is always admirable to see, and not just that, but also taking trips to compare maps to their real-life locations.
The various PBR properties like albedo and AO etc are explained quite accessibly in the article "Real Shading in Unreal Engine 4 by Brian Karis, Epic Games", at the "Material Model" section on page 8.
The reason for the large number of wasted "balls" is probably to make lookup faster. The amount of data stored for each probe is very small, so they can be packed into a large buffer on the GPU that you can look up into quickly, by simply calculating the index from the current vertex/fragment's world-space position. However, because calculating the index like this is only really possible if your probe volume is a cuboid, you have to stretch the volume to cover all parts of the level that need GI. You could add multiple probe volumes with different sizes to save some VRAM (this is actually very common in other engines), but it would come at the expense of higher memory bandwidth and computation: either you use multiple render passes, which is bad for achieving good utilisation on modern cards, or you increase execution time, memory bandwidth, and register usage by dynamically checking which volume to sample in the shader. Generally, if we have enough VRAM to just fit everything (RAM scales, clock speed does not), it makes sense to pack the whole thing in one buffer. That being said, if you wanted to dramatically increase the probe density to get more accurate lighting, you might run out of memory and have to resort to another method.
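A rough illustration of why a cuboid volume makes the lookup cheap: the flat buffer index falls straight out of the world position with a few multiplies (a toy sketch, not Source 2's actual data layout).

```python
def probe_buffer_index(pos, volume_min, spacing, dims):
    """Map a world-space position to a flat index in a packed probe buffer.
    Only works because the volume is a regular cuboid grid."""
    w, h, _ = dims
    # Convert position to cell coordinates, clamped to the volume bounds
    x, y, z = (min(d - 1, max(0, int((p - m) / spacing)))
               for p, m, d in zip(pos, volume_min, dims))
    # Flatten 3D cell coordinates into one buffer offset
    return x + y * w + z * w * h

# 4x4x4 probe volume starting at the origin, probes every 2 units
print(probe_buffer_index((5.0, 1.0, 7.9), (0.0, 0.0, 0.0), 2.0, (4, 4, 4)))  # 50
```

The shader-side version is the same arithmetic on the GPU; the point is that no search or table lookup is needed, just a handful of instructions per fragment.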
Basic breakdown of PBR materials:
- Albedo: the unlit colour of a material.
- Normal: the direction the light should bounce.
- Specular: the colour the material reflects. This also works in black and white, so generally the lighter the specular, the brighter the reflection.
- Metallic: how metallic an object is. 0 is not metal, 1 is metal.
- Roughness: how rough or glossy a surface is. 0 roughness would be a mirror, 1 roughness would be no reflection at all.
- AO: contact shadows, and generally simulating objects being close together with a texture, which is much cheaper than doing it with real-time lighting.
- Emissive: gives a texture the appearance of emitting light at the texture level. These usually don't emit any actual light, but will show up in reflections.
Those are the main ones used in basically every PBR shader. There are also some other maps, like thickness, which determines how thick an object is for things like refraction and subsurface scattering.
Variable shadow penumbras can be used. I believe Crysis 2 used them quite a bit, which is notable considering it was a mainstream game. PC and DX11 only, though, but still quite nice to see.
Portal-based culling occurs on the CPU, whereas CS2 seems to have GPU-based culling (based on what you're showing here; I don't have access to CS2). You still can't just strap a portal to your face, though. E.g., in a room, if the camera isn't looking at any portal, you can consider everything outside the room occluded.
A subject that won't be covered anywhere else, even though it has a huge impact on in-game content makers, and hence the game itself. Happy that your channel exists, although like many people here I haven't played CS for years.
Albedo map is the colors of the textures without any effect such as opacity, roughness, or emissives. All that combined creates the material itself, like the green leather covering B site's roof.
I have noticed these half shadows when comparing shadows in games to real shadows, it's nice to know it's not just me considering these things when out and about
Coming from someone who used to both watch and edit CoD4, these tools open up so many possibilities for editors in the CS scene. Can’t wait to see what people come up with!
Albedo is the base color input of a texture; it usually doesn't have any lighting baked into it. If you bake some diffuse lighting in with the albedo, it becomes a diffuse texture. Hence when you were in the Albedo mode, it appeared a lot like the fullbright option, with everything being flat. Also, the roughness mode deals with how coarse or shiny any surface of the map will be; for example, metallic objects or the Palace area of Mirage will be covered in black in the roughness mode. Hope this helps!😊
6:49 Light probes exist in Unity too, and you don't need that many BALLS because they eat up memory. There are external tools in Unity that place the BALLS only in necessary places.
Note that all the probes here are baked/precalculated, so they're actually quite cheap (which isn't the case in all game engines, where probes can be used for real-time global illumination/reflections). There are a lot because they affect the lighting of any dynamic object moving around, but they're also used to light the particles of smoke grenades, so they help quite a lot there.
Funnily enough, unity has recently implemented exactly the kinds of BALLS seen in CS2, look up adaptive probe volumes, it's pretty cool. (Adaptive probe volumes don't eat memory like old light probes, so it's fine to have tons of them)
@@leplubodeslapin They aren't that cheap, that's still a lot of VRAM and disk space. Still, in the air they do nothing but waste space, and rebaking them each time you update the map must be a nightmare.
@@googleslocik i've been using the CS2 tools for some weeks now and i can assure you it's fine :) maps are now compiled with the GPU, using its ray tracing abilities, which makes it quite fast. A map i'm working on that used to take 2 and a half hours to compile in Half-Life: Alyx (which only used the CPU) now takes 20 min with similar settings. And of those 20 mins, it's roughly 25% visibility/occlusion calculations, 60% baked lightmaps, and the probes are in what remains (along with navmesh calculation). When i said they're quite cheap i meant in comparison to the idea that they would be real-time; yes, they're not free either ^^ It's not entirely useless to have probes in the air: the volume they're in is also the volume where cubemaps can be effective. Anything outside won't have reflections, which turns metallic stuff black. Maybe that's an issue for visibility of grenades in the air? (Speculating.) In any case, there is a parameter to adjust the probe density in those volumes, and each map creator can tweak it to help optimization. And grenades now go through map boundaries (skybox walls/ceilings), so maybe for future CS2 maps there won't be that much playable space above our heads (speculating again ^^)
8:20 oh shit, I'm used to AO being locked into camera's screen space perspective, great to see that people somehow figured out how to implement it differently
albedo is one of the many texture maps that goes into making a pbr texture, it's pretty much just the solid colour information/image that is accompanied by the other textures such as roughness and AO.
Source 1 actually has $albedo as an alternative to $basetexture in most engine branches. It's broken for most combinations but can sometimes work as a basetexture that doesn't self-shadow with normal maps.
man that drawgray looks fucking nice, wish valve would add that and visible-only player chams with blue and red colors for extra clarity, and then my competitive itch is scratched
actually blew my mind when you said you didn't know what albedo or roughness maps were, I really expected all source map makers had to learn 3d texturing at one point
Probably because Source 1 called albedo the "base texture" and didn't really have a 1:1 equivalent to a roughness map in the modern sense. Instead you just directly controlled the per-pixel intensity and sharpness of phong reflections separately, and then there was a mask for cubemap reflections on top of that if you wanted them.
Albedo is just another word for a regular color texture: a texture that decides what colors are on a surface when no lighting is applied. When you enable that option, it seems to just show the albedo texture on each object exactly as it would be displayed if you looked at it in the files.
Just a small correction, AFAIK, bump and normal maps are still used with PBRs, since bump and normals give the texture a lot of definition basically for free
So, on the topic of "strap an area portal to their face": area portals themselves are part of the compiled visibility tree structure and cannot be moved. Most games, including Source, do a thing called frustum culling, where stuff out of the camera's view is skipped. So any benefit of strapping an area portal to your face is already covered.
It's a bit more complicated than that. Frustum culling only culls items that fall outside the visible area, not elements hidden by other elements in front of them. The GPU culling is more similar to Z-buffering, where all the items are placed onto a blank canvas according to the camera view, from closest element to furthest, and if they are completely hidden, they do not render at all, skipping most of the rendering steps and improving performance.
@@ma1pa637 Yeah, GPU culling is more advanced. I was comparing frustum culling to area portals which only check if something is visible through the portal bounds.
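For anyone curious, the frustum check being discussed can be sketched in a few lines (a toy plane representation, not any engine's actual API): a bounding sphere is culled only if it lies entirely behind one of the frustum's planes.

```python
def sphere_in_frustum(center, radius, planes):
    """Frustum culling: a sphere is culled if it lies fully behind any plane.
    Each plane is (normal, d) with the normal pointing into the frustum,
    so points inside satisfy dot(normal, p) + d >= 0."""
    for (nx, ny, nz), d in planes:
        cx, cy, cz = center
        signed_dist = nx * cx + ny * cy + nz * cz + d
        if signed_dist < -radius:  # fully behind this plane: not visible
            return False
    return True  # potentially visible (occlusion is a separate, harder test)

# One "near plane" facing +z at z = 0: everything with z < -radius is culled
near_plane = ((0.0, 0.0, 1.0), 0.0)
print(sphere_in_frustum((0.0, 0.0, 5.0), 1.0, [near_plane]))   # True
print(sphere_in_frustum((0.0, 0.0, -5.0), 1.0, [near_plane]))  # False
```

Note the final `True` only means "not trivially cullable"; as the replies point out, objects hidden behind other geometry need a depth-based occlusion test on top of this.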
0:35, you are basically selecting the shading mode of Maya, then flat textures of the base/albedo color. This is simply what any game developer constantly uses and checks in game engines. Since you make map-making videos, I would recommend dabbling in the Metal/Rough and Spec/Gloss PBR texturing workflows. You will understand what albedo, normal, metallic, specular, and height maps are, since you show them throughout the video. When you show PBR, you show a grayscale scale from 0.1 to 1; the black-and-white grayscale map you talk about before is basically that.
Now this is EXACTLY the type of video I like to see from you Phil
About the balls... they are used for indirect lighting of moving objects, so all that nice reflected light that's also shining on the buildings can interact with the player. It's known as realtime global illumination, or realtime GI for short. If there were no realtime GI, the character would look out of place wherever you go; with realtime GI, it blends in more with the lighting of the area it's currently in. So if you get close to a red wall on the new Inferno, your character will be tinted red by the indirect light bouncing off the wall.
Albedo maps are pretty much what you think they are, its just the raw color data for a given texture. In fact, it's usually the only source of color in a material.
3:32 - Seen from the Earth, the Sun's angular size is around 0.5 degrees, so its rays aren't perfectly parallel: they arrive spread over that half-degree cone. That spread is what makes solar shadows blurrier the further the surface is from the occluder; the penumbra width grows roughly as (occluder-to-surface distance) x tan(0.5 deg), with atmospheric scattering softening the edge a bit further.
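Back-of-the-envelope for that 0.5-degree figure, using standard penumbra geometry (my own numbers, not from the video):

```python
import math

# Penumbra geometry: a point at distance d behind an edge sees the Sun's
# 0.5-degree disc partially covered over a band of width ~ d * tan(0.5 deg).
SUN_ANGULAR_SIZE_DEG = 0.5

def penumbra_width(occluder_distance_m):
    """Approximate soft-shadow band width for a given occluder-to-surface distance."""
    return occluder_distance_m * math.tan(math.radians(SUN_ANGULAR_SIZE_DEG))

# A pole edge 3 m above the ground gives a soft edge of a couple of centimetres...
print(round(penumbra_width(3.0), 3))   # ~0.026 m
# ...while a rooftop edge 30 m up blurs the shadow over a quarter of a metre.
print(round(penumbra_width(30.0), 2))  # ~0.26 m
```

This is why the shadow of a pole is crisp near its base and soft near its tip: the further the surface is from the edge casting the shadow, the wider the band where the Sun's disc is only partially covered.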
as a 3d artist, the albedo texture in 3d software usually represents the base colour of an object, which is what it's doing in the game: showing the exact base colours of every object. fullbright seems to be lit using a lighting method called matcap, for the reasons said in the video. in pbr (physically based rendering) games and workflows (like source 2), there are usually several textures that make up a material, which is why there are albedo, roughness, AO, and normal maps. i'm not sure what the reflectivity represents in a pbr workflow, as it doesn't seem to be a standard pbr texture
3:08 as far as i know, that's because of light interference - light doesn't just travel straight past the edge of that thing under the roof; some of the rays bend around it, deviating slightly, which causes this
Yes, the size of the sun is tiny compared to the vast distance it is away. Because light is a wave and a particle at the same time, you get scattering around edges. Edit: It's probably atmospheric scattering rather than the wave interaction.
7:00 Albedo is the base color of the surface, roughness is the GGX roughness value and reflectivity is probably F0 (or maybe multiplied by some constant)
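If that reflectivity view really is F0, the usual relations look like this (a sketch of the standard Schlick approximation and the dielectric F0 formula; whether Source 2 stores exactly this is the commenter's guess, not confirmed):

```python
def f0_from_ior(n):
    """Base reflectivity (F0) of a dielectric from its index of refraction."""
    return ((n - 1.0) / (n + 1.0)) ** 2

def fresnel_schlick(f0, cos_theta):
    """Schlick's approximation: reflectivity rises toward 1 at grazing angles."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A water-like dielectric (n ~ 1.33) reflects only about 2% head-on...
f0 = f0_from_ior(1.33)
print(round(f0, 3))
# ...but approaches mirror-like reflectivity at a grazing angle.
print(round(fresnel_schlick(f0, 0.05), 2))
```

This is why most non-metal F0 maps look nearly black: almost everything that isn't metal sits in the 2-5% range head-on, with the view-dependent Fresnel term doing the rest at runtime.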
6:58 Albedo is a measurement of how reflective a surface is. Usually, this is used as a key measurement to calculate greenhouse effects on celestial bodies such as planets and moons.