@@neintonine This could be done fairly easily by rendering to a texture where a min/max of the depth buffer from a second camera is applied each frame. The second camera would be placed under the world pointing upwards, rendering only the collidable objects and the snow layer. A neat little trick for you :)
@@berghwilliam Oh... that is actually very smart. Though I kinda dislike having to render multiple cameras, for performance reasons. You can probably claw back some frames by decreasing the resolution of the snow buffer's render texture. But great idea. Edit: Just had an idea: how about instead of using a different camera, you first render everything except the snow (with your normal camera) and then the snow in a second pass? Then you can use the two depth textures for the min/max.
I'm a big fan of cubemaps, which are able to fake entire rooms/environments. They're most commonly used in reflections, but we also see them used in games like Spider-Man, where you can peer into rooms in buildings.
I think those are using parallax mapping. With parallax mapping techniques you can essentially create the look of actual geometry without there being any geometry there.
From my understanding, parallax mapping is just a heightmap, while cubemap interiors are a shader that renders the inside of a box, based on perspective, from a texture for each wall, without needing an actual cube mesh.
@@quicktechtips42069 It's a different process. Cubemaps are like a skybox, but you can use them at a different scale, even though the depth may not be completely accurate. Parallax textures effectively squash and stretch specific parts of a texture to simulate depth.
Parallax mapping doesn't necessarily need a height map. Things like bump maps for bullet holes can be parallaxed to make them appear to have depth, but you can also use parallax for things like red dot sights, making the dot appear far in front of the gun like a real red dot would, without heightmaps (though with some distortion math). Likewise, I think you could do the same to turn a room pseudo-3D like in Spider-Man. They're all different techniques/math, but it's all parallax mapping.
I've been working with these texture maps since learning Blender, and this is the first video that actually explains wtf they are beyond a vague understanding. Thanks!
I like how the most rewatched part of the video is at 2:50, where it zooms onto Yelan's "leather jacket". I think part of it is the obvious reason why, but I also think the other part is people trying to find this "detailed stitching on a character's leather jacket", because uh, that's not where Yelan's jacket is.
I remember reading in a magazine from forever ago that Doom 3 used bump maps, because at the time, normal maps hadn't been invented yet. The author of the gaming magazine went into detail explaining what a bump map was, too. He could have been wrong, but the idea of using bump maps in a production game instead of normal maps felt wild to me in college.
@@rustyclark2356 I'm pretty confident it was, as the article described the technology, a greyscale bump map. When I got to college later and asked "why are we using these purple bump maps?", when I expected greyscale, it turned a lot of heads.
Doom 3 had full normal maps. Bump mapping had been around so long it'd been hacked into Quake and Half-Life, along with detail textures. That's also why Doom 3 and Quake 4 were so easily modified to support parallax occlusion mapping.
Man, I found your channel recently and immediately became a fan of your work! I was crafting characters, but after watching your content and learning some stuff while I was studying, I've decided to explore environment art, and I'm fascinated. Anyway, thanks bro! Amazing work
There is also parallax occlusion mapping, which you didn't mention at all. Combined with normal maps it creates 3D-like effects from all angles while remaining just a texture. In fact, I was hoping you would cover it in this video, since there aren't enough people giving an intuitive explanation of it.
THANK YOU!! I couldn't remember what it was called, but I remembered there being another type of mapping which does similar displacement calculations, but restricts them to the bounds of the existing polygons. IMHO, it's a crime not to at least mention parallax occlusion mapping, even if it's just to say "check out my other video".
Doom 3 wasn't the first game to use normal maps, bump maps and even tessellation: all of those were used in the first Outcast back at the end of the nineties, I think. Only at that time it was rendered in a voxel engine, because there was no other way to produce those kinds of graphics. The result is very "low res" voxels, but it looked great!
Outcast doesn't use bump maps or specular maps (which somehow everyone fails to mention, and which are half the reason Doom 3 looks the way it does). Doom 3 wasn't the first to use them, but it was the first to use them EVERYWHERE. Stencil shadows were another of its main party tricks. All those ground-breaking techniques being used in real time in a game came with a price though: Doom 3 has a lower raw poly count per scene than Quake 3, even though it came out 5 years later.
So glad you found your audience! Happy to see you finally doing consistent numbers. The sad truth is, super niche technical tutorials don't do the best on YouTube. This is a far better format.
My random game art question is: there are lots of videos talking about the best looking wild grass in video games, but has anyone achieved the perfect looking lawn in a video game?
A word about bump maps and normal maps. A bump map doesn't necessarily result in worse-looking lighting than a normal map. The thing about a bump map (sometimes called a heightmap) is that it stores surface height information, but to calculate lighting we need information about surface direction (which is what is stored in normal maps). If we use a bump map, the direction information can be calculated from it at runtime (which is expensive), or it can be precalculated into a normal map automatically. Usually it's more common to just use an already prepared normal map for lighting, since all you need in that case is surface direction. But it is worth mentioning that, while a normal map is simpler and sometimes faster, the information about surface height is lost. It is relatively easy to calculate surface direction from a heightmap, but there is no easy way to revert this process.
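That heightmap-to-normal conversion can be sketched with simple central differences. This is illustrative Python, not any engine's actual code; the function name and the `strength` parameter are made up for the example:

```python
import math

def height_to_normal(height, x, y, strength=1.0):
    """height: 2D list of floats in [0, 1]; returns a unit normal (nx, ny, nz)."""
    h, w = len(height), len(height[0])
    # Central differences, clamped at the borders.
    left  = height[y][max(x - 1, 0)]
    right = height[y][min(x + 1, w - 1)]
    up    = height[max(y - 1, 0)][x]
    down  = height[min(y + 1, h - 1)][x]
    dx = (left - right) * strength   # slope along x
    dy = (up - down) * strength      # slope along y
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (dx / length, dy / length, 1.0 / length)

# A flat heightmap yields the straight-up normal:
flat = [[0.5] * 3 for _ in range(3)]
print(height_to_normal(flat, 1, 1))  # (0.0, 0.0, 1.0)
```

This is exactly the kind of precalculation a bump-to-normal baking tool performs once, instead of repeating it per pixel at runtime.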
Indeed, I did this with DTED terrain files, which contain nothing but elevation data; essentially they are just height maps. It was fairly easy to compute the surface normals from the elevation data alone, and I then used this information and a programmable light source vector to generate a 3D-looking texture of the terrain. It's very effective, but probably not something you would want to do in real time.
First time here and I already love you, man. Funny as hell when it needs to be. Thank you; you covered this topic more efficiently than my animation school :D
"Ahahah, I would never leave you on a normal cliffhanger like the others that make you wait for the next video... I'll just put it behind a paywall" bruh lol
The best thing is that you can generate those normal maps, among other maps, from a single diffuse texture and make your work look much more professional.
Technical correction at 0:56: while your words describe bump maps very well here, what's actually shown on screen is the effect of a displacement map. These are also "just textures" that create depth, but they do shift where the polygons are drawn and create "real" detail. You can tell by the edges of the surface not being flat anymore, but showing the bumps of the rocks. A normal or bump map would show some of the detail on the surface of the plane, but would still come to a flat edge on the sides. Source: I work in the field. Sorry for being pedantic, I like the video!
Secret Service: Security Breach was technically the first title that used normal maps. Also the first for stencil shadows, specular lighting, and per-pixel shading.
Another great episode! For storing normal information in textures, I've seen a neat trick: when assuming normalized directions (a vector with a length of 1), you only need to store 2 of the direction's coordinate values and can reconstruct the last one from the information you have, in real time on the GPU. It's a slight performance tradeoff though. For example, reconstructing the z portion if x and y are given, with the requirement that x² + y² + z² = vectorLength² (the Pythagorean theorem in 3D), can be reformulated to z = sqrt(vectorLength² - x² - y²), which for unit length simplifies to z = sqrt(1 - x² - y²).
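A minimal sketch of that reconstruction (illustrative Python; the function name is made up, and it assumes the stored normals point outward, i.e. z ≥ 0, which holds for tangent-space normal maps):

```python
import math

def reconstruct_z(x, y):
    """Recover the z component of a unit normal stored as (x, y) only.
    max() guards against tiny negative values from rounding."""
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# Round trip: the known unit normal (0.6, 0.0, 0.8) stored as just (0.6, 0.0)
print(reconstruct_z(0.6, 0.0))  # ~0.8
```

This is why two-channel normal compression formats (storing only x and y) work: the third channel is redundant for unit vectors with a known sign.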
Before I watch the video: is it that thing where there's an image or texture, and the game somehow makes that texture look 3D and stick out? But when you actually take a close look at it, it's just a flat image.
I remember the graphics and simulation module I took for my major in computer science. We were to code our own animation from scratch using OpenGL in C++. But because I had no idea how to create models and didn’t want the jank of amateurish model movements, I calculated everything implicitly. Meaning the only "model" I had was a flat plane going from the top left to the bottom right of the screen, which I used as a canvas to draw my implicitly calculated environment on. And, what can I say, the details were perfect no matter how far you zoomed in. The fragment shader was a pain to debug though.
Very informative, but I still feel like I have to add some information. Textures aren't magic; they don't allow you to add "limitless detail", simply because they're bound by the exact same limitation as 3D models: resolution. Your GPU is going to sample these textures for every pixel on the screen, even when a pixel falls in between the pixels of the actual texture; that's called interpolation. There are smart ways to interpolate, but at the end of the day you'll still run into issues if your textures are too small, or if you're looking at the object from too close.

That means that if you're texturing a building, for example, although you won't have to overload your model with millions and millions of polygons, you WILL need big textures to accurately cover the entire surface. In fact, these textures are often generated from higher-resolution versions of the same mesh, through a process called texture baking.

All of this to say: you're trading one issue for another. Yes, bump/normal maps reduce computation time, but they take disk space and, perhaps more importantly, VRAM when the game is running. So if your GPU doesn't have a lot of memory, or if it is too slow, for example due to a small communication bus, you will still experience performance issues.
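The interpolation mentioned above is usually bilinear filtering: blending the four texels surrounding the sample point. A rough Python sketch (the names and layout are illustrative, not any particular graphics API):

```python
def bilinear_sample(texture, u, v):
    """texture: 2D list of floats; (u, v) in [0, 1] texture coordinates."""
    h, w = len(texture), len(texture[0])
    # Map UV to continuous texel space.
    fx = u * (w - 1)
    fy = v * (h - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    tx, ty = fx - x0, fy - y0
    # Blend the four surrounding texels by distance.
    top    = texture[y0][x0] * (1 - tx) + texture[y0][x1] * tx
    bottom = texture[y1][x0] * (1 - tx) + texture[y1][x1] * tx
    return top * (1 - ty) + bottom * ty

tex = [[0.0, 1.0],
       [0.0, 1.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # 0.5, halfway between the texels
```

This smoothing is also why an undersized texture looks blurry up close: between texels the GPU can only invent a gradient, not real detail.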
There's also a thing called "parallax mapping", which is like a fusion of normal maps and displacement maps. It fakes displacement by shifting the pixels instead of the geometry: all the benefits of displacement maps with the processing requirements of normal maps. Pretty wild stuff.
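The pixel-shifting part boils down to offsetting the texture coordinate along the view direction by an amount proportional to the sampled height. A minimal sketch of the cheap single-step version (parallax occlusion mapping ray-marches instead; names and the `scale` value here are made up):

```python
def parallax_uv(u, v, height, view_dir, scale=0.05):
    """view_dir: (x, y, z) tangent-space view vector, z pointing at the viewer."""
    vx, vy, vz = view_dir
    # Offset grows with height and with how grazing the view angle is.
    return (u + vx / vz * height * scale,
            v + vy / vz * height * scale)

# A slightly tilted view of a raised texel shifts the UV a little along x:
print(parallax_uv(0.5, 0.5, height=1.0, view_dir=(0.2, 0.0, 1.0)))
```

Dividing by `vz` is what makes the effect stronger at grazing angles, which is exactly how real depth behaves.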
Yes, which is why whenever I look at a highly detailed model with friends (especially models made for VR) I always exclaim "Wow, look at those normal maps!" to them.
As an artist I really want to give my art life by turning it into a game, but because of my potato laptop I can't do much, so I watch videos like these to hopefully learn a lot. When I can finally afford a PC, I'll be able to make my own game. This video helps a lot, and I'm subscribing.
The difference between bump maps and normal maps is really interesting. Bump maps are more human-readable, allow you to compare the heights of any two points across the map (useful for casting shadows), can easily be reused as a displacement map, and use a third of the data compared to a normal map. Meanwhile, normal maps are quicker for a computer to interpret, as they store the raw data of "which direction is this surface pointing" for diffuse/reflection calculations. They also don't have to make sense: you can use a normal map to describe impossible geometry that can't be described with a bump map (e.g. a ramp that slopes downwards in a circle indefinitely).
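As a small illustration of how that "which direction is this surface pointing" data is stored: each unit-normal component in [-1, 1] is typically packed into a byte in [0, 255], which is why flat areas of a normal map are that familiar lavender color. A sketch (the helper name is made up):

```python
def encode_normal(nx, ny, nz):
    """Pack a unit normal into normal-map RGB bytes."""
    to_byte = lambda c: round((c * 0.5 + 0.5) * 255)  # [-1, 1] -> [0, 255]
    return (to_byte(nx), to_byte(ny), to_byte(nz))

# The flat "straight up" normal becomes the classic lavender:
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```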
Honestly I'm more impressed by the devs who figured this stuff out and then implemented it. This stuff is sooo old by now, but we still use it so much. Imagine no normal maps... holy shit
0:07 Crazy? I was crazy once. They locked me in a room. A RUBBER room. A rubber room with rats. And rats make me crazy.
Woah! That was so great. I'll take that as a note for my future indie game development. Anyway, is Unreal Engine 5's Nanite already specialized in displacement mapping or not? Just curious due to Nanite's potential.
I wonder if it's possible to change between maps as the object gets closer to the screen? E.g. a normal map for far away objects, bump for objects at a decent distance, and displacement maps for close-up objects.
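That idea is basically a level-of-detail switch for shading technique. A toy sketch of the selection logic (the thresholds are made-up numbers, not from any engine):

```python
def pick_detail_technique(distance):
    """Choose the cheapest technique that still looks right at this distance."""
    if distance < 5.0:      # close: real geometry shifts pay off
        return "displacement"
    elif distance < 30.0:   # mid-range: fake the lighting only
        return "normal map"
    else:                   # far: flat shading is indistinguishable
        return "flat"

print(pick_detail_technique(2.0))   # displacement
print(pick_detail_technique(50.0))  # flat
```

Engines do something in this spirit with material LODs and tessellation distance falloff, though usually with smooth blending rather than hard cutoffs to avoid visible popping.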
Your courses look really good. I just wish you had some where you only need Blender and/or other free programs. If you had something like that I would buy it, because I love your style.
So that's why roughness and normal maps are used: displacement has to have enough polygons there to begin with for it to work, thus requiring more polygons.
I don't know much about displacement maps in 3D, but aren't they super efficient in 2D games? I mean, they were used on retro consoles for really advanced effects without costing much performance.
So you never really mentioned a way to work on a course, but I've been looking for something like it. I 3D model about 2 hours a day most days in Blender, I understand Lua in depth, and I'm currently learning C#. If you have a way to sign up as a potential creator, I'd be interested. Edit: I personally use Unity.
Doom 3 wasn't the first game to use normal maps. Some of the first games were Evolva and Virtua Fighter 4. The sixth console generation began to use normal maps; even the flopped Sega Dreamcast supported them. Only the PlayStation 2 couldn't do it (there's a workaround for it, though). GameCube and Xbox also used normal maps in their games, where they were more widespread compared to Dreamcast and PlayStation 2.
I get it, but that's maybe because I've spent so much time in 3D software like Blender. Normal maps can do something similar, but you'd normally have to bake them from a high-poly sculpt. You can do it freehand, but it's harder. It's good for making a model look like it's made out of clay, or for adding scratches or cracks. Edit: aaand they are in the video, noice