Pretty good visual effects for a seemingly low-budget production! All the CGI was rendered in Unreal Editor??? Edit: www.unrealengine.com/en-US/spotlights/behind-the-lens-of-the-short-film-challenge-australia
@@simongravel7407 If I'm not mistaken, everything was done in-engine. Such an amazing experience to work in. Makes creating art feel real, and your ideas can just come alive without having to go to school for years.
As a hobbyist, I'm raising a glass right now to not fussing around with baking normal maps! It's a dream come true; I'm sure I'm not the only one who found this an annoying technical hurdle when learning the 3D asset pipeline.
It's not tragic once you get good at it, and it can be really satisfying when you get it to look good, but thinking about not having to do it anymore makes me happy.
@@chancemcclendon3906 It's tragic for me and many more, I believe. Every time I retopologized a model or created/baked a lightmap, I felt that I was throwing away precious time of my life. Now we can focus more on telling a story. The only thing that actually matters.
@@mgodoi3891 I feel the same; it is a pain in the brain, and I never manage to do it. Then texturing in Substance or Mixer is impossible, since I have no curvature and no normals. Creating geometry is also difficult, plus redoing the process for LODs and collision. Because of all that, I never managed to create a full asset, despite having spent years learning and practicing. I hope Nanite will help me, but I'm not sure, lol. I'm trying to sculpt an asset, but I can't understand how to use sculpting to make a non-organic asset; there is zero precision in what I'm doing. And what about animating an object with 50 million triangles?
@@mgodoi3891 You could have just used the real-time renderer in UE4. Obviously it wasn't as good as Lumen, but it was still good and allowed real-time workflows without annoying lightmaps or baking times. If you have an RTX card, you could even use RT lighting instead of lightmaps. Unless you are releasing a project that needs to maximize performance (and static lighting quality), I wouldn't waste my time with lightmaps and baking. Learn the real-time lighting workflow. It's far quicker for messing around while learning other aspects and seeing instant results. Obviously it can't do global illumination, but you can somewhat fake that. Lumen solves that problem.
Honestly thought 36:00 was a high-res photo... then the camera started moving later. I am honestly mind-blown by the level of detail achieved. Now all that's left is physics and foliage.
Amazing, amazing. The level of brilliance here, working from first principles to develop a generalized, scalable graphics engine, is just genius. This is a quantum leap for real-time graphics, and likely represents the final prototype for completely photorealistic real-time imagery.
Yeah. You can basically photogrammetrize your neighbourhood -> clean up the models -> place on a level -> welcome to your virtual neighbourhood. No more bothering with LOD models. Just have one version and you're set.
Ray tracing is better than this and will soon become a reality for real-time CG. While this tech is brilliant, IMO it came too late, because it will soon be obsolete compared to RT anyway.
@Ceki Nanite is not related to lighting, but raytracing is. Nanite solves the geometry resolution problem while RT solves both the geometry and the lighting accuracy problem. So RT is just a better solution
Actually, at 55:50 Brian says they considered voxels first for virtualizing geometry, but for the sake of the computer graphics workflow and compatibility with artists' existing assets and tools, they stuck to triangles. That compromise is OK, but it also undermines a true revolution in computer graphics, which voxels pose with their many advantages over polygons. Some of those are deforming, destructing, and bending voxels, and proper physics calculation in general, which you can't do exactly with static meshes in Nanite. UE5 already borrows some advantages of voxels, at least to some extent, by using voxel cone tracing and signed distance field global illumination in Lumen. And Nanite seems to use signed distance fields as well.

It is exciting to see how Unreal Engine and other engines develop further. My guess for the market is that after further graphical improvements toward photorealism, customers are going to demand much more realistic physics. Read the many critical customer comments and even shitstorms about popular sports games and their developers. They seem to be fed up with the nth iteration of graphically slightly improved sports games every year, while the players' movements still look artificial and are not satisfying.
@@aladdin8623 Is there any reason the current implementation of Nanite precludes voxels from being used as well, for example to represent volumetric objects and generate triangle-based, Nanite-enabled meshes where the voxels say a particular material should be?
@@alan83251 In short, I don't know :) I can only guess here. Since Nanite also uses signed distance fields, which are related to voxels, the mix with volumetric objects might be even easier. In general, it should definitely be possible to combine polygons and voxels, as we have already seen in several games and applications in the past. That's good news, and important for a possible transition, or at least coexistence. Among the notable caveats of combining the two techniques, the light-leaking problem should be mentioned: walls made out of polygons have to be thicker than 10 cm to avoid it. This and other drawbacks can be read about in the documentation for Lumen. If my eyes are not mistaken, light leakage becomes visible in the UE5 demo, for example: every time the female character Echo lifts her energy ball behind her back for an energy strike, the emitted light from that ball shouldn't reach her legs, because her body is in the way. But it does. If her body were made of voxels, there might be proper solutions to prevent light leakage; for example, one could try placing light-blocking voxels under her skin. Another promising approach to preventing light leakage is based on the aforementioned SDFs. To learn more about that, look up SDFGI as used in Godot 4, which was awarded a grant by Epic, by the way. Lumen seems to use that partially already, among its other GI solutions. When it comes to SDFs in general and raymarching on them, I highly recommend the works of Inigo Quilez. What he achieves with just a bunch of algorithms in that regard is truly remarkable and amazing. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Jf9MlYtkJM0.html
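For anyone who hasn't seen raymarching on SDFs before, here is a minimal toy sketch in Python (an illustration of the general technique, not any engine's actual code). Sphere tracing steps along the ray by the distance the SDF returns, which is the largest step guaranteed not to overshoot the nearest surface:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=128, max_dist=100.0, eps=1e-4):
    """Sphere tracing: advance along the ray by the SDF value at the
    current point, which can never step past the closest surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t          # hit: distance travelled along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss

# Ray from the origin along +z toward a unit sphere centred at z = 5:
# the surface sits at z = 4, so the hit distance is 4.
hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

Real SDF renderers (like the ones Inigo Quilez writes about) combine many such distance functions with min/max operations and shade using the SDF gradient as the surface normal.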
I never understood what Euclideon was trying to do. Were they trying to create a sort of point-cloud system that was compressed into a hash function?
@@shayhan6227 As far as I can tell they were mostly trying to defraud the Australian government and various private investors. But as for what they were _saying_ they're doing, if you parse the muddy marketing language (because they never provided anything close to a technical breakdown) it _looks_ like some kind of virtualized voxel system with a custom ray-marched renderer. So funnily enough quite similar to what UE5 is doing, except with voxels instead of polygons and signed distance fields.
@@shayhan6227 They achieved what Nanite achieved a decade earlier and were subjected to worse hostility than UE5 has been. While the application of their concept couldn't be realised because of the form it took (point-cloud data rather than triangles), their pioneering efforts were well ahead of their time and poorly appreciated by the safari rabble. Solving the geometry problem through efficiency rather than brute force, as everyone else was doing, was visionary, and we should all be thankful Nanite clones their original concept.
@@KillahMate I love it when people know technical 3D stuff. I never dived into this, staying a web developer at most :-D
What would be amazing to see is procedural generation of landscapes with Nanite. A level design artist could simply enter the parameters: what kind of environment, biome, topography, paths or roads if any, etc. Then the engine using Nanite and the Quixel library will generate a landscape which the artist could then just fine tune and make ready for production. This tech is already being tested in 3d renderers through use of algorithms. Hopefully one day something like it will be in the engine.
You could build something like that today using Houdini and Houdini Engine and then let Nanite do its magic I think. It would need a fairly complex HDA but nothing too hard if you build it up step by step.
Unreal 5 is going to change everything, because it's the single biggest graphics engine advancement ever. Welcome to the future of graphics; it's looking bright for all content creators. Thanks to everyone who worked on Unreal 5, you guys are freaking awesome.
I'm not into game design and I don't know a lot about it, but I'm still fascinated by this enough to watch all of it. Like some sort of automated and optimized LOD system that scales like crazy.
The technology is amazing, that's for sure. It doesn't seem like normal baking will be a thing of the past, simply because a baked map still packs much more detail than a, say, 2M-poly mesh. I'm pretty sure Megascans high-poly assets are not 'source scans' (which I assume are closer to the 50-100M-poly range). Additional microdetail can be extracted from a 16K albedo with good results. Also, good luck UVing those assets with millions and millions of polys if you ever want to do any map editing, which is impossible without custom UVs. It seems like we will continue to decimate and bake for the time being, but simply being able to use more high-poly assets with Nanite is really a great improvement! Props to these geniuses!
I love that you call 128-tri patches "edits"; it brings to mind the Evans talk. I am blown away, and your amazing work is really inspiring. And I'm sold on you using your own rasterizer.
Thank you so much for this and your SIGGRAPH presentation a few years back! These two videos/papers, along with the UE5 documentation, have been a lifesaver for me and my company! I am the TA responsible for leading the deep dives, documentation, and pipeline best practices for Nanite across our entire company, and without this and the SIGGRAPH presentation I'd be much further behind in my understanding of the concepts, the logic, and our group tests.
2:08:09 got me thinking. Masking is mostly an optimization for the general old way of drawing triangles (since you can make an arbitrarily shaped hole in a plane formed by just one triangle). But with Nanite, you can just make the hole with triangles. It might or might not be more work for the artists, but in theory it should behave like a masked surface. And I don't think it would be much slower, since Nanite in theory has a relatively fixed cost per screen size (and per material). I'd prefer they don't add that support if it's going to lower performance and you can work around it with actual geometry.
As far as I understand, all this "magic" is possible mainly (if not entirely) due to the new data structure and associated algorithms, rather than new hardware features. If so, this is as amazing and remarkable an invention as, for example, quicksort!
Thank you for this conversation. I am witnessing the gentlemen who are at the tip of the spear. Awesome to see the stewards are intelligent and powerful at their craft
Hello! I tried to get an indentation effect for my floor material, but I realized there is no displacement input on the material output node. How can I get this effect in UE5? Thanks
So is there a feature that can cull all the overlapping geometry once you are satisfied with the placements? Something akin to light baking, a sort of baking of the geometry to optimize it? Or do you have to do it in a modeling program?
As a film environment artist, I've been flirting with UE4 for a few years now. Seeing the same type of lighting tech and mesh quality in UE5 we've had in film for over a decade finally has me diving in for real. Finally feels like a serious filmmaking tool.
I'm 30 minutes into this video, still waiting for the definitive moment when I realize on a basic level how Nanite works. It took me years and some experimentation in 3DS Max before I saw with my own eyes how Normal Maps differed from Bump Maps. I'm excited to realize how virtual geometry works.
Take a very detailed mesh with millions of tris. Now break it into tiny patches of, say, 1000 triangles each. For each patch, simplify it several times beforehand (reduce from 1000 down to 400, 200, or 100 tris). Save it all to disk. So you basically have LODs, but for each patch of the full mesh. Then, during rendering, for each patch you select the detail level that just covers its pixels best and dynamically load it from disk. Render using a custom system specialized for very tiny tris.
@@ramakarl Thank you! I'm beginning to be able to visualize it! Especially when you said it's basically LOD but for each patch of the full mesh. But what do you mean by 'select a detail level that just covers the pixels best and dynamic load from disk'?
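To make "select a detail level that just covers the pixels" concrete, here's a toy sketch (hypothetical numbers and a simplified metric, not Nanite's actual code): each patch stores precomputed simplifications with a known geometric error, and at render time you pick the coarsest level whose error would project to less than one pixel at the current camera distance. Only that level needs to be streamed from disk.

```python
import math

def select_lod(cluster_lods, distance, fov_y, screen_height):
    """cluster_lods: list of (tri_count, geometric_error_world_units),
    finest first. Returns the tri count of the coarsest LOD whose
    simplification error is invisible, i.e. under one pixel."""
    # World-space size of one pixel at this distance (perspective camera).
    world_per_pixel = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    # Walk coarsest-to-finest; the first sub-pixel-error LOD wins.
    for tris, error in reversed(cluster_lods):
        if error < world_per_pixel:
            return tris
    return cluster_lods[0][0]  # nothing coarse enough: use the finest

# A hypothetical patch with four precomputed levels: (tris, error in meters).
lods = [(1000, 0.001), (400, 0.004), (200, 0.012), (100, 0.05)]
near = select_lod(lods, distance=2.0, fov_y=math.radians(60), screen_height=1080)
far = select_lod(lods, distance=200.0, fov_y=math.radians(60), screen_height=1080)
# Up close the full 1000 tris are needed; 100x farther away, 100 tris
# already cover the same pixels.
```

Because the decision is made per patch rather than per whole mesh, the near side of a huge object can stay dense while its far side drops to a handful of triangles.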
Maybe, to reduce overdraw in cases with many close slices, UE5 needs a way to bake a heightmap from the lowest visible heights at all points, so you can just cull everything beneath it?
Wouldn't it be possible to "cut" the assets in storage so they only keep the parts that are shown in any part of the project? This wouldn't be so beneficial in terms of reducing overdraw, but it would have the additional advantage of reducing storage space. After you finish the project, you'd have Nanite look for the parts of an asset that are shown at any moment (even if you use that asset multiple times) and make a version of that asset with basically no detail in the parts that aren't used at any time whatsoever, and keep those as your assets. Make it a post-process, basically. You could probably merge these two systems and end up with assets that are completely flat on the unseen surfaces, even at the cost of replicating some parts of assets.
Wouldn't it be cool if we could use eye trackers combined with Nanite? That could further increase performance on lower-power systems by rendering tris the viewer isn't looking at, at a lower detail level.
Okay, how the heck do I enable the stats readout on the right side of the screen they're using at 1:29:30? I've been searching for that for over an hour now!
I see a problem with triangle density when you combine Nanite with temporal upsampling. The picture doesn't look as good as it should, because at the lower internal resolution fewer triangles seem to be rendered than the final, upsampled picture needs. Nanite should optionally render more triangles when temporal upsampling is active. The same should be true for DLSS.
That wouldn't make sense, from what I understand: the upscaling mechanism is still using a lower-resolution base image as its source, so rendering triangles smaller than the pixels in the base image would just be wasted draws. The expectation is that the upscaler is then supposed to "resolve" what the higher-resolution image should look like from those lower-resolution pixels. DLSS in particular does this through machine learning.
@@tharaxis1474 DLSS 1.0 used ML upscaling, but DLSS 2.0 uses a temporal component to gather more samples for the final image, so it doesn't have to "dream up" more detail; it is really rendered over time. It's done via a jitter of the sampling coordinates from frame to frame, similar to how UE5 does it. Therefore, the final image really contains far more samples than the internal resolution alone provides. Even though the internal resolution is lower, through the jitter of the sampling coordinates it often gathers even more samples than native rendering would. So the geometric detail of the internal resolution is not enough, because it is going to be resolved to a much higher output resolution with true sample information.
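The per-frame jitter being described here is typically driven by a low-discrepancy sequence; a common choice in temporal upsamplers (an assumption for illustration, not a statement about DLSS internals) is a Halton (2,3) pair. A minimal sketch:

```python
def halton(index, base):
    """Halton low-discrepancy sequence value in [0, 1)."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offsets(n_frames):
    """Per-frame sub-pixel camera offsets in [-0.5, 0.5). Each frame
    samples slightly different positions inside every pixel, so the
    accumulated history holds more unique samples than any single
    low-resolution frame does."""
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5)
            for i in range(n_frames)]

offsets = jitter_offsets(8)
```

The upsampler then reprojects the previous frames' samples using motion vectors and rejects stale history on disocclusion, which is where the hard part actually lives.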
With the overdraw issue, wouldn't something like the Boolean modifier in Blender work? Where if you combine the two assets that make up the ground, it just deletes the interior geometry. Or maybe: if the geometry is not being affected by light, don't draw that geometry, because it's underground.
When I first saw people demonstrating and using Nanite (the very first "what is Nanite and how to use it" videos out there), I was annoyed, because none of them could really explain how it works; they just made some weird claims that made it seem like impossible magic. Now that I've seen this whole GDC talk (because that's basically what it is)... now I understand. And yeah, it basically IS magic. Amazing work. Amazing piece of tech.
So, nanite could be used for blades of grass right? Assuming they are static? So like, a game could have regular grass in areas the player will walk around in (for physics or whatever) but distant grass can be nanite instead of just disappearing from view? And I don't mean the grass will be swapped out, I mean like areas the player won't be walking in.
@Scotland Dobson That is reassuring. It will be interesting to see how long others take to get similar results. Tim Sweeney mentioned that it was not easy.
It doesn't make much sense for tools like Blender to use tech like Nanite. Its trade-off is sacrificing flexibility and editability (super important for Blender; Nanite meshes are cooked before they can be rendered) for increased real-time performance (not that important there).
@@kazioo2 The baking to Nanite would be done just before rendering, to keep editing flexibility. UE5 seemed to import meshes as Nanite really fast. When you render an animation that takes 10+ minutes per frame it should make sense, not to mention renders that take 10+ hours. It would also make 4K+ rendering possible cost- and time-wise.
Is there a limit you can put on the total number of triangles? Like on the Nintendo Switch, for example: put a limit, and Nanite scales down in the background so the game can prioritize the player model (which can't use Nanite) and run at a smooth 60 fps.
Can you/anyone correct me if I'm wrong (I haven't tried UE5 yet), but according to the documentation, you can't paint decals or vertex paint on meshes that have Nanite enabled, right? Has anyone tried it yet, vertex painting or painting decals onto Nanite-enabled meshes?
Interesting. Not sure how many of you have read Euclideon's patent, but some of the slides and their explanations sound very similar to what is explained in the patent, hmm. I've spent quite a bit of time getting to the level of understanding that I'm at, and there's still so much left to scratch. It's kinda salty, but refreshing, to know that someone else has done what Euclideon demonstrated, except they were a bit more inclusive. Not knocking it; I'm just saying that I'll follow where the technology is accessible, and Unreal Engine seems to be what I'll be using!
What about using some form of Boolean-cut method as a type of "bake", where the assets you are adding can make Boolean cuts into other assets, like in BoxCutter for Blender? Then, when you are set on how it looks and like the result, you compile or bake it to make those Boolean cuts final and remove any geometry that isn't visible and is instead hidden under the meshes of other objects. This may not get rid of all of them, but it would easily remove over 90% of the hidden mesh in the way that benefits Nanite the most, removing all the problematic geometry in one fell swoop without being destructive during the design process, while allowing artists full freedom in design.
Bravo to the people who made that short film! I'd love to see some funding going their way to turn it into, like, a 30-minute thing. Keep the claustrophobic and legit scary vibe; just expand everything a bit, and add one or two more scenes before and after.
Does it do some raycasting for occlusion culling? When a polygon is smaller than a pixel, do you draw just a point, or project the full triangle (with edges, etc.)?
I'm looking forward to seeing how people take advantage of nanite to improve assets that used to rely on transparency maps for performance. I bet we'll have much more convincing plants!
Future triangle surfaces will have special types of depth mapping alongside special texture mapping, which will be more like pattern mapping. Each tetrahedron made from triangle bodies could have 2D surface control that governs how the truly 3D stuff inside works. All this will be generated at low accuracy and dynamically sampled, ready for high-accuracy/detail upscaling. Because a lot of the detail will live in lower-dimensional depth-map data, there will be more 3D data efficiency. Because of the pattern/texture 3D surface mapping, you will eventually be able to zoom in on things like skin and see the wrinkles working. Using AI to control the sampling of a frame minimizes what needs calculating and maximizes upscaling detail efficiency, so you can do more with less.

I personally believe that to match a high-quality video of real life at 210p 15-bit, all you need is PS3-level computation, but with a more optimal instruction set, far more efficient/complex software, and a lot of pre-baking. You would need PS4-level processing for 300p 18-bit and PS5-level computation for 340p 18-bit. Final thought: we obviously are not making the most efficient use of our systems, and the real question is, as problem complexity increases, what is the optimal use of avoidance and exclusion, how close to real are you prepared to work, and how much slack will common horsepower allow you in the future when trying to get close to 4D real-looking/playing games?
Unreal! Can't wait to get my hands on UE5! I am a metaverse builder specializing in gamified spaces. I'm saving up for a new computer just for building with UE5 in the metaverse. This is gonna be wild.
OH MY GOD! I can't wait to see and play MMORPGs made with Unreal Engine 5. Imagine Aion 2, Cabal 3, Final Fantasy XVII or XVIII Online, WoW 2, World of Diablo (it's possible), and many others... :O
Here's hoping that Nanite will somehow be implementable in last gen (PS4, Xbox One) consoles for one last wave of games before PS5 and XBSX become more available and we inevitably make that big jump to current gen.
Does what he says at 59:30 mean that with Nanite, in order for the engine to make a single draw call per parent material, it's no longer necessary to make a Blueprint of the mesh?
The only problem with that is texture size. As for geometry, it is totally possible with old techniques, like quadtree LOD. I'm already doing it in my game "How do you like it, Elon Musk?" (using procedural texturing).
What bugs me in games are water edges and the lack of seaside waves and foam; the other thing is cloth, such as skirts and kilts, in that they don't flop and blow in the wind.
Did you turn off reflections and the FPS cap as well? I think there was a command to turn off Nanite, for sure. If you're still having issues, well, it's in early access.
Are you sure you don't have V-Sync enabled? I've seen videos of some super detailed scenes running well above 60 fps in full screen in the editor. What are your PC's specs and your monitor's resolution?
@@guri-hz5tw What do you expect? UE5 is a next-gen game engine for the next 10 years. A 1050 Ti is very much a low-end card these days; even a GTX 970 significantly outperforms it. Consoles have more than double the power, and high-end PCs more than 5-6 times the power. Running in the editor is also more demanding. It's also a very early build of the engine that will be optimised over time, and there have been zero graphics driver updates for the engine yet.
Off topic, but can you please make a video on implementing the new iOS App Tracking Transparency authorization request? It's required for all new iOS apps.