I think I've just found the solution for an ambitious Unreal 5 cinematic project that was, until about 10 minutes ago, impossible for one person to produce. Thank you for the insights.
Seems like NeRFs could really benefit from transfer learning. In 2D object detection you can train a decent model with 100 images given good pretrained weights. I guess common scene patterns like sky/tree/house could easily be learned.
@EveryPoint Is this good enough to export a series of frames that could be used for photogrammetry? Say you didn't get enough data on site for whatever reason, and you could use this to fill in the blanks, so to speak?
Unsure if that's possible. We would need to test that theory. However, Nvidia does have nvdiffrec, which neurally renders geometry and PBR textures. That may be the answer.
Should it? Or should software and hardware systems catch up to this technology, since the glTF format can't replicate what a NeRF can visualize? As Matt's first statement said: NeRFs are not meshes.
So I work in film and VFX. Would this method work for creating a 3D render of a city to be used in a production, say in Unreal Engine or another 3D program?
@EveryPoint Adobe Substance has a Python API and nodes that allow an AI to generate 3D assets and textures. This is what I'm researching for this software, which led me here...
That is an option. We find that matching the output video resolution to the source imagery works well. However, most NeRF output is a little "fuzzier" than real life.
Hi @EveryPoint. It's still not 100% clear whether NeRF is ready for commercial use, or how to start building your own 3D games/walkthrough experiences. May I ask you to make a video/post explaining which tools are free to use or commercially licensed, and which are not?
Great question. Stereo pair cameras can help provide depth and positional data. IMUs and other sensors can be leveraged as well to produce a good SLAM-based camera pose output for training NeRFs.
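To make that concrete: whatever the SLAM pipeline produces, the poses eventually have to reach the NeRF trainer in some agreed format. Here's a minimal, hedged sketch of writing camera-to-world matrices into the `transforms.json` layout popularized by instant-ngp-style trainers; the pose values, field of view, and file names below are placeholder assumptions, not output from any real SLAM system.

```python
import json
import math

# Hypothetical SLAM output: one 4x4 camera-to-world matrix per frame.
# In practice these would come from your stereo/IMU SLAM pipeline.
slam_poses = {
    "frame_0001.png": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ],
}

def write_transforms(poses, fov_x_deg=60.0, out_path="transforms.json"):
    """Write poses in the transforms.json layout used by instant-ngp-style
    NeRF trainers: camera_angle_x in radians, one transform per frame."""
    data = {
        "camera_angle_x": math.radians(fov_x_deg),
        "frames": [
            {"file_path": name, "transform_matrix": mat}
            for name, mat in sorted(poses.items())
        ],
    }
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
    return data

transforms = write_transforms(slam_poses)
print(len(transforms["frames"]))  # number of frames written
```

The appeal of a SLAM source here is skipping the slow structure-from-motion step (e.g. COLMAP) that most NeRF pipelines otherwise need to recover these matrices.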
Therefore, both photogrammetry and NeRF need a set of images with known poses as input. However, modern photogrammetry technology does not require highly accurate poses, and sometimes does not need pose input at all. What I want to ask is: does NeRF require highly accurate input poses?
I am not sure why AI researchers seem to ignore the fundamental advantage of our species: our ability to understand and construct the complex from the simple. We are trained from babyhood to understand basic rules for everything we see, and to work out the what and how of everything from the shapes in the cot onwards. If I don't understand something, I break it down until the rules kick in. If a NeRF or meshing algorithm just applies brute force, it will never be really efficient; but if it understands what a car, column or crow is, surely it will then be able to adapt a 3D library element to its surface/field far quicker, in much the same way the 2D AI image generators are doing.
There are other videos which are shorter and more general, but I was looking exactly for this discussion and exploration. YouTube is a vast landscape :) something for everyone.