Does rendering use less VRAM than training? I'm planning to train a really big scene via SSH on a GPU cluster (~90 GB VRAM), but then I don't know how to visualize the results. Any help would be greatly appreciated.
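In case it helps: rendering from a saved snapshot generally needs less VRAM than training, since no optimizer state or training rays are kept around, and you can do it headlessly over SSH with the pyngp bindings instead of the GUI. A minimal sketch, assuming a standard instant-ngp build where pyngp is importable and you saved a snapshot during training; the paths and resolution are placeholders:

```python
import numpy as np
import pyngp as ngp  # instant-ngp python bindings (compiled into ./build)
from PIL import Image

# Load the trained scene from a snapshot saved on the cluster.
testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("my_scene/snapshot.msgpack")  # placeholder path
testbed.shall_train = False

# Render one view with the current camera; scripts/run.py shows how to set
# a camera matrix per frame if you want a full fly-through.
spp = 8
image = testbed.render(1920, 1080, spp, True)  # linear RGBA float image

# Rough linear-to-sRGB conversion, then save a PNG you can copy back over scp.
rgb = np.clip(image[..., :3], 0.0, 1.0) ** (1.0 / 2.2)
Image.fromarray((rgb * 255).astype(np.uint8)).save("view.png")
```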
The installation video uses the render.py file that was updated in this video. If you are still rendering animations with render.py, we suggest updating the ffmpeg code. If you follow the advanced tips video, we use run.py to render videos; that Python script does not have the frame rate export issues.
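For what it's worth, the frame rate problem when assembling rendered frames into a video usually comes down to not passing an explicit input frame rate to ffmpeg. A small sketch of what the encode step could look like (the frame naming pattern, output name, and 30 fps value are just examples):

```python
import subprocess

fps = 30  # match the frame rate the animation was rendered at
subprocess.run([
    "ffmpeg", "-y",
    "-framerate", str(fps),          # input frame rate of the image sequence
    "-i", "frames/frame_%04d.png",   # example frame naming pattern
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",           # widely compatible pixel format
    "out.mp4",
], check=True)
```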
Instant-NGP is not specifically designed to output photogrammetry-quality meshes. We recommend looking into nvdiffrec for neural-rendering-based 3D object output: github.com/NVlabs/nvdiffrec
When trying to render I keep getting the message: "DLL load failed while importing pyngp: The specified module could not be found." Would you know how to fix this?
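One common cause (not necessarily yours) is that Python can't locate the compiled pyngp module or the CUDA DLLs it links against. A quick check, assuming a standard CMake build that places pyngp in the repo's build folder; the CUDA path below is only an example and should be adjusted to your install:

```python
import os
import sys

# Make the compiled pyngp module importable (point this at your build directory).
sys.path.append(os.path.join(os.path.dirname(__file__), "build"))

# On Windows with Python 3.8+, DLL dependencies such as CUDA must be added explicitly.
if os.name == "nt":
    os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin")  # example path

import pyngp as ngp  # should now import without the DLL error
```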
Yes, I've been trying to modify the render script to output EXR instead of JPG, but there are too many different errors that I can't overcome. It would be awesome if we could make a version of the render script that renders an EXR of the beauty, depth, and world position passes, all with alpha channels.
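One possible route is to switch the testbed's render mode between passes and write the float buffers with an EXR library, rather than going through the JPG path. This is only a sketch under the assumptions that pyngp exposes RenderMode values like Shade, Depth, and Positions (as the C++ side suggests) and that the pyexr package is installed; treat the names as assumptions, not a tested script:

```python
import numpy as np
import pyexr              # assumed EXR writer (pip install pyexr)
import pyngp as ngp       # instant-ngp python bindings

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("my_scene/snapshot.msgpack")  # placeholder path
testbed.shall_train = False

width, height, spp = 1920, 1080, 8

# Assumed mapping of the wanted passes to instant-ngp render modes.
passes = {
    "beauty": ngp.RenderMode.Shade,
    "depth": ngp.RenderMode.Depth,
    "position": ngp.RenderMode.Positions,
}

for name, mode in passes.items():
    testbed.render_mode = mode
    img = testbed.render(width, height, spp, True)  # linear float RGBA
    # Keep the alpha channel and write full-float EXR, no gamma applied.
    pyexr.write(f"{name}.exr", img.astype(np.float32))
```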
I would be curious if anyone has a solution for taking the NeRF and putting it online to share with others, like a NeRF version of Sketchfab. (I suppose you could also just generate a mesh from the NeRF with the color data baked in. What's the process for that as well?)
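On the mesh question: instant-ngp can extract a marching-cubes mesh from a trained NeRF (scripts/run.py has a mesh export path), and the result could then be uploaded to something like Sketchfab. A rough sketch, assuming the compute_and_save_marching_cubes_mesh binding and the ivec3 helper are available in your build; expect the geometry to be noticeably rougher than photogrammetry output:

```python
import pyngp as ngp  # instant-ngp python bindings

testbed = ngp.Testbed(ngp.TestbedMode.Nerf)
testbed.load_snapshot("my_scene/snapshot.msgpack")  # placeholder path

# Extract a mesh via marching cubes at the given grid resolution and save it.
res = 256
testbed.compute_and_save_marching_cubes_mesh("my_scene/mesh.obj", ngp.ivec3(res))
```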