
Hands on With Nvidia Instant NeRFs 

EveryPoint
7K subscribers
136K views

In this livestream recording, we get hands-on and share our experience with NVIDIA's Instant NeRF.
We cover:
-Tips on compiling the code
-What we learned about capturing good photos for NeRFs
-Training your first NeRF and exploring the GUI
-Breaking down the parameters you can change to improve your results
-Rendering animations
We don't compile the codebase in this livestream. We suggest you give it a shot ahead of time following ByCloudAI's instructions: github.com/byc...
Official Nvidia NGP Repository can be found here: github.com/NVl...
#ComputerVision #NeuralNetwork #AI #NeRF #MachineLearning #3D

Published: 8 Sep 2024

Comments: 165
@AnvABmai
@AnvABmai 2 years ago
Dear author, please upload in 1080p! This is a great and informative video that explained a lot about NeRF settings, but it makes me sad to watch it in 720p.
@EveryPoint
@EveryPoint 2 years ago
Unfortunately the livestream was recorded in 720p. That was our mistake! We will have additional content soon at 1080p resolution.
@voidchannel1492
@voidchannel1492 2 years ago
@@EveryPoint Try running it through an AI super-resolution model and see if that works.
@alanmelling3153
@alanmelling3153 2 years ago
I really appreciate you sharing your insights and experience with this tool. Thanks, and I look forward to more.
@EveryPoint
@EveryPoint 2 years ago
Thanks, Alan!
@cousinmerl
@cousinmerl 2 years ago
I wonder if this tool could be used for police work. If the police managed to capture multiple surveillance photos, they could composite them with scenes captured afterwards; the model could then evaluate how things have changed, highlighting differences and showing clues to detectives.
@eliorkalfon191
@eliorkalfon191 2 years ago
Some thoughts about making this a real-time, good-quality system: 1. For estimating camera positions you could use the LoFTR transformer from the Kornia library (instead of COLMAP) for keypoint detection and matching; I think it's much faster. 2. For a smooth mesh, maybe a neural TSDF can do the trick if you aren't using one yet ;) 3. It would be great if you added normal estimation for the reconstructed 3D coordinates. Good job!
@EveryPoint
@EveryPoint 2 years ago
Perhaps the NVIDIA AI team is reading these comments!
@fraenkfurt
@fraenkfurt 2 years ago
@Elior With your knowledge of the topic, would it be theoretically possible to render this in real time in VR, or is that out of scope given the hardware requirements and/or how the rendering engine works?
@eliorkalfon191
@eliorkalfon191 2 years ago
@@fraenkfurt With today's methods, near real time could be achievable, maybe 0.1 fps (each scene is a "frame" in this context), and faster in an end-to-end product. Hardware limitations are crucial for sure. Recently I read a paper called "TensoRF: Tensorial Radiance Fields"; the authors suggested that a mixture of this and NGP could lead to some interesting results. I'm not sure exactly what you meant about rendering engines, since I have only worked with 3D structures in a non-real-time environment.
@EveryPoint
@EveryPoint 2 years ago
@@eliorkalfon191 The fact that you would need to render the scene twice with slight offsets and at a high resolution means your hardware would have to be very, very high end. Cost prohibitive at this point. The real-time rendering on our RTX 3080 runs at a very low resolution. At 1920x1080 we render 1 frame every 3 seconds.
@pabloapiolazza4353
@pabloapiolazza4353 1 year ago
Does using more images improve the final quality? Or at some point does it stop mattering?
@sethdonut
@sethdonut 2 years ago
The automatic Bezier curves on the camera paths... THANK you.
@EveryPoint
@EveryPoint 2 years ago
One reason we keep using Instant NeRF! The camera path tools are handy!
@ZAZOZH43
@ZAZOZH43 2 years ago
This video is amazing. I just found out about you last night and have already watched all your videos. You're hilarious.
@EveryPoint
@EveryPoint 2 years ago
Thank you!
@carlosedubarreto
@carlosedubarreto 1 year ago
This is simply amazing. Thank you A LOT.
@EveryPoint
@EveryPoint 1 year ago
Glad it was helpful!
@JamesJohnson-ht4gi
@JamesJohnson-ht4gi 2 years ago
Seeing this stuff from start to finish caters to my learning style. Soooo flipping helpful! Thanks for the tutorial! Have you seen NVIDIA's 'nvdiffrec' yet? Apparently it's like photogrammetry, but it spits out a model AND a complete PBR material set!
@EveryPoint
@EveryPoint 2 years ago
Yes, it uses neural networks to compute an SDF and materials as separate flows into a solid model.
@0GRANATE0
@0GRANATE0 1 year ago
Is it "easy" to install and run? What is the input data: a video, or just a single image?
@stephantual
@stephantual 7 months ago
Subscribed. I wish you made more videos; they are valuable and educational!
@rubenbernardino6658
@rubenbernardino6658 11 months ago
Thank you, Jonathan, for a phenomenal and very effective tutorial. It could still be improved if it were made available in HD or higher resolution. Some of the fonts in the video content appear too small when I watch the video out of full screen.
2 years ago
I wish NeRF were built into After Effects, Houdini, Unity, and Unreal by default... definitely a revolution for XR!
@EveryPoint
@EveryPoint 2 years ago
We imagine it becoming part of the NVIDIA Omniverse.
@ricksarq22
@ricksarq22 2 years ago
It worked... Thank you so much!
@EveryPoint
@EveryPoint 2 years ago
Great to hear!
@SeriouslyBadFight
@SeriouslyBadFight 1 month ago
They need to implement the ability to export your Instant NeRF into 3D rendering software; something that's not so GPU intensive, something that could be adapted to a mobile device.
@thenerfguru
@thenerfguru 1 month ago
I suggest you look into Gaussian Splatting.
@1985Step
@1985Step 2 years ago
An extremely well done video, congratulations! Could you please share the photos used for the bridge reconstruction? If you already have, where can I find them? Thank you.
@jeffg4686
@jeffg4686 2 years ago
Thanks for sharing.
@EveryPoint
@EveryPoint 2 years ago
You're welcome!
@ValloGaming
@ValloGaming 2 years ago
Hey, I need help. I was getting: python: can't open file 'F:\Tutorial gp\instant-ngp\scripts\render.py': [Errno 2] No such file or directory. I checked under scripts and render.py is not there. Why is that?
@EveryPoint
@EveryPoint 2 years ago
You have two options: use bycloudai's render.py script from his GitHub fork (github.com/bycloudai/instant-ngp-Windows), or use run.py, which we cover in our advanced tips video at the 1-hour mark: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-_xUlxTeEgoM.html
@antoniopepe
@antoniopepe 2 years ago
Great... I have a question: is it possible to export a sort of point cloud? That would be great.
@EveryPoint
@EveryPoint 2 years ago
Not currently possible.
@antoniopepe
@antoniopepe 2 years ago
@@EveryPoint Thanks.
@faizanuddin.
@faizanuddin. 2 years ago
I first created the fox NeRF, but when I used my own images and ran the COLMAP command it didn't give me a transforms.json file. What should I do? It says: D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
@EveryPoint
@EveryPoint 2 years ago
That is a new one for us. Are you using RAW files or HDR video?
@faizanuddin.
@faizanuddin. 2 years ago
@@EveryPoint I figured it out. The problem was that I used a video converted to an image sequence; the video was captured on a phone in 1080p, and because my smartphone forces image stabilization, quite a few images had some motion blur. I was also using COLMAP exhaustive matching, which sometimes crashes and is not good with image sequences. Another creator suggested I use COLMAP sequential matching, which works well, and the final NeRF was really good and clean with very little noise.
@alvaroduran7444
@alvaroduran7444 2 years ago
Great video, thanks for the information! I was wondering if you have had any experience with reflective surfaces. As you know, that is usually the Achilles heel of photogrammetry.
@EveryPoint
@EveryPoint 2 years ago
They are also an Achilles heel for NeRFs. A mirror creates a parallel world inside of it.
@BenEncounters
@BenEncounters 2 years ago
@@EveryPoint That is actually interesting to know.
@user-oj4hr5rh6i
@user-oj4hr5rh6i 2 years ago
Nice work! Although very expensive.
@EveryPoint
@EveryPoint 2 years ago
Expensive hardware is needed for sure. However, that is also true for photogrammetry and 3D modeling.
@user-oj4hr5rh6i
@user-oj4hr5rh6i 2 years ago
@@EveryPoint Thanks for your comments. Looking forward to seeing more amazing stuff from your channel.
@GhostMcGrady
@GhostMcGrady 2 years ago
Is there a way to take your first dataset and JSON and combine it with a second one, i.e., string multiple rooms of a house together from separate datasets?
@EveryPoint
@EveryPoint 2 years ago
Technically, you could do something like this. The limitation would be the total VRAM the project would take to run.
@GhostMcGrady
@GhostMcGrady 2 years ago
@@EveryPoint Right, after posting the question I came to find how limited in scale you can get. Thanks for the amazing tutorial and response.
@EveryPoint
@EveryPoint 2 years ago
We expect the scale issue to improve over time. A cloud-based service could also be built where hardware limitations are overcome. Remember, the technology first came out only two years ago!
@techieinside1277
@techieinside1277 2 years ago
Great video, mate! I was wondering how you go about exporting the model and texture so as to use it with Blender?
@EveryPoint
@EveryPoint 2 years ago
NVIDIA Instant NeRF does not produce a high-quality textured mesh yet. Its primary use is novel view synthesis. We suggest keeping an eye on advancements, as the technology is moving quickly.
@techieinside1277
@techieinside1277 2 years ago
@@EveryPoint I see. Can we export the output we currently get? Some of my scans look great, and I wish I could just export them for use in Blender.
@techieinside1277
@techieinside1277 2 years ago
I have NGP set up and it's working great so far.
@jsr7599
@jsr7599 2 years ago
@@EveryPoint Does it provide something to work off of? Is it possible at all to create a glTF/GLB file with this technique? I'm new to all of this, by the way. Thanks for sharing.
@EveryPoint
@EveryPoint 2 years ago
@@techieinside1277 As you have probably noticed by now, the mesh output is not optimal. Currently, traditional photogrammetry will produce a more usable textured mesh model.
@astral_md
@astral_md 2 months ago
Awesome! Is there a way to save the RGB, depth, and other textures from a view?
@aznkidbobby
@aznkidbobby 2 years ago
Can you export the file and take measurements on the 3D model?
@EveryPoint
@EveryPoint 2 years ago
You can export a mesh; however, it is lower quality than you would produce with traditional photogrammetry.
@Xodroc
@Xodroc 2 years ago
@@EveryPoint There go the Unreal Engine Nanite dreams with this tech!
@jeffreyeiyike122
@jeffreyeiyike122 1 year ago
Please, I am having issues with a custom dataset. The rendering is poor.
@martndemmer6405
@martndemmer6405 2 years ago
As a non-coder I worked my way through all of this, and I have issues from having both Python 3.9 and Python 3.10 installed (I use 3.10 for another task that is important to me). Is there any way to solve it without removing it?
@EveryPoint
@EveryPoint 2 years ago
If you have build issues, we suggest editing the CMakeCache where 3.10 was used and rebuilding the codebase. You can also try adding the build folder to your Python path in the environment variables editor. This may solve your issues.
@martndemmer6405
@martndemmer6405 2 years ago
@@EveryPoint I managed in the end after uninstalling everything and installing fresh, and I started to create some NeRFs (Instant NGPs), but the results are terrible :( I used the same datasets I had previously used for photogrammetry; for example, 700 pictures of a forest with a bridge. In photogrammetry it all worked, but in NeRF it looks like a mess. Then I tried other, smaller sets, with equally disappointing results. Am I doing something wrong? It looks to me like COLMAP does everything fine, and then when I start Instant NGP it doesn't do the job properly.
@PatrickCustoBlanch
@PatrickCustoBlanch 2 years ago
Do you know if it's possible to run a multi-GPU setup? Great video, by the way!
@EveryPoint
@EveryPoint 2 years ago
Currently, no, it does not support that.
@tszkichin4538
@tszkichin4538 2 years ago
Thanks for the software, mate.
@EveryPoint
@EveryPoint 2 years ago
You're welcome!
@TheBoringLifeCompany
@TheBoringLifeCompany 2 years ago
I wonder when some sort of documentation will appear?
@EveryPoint
@EveryPoint 2 years ago
There is quite a bit of documentation on the GitHub page.
@sheidekamp2485
@sheidekamp2485 2 years ago
Hi! Thank you for the great video. Is there a way to render a cropped scene? The entire background jumps back in when I render or reopen the scene, and I want to render without too many clouds.
@EveryPoint
@EveryPoint 2 years ago
You have two options: edit the aabb_scale in the transforms file, or hack the run.py script to render the video cropped as in the GUI. Perhaps this will be a future video.
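For the first option, here is a minimal sketch (ours, not a project script) that rewrites the aabb_scale field of a transforms.json; the path in the usage line is hypothetical. Per the instant-ngp docs, aabb_scale should be a power-of-two integer up to 128:

```python
import json
from pathlib import Path

def set_aabb_scale(transforms_path, scale):
    """Rewrite the aabb_scale field of an Instant NGP transforms.json.

    Instant NGP expects aabb_scale to be a power-of-two integer up to 128.
    """
    if scale & (scale - 1) or not 1 <= scale <= 128:
        raise ValueError("aabb_scale should be a power of two between 1 and 128")
    path = Path(transforms_path)
    data = json.loads(path.read_text())
    data["aabb_scale"] = scale
    path.write_text(json.dumps(data, indent=2))

# Hypothetical usage:
# set_aabb_scale("data/my_scene/transforms.json", 4)
```

You would then reload the scene in the testbed so the new value is picked up.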
@scyheidekamp1773
@scyheidekamp1773 2 years ago
@@EveryPoint That would be cool, because I changed the scale in transforms.json, but the crop resets to 16 when opening the scene or rendering.
@eatonasher3398
@eatonasher3398 1 year ago
Curious: could you mix in a couple of higher-definition images to increase the quality? If so, would you have to assign different weights to get that better result?
@emotivedonkey
@emotivedonkey 2 years ago
Thanks for the breakdown, Jonathan! But how does one start the GUI without initiating training for a new dataset? I just want to be able to load the .msgpack from a previously trained project.
@EveryPoint
@EveryPoint 2 years ago
Use ./build/testbed --no-gui, or python scripts/run.py. You can load a saved snapshot with the Python bindings load_snapshot / save_snapshot (see scripts/run.py for example usage).
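As a sketch of loading a saved snapshot through the Python bindings, mirroring what scripts/run.py does (the exact binding names may have shifted between instant-ngp versions, so treat this as an assumption rather than a definitive recipe):

```python
import sys

def load_trained_nerf(snapshot_path, build_dir="build"):
    """Load a previously trained Instant NGP snapshot without retraining.

    pyngp is the compiled Python module that lands in the build folder;
    returns None if the bindings have not been built yet.
    """
    sys.path.append(build_dir)
    try:
        import pyngp as ngp
    except ImportError:
        return None
    testbed = ngp.Testbed(ngp.TestbedMode.Nerf)  # as used in scripts/run.py
    testbed.load_snapshot(snapshot_path)
    return testbed

# Hypothetical usage:
# testbed = load_trained_nerf("data/my_scene/base.msgpack")
```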
@jeffreyeiyike122
@jeffreyeiyike122 1 year ago
@@EveryPoint Please, I am having issues using custom datasets. The rendering is always poor with custom datasets but okay when I use the synthetic datasets from the vanilla NeRF.
@mmmuck
@mmmuck 2 years ago
I wonder if you can convert this to a usable poly mesh.
@EveryPoint
@EveryPoint 2 years ago
Look at nvdiffrec if you want to do that.
@_casg
@_casg 1 year ago
So I can't really use the mesh OBJ model?
@jeffreyeiyike122
@jeffreyeiyike122 1 year ago
Good day. I am having issues fitting the object inside the unit box. Which parameters am I supposed to change?
@software-sage
@software-sage 1 year ago
If someone made an iOS app that let you upload a bunch of pictures and send them off to a remote server with a GPU, that would be a very popular app.
@wafaWaff
@wafaWaff 1 year ago
Kiri Engine app
@simonearatari8283
@simonearatari8283 1 year ago
Hello, and thank you for the video; I find it really helpful. Could you please recommend a step-by-step guide to install NVIDIA Instant NeRF on my Windows PC from scratch? It would be very helpful to me. Thanks.
@EveryPoint
@EveryPoint 1 year ago
We suggest watching bycloudai's video where he does this. The steps have not changed since. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-kq9xlvz73Rg.html
@HKCmoris
@HKCmoris 1 year ago
:/ I'm getting "'colmap' is not recognized as an internal or external command, operable program or batch file. FATAL: command failed" and I can't figure out why. It makes me want to tear my hair out.
@EveryPoint
@EveryPoint 1 year ago
Our apologies for the late reply. COLMAP needs to be added to your PATH, assuming it has been installed.
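A quick way to check whether COLMAP is actually visible on PATH before running the scripts; this is a standard-library sketch of ours, not part of any project script:

```python
import shutil

def check_on_path(executable):
    """Return the resolved path of an executable found on PATH, or None."""
    return shutil.which(executable)

location = check_on_path("colmap")
if location is None:
    print("colmap not found; add its folder to PATH and reopen the prompt")
else:
    print(f"colmap found at {location}")
```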
@wafaWaff
@wafaWaff 1 year ago
Can you please help me decide between NVIDIA Instant NeRF and Meshroom from AliceVision?
@EveryPoint
@EveryPoint 1 year ago
It depends on what you need as the result. If you need meshes and good surface data, then Meshroom is ideal. Instant NGP produces images.
@gozearbus1584
@gozearbus1584 2 years ago
Has there been an update to Instant NeRF?
@EveryPoint
@EveryPoint 1 year ago
There are updates just about weekly.
@beytullahyayla7401
@beytullahyayla7401 1 year ago
Hi, is there any chance to export the data we obtain in .obj format?
@rikopara
@rikopara 2 years ago
This stream was really helpful, but for some reason my render.py script doesn't exist. Also, I've downloaded ffmpeg but can't find its destination to add to PATH.
@rikopara
@rikopara 2 years ago
Oh, looks like I've solved it. render.py was only in bycloudai's fork.
@EveryPoint
@EveryPoint 2 years ago
@@rikopara Yes! You can create your own render script too. However, bycloudai's version works great. As for ffmpeg, most likely it is here: C:\ffmpeg\bin
@svenbenard5000
@svenbenard5000 2 years ago
Hi! Did you find out how to add the script? I tried copying the one from bycloudai, but it still does not seem to work. I get the error "ModuleNotFoundError: No module named 'pyngp'". I tried installing his version, but only the newly updated version works on my PC.
@rikopara
@rikopara 2 years ago
@@svenbenard5000 Did you copy the whole fork or just the render.py file? Using the newest build with bycloudai's render.py file works for me.
@rikopara
@rikopara 2 years ago
@@svenbenard5000 Also check for the "pyngp" files in the /instant-ngp/build dir. If there aren't any, you probably skipped some installation steps.
@nightmisterio
@nightmisterio 2 years ago
They should have easy online demos for a lot of these kinds of things.
@EveryPoint
@EveryPoint 2 years ago
Instant NGP is not productized yet, which is why there is no installer or full set of tutorials.
@TheBoringLifeCompany
@TheBoringLifeCompany 2 years ago
It's possible to make your first render in 6-8 hours even with entry-level skills. Setting everything up consumes time but is rather rewarding.
@TomaszSzulinski
@TomaszSzulinski 1 year ago
I have a problem: "'colmap' is not recognized as an internal or external command." Does anybody know what is going on?
@EveryPoint
@EveryPoint 1 year ago
You may need to install it and add it to PATH.
@ihatelink6658
@ihatelink6658 1 year ago
It really works.
@xrsgeomatics
@xrsgeomatics 1 year ago
Could you help me fix this? Thank you. ERROR: Not enough GPU memory to match 12924 features. Reduce the maximum number of matches. ERROR: SiftGPU not fully supported
@EveryPoint
@EveryPoint 1 year ago
This is an issue with COLMAP. Did you install and/or compile the version with GPU support?
@mandelacakson8034
@mandelacakson8034 2 years ago
I use an NVIDIA RTX 2060 Super, 32 GB of RAM, and an AMD Ryzen 7 3800X 8-core processor. Will it be able to handle it?
@EveryPoint
@EveryPoint 2 years ago
Yes! Your limit will be the VRAM on the 2060. Keep your input image resolution to 1920x1080.
@toncortiella1670
@toncortiella1670 2 years ago
Sorry if you said it in the video, but can you download that 3D, like OBJ or MTL?
@EveryPoint
@EveryPoint 2 years ago
A very poor quality one. This is not the NeRF you're looking for.
@fatima.zboujrad7049
@fatima.zboujrad7049 1 month ago
@@EveryPoint Please, is there another NeRF implementation that produces good-quality 3D in real time (or close to it)?
@vladimiralifanov
@vladimiralifanov 2 years ago
Thanks for the video. Is there any way to rotate the scene? When I try to do it with my mouse it just spins in the wrong direction. I tried to align the center but couldn't make it work.
@EveryPoint
@EveryPoint 2 years ago
No, you would have to modify your transforms. If the whole scene is sideways, sometimes deleting the image metadata and rerunning COLMAP will fix the issue.
@vladimiralifanov
@vladimiralifanov 2 years ago
@@EveryPoint Thanks 🙏
@tasteyfoood
@tasteyfoood 2 years ago
@@EveryPoint What's the functional reasoning behind the lack of a rotate? It's a 3D object, right? I feel like I'm missing something.
@EveryPoint
@EveryPoint 2 years ago
@@tasteyfoood Rotating the scene in the GUI? Also, what you are seeing in a NeRF is not a discrete object; it's a radiance field where every coordinate in the field has an "object", though it may be transparent.
@tasteyfoood
@tasteyfoood 2 years ago
@@EveryPoint Thanks, it's helpful to realize it's not producing an "object". I think my issue stemmed from trying to rotate a sliced section of the radiance field and being confused that it wasn't rotating with the sliced section as the center point.
@tuwshinjargalamgalan1966
@tuwshinjargalamgalan1966 2 years ago
How do you save a 3D model or point cloud?
@EveryPoint
@EveryPoint 2 years ago
You can save a mesh using marching cubes. However, the quality of the mesh is lower than traditional photogrammetry.
@MrCmmg
@MrCmmg 1 year ago
One question: is the 1080 Ti GPU still compatible with this NeRF technology, or do I need an RTX-series GPU?
@EveryPoint
@EveryPoint 1 year ago
A 1080 Ti works; however, training and rendering times will be lengthy. NVIDIA suggests a 20XX or greater.
@Nibot2023
@Nibot2023 1 year ago
Are these instructions still relevant? Just curious if you still need all this. I downloaded Instant NGP.
@baconee7047
@baconee7047 2 years ago
lmao I was thinking the same thing, then I saw your comment
@Shaban_Interactive
@Shaban_Interactive 1 year ago
I am getting a memory clear error. I have an RTX 3080 and used 170 photos (Nikon). I will try lower-resolution images tonight; I hope it works.
@EveryPoint
@EveryPoint 1 year ago
Most likely you used too much high-resolution imagery. NeRF is quite VRAM heavy. Try reducing the pixel count by half.
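Halving the pixel count means scaling each image side by 1/sqrt(2). A small helper of ours (not a project script) for computing the target dimensions, rounded down to even numbers since many video encoders require them:

```python
import math

def halve_pixel_count(width, height):
    """Return (w, h) with roughly half the pixels, aspect ratio preserved.

    Each side is scaled by 1/sqrt(2), then rounded down to an even number.
    """
    scale = 1 / math.sqrt(2)
    new_w = max(2, int(width * scale) // 2 * 2)
    new_h = max(2, int(height * scale) // 2 * 2)
    return new_w, new_h

print(halve_pixel_count(6000, 4000))  # -> (4242, 2828)
```

Here 6000x4000 stands in for a full-resolution still; the result has roughly half the pixels, which you could feed to your resizer of choice.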
@Shaban_Interactive
@Shaban_Interactive 1 year ago
@@EveryPoint Thanks for the advice. I dropped the picture count to 80 and it worked like a charm. Thank you again 🙏
@EveryPoint
@EveryPoint 1 year ago
Good to hear!
@jweks8439
@jweks8439 1 year ago
Hi there. I was trying to implement a project using this and was wondering whether there is a way to crop (min x,y,z and max x,y,z) without using the GUI, preferably from the command line. I am using an RTX 3050 Ti. It would be a great help if you could guide me on how to do it or where to look, since as far as I can tell you're the only one who actually helps me understand what's going on. Thanks a lot.
@jeffreyeiyike122
@jeffreyeiyike122 1 year ago
Hi, how are you doing? I am having problems rendering custom datasets; the result is always poor. Is there a way to get the image into the box and get a good rendering?
@jweks8439
@jweks8439 1 year ago
@@jeffreyeiyike122 Try adjusting the aabb; the optimal value differs from scene to scene.
@jeffreyeiyike2358
@jeffreyeiyike2358 1 year ago
@@jweks8439 I have tried adjusting the aabb between 1 and 128, but the rendering and PSNR aren't improving.
@jweks8439
@jweks8439 1 year ago
@jeffreyeiyike2358 If you're getting bad renderings only with your custom data, the problem might be the data itself. First try rendering the sample data provided with Instant NGP in the data folder, such as the fox and armadillo. If those render fine, consider reading their transforms files to try to replicate the parameters preferred for such a scene. Also check your input images, whether they are frames of a video or plain photos, and remove any blurry or shaky ones to improve the quality of the render. It is worth noting that if you are using images rather than a video with COLMAP, they might have been shot with an insufficient number of overlapping views, which can lead to a loss of detail. From my testing, you should also avoid direct light, as reflections tend to show on the rendered mesh; diffused light works best for retaining detail and accurate color and texture. Hope I was of some help 😊
@jeffreyeiyike2358
@jeffreyeiyike2358 1 year ago
@@jweks8439 I would be happy to set up a Zoom meeting with you. The fox and armadillo work fine. I noticed the bounding box is not on the object. I used videos rather than images because, without good overlap, COLMAP would fail and not produce the images and transforms.json, so I always use videos.
@LifeLightLabs
@LifeLightLabs 2 years ago
Is this possible on a Mac M1?
@EveryPoint
@EveryPoint 2 years ago
No, this is only supported on PC and Linux machines with an NVIDIA GPU.
@hanikhatib4091
@hanikhatib4091 2 years ago
@@EveryPoint What about on Colab? P.S. I am unable to run NeRF on my Mac M1. I have around 125 pictures of a nice art piece (4K resolution, 360-degree shots, around 400 MB of total data). I would love to complete this project, but I am afraid compatibility might be the bottleneck.
@anthonysamaniego4388
@anthonysamaniego4388 2 years ago
Thanks for the straightforward directions. I got the app installed and it worked well, but now it says "This app can't run on your PC." Any ideas? Thanks.
@EveryPoint
@EveryPoint 2 years ago
How are you launching the app? You should be launching it via Anaconda. Perhaps try running in admin mode.
@anthonysamaniego4388
@anthonysamaniego4388 2 years ago
@@EveryPoint Tried Anaconda and Visual Studio. Also tried running as admin, and I get the same error. I read it could be related to Windows security/antivirus protection, but no luck when I disable those.
@anthonysamaniego4388
@anthonysamaniego4388 2 years ago
Got it to work after a reinstall. Now I'm running into an issue with the render.py script. I'm getting: "RuntimeError: Network config "data\into_building\base.msgpack" does not exist." Any ideas?
@EveryPoint
@EveryPoint 2 years ago
@@anthonysamaniego4388 Did you save a snapshot after training? This is necessary before rendering. Saving the snapshot generates that missing file.
@anthonysamaniego4388
@anthonysamaniego4388 2 years ago
@@EveryPoint That was it! Thank you!!!
@NoelwarangalYT
@NoelwarangalYT 11 months ago
Do we need to learn coding?
@DSJOfficial94
@DSJOfficial94 1 year ago
Damn.
@ranam
@ranam 1 year ago
Can I do the same on Colab?
@EveryPoint
@EveryPoint 1 year ago
We are not sure if there is a Colab version of Instant NGP yet. There is a Colab version of Nerfstudio, though, that can run Instant NGP.
@ranam
@ranam 1 year ago
@@EveryPoint Thank you, sir. I will try it.
@CarsMeetsBikes
@CarsMeetsBikes 2 years ago
Can I run this in Google Colab?
@EveryPoint
@EveryPoint 2 years ago
Yes.
@maxpower_891
@maxpower_891 2 years ago
Please make a proper, human-friendly interface so that anyone can use this program.
@EveryPoint
@EveryPoint 2 years ago
That would be nice. However, this is still in the research phase. Eventually we expect NVIDIA to productize it. In the meantime, check out Luma Lab's beta.
@thelightherder
@thelightherder 2 years ago
I've had success with this build and have been messing around with it (here's a test: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-JbiCMN2lPAQ.html). I had some issues, mostly confounding and inconsistent, but I'll mention them all here in case it helps (I'm pretty new to this stuff, so some of it might seem obvious).

I'm using Windows 10 with an NVIDIA GeForce RTX 2070. I followed bycloudai's GitHub fork (github.com/bycloudai/instant-ngp-Windows) and video (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-kq9xlvz73Rg.html). The build went smoothly the first time, but I did have some trouble finding the exact versions of some things. I used Visual Studio 16.11.22 (not 16.11.9) and CUDA 11.7 (not 11.6). I used OpenEXR-1.3.2-cp37-cp37m-win_amd64, not OpenEXR-1.3.2-cp39-cp39-win_amd64 (that one gave me "Does not work on this platform"; I chose different versions until one worked). I'm using Python 3.9.12 (that's what python --version returns in the Anaconda Prompt, but in Command Prompt it says 3.9.6, and at one point it said 3.7; confounding).

Everything went smoothly, and I first tried my own image set of photos shot outward around a room. When testbed.exe launched, everything was extremely pixelated. The resolution can be changed by unchecking the Dynamic resolution box and sliding the Fixed resolution slider (the higher the number, the lower the resolution; things might go really slow and it will be hard to check the Dynamic resolution box again, so it's easier to slide the slider to a higher number first, then check the box). My image set, though, did not produce anything recognizable as a room; apparently this works better when looking inward at a subject. I had success with the mounted fox head example.

Using the command "python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/" creates the transforms.json file. There's some inconsistency between what bycloudai says about the aabb_scale number and what NVIDIA says. He states that a lower number, like 1, would be for people with a better GPU, and 16 with a moderate GPU. But the NVIDIA folks say: "For natural scenes where there is a background visible outside the unit cube, it is necessary to set the parameter aabb_scale in the transforms.json file to a power of 2 integer up to 128, at the outermost scope." For my YouTube example above, I used 128; this looked much better than using 2. This number, though, needs to be changed in the transforms.json text file, because only a number from 1-16 is accepted in the above command.

The Camera path tab window is hidden behind the main tab window. Reposition your 3D scene using the mouse and scroll wheel, then hit "Add from cam" to create a camera keyframe (after creating a snapshot in the main tab). To play the keyframes, slide the auto play speed to choose the playback speed, then click the camera path time slider above (so intuitive!). You'll see the playback in the little window. If you click READ, it will play back in the big window, but it seems to mess up the axis of rotation or something (not sure what READ is, but I don't suggest clicking it!).

All was going well, but when I hit Esc and tried to render out the video, I had a few problems. First, I hadn't copied the render.py script from bycloudai into my scripts folder. Once that was copied, I got an error about the pyngp module not being present (this seems to be a common problem), even though the folder was there. I removed the .dir from that folder, and the pyngp error went away. Then I got another error (this is where things get inconsistent and confounding again). Completely by accident I realized I could run the render command in the Developer Command Prompt, but not the Anaconda Prompt; there it worked perfectly. But at one point, while I had another image set training, everything froze and I had to do a hard reboot.

When I tried to run testbed.exe again, I got a "This PC cannot run this program" Windows popup. After trying several things to get it to run again, I realized the file was 0 KB. No other exe files had this problem, and a virus check came back clean. I started a new folder and redid bycloudai's compile steps. After that, everything worked perfectly, including rendering out the video file in the Anaconda Prompt and keeping the .dir on the pyngp folder (go figure). Hope that helps some folks. Oh, and check out some other AI stuff I've messed with here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-MoOtNMgFOxk.html
@thenerfguru
@thenerfguru 2 years ago
100% of this makes sense. I believe a lot of the issues you ran into arose because instant-ngp has been updated a lot since bycloudai's fork and this video. Also, you were most likely not always working inside the conda environment. I have quite a few updates going live on this channel tomorrow.
@thelightherder
@thelightherder 2 years ago
@@thenerfguru Cool. Are there tricks to getting a cleaner 3D scene? I'd love to use this to do moves around vehicles like in my test, but the image is still a bit fuzzy. In the examples I've seen in other videos, things are much crisper.
@EveryPoint
@EveryPoint 2 years ago
Start with sharp photos and the deepest depth of field possible. Also, keep the scene as evenly lit as possible. Take loops far enough away from the vehicle that you see all of it in one shot. Remember that the view in the GUI does not look nearly as sharp.
@thelightherder
@thelightherder 2 years ago
Another confounding issue: after closing the Anaconda Prompt and reopening it, the command "python scripts/colmap2nerf.py --colmap_matcher exhaustive --run_colmap --aabb_scale 16 --images data/" now gives, out of nowhere: File "scripts/colmap2nerf.py", line 19, in import cv2 ModuleNotFoundError: No module named 'cv2'. And weirdly, the command only works in the Developer Command Prompt.
@thelightherder
@thelightherder 2 years ago
@@EveryPoint Do you know what the various training options are and how they affect the final outcome? For instance, what is "Random Levels"? I notice that when it's clicked, the loss graph changes drastically (the line gets much higher). Also, do you know how to read the loss graph? I know there's a point of diminishing returns; is that what the graph indicates, and is it when the line is high or low? (Much of the time I'm seeing the line spiking up and down, completely filling the vertical space.) Is there a number of steps that, on average, should be reached? I've let it run all night and gotten around a million steps, and I'm not sure the result was any better than at a much lower number (and I have a 2070; I'm not sure whether a 3090 gets to that number in a ridiculously shorter time).