Yes. Photogrammetry is more of a brute-force approach and has been around for quite a while. It will stick around, because NeRF's way of augmenting the data is not always desired and can work against accuracy in some cases. Both are brilliant technical ideas and implementations. @wintorartour
The main interesting thing about NeRFs is the ability to capture view-dependent lighting (reflections). And then Luma Labs goes "look, you can export NeRFs to your favorite 3D software like Blender and Unreal!" The trick? They never mention that all reflection information is gone once you do that. A waste of time.
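For anyone wondering why the reflections disappear on export: in a NeRF the predicted colour depends on the viewing direction as well as the 3D position, while a baked mesh texture stores a single colour per surface point. A minimal sketch of that view-dependent colour head (not Luma's actual network, just a toy PyTorch illustration):

import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    # Toy NeRF-style MLP: colour depends on BOTH the 3D position and the view direction.
    def __init__(self, hidden=64):
        super().__init__()
        self.geometry = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden + 1),             # 1 density value + `hidden` features
        )
        self.colour = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(),  # features + view direction
            nn.Linear(hidden, 3), nn.Sigmoid(),        # RGB in [0, 1]
        )

    def forward(self, xyz, view_dir):
        out = self.geometry(xyz)
        density, features = out[..., :1], out[..., 1:]
        rgb = self.colour(torch.cat([features, view_dir], dim=-1))
        return rgb, density

model = TinyRadianceField()
point = torch.zeros(1, 3)                                # one surface point
print(model(point, torch.tensor([[0.0, 0.0, 1.0]]))[0])  # colour seen from the front
print(model(point, torch.tensor([[1.0, 0.0, 0.0]]))[0])  # different colour from the side
# A mesh texture exported from this can only keep one colour per point, so the
# view-dependent part (the reflections) is lost.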
After watching a few videos I think I'm finally understanding what NeRFs are. From the videos I have seen, it's within Luma that you get real-time "3D models", like in Unreal, with lighting and reflections.
It's sort of surprising how far behind real 3D geometry generation is compared with raster generation from diffusion models. This doesn't feel like a particularly hard problem compared with, say, MJ 5. I suspect the lack of progress in 3D generation is because most AI researchers don't understand 3D geometry well enough to have smart ideas for ML model design, and specifically struggle to come up with a good loss-function framework (probably the most critical part). So they've probably only tried a million or so model ideas, rather than the billions of attempts at 2D that led to modern diffusion techniques through much more trial and error.
Both, actually, nowadays. Polycam would be a little easier to light and change later, but Luma recently announced a new version which makes it easier to include in VR projects.
Thanks for the comment. With NeRF you don't actually measure anything directly. It's basically a trained AI model whose aim is to show a 3D representation based on an image sequence as input, as the video explains.
@wintorartour What NeRF measures is the radiance field, which is a continuous model of which parts of the volume emit or absorb light. The MLP acts as a function approximator, i.e. a convenient way to represent and fit the model, by minimizing the photometric error between the observed training images and the predicted images generated by the radiance field that the MLP represents. NB 1) a voxel grid, a depth map, a radiance field, or a collection of smoothed particles are all models which measure the geometry and properties of a space. NB 2) AI is an application of mathematics. There is no magic (even if it may feel otherwise ;) ).
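To make the "measuring" concrete: fitting boils down to minimizing that photometric error between the pixel colours the radiance field predicts and the pixels in the training photos. A minimal sketch of one training step, assuming a hypothetical render_rays(model, rays) volume renderer (PyTorch used just for illustration):

import torch

def training_step(model, optimizer, rays, observed_rgb, render_rays):
    # render_rays is a hypothetical volume renderer: it samples points along each ray,
    # queries the radiance field (the MLP) and composites predicted pixel colours.
    predicted_rgb = render_rays(model, rays)
    # Photometric error: how far the predicted pixels are from the training photos.
    loss = ((predicted_rgb - observed_rgb) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()      # fit the MLP so the radiance field explains the photos
    optimizer.step()
    return loss.item()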
The Meta Quest Pro, Pico and Vive headsets are virtual reality headsets and DO NOT have any augmented reality features. Augmented reality devices are devices in which you can see and interact with the real world, in such a way that if you were walking around your home you could look at a cupboard or fridge and see what you have in them, or, in the case of the MINI AR glasses, they are linked to cameras around the car so that you can see through your blind spots, and they can also be used for navigation when driving. Virtual reality headsets, on the other hand, are enclosed with no access to the real world: you cannot see anything of the real world around you and have to interact within a virtual space. As such, you could not walk around with these on (except in a virtual space), as you wouldn't be able to see stairs or anything on the floor, which could cause you to have an accident. And I am sure that if you were driving around with these on you wouldn't get far without ending up in court, as you wouldn't be able to see anything around you to look out for dangers.
Hey Barry, thanks for the comment. Actually, these devices do have AR functionality and are also used for the cases you mention. Indeed, VR is enclosed, but these devices now also use their cameras (passthrough) for AR functionality.
To create an AR tour, you use the website web.wintor.com to upload your content and manage tours, and you use the 'Wintor AR tours' app from the app stores to create the actual tour at a location. Hope that helps, and thanks for the comment. There is a free trial if you want to try it (just go to web.wintor.com).
@wintorartour I think that kaedim3D is a scam: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-wB_NudGKKxI.html ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-dnwDPLLzzEU.html
I'm sorry if this is dumb, but does this mean we can use NeRF-created images and volumes to create a more detailed 3D model using the classic photogrammetry method? I'm sure we will see more creative uses of it once it's open for people to use, but other than that and potential social media usage, NeRF's area of utilization seems pretty narrow compared to photogrammetry.
I was thinking that too, and I believe the export tools that produce a GLB, for example, are already doing something like that. However, photogrammetry algorithms don't handle reflections and the like very well, so it might not work as expected. One great use case is using NeRFs to create video shots!
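If you want to try the NeRF-then-photogrammetry route yourself, the idea would be to render a set of clean, evenly spaced views from the trained NeRF and feed them into a classic pipeline such as COLMAP. A rough sketch, assuming hypothetical render_view(model, pose) and orbit_poses(n) helpers from whatever NeRF tool you use:

import subprocess
from pathlib import Path

import imageio.v2 as imageio
import numpy as np

def export_views_for_photogrammetry(model, render_view, orbit_poses, out_dir="nerf_views"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i, pose in enumerate(orbit_poses(120)):        # 120 poses circling the object
        image = render_view(model, pose)               # assumed to return an HxWx3 uint8 image
        imageio.imwrite(out / f"view_{i:04d}.png", np.asarray(image))
    workspace = Path("colmap_workspace")
    workspace.mkdir(exist_ok=True)
    # Hand the rendered views to a classic photogrammetry pipeline (COLMAP here).
    subprocess.run([
        "colmap", "automatic_reconstructor",
        "--workspace_path", str(workspace),
        "--image_path", str(out),
    ], check=True)

The catch is exactly the one mentioned above: reflective or shiny surfaces will still confuse the feature matching, whether the photos are real or rendered from a NeRF.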
I used the following apps: Polycam, LUMA (invite only) and Wintor AR tours. Polycam and LUMA may only be available on iPhone, I'm not sure. Wintor AR Tours is available on any device. Have a great day!
Interesting that they are not available on Android. I found this online, maybe that will help you: all3dp.com/2/best-3d-scanner-app-iphone-android-photogrammetry/
Great to hear Merijn. It’s definitely worth checking out and might be of huge help in some projects. Glad you liked the explanation. Definitely subscribe if you don’t want to miss other topics about AR and VR.
One day machine learning will become so good that you could feed it all the photos on the internet and get an accurate 3D model of the entire planet with all the people on it (at least those with Facebook/Instagram images...).