The sick part is, Luma is partnering with Polycam, meaning we'll get incredible photogrammetry for geometry and crazy radiance fields with reflections, transparency, roughness, etc.
Good to know that it takes 20-60 minutes to complete. I'm guessing it still requires COLMAP camera pose estimation behind the scenes; there would be a huge speedup if that bottleneck were cleared.
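For anyone wondering what that COLMAP step involves, this is roughly the standard pose-estimation pass (just a guess at what runs behind the scenes; the paths and the exhaustive matcher are placeholder choices, and it assumes the colmap CLI is installed):

```python
# Sketch of a standard COLMAP structure-from-motion pass over captured frames.
# Assumes the `colmap` command-line tool is installed; all paths are placeholders.
import subprocess
from pathlib import Path

images = Path("capture/images")        # frames pulled from the phone video (assumed layout)
work = Path("capture/colmap")
work.mkdir(parents=True, exist_ok=True)
db = work / "database.db"
sparse = work / "sparse"
sparse.mkdir(exist_ok=True)

# 1. Detect features in every frame.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db),
                "--image_path", str(images)], check=True)

# 2. Match features between all image pairs (often the slow part for long captures).
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(db)], check=True)

# 3. Incremental mapping: recovers camera poses plus a sparse point cloud,
#    which a NeRF then uses as its camera calibration.
subprocess.run(["colmap", "mapper",
                "--database_path", str(db),
                "--image_path", str(images),
                "--output_path", str(sparse)], check=True)
```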
How does it handle transparency and specularity compared to photogrammetry, and does it create textures besides diffuse (such as metallic, roughness, etc.)?
It uses neural networks, and yes, it does. You can tell if you look at a reflective material such as a TV while it's off, or a chrome ball, which Corridor Crew did in their video.
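To make the "neural networks" part a little more concrete: a NeRF-style model is queried with the viewing direction as well as the 3D position, so the color it returns can change as the camera moves, which is how reflections and speculars show up (unlike a fixed diffuse texture). A toy sketch with illustrative names only, not Luma's actual code:

```python
# Minimal NeRF-style field: density depends only on position, while color also
# depends on the viewing direction, which is what allows view-dependent effects
# like reflections. Purely illustrative.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.density_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden + 1),            # density + feature vector
        )
        self.color_net = nn.Sequential(
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz, view_dir):
        h = self.density_net(xyz)
        sigma, feat = h[..., :1], h[..., 1:]
        rgb = self.color_net(torch.cat([feat, view_dir], dim=-1))
        return rgb, sigma   # view-dependent color, view-independent density
```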
Thank you for showing us this pretty cool tech! I would totally like to try it out when it's available, and I'm positive it will be widely adopted, provided there are enough marketing/educational efforts like this! Keep up the great work, guys, and congratulations!
Get a rotating display stand (USB-powered) and place any object on it. Lock the camera on a height- and angle-adjustable tripod. Keep the camera pointed in one direction so the background stays the same for all 360° images. The same applies to a green or blue background for instant chroma-key editing of the object. Additionally: play with some studio lights pointed at the object to match the final background for video/movie scenes 👌🏻 You do less hard work and get much better, clearer 360° images. Try it 👌🏻
This is just NVIDIA's NeRF, which is open source; it can capture reflections but isn't very good for photogrammetry. Why isn't Luma releasing this? Something feels fishy here.
I exported my scans and I have a lot more of the scene than I really wanted. Have you encountered this with other scans you've exported to use in a processing application? I uploaded the OBJ straight to Sketchfab and really wish I were more experienced at removing areas I don't want.
Yes, that's normal since they currently don't provide a slicing feature; you would need to bring the asset into Blender, Maya, or a similar tool to clean it up. I did submit a feature request to add a slicing-type feature to their app, but there's no ETA yet. Great question, and thanks for watching!
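Edit: if you're comfortable with Blender scripting, a quick way to crop the export down to the part you want is something like this rough sketch (it assumes Blender 3.3+ with the built-in OBJ operators; older versions use bpy.ops.import_scene.obj, and the file paths and radius are placeholders to adjust):

```python
# Trim a Luma OBJ export down to a region of interest inside Blender.
# Run from Blender's Scripting tab. Paths and the keep radius are placeholders.
import bpy
import bmesh

# Import the exported scan (Blender 3.3+ operator; older versions: bpy.ops.import_scene.obj).
bpy.ops.wm.obj_import(filepath="/path/to/luma_export.obj")
obj = bpy.context.selected_objects[0]

# Delete every vertex farther than keep_radius from the object origin,
# which removes most of the stray background geometry in one pass.
keep_radius = 1.5   # metres (assumed scene scale; tune for your capture)
mesh = obj.data
bm = bmesh.new()
bm.from_mesh(mesh)
doomed = [v for v in bm.verts if v.co.length > keep_radius]
bmesh.ops.delete(bm, geom=doomed, context='VERTS')
bm.to_mesh(mesh)
bm.free()

# Re-export the trimmed mesh for Sketchfab or a game engine.
bpy.ops.wm.obj_export(filepath="/path/to/luma_trimmed.obj")
```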
I don't get this; older photogrammetry software seems like a better choice. Why capture backgrounds? Can it capture anything other than small objects? Can it fill the gaps in models? I mean, this was possible 8-10 years ago. The Luma software isn't out yet and it seems to have fewer features, so why not use software that's already there? Major game franchises already use photogrammetry (yes, it's not as simple as just taking some photos with your phone), but I'm curious: why bother with this toy?
As far as I can tell from having done photogrammetry and looked at videos on what "NeRF" is, NeRF seems to let you generate a really low-quality (compared to properly done photogrammetry) 3D model from just a few pictures (I'm sure it'll improve and maybe match photogrammetry at some point, but it's still far from it if this video is anything to go by). As a wild guess, maybe with NeRF you can do 5 or 10 pics to capture a complete object vs. say 70-100 for basic photogrammetry; the AI model fills in the gaps. As for the lighting Roy mentions, I've no clue. For engineering and 3D printing purposes I care about capturing high-resolution, surface-accurate meshes (which I find hard to do really well... maybe AI will help there at some point too). I'm glad this video creator showed the actual meshes. Too many photogrammetry videos show a final model with the texture/source images mapped onto it, and it looks fantastic, but it hides the mesh underneath, which looks like it's been worked over with an ugly stick and could outdo oatmeal for being coarse and lumpy. I guess for game assets it's OK, since you reduce polygon count as much as feasible anyway and only care how the object looks in the game?
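For anyone who wants to try the polygon-count reduction for game assets, a Decimate pass in Blender is the usual quick route (sketch only; the 10% ratio is an arbitrary example):

```python
# Quick polygon-count reduction in Blender: add and apply a Decimate modifier
# to the active (imported) mesh. The 0.1 ratio is an arbitrary example; pick
# whatever density your engine needs.
import bpy

obj = bpy.context.active_object                        # the imported scan
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.1                                        # keep roughly 10% of the faces
bpy.ops.object.modifier_apply(modifier=mod.name)
```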