Alex Cox is an FAA Part 107 Certified Drone Pilot and Civil Engineer interested in drone photography, videography, 3D mapping & modeling, drone surveying, photogrammetry, NeRFs (neural radiance fields), and Gaussian splatting.
This was the largest videogrammetry test I’ve ever tried. It was really hot outside, which shortened the battery life. The Gaussian splat came out better. You can see the difference in processing in some of my other videos. As far as your question though… what exactly do you wanna know lol? Elaborate
maps.app.goo.gl/P9uRCmWg6kAa1VgM6?g_st=com.google.maps.preview.copy Huge area to cover with an Air 2S. The videogrammetry didn’t turn out great, but the Gaussian splat did
@@StevePaulSounds I go through three different processing platforms in the video and compare each. The first was a Gaussian splat with Luma AI, then photogrammetry processed in Poly.cam, and last I processed the photogrammetry in RealityCapture. Did that answer your question? Sorry if I'm misunderstanding.
@@coxdroneservices No, it's been removed, but there are still remnants of it, and this is the upgraded version of the Rock-afire Explosion that was at Odyssey Fun World ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-XGPOT9Dpz2M.htmlsi=tVBn0IyDxLuRVVvD It's a shame it never took off
Pretty sharp! The point cloud looks like there may be some gaps in photo coverage, and the program filled them in nicely. What's your image count for this one?
Yea I just ran a test using only my iPhone and it came out with better results. lumalabs.ai/embed/4f93f22d-0b72-4001-ab5d-37d037a1a74a?mode=sparkles&background=%23ffffff&color=%23000000&showTitle=true&loadBg=true&logoPosition=bottom-left&infoPosition=bottom-right&cinematicVideo=undefined&showMenu=false
@@coxdroneservices That looks better, but the result is clearly different from other iPhone or 360 cameras. I have sample splats here from an Insta360 and it's way ahead of the Ricoh cam ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mD0oBE9LJTQ.htmlsi=XgMf63zO6gbBzfeD Do you think the only problem in your shot was the lighting?
Wow that looks amazing! Yea, maybe the lighting, but it might also be the camera. How did you do the capture? Was it a video or a zip file of images (and how many images)?
@@coxdroneservices Oh, it wasn't mine. I just found it on YouTube by searching "360 camera gaussian splatting". I also found a video testing an Insta360 camera on Luma AI and it seems perfect ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-kV0OAvlXShk.htmlsi=LQ_AuxFQuZ_PdFLl
@@AlexeiDanilin This one was 129 shots using a Ricoh Theta SC2. I’m reading and seeing that the Thetas just produce poor-quality images and therefore poor Gaussian splats. I’d really like to run the same test with an Insta360 to compare quality. I’ve also had better results with a simple video of the same space shot with my iPhone 13 Pro Max
Could you share something about your input images? How many images did you use? Did you train your 3DGS on your own PC or in the cloud with Luma, etc.? Thanks :)
I used about 40 total photos. They were all 360-degree photos taken about 3 feet apart around the first floor of my house. I made two laps around the floor (one at full height and one at low height), then took 2 extra 360 photos on top of the floating wall for extra coverage. I compressed all the images into a zip file and uploaded it to Luma with the equirectangular camera type selected. I'll do some more experiments with the 360 camera and see if I can get a more detailed model. This one still had a lot of blur and artifacts. I'm wondering whether increasing the image count to about 100 with closer spacing would give a much higher level of detail
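For anyone wanting to try the same workflow, this is roughly how I bundle the captures before uploading. It's only a minimal sketch assuming the 360 shots are exported as equirectangular JPGs in one folder (the folder and file names are placeholders), and the equirectangular camera type still gets picked in the Luma UI after the zip is uploaded.

```python
# Minimal sketch: bundle equirectangular 360 photos into a zip for upload.
# Folder and file names are placeholders for this example.
import zipfile
from pathlib import Path

source_dir = Path("first_floor_360_photos")     # hypothetical folder of equirectangular JPGs
archive = Path("first_floor_360_photos.zip")

count = 0
with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_STORED) as zf:
    for photo in sorted(source_dir.glob("*.jpg")):
        # Store without recompressing so the JPGs keep their original quality.
        zf.write(photo, arcname=photo.name)
        count += 1

print(f"Zipped {count} photos into {archive}")
```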
@@coxdroneservices 40 images is not very much for that much detail. You can easily go up to 100 to 200, even 300 images. And since you're using Luma, you don't have to worry about resources ;) I am training on my own PC and don't have the best graphics card (a 1070 Ti), but I'm still able to get decent outputs. I like what you did there and would love to see you try another round with your 360 camera. :)
Thanks so much, I am actually planning to do so in the next couple days. This was my first test using 360 photos, so I thought I might be able to get away with fewer of them. I may try and see if I can use a combination of 360 photos and standard photos at some point, but I think a test with a couple hundred 360 photos could yield some very cool/detailed results. Thanks for the feedback!
Did you attach an iPhone to the drone and use Polycam for the recording? If so, how do you check in real time whether the scan is capturing everything? It's amazing that the entire area was scanned without any missing parts
No, but that would be an interesting experiment. I captured around 500 images with the DJI Mini 2 SE and uploaded them to Polycam for photogrammetry processing. I captured the photos during manual flight and tried to approximate an 80% overlap between each image to get the most accurate detail
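In case it helps anyone planning a similar flight, here's the rough math for spacing shots to hit a target overlap. It's only a sketch, and the altitude and field-of-view numbers are assumptions for illustration, not exact Air 2S or Mini 2 SE specs.

```python
# Rough spacing between photo stations for a target forward overlap.
# The footprint depends on altitude and the camera's field of view;
# the numbers below are assumptions for illustration, not exact DJI specs.
import math

altitude_m = 40.0          # height above the subject (assumed)
horizontal_fov_deg = 75.0  # approximate horizontal field of view (assumed)
target_overlap = 0.80      # 80% overlap between consecutive images

# Ground footprint width covered by one image at this altitude.
footprint_m = 2 * altitude_m * math.tan(math.radians(horizontal_fov_deg) / 2)

# Move only the non-overlapping fraction of the footprint between shots.
spacing_m = footprint_m * (1 - target_overlap)

print(f"Footprint ≈ {footprint_m:.1f} m, shot spacing ≈ {spacing_m:.1f} m for 80% overlap")
```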
Thanks so much! Drones and 3D modeling are my jam. Just kinda documenting the things I’m learning, software that works well, and techniques for capturing 3D models and cool drone footage
Yes you can. To create an orthomosaic in RealityCapture, you would typically follow these steps:
1. Capture and Import Photos: Take overlapping photos of your area of interest and import them into RealityCapture.
2. Align Photos: The software aligns the photos based on common features and generates a sparse point cloud.
3. Create a Dense Point Cloud: Process the aligned images to create a detailed point cloud.
4. Generate the 3D Model (Mesh): From the dense point cloud, a 3D mesh is created, which represents the surface of the photographed area.
5. Texture the Mesh: Apply textures to the 3D model for realistic detail.
6. Create the Orthomosaic: Project the textured 3D model onto a 2D plane to generate the orthomosaic. This process corrects the images to be true to scale and geographically accurate.
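If it's useful, here's a quick back-of-the-envelope check for the capture in step 1: estimating the ground sample distance (GSD), which is what ultimately limits how true to scale the orthomosaic can be. Just a sketch; the sensor, lens, and resolution numbers are ballpark assumptions for a 1-inch-sensor drone camera, not official specs.

```python
# Back-of-the-envelope ground sample distance (GSD) estimate.
# Sensor width, focal length, and image width are ballpark assumptions
# for a 1-inch-sensor drone camera, not official specs.
sensor_width_mm = 13.2
focal_length_mm = 8.8
image_width_px = 5472
altitude_m = 50.0

# GSD (meters per pixel) = (sensor width * altitude) / (focal length * image width)
gsd_m = (sensor_width_mm / 1000) * altitude_m / ((focal_length_mm / 1000) * image_width_px)

print(f"GSD ≈ {gsd_m * 100:.2f} cm/px at {altitude_m:.0f} m altitude")
```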
RC is the clear winner. It's logical too, unless you really want to zoom in close, but then all of the techniques would break down, not only the photogrammetry. Luma/Poly/NeRF/GS is better for visualizing soft materials and thin lines. Photogrammetry will turn anything that is supposed to be detailed and soft into hard edges. I rarely use photogrammetry anymore, but buildings and large areas where there is no need for small detailed object recording are definitely the one situation where I would still pick RC
Great synopsis! Thanks for the comment. Do you work in the reality capture space? I work for a company in that space that has traditionally relied on terrestrial scanners, and I'm trying to build out a drone division, or at least add drones and drone capture as a tool in our toolbelt. Any advice on hardware or software for those use cases would be greatly appreciated, as I'm evaluating a lot of options at the moment for what would work best for my company. Thanks again for the insightful comment!
I have a buddy who uses DJI hardware and software to make nice flight paths and automate recording parameters. I myself am more of a VR creator, so I do a lot of reality capturing for my projects, but it's much more artistic and experimental stuff
If I were you I'd get a drone and a professional controller to make automated flight recordings. It will reduce the headache of wondering whether you got a shot of every meter you planned to capture. I'd do one top-down pass and a pass or two at 45°
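To make that concrete, here's a toy sketch of how a simple lawnmower grid for the top-down pass could be laid out, plus the photo interval that matches a given spacing and speed. No drone SDK involved, and the site dimensions, spacing, and speed are made-up example values.

```python
# Toy lawnmower-grid waypoint layout for a top-down (nadir) pass.
# No drone SDK involved; dimensions, spacing, and speed are example values.
site_width_m = 120.0    # east-west extent of the area (example)
site_length_m = 80.0    # north-south extent of the area (example)
line_spacing_m = 12.0   # distance between flight lines (from the overlap math)
photo_spacing_m = 12.0  # distance between shots along a line
ground_speed_ms = 4.0   # planned flight speed (example)

waypoints = []
x = 0.0
heading_south_to_north = True
while x <= site_width_m:
    # Alternate direction on each line so the path snakes back and forth.
    start, end = (0.0, site_length_m) if heading_south_to_north else (site_length_m, 0.0)
    waypoints.append((x, start))
    waypoints.append((x, end))
    heading_south_to_north = not heading_south_to_north
    x += line_spacing_m

# Photo capture interval that matches the desired spacing at this speed.
interval_s = photo_spacing_m / ground_speed_ms

print(f"{len(waypoints)} waypoints, shoot every {interval_s:.1f} s at {ground_speed_ms} m/s")
```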
I did all of these with the DJI Air 2S. I wanted to use Litchi for some semi-autonomous photogrammetry, then I manually flew in closer for finishing shots to fill in some holes. Unfortunately the Mini 2 SE I have has the SDK issue, so I can’t use third-party software like Litchi with it
I used a drone (DJI Air 2S) to capture about 700 photos and then uploaded them to the Polycam web app. Polycam's photogrammetry processing is really impressive.
@@coxdroneservices Starting to wish I didn’t get the Mini 4 Pro due to the lack of SDK support, but it still gets really good shots, and the Polycam web app is pretty good.
Yea the lack of SDK is unfortunate. Using the Litchi app to plan waypoint missions and designate photo capture intervals is a gamechanger for photogrammetry. Does the lack of SDK also affect platforms like DroneDeploy or Skybrowse?
Yea sure. I’ve got a ton of models of it up there. Some can be saved. Some of the others I was thinking of cleaning up/editing and selling on Sketchfab.
Welcome to our New UE5 Plugin: "UEGaussianSplatting: 3D Gaussian Splatting Rendering Feature For UE5" ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-4xTEyz9bx5E.html
A huge number lol. I’m not a local. I was in town on business and flying recreationally, but the amount of new construction was wild. The city is amazing and really growing rapidly
Good evening! Congratulations, the work is very beautiful. Please clear something up for me: was this photogrammetry done directly from the video taken by the drone, or from photographs captured from the video? Another question: was it done on a PC or on a phone? The result was excellent for the number of photos.
Thanks so much! This was actually done from a video that was processed with poly.cam’s photogrammetry software. They now allow photogrammetry processing for both photos and videos, and the same for Gaussian splats. I wanted to test the various processing methods on the same raw data. I think poly.cam works best for photogrammetry with pictures, while Luma AI had the best NeRF & Gaussian splat for video & pictures
It came out pretty well. You can check out a video of it in this short: ru-vid.comk7pbNUUlGTo
And here's a link to the actual model on Polycam: poly.cam/capture/f25ac4ae-a310-472f-9831-63f9ff4f100d
Polycam Gaussian splat link: poly.cam/capture/c8f0c370-7cfe-4c55-8dc2-63d4877074ce
Luma AI link: lumalabs.ai/embed/48c11cdb-0247-4585-ad5e-0a9bfed149df?mode=sparkles&background=%23ffffff&color=%23000000&showTitle=true&loadBg=true&logoPosition=bottom-left&infoPosition=bottom-right&cinematicVideo=undefined&showMenu=true
I was contracted by the owner through Droners.io to conduct this photography/videography. I secured LAANC authorization from the FAA and secured the site from civilian access with cones and caution tape. All FAA Part 107 rules and regulations were followed to the letter.
I am a licensed professional and I strive to follow every rule with the strictest adherence and provide the highest quality to my clients. I appreciate your concern. All rules and regulations were followed, and the restaurant's ownership contracted me to create this footage for marketing purposes.