William, you are a gem of a human being. I began my photogrammetry journey with you after years of failure. My scans are now GORGEOUS after watching your videos and following your advice. Thank you so much for the time you put into these. If you ever wonder if they make a difference, let me assure you: they do. Thank you a million times.
Nice to see more traditional photogrammetry videos coming back up, great stuff as always William! I'm not as familiar with RC as with Metashape/Photoscan, but there may be some image-culling features to help cull overshoot or low image-quality estimation issues that can also stagger scan results (or cause misalignments). Looking at the control point segment, there is an accuracy reading going on. Your 0.1-0.2px is the accuracy to the matched control points, so it may also be available at earlier photo stages too; might be worth investigating. Keep it up! Nice to see you on Patreon too, well deserved.
A useful tutorial, especially as it includes a lot of small important details from the photography process or using the software. And of course, presented in a clear and concise way. Thank you!
We would have just tired ourselves out modeling manually! It is unimaginable how *_full of help_* this thing is and how much *_valuable_* time it saves. Just awesome.
Another solid video friend! I'll be getting final models up soon as I finish marketing planning. Then I'm building a database of 10k models per year and building experiences using these assets.
RealityCapture has a very good ability to auto-detect "irrelevant" parts of scans on rotated objects and remove them when using a bright uniform background. I use a Foldio3 light box with a turntable for my small object scans, and I can just do a full rotation of pictures, flip the object over, and repeat several times if needed. When RealityCapture builds my object, it just ignores the bright white background/turntable automatically and gives me a perfect scan without any manual work, multi-scans, object masks, or multiple mesh builds.
Yeah this works great when you have a controlled environment for it! And if I had taken a better dataset of images, even outdoors, it would have aligned perfectly on the first try! I still wanted to show people how to manually connect the two because sometimes, you need to. :)
Great tutorial as always! I know first hand how frustrating getting the alignment to work with control points can be. It's been hit and miss on my end. As you brought up in your video, just really try and take as many pictures as you can. When I saw the 258 pictures for the stump, I said, "yeah, that's about right".
Great tutorial, thank you! I didn't know you could do masking like that. Seems very useful. One could also use a featureless base and background, which would be faster, but perhaps the results would not be as good?
You could add easy-to-remove geometry to the model that makes it asymmetrical, giving the software only one way to align it. Like putting needles in the sides of the log with shapes as their heads. The needles would be so small and hard to capture that they might not even show up in the scan at all. (I've done some photogrammetry as well as used Artec scanners.)
You're not wrong! But that really isn't always possible depending on what you have available, and the object you're trying to scan. This method makes it accessible to anyone.
Hey William, I just want to thank you for all the content you provide for the community! Another great video. :D They are awesome, and your manner and personality make them easy to take in. One thing I really like is that you come at it from a VFX viewpoint. So helpful.

One thing I'm having trouble understanding, and hoping you might know, is to do with HDRIs. Can we not light a scene with a single HDRI and get proper (strong/sharp) shadows like in offline renderers? It seems like it is only possible in conjunction with a directional light. Second question, if you read this: do you know if there will be a solution coming for capturing/exporting a shadow pass from Movie Render Queue? I have read some of your comments on other videos highlighting how getting them from Composure is clunky. I personally tried it for an upcoming project and definitely agree with you on that! I'm super excited to see where UE5 is going and so desperately want to do some traditional set-extension-type work, but these HDRI and shadow limitations are holding us back. Thanks again anyway, and I look forward to all your vids. Sorry my comment is off topic.
So this might be a bit dumb, and I've only tested it in Metashape, but I've had an easier time getting both halves to align automatically by just turning the images of the bottom half upside down in Lightroom. Otherwise, reconstruction assumes both parts have the same side facing up. Thanks for the masking tip, BTW; I keep forgetting that feature can be useful for alignment.
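For anyone who wants to try this trick without Lightroom, here is a minimal sketch that batch-rotates a folder of photos 180° using the Pillow library. The folder names `bottom_half` and `flipped` are hypothetical placeholders; point them at your own export folders.

```python
from pathlib import Path
from PIL import Image  # Pillow (pip install Pillow)

def flip_180(src_dir: str, dst_dir: str) -> int:
    """Rotate every JPEG in src_dir by 180 degrees and save it to dst_dir."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(exist_ok=True)
    count = 0
    for photo in sorted(src.glob("*.jpg")):
        with Image.open(photo) as im:
            # rotate(180) turns the image upside down, matching the
            # orientation of the top-half photos before import.
            im.rotate(180).save(dst / photo.name, quality=95)
        count += 1
    return count

# Hypothetical folder names; guard so the sketch is safe to run as-is.
if Path("bottom_half").exists():
    print(f"Flipped {flip_180('bottom_half', 'flipped')} photos")
```

Note that this only normalizes orientation; it doesn't change the pixel content, so alignment quality still depends on overlap and texture.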
Thank you very much for all your advice. I am looking for the best technical solution for large buildings in photogrammetry; perhaps a drone with an 8K camera?
Awesome video, as always! Thank you, Sir. Do you have any tips for scanning soft objects? I'm working on a project right now where I'm going to need to scan a fight belt. I haven't gotten to see it in person, so I'm not sure exactly how "floppy" it is. I was thinking about trying to suspend it with string and tape.
This is sooooo helpful!! I know this is a long shot, but any tips for how to apply this to a full sized car? The only thing I can think of is a 2 post car lift, but I have no idea how I could shoot the areas that would be covered by the supports under the car. Thanks for a great video!
You've probably mentioned this along the way, but I'm curious about using a 35mm (slightly wide) focal length for capture as opposed to something like 50mm (assuming a full-frame sensor). Also, with the 35mm FL, you have a better native depth of field. But what aperture do you find works best at that focal length? For smaller objects, do you feel the 35mm FL is the way to go too? Sorry for all the questions!
I'm shooting with a 35mm in this video. The aperture depends on your subject; use whatever gets the most of your subject in focus. I avoid going past f/12, though, because then you introduce diffraction, which deteriorates your image as a whole. I was shooting at f/10 for this object. Again, there are no magical settings that work for everything; you need to figure out for yourself what works best for a given object!
This is awesome. I would love to see a video on Reality Capture using huge dense point clouds way over the 40-million limit, captured with a laser scanner. My scan is 713 million points, so simplifying it down that much makes it look terrible. I am struggling so badly right now haha.
You just want to use the clipping box to visualize how it looks on smaller segments. The 40M limit is, I think, due to VRAM limitations. It gets heavy real fast.
@@WilliamFaucher If I set a clipping box via a reconstruction region (which I already did to see the quality), would it only export the area that is selected in the clipping box? My concern is that my scan is of a large store with aisles, and I'm not sure how I would set control points to realign something like this. I'm also worried that maybe the person (a freelance contractor) who did this scan should have taken photos to project onto the lidar scan data rather than relying solely on the lidar scan's photo detail. My end goal is to bring this scan of the store into Unreal Engine to use on a virtual production shoot haha.
@@tomrey5 The clipping box doesn't affect the output, it's purely for viewing purposes, to be able to show the actual 3d model across a smaller area. Do you actually need control points though? Did the entire scan align?
@@WilliamFaucher Ahh ok, that's good to know. I have been wondering about that for like a day now. The scan is aligned. In my head I originally thought I would have to export each small clipping-box area as a separate component and realign it. I guess I could just try to export it based on that information. I am just wondering how the LSPs, basically the laser JPEGs, would export as UDIMs into Unreal. I guess I will give it a go and find out haha.
Awesome video William!! As always. Just one quick question: would this work for asymmetrical objects like a shoe (one that doesn't deform) or, let's say, a collectible figure? Or does it have to be practically symmetrical? Thanks!
One time, on a project with 1000+ photos, little overlap, and no ability to take new pictures, I had 163 control points! It took several days. I also think it's important to describe the control point weight parameter. To access it, select a control point and set the target Weight in the 1Ds window. If the control point sits on a high-contrast part and you are quite sure about its placement, you can set the weight to 100 instead of the default 10. This has a strong effect when the pictures contain shiny or reflective surfaces. In those cases RealityCapture can't correctly calculate the camera positions, so you can help it with your high-weight control points.
Hey guys, I'm trying to work on recreating a whole room...it's quite a small room in fact but would I essentially be able to use this process to recreate a whole room in 3D and then export it into Unity to make it into an assetbundle? Thanks
Is it possible to do this with sunglasses, using a black turntable and beige background? Or will I need some of that spray foam that evaporates, and re create the textures later? Not sure how to deal with the glass elements 😅
I had a thought while watching this video, do you think it’d be at all possible to just flip the bottom half of the images in lightroom before exporting and doing them all as one reality capture? Only problem I can think of would be the lighting
Hello, my drone camera only has 12 megapixels. If I want better results for 3D models, do I need more megapixels? I see a 48-megapixel camera for only 100 dollars; do you think it would be better with that camera? Please help me.
I have a question. The polarizer is already composed of two polarizing layers, and rotating them to cross can effectively remove the highlights. Why do you also need to polarize the light source?
It can remove the highlights from *polarized* light, but not from unpolarized light. That is why we polarize the light with the filter on the light itself.
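For anyone curious about the math behind this reply: Malus's law says a polarizing filter at angle θ to polarized light passes I₀·cos²θ of its intensity, while unpolarized light is a mix of all polarization angles. A tiny sketch (my own illustration, not from the video) shows why crossing the filters only kills the polarized component:

```python
import math

def transmitted(i0: float, theta_deg: float) -> float:
    """Malus's law: intensity passed by an analyzer at theta_deg to the polarization."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Polarized glare hitting a crossed (90 degree) filter is blocked almost entirely:
print(f"crossed: {transmitted(1.0, 90):.2e}")

# Unpolarized light averages over all angles; no filter rotation beats 50%:
avg = sum(transmitted(1.0, t) for t in range(180)) / 180
print(f"unpolarized average: {avg:.3f}")
```

That residual 50% is why the light itself gets a polarizing gel: it turns the whole illumination into the polarized case, which the lens filter can then cancel.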
Hi William, I can't figure out why I don't get the "Export Depth Maps & Masks" dialog box when trying to export Masks from Reality Capture. When I click the Depths & Masks button, I get the first dialog box asking me where to save the files, however, that's it; there is no second dialog box asking me to adjust my settings before the actual export. Any help you can give me would be greatly appreciated. Thanks, Mathew.
I was following along here and ran into trouble generating the depth and masks. It would fail that step in about 3 seconds. I even ran it over from the start a few times and it still failed. What solved that issue was generating a normal mesh. The preview mode wasn't playing well with generating masks.
Merge Components only seems to duplicate the component I'm currently viewing. Do I need to select two at the same time somewhere, or will the scan simply not work?
I've been watching through your videos and they are all really great resources! I have mostly worked with Polycam and done a ton of post cleanup and optimization but RealityCapture seems like a much better route for professional level assets. Could you put a sticker or maybe a pen mark to find control points easily and then remove the mark in post in the texture? Thanks for the great content!
Hi, I'm having a problem with the alignment of my photos in Reality Capture. I get two different orientations, which causes a double alignment in the same model. Can you please help me? I can send you a screenshot of my object if you want. Thank you for your help.
Hi, I am in the process of learning Reality Capture. I was having a similar problem about a week ago. I was using two 36" softboxes on opposite sides to light my statue (the subject). When I brought the pictures into Reality Capture, the model was facing in two opposite directions, and no amount of control points was going to fix that. My only successful scan so far was outside in the shade. I believe the first one failed because there were two light sources, which confused the software, and the one outside in the shade worked because there was only one light source. Next, I am going to try one softbox inside with a 4x4 bounce opposite it. I am hoping this one works. Best regards, Mathew.
It would reduce your effort quite a lot if you simply added a few sharp cube shapes to your shots. Depending on scale, you might use Legos, a Rubik's Cube, a Post-it pad, etc. Legos in particular are nice since you can make them the size you need, they are cheap so you can easily have many of them strewn about, and you can get fancy with the colors to help you keep track of orientation when zoomed in for control point placement.
Well, yes and no. You'd need to have those tracking markers on the object you're scanning itself, because we mask out whatever isn't the object. If said trackers are on the object, then it means you need to do some cleanup in post, which kind of defeats the purpose of "reducing the effort". If your dataset is shot correctly, then you don't need control points at all, as both sides should align on the first go. Don't get me wrong, tracking markers are great, but not always applicable depending on what you're scanning.
You... can. But you have to be sure the person doesn't move. At all. Even moving 1mm can throw off the alignment and the results will be a bit messy. You'll get the rough shape of the face for sure, but it will need a lot of cleanup in Zbrush.
Pixel matching across sets of images isn't too difficult. Why can't the software use the control point as a hint, then procure its own best match, for us to accept?
RealityCapture is free to use, just has a licensing system in place, like most apps :) Not to mention it is extremely affordable. The log in this video would have cost $4 to export.