Capturing Reality
Welcome to the official Capturing Reality channel!

Capturing Reality photogrammetry solutions enable you to create ultra-realistic 3D assets and environments from photos and/or laser scans.

RealityCapture is a state-of-the-art photogrammetry software and the fastest solution for a wide range of industries. Create virtual reality scenes, textured 3D meshes, orthographic projections, geo-referenced maps and much more from images and/or laser scans.

RealityScan is a free-to-download mobile application that enables you to create high-fidelity 3D models using just your phone or tablet. Available for both Android and iOS.

www.capturingreality.com
RealityCapture 1.4
2:31
3 months ago
RealityCapture 1.3
0:57
9 months ago
RealityCapture Community Spotlight | Sizzle Reel
1:03
10 months ago
How to Use RealityScan
1:38
1 year ago
How To Use RealityScan
4:39
1 year ago
RealityCapture 1.2.1 Tarasque
1:05
1 year ago
RealityCapture to Unreal Engine 5
25:26
2 years ago
Introducing: RealityScan
0:47
2 years ago
Showreel 2022 | RealityCapture
1:43
2 years ago
RealityCapture tutorial: Divider Script
12:23
2 years ago
Comments
@toyhunter9746 22 hours ago
Hi, in the Mesh Model section I can see a "Close Holes" button, but I can't find an "Open Holes" option. Where can I find it? Or is there maybe a "close holes" setting hidden somewhere in the Reconstruction settings? I can't find it.
@CapturingReality 14 hours ago
Hi, there is no such tool as "open holes".
@toyhunter9746 9 hours ago
@@CapturingReality Thank you for the fast answer! Sad news, though. And there is no chance of finding something like this in the settings (under Reconstruction or elsewhere), right? PS: the old version did this by default, and users then had to close/fill the holes themselves. I think a feature like open/close holes could add a bit more flexibility to this great software. Again, thank you for your answers and communication!
@CapturingReality 7 hours ago
@@toyhunter9746 What exactly do you mean? There weren't any changes regarding this tool, so it should work the same way as before. What would "open holes" be for? To do that you can just filter a selected part of the model. Or am I missing something?
@toyhunter9746 3 hours ago
@@CapturingReality I need to reconstruct a 3D model from photos for HD remodelling (an HQ production pipeline): a building with a huge number of polygons (100,000,000) and a lot of mirror surfaces, for example. RealityCapture creates the model and automatically closes all the holes (exactly where those mirror surfaces are), and I have to clean up all of this "closed hole" geometry because I need to remodel it into a good-quality model (in 3ds Max). I do this cleanup every time I get a task like this, spending a lot of work hours. If I could tell RC to reconstruct and not close holes, it would save me a lot of time (the geometry in the holes interferes with modelling in the viewports). Sorry about my English.
@DarienLingstuyl 1 day ago
Oh, I was so excited when I saw there was an Unreal photogrammetry software "for free", and then I looked at this tutorial, saw the "pay to render the scan" part and got disappointed. I hate it when people or companies say "free" but then there is a payment after you download and start using it.
@CapturingReality 14 hours ago
Hi Darien, it is free. This tutorial is older and covers the old PPI licensing model, which is no longer relevant.
@Vihar-g3i 1 day ago
Do we need both geometry and texture layers, or is just the geometry layer fine?
@CapturingReality 14 hours ago
Hi, geometry is enough if you don't want to work with layers.
@josephrevell3078 3 days ago
G'day Jakub, another quick question as I am revisiting this series: what can cross sections be utilised for? I'm creating a portfolio of potential work I can do, though I am struggling to think where cross sections can be used for data analysis. Again, great series!
@CapturingReality 2 days ago
Hi Joseph, it can be used anywhere you need to extract profiles. For example, it is used in surveying.
@josephrevell3078 2 days ago
@@CapturingReality Alright, I'm sure it can be used in surveying, but I'm not sure how the profiles are actually utilised. No worries, I appreciate the response.
@darrinholroyd8203 6 days ago
Hi, can you put the object on a lazy Susan?
@CapturingReality 5 days ago
Hi, you can. Whether you will need to use masks then depends on the background.
@darrinholroyd8203 5 days ago
@@CapturingReality OK, so what does the background need to be for a lazy Susan to work?
@CapturingReality 5 days ago
@@darrinholroyd8203 Basically featureless (like black).
@josephrevell3078 9 days ago
G'day Jakub, fantastic series so far! Quick question, I'm not sure if you have run into this issue before. Rarely, the filter selection/override selection tool will not process properly. I will lasso tris, click filter selection or override selection (AI Classify tool), and a new model will be created but nothing has changed. I cannot seem to replicate this on purpose, and clearing the cache doesn't help; it seems saving and rebooting RC is the only solution I have found. Have you run into this glitch before?
@CapturingReality 7 days ago
Hi Joseph, I am sorry, but we are not able to reproduce that. I suppose the only solution for that will be an application reset.
@josephrevell3078 7 days ago
@@CapturingReality No stress, I appreciate the reply nonetheless 👍🏻
@mn04147 10 days ago
Thank you for the great tutorial! Is the head scan dataset missing now? I cannot find it; the head scan dataset used in this tutorial is hard to find…
@CapturingReality 7 days ago
Hi, that dataset is no longer available. You can try other free datasets.
@mn04147 7 days ago
@@CapturingReality Okay, thank you for replying! But can you recommend any free head scan dataset?
@CapturingReality 7 days ago
@@mn04147 All available datasets are here: www.capturingreality.com/sample-datasets For a similar case you can use the Christmas statue dataset. It is not necessary to use only a head dataset to test this.
@ryanwhitehead360 11 days ago
The latest Blender version has moved and changed tool names. An updated tutorial would be fantastic. :)
@inceemreince 23 days ago
Hello, I watched all four of your videos; thank you for them. By following your fourth video I created a model as an OBJ file. The model size is about 6 GB. Is it a problem that it is this big? I am trying to open it in the CloudCompare software, but CloudCompare runs for a very long time and does not respond. What should I do to reduce the size of the created model? Probably when the model size is reduced, the model precision will decrease. I am waiting for your suggestions. Good luck.
@CapturingReality 23 days ago
Hi, I suppose it depends on your needs, but such a model could be too big. There is a tool called the Simplify Tool, so you can use that to reduce the size of your model. The precision could decrease, but it depends on how much you simplify your model. In general, the decrease won't be that significant (it also depends on which precision you are considering).
@inceemreince 23 days ago
@@CapturingReality Hello, thank you for your quick response to my question. I tried what you said right away. I want to ask another question. The number of triangles forming the model decreased with the Simplify tool. I immediately exported a new OBJ from the newly created model. Its size decreased to 1.2 GB, but when I opened it in the CloudCompare software, all the objects on the model appeared only in white. Where could I be making a mistake? Or what should I do?
@CapturingReality 23 days ago
@@inceemreince Is your model textured? If so, was it exported with textures? I tested this and a textured model is colored in CloudCompare.
@inceemreince 23 days ago
@@CapturingReality I realized I didn't understand the subject; please excuse me. I will ask another question. I exported the model I created with the process steps you explained in the 3rd and 4th videos in OBJ format. The 6 GB model opened in CloudCompare, albeit slowly, and it was in color. I ran the Simplify process, as you said, on the model under Component in the 1D view in RealityCapture. I selected the new model with fewer triangles and exported it as an OBJ file, and it appeared white in CloudCompare. For the 3D model to appear in color in CloudCompare, do I need to apply an additional texturing step to the new model created by the Simplify process? Or do I need to change a setting during the export of the new model to OBJ?
@CapturingReality 22 days ago
@@inceemreince Basically both. You can use texture transfer during simplification, you can reproject the texture onto the simplified model, or you can create a new texture. Also, you need to set the export to include the texture during mesh export.
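[Editor's note: for readers who want to see what mesh decimation does to an exported model outside of RealityCapture, here is a minimal sketch using the open-source Open3D library. The file names and triangle target are placeholders, and this is not RealityCapture's Simplify Tool: unlike the texture-transfer option described above, this external step does not preserve the texture.]

```python
# A minimal sketch (not RealityCapture's Simplify Tool): reduce an exported OBJ
# to a target triangle count with Open3D. "model.obj" is a placeholder path.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.obj")
print(f"Input triangles: {len(mesh.triangles)}")

# Quadric decimation keeps the overall shape while dropping the triangle count.
# Note: this does not carry the texture over, so re-texturing would still be
# needed -- inside RealityCapture, texture transfer handles that for you.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=1_000_000)
print(f"Output triangles: {len(simplified.triangles)}")

o3d.io.write_triangle_mesh("model_simplified.obj", simplified)
```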
@alirezahasani263 25 days ago
I can't believe it's free. Thanks so much, it's a really useful app.
@D43vil 26 days ago
A user interface is like a joke: if you have to explain it, it's bad. I love the power of this program and I've done some cool stuff with it... but man, this UI is painful.
@michal5869 28 days ago
There are alignment points underneath the object as well; how did you do that? There are no tips about it in the video.
@CapturingReality 27 days ago
The object was captured from both sides and the masking workflow was used. The workflow starts at ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-tw6wNNEbH_M.html
@nerosetsfire 28 days ago
You 1000% lose me when you start to change the colors without ever mentioning why!!!
@khanhminh7110 29 days ago
Great video. Can you guide me on how to create a flight plan for the Phantom 4? Thank you very much.
@CapturingReality 29 days ago
You just need to download an app for that. I suppose there are several suitable ones.
@Canihelpyouz 1 month ago
How do you use the UE5 controls to raise the view up?
@rishabhpurohit2663 1 month ago
What alignment settings are you using?
@CapturingReality 1 month ago
I suppose the pre-defined settings were used.
@faridmuradov9623 1 month ago
Hello... a very nice training video. I am doing everything as you said here, but when I combine all the pictures and masks and run the computation, unfortunately it models the sides separately. How can I fix this problem?
@CapturingReality 1 month ago
Hi, do you mean that the sides of your object are not aligned into one component? What is the overlap between your images? How many rows have you used to capture your object from both sides? Are the masks applied in your process? If not, do you have the proper naming convention?
@faridmuradov9623 1 month ago
@@CapturingReality The number of pictures is at least 100 photos for each side. I do everything as in the video. When I load the images with the masks, I press the Tab key and check that everything is normal. When I ask it to detect the photos, it computes the sides separately. It does not combine them and display them as one model.
@CapturingReality 1 month ago
@@faridmuradov9623 Not just the number of images is important, but also the way they are captured and what the overlap is between them. What kind of object have you captured? It would be good to see your project somehow... What do you mean by "When I ask it to detect the photos"? Is there something that looks similar on both captured sides of the object? You can also try to merge the components using control points.
@kraney195 1 month ago
This is really cool. Where can I read the system requirements, alongside the gear requirements for capturing photos that the software can easily detect? Wondering if my Android phone could do it as well as a thousand-dollar camera.
@CapturingReality 1 month ago
You can find some basic info here: www.capturingreality.com/realitycapture#:~:text=RealityCapture%20DataSheet Other useful information can be found here: dev.epicgames.com/community/capturing-reality/learning The results won't be the same for such cameras.
@tahanagahi3852 1 month ago
That was perfect! Thank you, thank you :)
@yzhou-s6e 1 month ago
What causes the "A depth-map is corrupted or missing" error during reconstruction?
@CapturingReality 1 month ago
Hi, please check this post: forums.unrealengine.com/t/a-depth-map-for-filtering-is-corrupted/1698231
@yzhou-s6e 1 month ago
Hello, I used these tools in my project, but some problems arose during the reconstruction process and caused the program to crash. When I restart this script, some blocks become empty.
@CapturingReality 1 month ago
What kind of crash have you received? What settings did you use?
@yzhou-s6e 1 month ago
@@CapturingReality Lack of memory caused the problem, but I've fixed it. It's hard to export and merge them now. I will use them in my Cesium project.
@yzhou-s6e 1 month ago
I want to know if the reconstruction task will crash due to insufficient computer memory when the reconstructed scene is large.
@CapturingReality 1 month ago
It depends on your computer's parameters, the number and quality of the images, and the reconstruction settings used.
@Rick-m1b 1 month ago
When I export the point cloud, following the video, I don't get a .las file; I get a DWG TrueView Layer State. When I open it in CloudCompare I get a thin straight line of colored dots. Is there a setting I missed?
@ontheground_l22 1 month ago
Awesome project! Is it a feature provided by RealityCapture to visualize the scene as if you are traveling along a road?
@CapturingReality 1 month ago
Not exactly. But you can create a walk-through video from the computed model in RealityCapture.
@federicofelici9703 1 month ago
I have been trying to use it and it does not work very well.
@CapturingReality 1 month ago
Hi, we are sorry for that. What kind of object have you scanned?
@federicofelici9703 1 month ago
@@CapturingReality A sculpture of a head, 15 cm tall, white colour, with a Xiaomi Redmi Note 11.
@federicofelici9703 1 month ago
@@CapturingReality Almost all the photos go red and it is almost impossible to get a point cloud.
@CapturingReality 1 month ago
@@federicofelici9703 As it is just a white colour, it is possible that there are not enough image features to align the images. Have you tried some other object with better surface texture?
@nerosetsfire 1 month ago
Why do you need to add colors to the views at the start?
@CapturingReality 1 month ago
To make it easier to tell the different images apart when viewing several at once. But it is not a necessary step.
@darcy4143 1 month ago
Excellent tutorial, clear and concise in its delivery. Thank you for making it easy to understand the process.
@rodrigomartinelli741 1 month ago
I wonder: do you batch-rename the images, or just do it by hand? Also, if I want to use fewer images for texturing, should the name contain numbers, or would that "confuse" the program? I'm barely starting, but I love this.
@CapturingReality 1 month ago
Hi, it depends on the number of images, but batch renaming is more convenient. I suppose you just need to follow the naming conventions, and also disable the images that won't be used for texturing.
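[Editor's note: as an illustration of the batch renaming mentioned above, here is a minimal sketch in Python. The "photos" folder and the "scan_####.jpg" pattern are placeholders, not a convention required by RealityCapture; adapt the pattern to the naming scheme you actually need.]

```python
# A minimal sketch of one way to batch-rename a folder of photos to a
# consistent, zero-padded sequence before import. The folder path and the
# "scan_####.jpg" pattern are placeholders.
from pathlib import Path

folder = Path("photos")
images = sorted(folder.glob("*.jpg"))  # sort by name to keep the capture order

for index, image in enumerate(images, start=1):
    new_name = f"scan_{index:04d}.jpg"   # e.g. scan_0001.jpg, scan_0002.jpg, ...
    image.rename(folder / new_name)
    print(f"{image.name} -> {new_name}")
```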
@altavision8 1 month ago
Hello, excellent results from the 3D capture setup used.
@inceemreince 1 month ago
At the 9th minute and 19th second of the video, you import the CSV file containing the image coordinates of the images associated with the points. I followed the steps you showed. At the end I get the warning: "Operation warning. The file contains 42 images that are not in the current scene. Please check the console for the full list of images. [error: 18002]". I don't see the additional points in the rest of the video. What should I do?
@CapturingReality 1 month ago
Hi, this is just a warning message saying that there are some images in the CSV file which are not included in your project. You can continue with the process as normal.
@inceemreince 1 month ago
@@CapturingReality Thank you for taking my question into consideration and answering it.
@timerkiert4731 1 month ago
Also, it would be good to have a similar video showing how this model can be exported into Revit.
@CapturingReality 1 month ago
I suppose Revit works better with point clouds, so all you need to do is create a model and export a point cloud from it.
@timerkiert4731 1 month ago
Great video. This was just the right length and got me to where I needed to be. You advised a photo every 10 degrees, but what is the rule of thumb for capturing buildings? A photo every metre?
@CapturingReality 1 month ago
For a building you need to watch the overlap between images. As a basic rule you can follow the 1:4 rule (for every 4 m of distance from the building, move 1 m to the side between shots), which should keep about 75% overlap between the images.
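[Editor's note: a rough sanity check of that 1:4 rule, assuming an idealized pinhole camera. The ~53° horizontal field of view is an assumption (roughly a "normal" lens); real overlap also depends on the sensor, lens and capture geometry.]

```python
# Side overlap between two shots of a flat facade, pinhole-camera assumption.
import math

def side_overlap(distance_m, step_m, hfov_deg):
    """Fraction of the image footprint shared by two shots taken step_m apart."""
    footprint = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return 1 - step_m / footprint

# 1:4 rule: 4 m from the facade, move 1 m sideways between shots.
print(f"{side_overlap(distance_m=4.0, step_m=1.0, hfov_deg=53.13):.0%}")  # ~75%
```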
@paddysproductions 1 month ago
Hello. Would you mind sharing how you project textures onto her and what your setup is like? I have been trying a few variations, but I would like to know your hardware setup. Thanks!
@CapturingReality 1 month ago
Hi, for the texture projection there were data projectors displaying the texture image.
@dainjah 1 month ago
About lens distortions: doesn't RealityCapture do that for you? I've seen another video mentioning that RC knows which lens you use and automatically "fixes" the images during alignment. I'm confused.
@CapturingReality 1 month ago
Which part of the video are you referring to? Yes, RealityCapture computes the distortion parameters during alignment, but you need to choose which distortion model will be used there.
@dainjah 1 month ago
@@CapturingReality The "Adobe Lightroom" part at the beginning. I adjust only highlights/shadows/exposure and colour temperature and leave the rest to RC. Also: I've seen you are using .tiff files; are the results better than converting RAW files to high-quality JPGs? Does it increase processing time?
@CapturingReality 1 month ago
@@dainjah Can you write a timestamp? I am not able to find it. You can use various formats. I suppose TIFFs should be better than JPGs. In that case you can also use the layers workflow: JPGs will be used for alignment and meshing, TIFFs for texturing.
@dainjah 1 month ago
@@CapturingReality 12:40. I just wondered if you used TIFFs because they give you a denser point cloud (no compression). Anyway, I need to do some testing, even with .dng files. I hope my PC can handle it :)
@CapturingReality 1 month ago
@@dainjah OK, so I suppose you can compute the lens distortion parameters and use those in the alignment process, but it is not recommended to undistort the images before using them in RealityCapture; you can leave that to it. Regarding the format, it depends on your needs, but TIFFs could take more time to process. The resulting mesh shouldn't be very different. It also depends on the way the images were compressed.
@dainjah 1 month ago
I want to see a "making of" video :O
@dainjah 1 month ago
Very informative, thank you!! Most photogrammetry tutorials only show you the RC settings you should use; for me, the info on how to capture the photos is essential!!!
@dainjah 1 month ago
Blender, RC, UE5 = my favourite trio
@zickiea172 1 month ago
I went to your website and, because it didn't recognize my email as a "business account", it wouldn't let me request a demo... so I'm moving on to the next option... a shame, it looked viable.
@albertszczesniak8578 2 months ago
I have a question about how I can rotate an entire point cloud. Every photogrammetry model I create is skewed.
@wallacewainhouse8714 1 month ago
Scene 3d > tools > Scene Alignment > Set Ground Plane
@markl1478 2 months ago
The lasso selection penetrates through to the other side of the model, often forcing me to select on one side and then go to the other side to deselect what I don't want. It's a shame.
@xsherlockpl 2 months ago
I tried to follow that tutorial, but it uses too many shortcuts; the model magically appears in UE. What format did you use for the export, and how exactly did you import it into UE? It is a hard task. Can you show how you did that? I tried an FBX export for a model of 30M tris and the file was about 490 MB. That does not import properly into UE on a machine with 12 GB of VRAM and 64 GB of RAM; UE just crashes, so there must be some trick here.
@CapturingReality 1 month ago
Hi, you can check this tutorial, which is more instructional: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-kRD0rgCnOWQ.html You can also try to simplify your model and then import it into UE.
@zacharyh5027 2 months ago
Is this any better than one of those small LiDAR scanners?
@thevitabrevis 2 months ago
Any talk about just being able to feed in VIDEO and have the software take the stills it needs? This would be much more efficient. AI to the rescue!!
@wallacewainhouse8714 2 months ago
You can do this in the software: you just select the interval the images are taken at and RC will make the stills for you. It can be useful in some very specific situations, but the results are far inferior to stills, due to 4K being lower resolution than most stills plus video compression artefacts. It is equally efficient to use interval shooting mode with stills, and there is no compromise on quality.
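[Editor's note: if you prefer to extract the frames yourself before import, rather than using the in-app interval option described above, a minimal sketch with OpenCV might look like this. The video path and the one-frame-per-second interval are placeholders, and the quality caveats about compressed video still apply.]

```python
# A minimal sketch: pull roughly one still per second from a video with OpenCV.
# "walkaround.mp4" and the 1-second interval are placeholders.
import cv2

video = cv2.VideoCapture("walkaround.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is not reported
step = int(round(fps * 1.0))               # one frame per second of footage

frame_index = 0
saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % step == 0:
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} stills")
```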
@TheMotoManiac 2 months ago
I love the federal government and I am so thankful to the heroes that browse the web protecting me from stuff ❤❤
@joelmulder 2 months ago
I beg of you, hire a UI/UX designer.
@64kinemastudio2 2 months ago
They should hire you then 😅😅😅😅😅.
@tuqe 2 months ago
Looks like very powerful software, but you really, really need to get someone in who understands UX. So many hidden interaction modes, so many toolbars and weirdly named options.
@peterallely5417 2 months ago
Yup, agree. I use RealityCapture and know my way around, but it's not particularly intuitive.
@piyushmhatre2867 2 months ago
I can do this for free in Maya.
@sanjus3514 2 months ago
Please share the images for learning purposes.
@CapturingReality 2 months ago
Hi, all available datasets can be found here: www.capturingreality.com/sample-datasets The principles of the processing are the same.
@sanjus3514 2 months ago
@@CapturingReality Thank you