"Yeah... but how does it work?" Remember all that "math" you believed you would never need in "real life"? Well, here it is on full display. That... that's how it works.
So basically what you're saying is: the software unwraps the 3D model like a piece of origami, transforming its x-y-z coordinates into just x and y. The now-2D image made from that unwrapped model can be overlaid on another 2D image, and the software uses the x and y coordinates to determine how to stitch that 2D image onto the one created from the 3D model. The final 2D image produced by stitching the two together is then folded back onto the 3D model.
I still don't understand the concept of texture space. If we unwrap a 3D model, what are UV coordinates? Is texture space just a square? So this transparent layer on top of the actual texture is the texture space, containing a flat representation of the 3D model?
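Roughly, yes: texture space is the unit square, and each vertex's UV pair says where in that square (and therefore where in the texture image) the vertex lands after unwrapping. A minimal sketch of the idea, with an assumed tiny 4x4 "texture" of plain numbers and a hypothetical `sample_texture` helper (nearest-neighbour lookup, not any renderer's actual code):

```python
def sample_texture(texture, u, v):
    """Map a (u, v) pair in the unit square to a pixel in the texture.

    u and v are the 2D coordinates a vertex gets after unwrapping;
    the texture is a 2D grid of pixel values (here, plain numbers).
    """
    height = len(texture)
    width = len(texture[0])
    # Clamp to [0, 1] so UVs outside the square still hit a valid pixel.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    # Scale the unit square up to pixel indices (nearest-neighbour).
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 4x4 "texture": each pixel is just a number so the lookup is visible.
texture = [
    [ 0,  1,  2,  3],
    [ 4,  5,  6,  7],
    [ 8,  9, 10, 11],
    [12, 13, 14, 15],
]

print(sample_texture(texture, 0.0, 0.0))  # corner of the square -> pixel 0
print(sample_texture(texture, 1.0, 1.0))  # opposite corner -> pixel 15
print(sample_texture(texture, 0.5, 0.5))  # centre of the square -> pixel 10
```

So the "transparent layer" you see in the UV editor is just the model's faces drawn flat inside this square, showing which pixels of the texture each face will pick up.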
I have been searching for a very long time for how to both download and import UV maps into Blender. Why is this so hard to find? Does anyone out there have the answer?
We'll add simulations to the to-do list! For cinematics, we already have a free video on the topic here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-foPviNon_jI.html