It's insane that a person with creativity, persistence, and a willingness to learn can make a high-quality short film with almost no resources. A phone with some apps, a computer with some video processing power, and some free and/or inexpensive AI tools will give you realistic composited visuals. The same equipment will give you a soundtrack and overall sound design. The tools are at our fingertips; the skill is knowing which tool to use for which task and having the imagination to combine their power to create something great. It reminds me of something deadmau5 said several years ago about creating music electronically: "This can all be done with a minimal amount of software, which is why a kid can make a dance hit on a laptop."
As a VFX artist with 22+ years of experience, I loved the line "if you shot on a green screen you can just simply key them"!! Very funny. It seems VFX is finally having (or about to have) its punk moment. Thanks for helping bring it forward!
Many people still don't get that AI will never get rid of artists, because of its own limitations. Artists who produce art are never going to die.
@@theonm.5736 I can't emphasize this clearly enough: we used to think that AI couldn't create "art". Now that is highly suspect. The same thing will happen with the idea that we need artists to create high-quality art. We are not special. We are bioorganic computers that are good at inference. That is all.
Haha yeah, I should have mentioned that! The problem is that at a mid-level of 1, the strength of the displacement effect could no longer be changed (it just scaled the sphere). The difference between maximum and minimum displacement was too weak for me; the effect was hardly noticeable from the inside of the sphere. Do you have an idea how to fix this?
@@mickmumpitz you could also use a color ramp node on the displacement map in the material to adjust the displacement map's range. Just make the white values grey.
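To sketch what that color ramp tweak does (a hypothetical linear ramp with illustrative defaults, not the exact node settings from the video): pulling the white output stop down to mid grey compresses the depth map's range, so the Displace modifier's strength slider regains useful travel.

```python
def color_ramp(value, black_pos=0.0, white_pos=1.0, black_out=0.0, white_out=0.5):
    """Linear color ramp: remap a grayscale value into a new output range.

    With white_out=0.5, pure white in the depth map becomes mid grey,
    halving the displacement range as suggested above. The parameter
    names and defaults here are illustrative, not Blender's exact ones.
    """
    # Normalize the value's position within the input span, clamp to [0, 1]
    t = (value - black_pos) / (white_pos - black_pos)
    t = max(0.0, min(1.0, t))
    # Interpolate between the two output stops
    return black_out + t * (white_out - black_out)
```

The same remap could be baked into the depth map image itself, but doing it in the material keeps the source map untouched.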
Here's a thought. What if one took an HDRI of the real filming location, whether in front of the green screen or outdoors, etc.? Then, depending on how feasible it is, make a depth map of the original video, smooth it out, and displace the keyed video plane in front of the camera, so that we effectively have a 3D version of the subject instead of one that's just normal-mapped. Then take the HDRI of the original filming location, but give its brightness a power of minus one or something negative, and make that light interact only with the displaced video plane. The hope is that, depending on how accurate the depth map is, it could subtract the lighting of the original filming location (for cases where it wasn't feasible to film outdoors on an overcast day, say, and you had more distant lighting indoors, though in practice you could light it any other helpful way). Then you add other lights and the environment to re-light the video plane as if the subject were another object in the scene. Probably way too overthought and impractical anyway, but it's a thought 😆 (I just wondered if there's a way to record LiDAR on one's phone during the video to use as the footage displacement map instead of generating it with AI.)
If you don't mind my asking, how long did it take you to make a single virtual set? I'm wondering because I'm thinking of making some content with this method, but I wonder if the workload is actually manageable with what I have in mind.
I'm so grateful to you. I've been looking for a quick way to create virtual worlds for so long. And in this video you have revealed everything in such detail. I'm shocked, thank you!
Are you open to creating a video clip for a musical remix without it costing me too much? I'm just a musician and I don't have a producer behind me. Thank you.
@@relaxmax6808 It's possible :) Please send me your remixes, I want to hear them :) And write how you imagine the clip. My email is in the channel description. Thanks.
Sometimes I would like to see the name of the software used on screen… Was it Blender? Was it DaVinci where you did the color correction? It would be helpful to see the names of the plugins used, especially when you're a newbie to this whole process… 😇
Incredible, dude. I'm kinda disappointed because the relighting trick with the actor's normal map didn't go as I expected. When I have to do that, I use Photoshop and EbSynth, but that's a painful workflow.
Dude... these videos are incredible. You're finding ways of doing things I thought wouldn't be truly possible to accomplish for another 5 to 10 years. Very cool! I have an ambitious short film I shot most of in 2019 that I've been wanting to finish, but I knew it would take a significant amount of money to create the fantastical world in the film's conclusion. This has given me hope that completing it might be feasible on a much smaller scale if I sit down and really focus on banging out in AI what would be very hard to do practically or with more traditional VFX techniques. Definitely spreading the word and keeping a close eye on your channel going forward. It's VERY EXCITING!! Thanks man, appreciate what you're doing here!
I knew we would come to this stage. This is the teaching age, where the early adopters of the technology start educating the people who are just trying to work things out. AMAZING!!! Thanks for the workflow and for tying things together.
Not so far from the classic workflow we've been doing for 20 years, only cheaper :D. The only thing that is really different is the possibility of AI image generation (at the start).
It has been an amazing learning curve, and it certainly made me watch a few times to really grasp the technique and workflow. Amazing work and a super cool explanation with a mixture of cross-working toolsets, which again makes the learning process fun. Thanks.
I think the AI revolution is about doing everything without Adobe, since they've become a "you will own nothing and be happy" type of company. That's why I use DaVinci Resolve.
AI will never replace filmmaking. The reason is complex: AI is not a standard for filmmaking (though plugins in some software may help solve filmmaking problems). AI is not professional-grade for filmmaking; you're not done by just clicking to generate one video, and you can't compete with a professional company that way.
For anybody having issues with your steps being visible and creating jagged lines and you can't figure out what it means to save in higher bit depth, check to see what kind of file you're saving from Auto1111. If you're saving JPG files, then those files have a lower bit depth than PNG files. Switch the save file type to PNG in Auto1111 settings and download the depth map as a PNG. This solved the issue for me.
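If you want to verify what was actually written to disk, a PNG's per-channel bit depth sits in its IHDR chunk; a minimal stdlib sketch (assuming a well-formed file, and written for illustration rather than taken from the video) could read it directly:

```python
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def bit_depth_from_header(header):
    """Parse the per-channel bit depth from the first 25 bytes of a PNG.

    Layout: 8-byte signature, 4-byte chunk length, b"IHDR", 4-byte width,
    4-byte height, then the bit-depth byte at offset 24. An 8-bit depth map
    has only 256 gray levels (the visible "steps" when displaced); a 16-bit
    map avoids that banding.
    """
    if header[:8] != PNG_SIGNATURE:
        raise ValueError("not a PNG file")
    return header[24]

def png_bit_depth(path):
    """Read the per-channel bit depth of a PNG file on disk."""
    with open(path, "rb") as f:
        return bit_depth_from_header(f.read(25))
```

Running `png_bit_depth("depthmap.png")` on your exported map should report 16 rather than 8 if the higher-bit-depth export worked.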
Hi Mick! Amazing tutorial! I'm trying to reproduce the workflow but I'm stuck on the depth map white value alteration. I don't know how to do this (I have Photoshop, but no skills). Also, I can't overcome the blocky appearance of the depth map. I save the images at 32-bit depth, but the same blockiness is still there.
I'm having issues with the "image plane from visible ref" step. When I click the "image plane..." button and set the shader to Principled and the distance to 1, nothing happens. Am I missing a step to get the image plane into my scene?
I have 16 GB of RAM, an RTX 3050, and a Ryzen 5 2600. I create my 3D setting in Blender and add a Displace and a Subdivision modifier, and it just tanks my system and Blender becomes unresponsive. If I try to add a second subdivision it crashes; if I try to render it crashes. 100% RAM usage, and before I turned off GPU acceleration it was using 100% of that too. This doesn't seem like a very demanding thing that's being done, and my setup seems plenty for something like this, so why is it crashing Blender?
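For context on why the RAM fills up so fast (rough back-of-envelope numbers, not a profile of this exact scene): each Catmull-Clark subdivision level roughly quadruples the face count, so the cost of one more level is exponential, not linear.

```python
def subdivided_faces(base_faces, levels):
    """Approximate face count after Catmull-Clark subdivision.

    Each level splits every quad into four, so growth is 4**levels.
    The starting count below is a made-up example, not the sphere
    from the tutorial.
    """
    return base_faces * 4 ** levels

# A sphere with ~4,000 faces at 4 subdivision levels is already
# over a million faces; one more level jumps to ~4 million, all of
# which the Displace modifier must re-evaluate on every update.
```

So the crash is less about the technique being demanding and more about one extra subdivision level quadrupling memory; dropping the viewport subdivision level (or using Simple instead of Catmull-Clark where possible) is the usual workaround.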
My thoughts align greatly with yours... I have been fantasising about using AI to produce a short film for a while now. Just yesterday I found Blockade... and then boom 💥 today I found your page... please, can we connect and interact one on one?
Great video. Can you talk a bit more about preparing the 3D environment, particularly those last steps once you have the depth map from controlnet? I’m not familiar with the flow for how to reduce the black and white values in PS or change the file’s bit depth.
Hello everyone, I'm getting this error: "Error code: 1 stdout: stderr: Traceback (most recent call last): File "", line 1, in AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check". Does anyone have an idea how to fix it?
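The error message itself names the workaround: add the flag to `COMMANDLINE_ARGS` in the launcher script, which on Windows is `webui-user.bat` in the Automatic1111 web UI folder. A sketch of that one-line change (note this makes Stable Diffusion fall back to the CPU, which is very slow; if you have an NVIDIA GPU, reinstalling a CUDA-enabled PyTorch build is the real fix):

```shell
rem webui-user.bat — skip the CUDA check so the web UI can start without GPU support
set COMMANDLINE_ARGS=--skip-torch-cuda-test
```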
Some really cool tools highlighted here. I am really interested to see where this workflow goes as the AI tools improve. I am a professional compositor, and I have to say the "re-lighting" method made me cringe a bit. The normals are way too inaccurate for that to ever work. One idea for a fix, though, would be an image-to-3D-model AI, so you can project the video back onto a generated human 3D model. You can then use that geo to catch the lighting of the scene. This would also solve the flickering problem from the normals.
I never thought the day would come. Well, yeah, I did. I guess a couple of my stories from long ago cover it. But it seemed far away when I was trying to break into the Hollywood writer scene. It looks like even a single actor can play many roles.
Hello brother, please consider covering the software options available for this process and the steps involved in converting 3D animation videos to 2D animation videos.
I am kind of scratching my head here. What's the point of all this? It seems to be subpar for lots of work. It seems like it would be much quicker, and yield better results, to just make all of this in Blender or UE5: render a world in it, or even preload one into it. Other than people or other objects that need more realistic direct interaction on screen, you don't need all those other tools.
The volume has another trick: a stage that can rotate, since the screens don't go full 360 degrees. I'd be happy to shoot with a really good, large, short-throw laser projector with UE5... you could do that for less than $6k (USD). Ryan Connolly did a few REALLY impressive tests with short-throw projectors and Unreal. I was watching another video, and a company sells a small LED wall setup for virtual production for as low as $10k (USD)... not EXACTLY available to us indies, but still a move in the right direction.
WOW! What a great video! I've been experimenting with stable diffusion for months on my VR generated worlds. This video takes me a big step further.... BIG FAT FANX! So far I have used SD to generate endless textures for creating VR brushes or to integrate Deforum clips into 360 degree clips with Davinci Resolve. But: My main interest in connecting VR and AI is the possibility of transforming a 360 degree clip (based on a custom VR art world) into something else using SD Deforum...unfortunately I haven't managed to get a decent workflow done yet...mainly because of the large amounts of data ...(?) Happy colored greetinx!
Thank you for the video tutorial, as always very informative. There is a question; if anyone knows, explain briefly. In ControlNet there are Preprocessors and Models. Among the models, for example, there is "control_canny-fp16" and there is "control_sd15_canny.pth". What are the differences between these models? The sd15 one weighs about 1.5 GB, but the fp16 is 720 MB. Is "control_sd15_canny.pth" an older model? Or what is the difference?
Cool idea. You should use the mid-level offset slider in the Displace modifier to shift the displacement from the center outwards; this way you can keep the original depth map values and don't have to compress them. Also, instead of a UV sphere, using an ico sphere helps with getting more evenly distributed subdivisions and thus more predictable displacement on the sphere mesh.
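The Displace modifier's documented math makes this suggestion concrete. A minimal sketch of the formula (plain Python for illustration, not Blender API code): each vertex moves along its normal by strength times the texture value minus the mid-level, so the mid-level picks which gray value counts as "no displacement".

```python
def displace_offset(texture_value, strength=1.0, mid_level=0.5):
    """Per-vertex offset along the normal, following the formula in
    Blender's Displace modifier: strength * (texture_value - mid_level).

    mid_level = 0.0 pushes the whole map outward, mid_level = 1.0 pulls
    it all inward, and 0.5 (the default) displaces symmetrically around
    mid grey — which is why shifting mid-level can replace compressing
    the depth map's values.
    """
    return strength * (texture_value - mid_level)
```

This also explains the earlier observation that a mid-level of 1 "just scaled the sphere": with every texture value at or below the mid-level, the whole map becomes a uniform-ish inward pull rather than relief.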
🎸🎹🎶📺🧮🎥 Is there an advanced amateur among the commenters on this video who'd like to work on an animation clip for a musician? Low budget, unfortunately, for the moment. Leave your message here to discuss in more depth. THANKS
I really love the concept of "virtual production". Bringing a mix of 3D-assets/scenes and (AI modified) real footage into a game/physics engine which serves as a studio environment. Game Engine - not Blender/Maya... ! This idea is huge. As you said: not everyone can afford a studio environment with LED screens, but VR/XR headsets can do a big chunk of the work. Flipside VR for Oculus would be a candidate to leverage VR for virtual production.
Wonderful work. As a German hobby artist interested in tracking and matchmoving, I was pretty amazed by your creativity and deep explanations. I left a sub here. Good luck, my friend.
Hi Mick, thank you for all the wonderful videos. I am trying to find the easiest way, as I am not technically savvy, to create a music video with a green screen/3D/iPhone. It looks as if this video can help. If you have any suggestions, I would greatly appreciate it.
You're the man. Thank you for sharing your workflow. I was going down this rabbit hole blind for a year, and you, sir, are the Rosetta Stone. Muchas gracias.