This is extremely impressive. You have found a method to put a subject into any scene and make them look native. The applications are much bigger for video than for photos, since changing the lighting of a photo can also be circumvented by just generating a new AI photo or using generative fill, etc. For video, though, this is a game changer.
This is a bit beyond what I'm comfortable attempting, but it's refreshing to see a young tutorial creator on YouTube that a) really knows his shit and b) is innovating and experimenting, not regurgitating the same basic info.
This blew my mind after 20 years of traditional compositing. It's not about replacing Hollywood's high-end VFX, but about democratizing access to quality visuals for indie creators and YouTube producers like myself.
That's just plain false lol. We already use them as part of bigger, more complex comps. The main difference is that we train these models on custom datasets, sometimes even per show...
@@Daniel_Bettega_52 I don't think so. Someone who uses the new AI tools will do it better than people who are not familiar with compositing.
Your videos are so exciting and very easy to follow! If you are a VFX artist or supervisor with a team that is evolving frequently, THESE are the solutions for lower budgets and shorter turnarounds... Thanks for all of your hard work Micky! You are trusted and valued! 🫡
Bro, I'm working on a horror series rn. Was going to try and create my own workflow like this after I was done. You saved me so much time. Thank you, G!
There's a SAM2 model which natively takes video input. It can make temporally stable masks with much better control (it can take positive and negative points as well as box prompts as input), and it's much faster too. I'd recommend you check it out!
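For anyone curious what that looks like in practice, here is a rough sketch of the SAM2 video predictor API from the `facebookresearch/sam2` repo. Treat it as pseudocode: it needs a GPU, a downloaded checkpoint, and a frames directory, and the exact paths/config names below are placeholders.

```
import torch
from sam2.build_sam import build_sam2_video_predictor

# Placeholder paths -- substitute your own checkpoint and config.
predictor = build_sam2_video_predictor("<model_cfg.yaml>", "<sam2_checkpoint.pt>")

with torch.inference_mode():
    # Point at a directory of extracted video frames.
    state = predictor.init_state(video_path="<frames_dir>")

    # One click on the subject in frame 0:
    # label 1 = positive point (include), label 0 = negative point (exclude).
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=[[410, 230]], labels=[1],
    )

    # Propagate the mask through the whole clip -- this is what gives
    # the temporally stable masks mentioned above.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0)  # threshold logits to binary masks
        # ...save or composite each per-frame mask here...
```

Adding extra positive/negative clicks on later frames refines the track without restarting propagation.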
I would love to see an image version of this. A lot of the current image compositing workflows lack things like edge fixing and keeping the person in the image looking like they do.
@@eccentricballad9039 Boris is better for manual rotoscoping, but still not very good at automatically rotoing out subjects perfectly. MatteAssistML is still quite jittery.
Man, this ComfyUI thing looks wild! It reminds me a lot of BMD's Fusion, but like from another planet. The node-based UI feels familiar, but all its various functions and associated technical jargon are completely incomprehensible to me. Might be fun to learn, but I already spend way too much time plugging things into other things "to see what happens" in other programs xD
Such a wonderful video! 👏 Would you please consider updating the simple workflow to take a subject image (PNG/JPG) and a background (which you already show how it works) instead of only video? I've tried it myself via "Load Image" and bypassing some video-related nodes, but something always breaks along the way. (It works fine with video.) It would be great to have it as a workflow, if you'd be willing to share of course. Thanks ahead 🙏
Node-based interfaces always look intimidating until you actually just look at each node individually (and learn what it's doing) and take a step back to see the logic of the system.
Thank you very much. Honestly, I didn't even expect to be able to get this workflow up and running so quickly. The only problem is a blurred face. Could you please tell me which node I should use to fix this?
Your videos are amazing. I would like to ask you a question: is it possible to generate the model sheet of the character without using the prompt? For example, if I draw a character myself in front view, can it create the rest of the model sheet?
Thank you for sharing. Unfortunately the build is not working for me. I get an error in the KSampler node and have not been able to solve it yet. No solution found on the forums.
Think about how intelligent this gentleman is... Now think about how our politicians can't hit the mute button on the zoom call. Seems a little backwards huh?
I am entirely new to AI... where do I have to start to create video content through AI? What is the very basic workflow to get to image generation and consistent character creation for video content?
So far one of the best tutorials on this. I need to try it in the cloud now... BUT I have a question: what if I wanted to animate the entire video into a cartoon but keep one object (like a table) realistic? How do I do this? Can it be done with this workflow?
Hello and thank you very much for this tutorial. I have just installed everything and set up my files, but the background is not rendering in the output. Is there a limit on image size? I am using a still image for the background and it keeps showing an error on the image input of the RepeatImageBatch node.
Assuming this is all being done locally? What is the maximum number of frames that can be rendered, and will you get consistent results if you have to do this in batches?