
AnimateDiff Tutorial: Turn Videos to A.I Animation | IPAdapter x ComfyUI 

MDMZ
309K subscribers
173K views

The first 500 people to use my link will get a 1 month free trial of Skillshare skl.sh/mdmz01241
Transform your videos into anything you can imagine.
⚙️Setting Files:
bit.ly/3vJNaOZ
ComfyUI: bit.ly/3LM1hbN
ComfyUI Manager: git clone github.com/ltdrdata/ComfyUI-M...
Guide: bit.ly/3ubx4gw
Important! use this base workflow if you're having issues with IPAdapter: bit.ly/3ITvSlQ
Models:
ProtoVision XL: bit.ly/3U8ps9l
DreamShaper XL: bit.ly/3Sa9h8W
CounterfeitXL: bit.ly/3OkptmL
SDXL VAE: bit.ly/4b6NXtv
IPAdapter Plus: bit.ly/3vLiI78
Image Encoder: bit.ly/42aaDoC
Controlnet model: bit.ly/42bmGC2
HotshotXL: bit.ly/3HxJRx0
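For reference, the downloads above typically land in the following ComfyUI subfolders. This is a sketch assuming the standard ComfyUI layout; individual node packs may expect slightly different paths (HotshotXL models, for example, often go inside the AnimateDiff custom node's own model folder):

```shell
# Typical destination folders for the files linked above (assumed layout).
mkdir -p ComfyUI/models/checkpoints   # ProtoVision XL, DreamShaper XL, CounterfeitXL
mkdir -p ComfyUI/models/vae           # SDXL VAE
mkdir -p ComfyUI/models/ipadapter     # IPAdapter Plus
mkdir -p ComfyUI/models/clip_vision   # Image Encoder
mkdir -p ComfyUI/models/controlnet    # ControlNet model
mkdir -p ComfyUI/custom_nodes         # node packs installed via git / ComfyUI Manager
```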
How to find prompts: • How to Write A Prompt ?
➕Positive Prompt: ((masterpiece, best quality)), Origami young man, folding sculpture, wearing green origami shirt, blue origami jeans, white origami shoes, depth of field, detailed, sharp, 8k resolution, very detailed, cinematic lighting, trending on artstation, hyperdetailed
➖Negative Prompt: (bad quality, Worst quality), NSFW, nude, text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated
🔗 Software & Plugins:
Topaz Video AI: bit.ly/3t04Otl
©️ Credits:
Stock videos from ‪@PexelsPhotos‬
Dancing man video: www.pexels.com/video/energeti...
⏲ Chapters:
0:00 Intro
0:24 Install ComfyUI
1:31 Base Workflow
1:54 Install missing nodes
2:22 Models
4:23 Settings
10:36 Animation outputs
Support me on Patreon:
bit.ly/2MW56A1
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
Tiktok: / justmdmz
Twitter: / justmdmz
Facebook: / medmehrez.bss
Website: medmehrez.com/
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.

Category: Film

Published: 14 Jun 2024

Comments: 700
@MDMZ · 4 months ago
Need help? Check out our Discord channel: discord.gg/ztHKU2bgsD I've added some solutions and tips, and the community is also very helpful, so don't be shy to ask for help 😉
@bhabasankardagar5810 · 3 months ago
Thanks, it works for me with select_every_nth: 15 and 480-1080, but it is taking too long. My PC config is 20GB RAM, Core i3, and Win 11. Let me know if there is any way to speed it up; I want to create a 20-second video. Can I upload the image segment in "Keyframe IPAdapter - Load Image" to speed up the process?
@bhabasankardagar5810 · 3 months ago
It's taking too long to create videos, so I'm considering generating animated sequence images instead. I'll merge these sequence images in Premiere Pro and create the video myself.
@MDMZ · 3 months ago
@bhabasankardagar5810 the process relies heavily on your GPU's VRAM
@MDMZ · 3 months ago
@bhabasankardagar5810 that's a good workaround
@devonandersson300 · 3 months ago
@MDMZ Can confirm as of 12/03 that following your tutorial steps works perfectly. I was not a ComfyUI user (InvokeAI), but I needed a solution that can work with video. I will try to combine it with the new 4-step SDXL Lightning or JuggernautXL Lightning models. Seems a PERFECT fit for good quality vs speed IF it works.
@MaximusProxi · 4 months ago
Thanks for the video! Most creators forget to show which models they got and where to put them in the ComfyUI folder. This step-by-step video helped a lot.
@MDMZ · 4 months ago
glad it was helpful
@GuyXotic · 4 months ago
Your tutorials are among the best, and even beginners can become almost like pros by watching your videos 🙌🏻
@MDMZ · 4 months ago
Happy to hear that!
@randy2d · 3 months ago
Wow, this is a great tutorial. It's taking its sweet time on my PC LOL, but nonetheless it is actually working! I've seen so many vid2vid ComfyUI videos where everyone is jumping from left to right with no coherency and no explanation of what the models and nodes do. Thanks for being super clear about those things. You single-handedly just made this whole thing easy!
@MDMZ · 3 months ago
that's really great to hear. Thank you 🙏
@Aryannnnnn217 · 1 month ago
Will it work on an RTX 3060?
@randy2d · 1 month ago
@Aryannnnnn217 well, mine is a 3060, but the 12-gig version. I also improved a few bits of the workflow and it is actually really good now
@Aryannnnnn217 · 1 month ago
@randy2d mine is also the 12-gig version, but I just shifted to ComfyUI. In A1111 my 3060 couldn't do ControlNet and hires fix with SDXL models, so I'm wondering if this workflow will work on my system? Thanks for the reply
@randy2d · 1 month ago
@Aryannnnnn217 I never ran videos at a resolution higher than 960x512, because with the upscaler I use I can just set the size I want to upscale to and then send it to Video Combine to export
@SinnerSam_ · 2 days ago
I want to try this, thanks man, love your channel
@VirtueArts · 4 months ago
Excellent explanation! Kudos bro... You deserve millions of subscribers!
@MDMZ · 4 months ago
Thank you so much 😀
@SENAC0R · 4 months ago
Incredible man!! Thanks, I was waiting for this! I prefer ComfyUI to Deforum. You are the best! 💪
@MDMZ · 4 months ago
Glad you like it!
@Fixerdp · 4 months ago
Best lesson ever. It's a pleasure to listen to you :)
@MDMZ · 4 months ago
glad you liked the video
@handlenumber707 · 1 month ago
It's interesting, but without even paying much attention I can tell it takes a level of involvement comparable with traditional methods. Until so-called generative AI offers simplistic prompting, nothing changes. You'll end up having to pay experts to use these systems. I see no benefit to anyone, apart perhaps from those owning servers, sifting through endless input codes, searching for some kind of pay-dirt. It's a hard ask. A.I. systems (a fad) will NOT replace traditional techniques.
@MDMZ · 1 month ago
@handlenumber707 You didn't just write "you'll end up having to pay experts to use it" and "I see no benefit to anyone" in the same sentence 😅
@handlenumber707 · 1 month ago
@MDMZ Wasn't the whole idea to avoid paying people to do things?
@AD-jl5mv · 3 months ago
Such a useful video, thanks heaps for putting this together.
@MDMZ · 3 months ago
Glad it was helpful!
@sameeramin822 · 4 months ago
You are amazing ya akhi 😎 as always awesome and creative videos 👍🏻❤️
@chouaibphenix1082 · 4 months ago
Bro, it didn't work for me; no worries, can you explain to me on FB how you did it? Because I did the same thing and it wouldn't run at first
@MDMZ · 4 months ago
🙏
@HeyItChien · 1 month ago
Thanks for sharing, looking forward to trying it out!
@jemyt6466 · 3 months ago
you're the best sir
@TheJPinder · 4 months ago
Thanks for the clarity.
@MDMZ · 4 months ago
Glad it was helpful!
@FCCEO · 3 months ago
4:20 after downloading everything. Thank you for the amazing tutorial!!
@MDMZ · 3 months ago
Glad it helped!
@KodandocomFaria · 4 months ago
One idea to improve the video background: try removing the background first, then apply a specific node only for background generation to avoid flickering. If you see flickering on hands, you can use a technique of creating a bounding box that stylizes only the hands, and use any hand-detailer tool (a LoRA, or a node).
@MDMZ · 4 months ago
great tips!
@bdnwfantaziedreams · 4 months ago
very nice, and I love using AI for animated features. Great sharing
@MDMZ · 4 months ago
Thank you! Cheers!
@florentraffray1073 · 3 months ago
the future of art... downloading the newest hard-to-find files. Thanks for the tut, it was helpful. I don't know how I would have figured out all those steps
@handlenumber707 · 1 month ago
Think of it as modern-day gold mining, nothing more than just another scheme to get people to hand over their ideas for free.
@AntoninaAndrushko · 2 months ago
Thank you very-very much! ❤️👍👍
@MDMZ · 2 months ago
You're welcome 😊
@omarnawar5497 · 4 months ago
thanks MDMZ for your effort :)
@Paulie1232 · 4 months ago
Good information, thanks 😊
@Hidden-Story-Storage · 4 months ago
Buckle up, creators! This tutorial featuring ComfyUI IPAdapter + HotshotXL is your ticket to a whole new dimension of video wizardry. Transform your content with the power of A.I., and let the magic unfold! 🌟🤖
@handlenumber707 · 1 month ago
Let me get this right. You take a video of someone moving around. Then you upload the video, paying money to use this service. Then you type in some prompts, and you get an animated character back? You can do this for free on your own computer, without the middleman, and without sharing your ideas. It's called motion capture.
@yuyuanwang4785 · 3 months ago
Thanks so much for the tutorial! Just wondering how to keep the character and background consistent and a bit more stable? I kept similar settings multiple times but the resulting human and background still change a lot.
@MDMZ · 3 months ago
you can try playing with the main settings I mentioned. There's no exact formula, so try different combinations; I've tried to explain what each of those settings does in the video
@ehsankholghi · 4 months ago
Thanks for your great tutorial. Is there any limitation on frame rendering? I used your workflow on a 32-second video at about 30 fps (1000 PNGs) and got this error after an hour of render time on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
@MDMZ · 3 months ago
probably running out of VRAM, make sure your GPU is not being overused by other apps during the process
@madcatlady · 4 months ago
Seeing this awesome stuff will eventually wear down my distaste for node-based systems; I'm still clinging to A1111 for now. As an aside, for this same reason I cannot abandon Carrara 3D for Blender 😛 it even uses its lovely shader tree in Octane render, keeping those awful nodes hidden unless I choose to torture myself with bricks and spaghetti
@MDMZ · 4 months ago
I was scared of it at first too; just like any other tool, ComfyUI gets easier the more you use it
@soyeltama · 4 months ago
Do you need a video card for this? Or can it run on Google Colab? Thank you
@MDMZ · 4 months ago
for this method you need a video card; if you don't have a decent one, you can run it on the cloud (for a fee): ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-XPRXhnrmzzs.html
@AeroGamingArc · 3 months ago
hi, thanks for the tutorial, it was a great help for a beginner like me. How can I add my own custom SDXL LoRA to the prompts here? Like, where do I connect them? Thanks in advance
@MDMZ · 3 months ago
This would be a separate tutorial on its own, did you try finding other videos on YouTube?
@amarboldbazarsuren · 24 days ago
thanks for helping
@nitinburli7814 · 4 months ago
Hi! Thanks for the tutorial. One question: which ControlNet model are you using? Depth, OpenPose, etc... thanks.
@MDMZ · 4 months ago
hi, it's depth, as mentioned in the video 😉
@SarovokTheFallen · 4 days ago
Fixes for the current version - June 2024: IPAdapter won't work from the 'ComfyUI_IPAdapter_plus' folder => go to the 'ComfyUI\models' folder, add a folder named 'IPAdapter', and place the IPAdapter Plus model there. (Now ComfyUI can find the IPAdapter.) Next, you'll still have two red nodes in the video reference and keyframe sections. Replace those with 'IPAdapter Advanced' nodes (double-click to search for nodes), link the lines to these new nodes, then remove the old ones. Make sure all connections are made like on the broken nodes.
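The folder part of the fix above can be sketched as shell commands. The paths are assumed relative to the ComfyUI install root, and the old model location is an assumption about where the file was first placed; adjust both to match your setup:

```shell
# Hedged sketch of the fix described in the comment above: newer versions of
# the IPAdapter plus node pack look for models in ComfyUI/models/IPAdapter
# rather than inside the custom node's own folder.
mkdir -p ComfyUI/models/IPAdapter
# Move any IPAdapter weights out of the assumed old location, if present;
# the glob silently does nothing when no files match.
mv ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models/*.safetensors \
   ComfyUI/models/IPAdapter/ 2>/dev/null || true
```

The node replacement itself (swapping the red nodes for 'IPAdapter Advanced') still has to be done in the ComfyUI graph editor as described.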
@MDMZ · 3 days ago
thanks for sharing
@bartekvena413 · 4 months ago
Great tutorial! Thank you! Could you please help me with one thing? My "Video Combine VHS" nodes are missing video formats - only "image/gif" and "image/webp" are available. What did I miss?
@MDMZ · 3 months ago
are you using the same workflow from this video? In any case, you can export to webp and convert later
@user-kx5hd6fx3t · 3 months ago
The "AnimateDiff Combine" node changed to "Video Combine". There are two nodes that look like the same one, but they're different. Try it again.
@dejuak · 4 months ago
Hey, really nice video, I have watched it like 10 times in the last month. I have a question: is there a way to animate only the character while keeping the background static? Would be really awesome
@MDMZ · 3 months ago
you can rotoscope the subject out of your video first and run it separately from the background; there may be a way to do it directly in ComfyUI but I haven't looked into it
@MedAmineTN · 4 months ago
Nice 🤩😍
@MDMZ · 4 months ago
Thank u bro
@hongyongzhang2301 · 3 months ago
In order to apply IPAdapter, can you provide the origami reference image?
@wagnerfreitas3261 · 2 months ago
brilliant
@KardinalMoses · 3 months ago
Thank you very much; you explained it really nicely. I'm currently at 50% and curious to see what comes out of it :) PS: Can the same thing be done with plants? For example, modeling how a plant grows and continues to change over time?
@MDMZ · 3 months ago
have fun! I'm really not sure about your question, sounds like you're talking about generating video from scratch?
@erdbeerbus · 4 months ago
great!! thank you! To integrate a self-made LoRA file: is it best to put it between the checkpoint and the positive text prompt, or what would be your suggestion? Thx in advance!
@MDMZ · 4 months ago
hi, tbh I'm not entirely sure, I will need to look into it
@600baller · 3 months ago
Thank you for the video. Is there an SD1.5 alternative to HotshotXL?
@MDMZ · 3 months ago
yes, mm-Stabilized_high is a good one
@NikitinaYulia · 2 months ago
Thank you very much 👏🏼 One question: is it possible to make a 15-minute video like this? Or is it only suitable for short videos of a few seconds? Thank you in advance
@MDMZ · 2 months ago
I haven't tried a video that long. I haven't encountered restrictions on video duration, but a 15-minute video will surely take very long to process if it works, so why not give it a shot?
@shizool2359 · 2 months ago
15 minutes or 15 seconds? 15 minutes will destroy your PC bro 😂😂😂
@jamilmalas · 2 months ago
Hi, please, I need your help. I just updated ComfyUI, did "Update All", and I lost Apply IPAdapter within the video reference, and also the IPAdapter from the keyframe adapter section.
@jamilmalas · 2 months ago
I just found your comment about the update, thanks a lot. Thank you, my dear!
@MDMZ · 2 months ago
u r welcome, glad it worked
@kewlorsolutions · 2 months ago
Great tutorial. Can we do sizes like 1920x1080, and how long would that take for, say, 5-10 seconds? Is there any way to have it create an image sequence instead of an mp4 in case it fails partway through?
@MDMZ · 2 months ago
you can definitely go with other resolutions; time is almost impossible to predict, give it a shot
@MrReefxl · 3 months ago
This guide is so cool. What do I need to change so it will give better results for celebs?
@MDMZ · 3 months ago
a model trained on celebrity pics would probably help, but sometimes using the person's name in the prompt works fine
@MrReefxl · 3 months ago
@MDMZ I have tried that, but I get many artifacts on faces and clothes
@davidlartigue · 2 months ago
great tutorial! Well done! In the Video Combine window I don't have any video formats like in your video, only 2 image formats. Do you know what that's about?
@MDMZ · 2 months ago
make sure you update ComfyUI and all the nodes
@lee_sung_studio · 4 months ago
Thank you. 감사합니다.
@techendofficial · 4 months ago
First comment ❤❤❤
@biggo1261 · 1 month ago
Can you share how much time it took to train and which GPU you used? I used an A6000 for several hours and suddenly saw "Killed" (maybe my computer went to sleep). I should have chosen 16 versions. Is there any way to save progress at each step? Thank you so much!
@MDMZ · 1 month ago
what do u mean by 16 versions? Anyway, it's normal for this process to be a little slow; if it stops running, check the cmd window for errors
@lucho3612 · 4 months ago
I have only one problem with this workflow: following all your settings, the time from when I hit "queue prompt" until I see my first frame is too long. What settings can I tweak to make it faster without affecting the result too much? When I find the look I like, I'll go back to the high settings.
@MDMZ · 4 months ago
you can try setting extract_every_nth to 10 or something higher; this way you'll process fewer frames and see what it looks like in a much shorter time
@x7pictures · 2 months ago
KSampler is not running for me; before that the queue gets stopped, and I can't see any preview video. How do I fix this???
@MDMZ · 2 months ago
could be a memory issue, check the pinned comment
@glebandreychuk1117 · 4 months ago
Man, you are awesome, thanks for your time and effort ❤❤❤ Do you know if it's possible to use multiple ControlNets in this pipeline? Depth + edge detection? I tried to use a multi-ControlNet node but then I got an error with the IPAdapter 😢
@MDMZ · 4 months ago
theoretically it should be possible, I haven't tried it myself.
@bboyhafan · 2 months ago
I tried with multiple ControlNets, but it works with only about 20 frames; when I try to make more frames of video, I get an error
@sylvansheen8598 · 1 month ago
Thank you for your clean and helpful video. I tried to run this on my local machine but unfortunately I do not have enough VRAM. Do you have any recommendation for a cloud service?
@MDMZ · 1 month ago
ThinkDiffusion is one of them, but I can't guarantee that all nodes are available on online services
@deama15 · 26 days ago
How long does it take to process, e.g., that dancing video? Which card did you do it on?
@blaspayri · 1 month ago
I am on Mac and this seems to be PC/Windows only... but interesting to know about its existence. How would you rate this tool for video stylisation/transformation compared to RunwayML video-to-video?
@MDMZ · 1 month ago
this in my opinion is much better than RML. There are ways to run it on Mac, but it's a huge difference when using an NVIDIA GPU
@laviebreslav4112 · 1 month ago
Is there any way to do it with image-to-video pose instead of prompt-to-video pose?
@AldoNkmj · 3 months ago
I have done this installation like you do in the video but it is not working
@choboruin · 3 months ago
same
@MDMZ · 3 months ago
I wish I could help, but it's impossible for me to tell why; feel free to share more info on Discord, someone might be able to help
@OkanSoyluu · 4 months ago
KSampler stays at 33%. Although I waited for 4.5 hours, it still did not work. Same at 30 steps; I tried 25, same again; the last time I ran it at 9 steps it also stayed at 33%. Is there a solution? System: Ryzen 5 3600 / GTX 1070 Ti 8GB / 16GB 3200MHz RAM / 500GB SSD
@MDMZ · 4 months ago
8GB might be a little low for it, but it could also be happening for another reason. Did you try setting a lower resolution? Maybe 480p
@user-yg9bk5wr2c · 3 months ago
Thanks for the video! But why do I have a lot of paper cranes in my background? The original video has a clean white background. How do you make sure the background is rarely affected?
@MDMZ · 3 months ago
that shouldn't happen; try reinstalling ComfyUI, could be a software issue
@infectioussneeze9099 · 3 months ago
great video, I just want to know how I could train my own model on my own art set 2:30 and use that as the style reference?
@MDMZ · 2 months ago
yeah you can do that. I don't have a video on training your own model, but there are several tutorials on YouTube
@Cruse1914CODM · 3 months ago
Amazing video. I love it. 😍 Bro, can you please make a video on how to change the character in a video to my own character and transform it? I've actually been looking for something like this for a long time.
@MDMZ · 3 months ago
great idea, I will look into it
@Cruse1914CODM · 3 months ago
@MDMZ Thanks a lot! You are so kind 🤗. I am very happy that you read my comment and replied. I will stay tuned.
@inanckalayc4923 · 3 months ago
First of all, thank you for your efforts on this great information and video. I am a Mac user on an M2. I get "zsh: killed" and "TypeError: Failed to fetch". Does this mean that the RAM is not enough? What are the minimum computer specs I should have? I would really appreciate your help. Sincerely,
@MDMZ · 3 months ago
Hi, I can't tell for sure, but I know that it's challenging to make this work on Mac. Have you followed the installation guide for Mac in the official guide? And also, what's the full error text?
@julianmartinezvfx · 2 months ago
I get this error and I don't know how to solve it: 'T2IAdapter' object has no attribute 'compression_ratio'
@sebbo1008 · 2 months ago
Go to the Manager and press "Update ComfyUI". Fixed it for me. After that, I got "ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'patch_weights'", which I fixed by once again going to the Manager and pressing "Update All". Now it works, and a few warnings I was getting also disappeared 😁
@MDMZ · 2 months ago
I haven't encountered this one yet
@AlessandroMDC · 2 months ago
Well explained, congratulations. Just one question: I get "When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph."
@MDMZ · 2 months ago
check the pinned comment, just shared a fix
@AlessandroMDC · 20 days ago
@MDMZ thanks, now it works
@user-ft9oz3si2p · 2 months ago
Thanks for sharing. I have a question I would like to ask: what are the minimum requirements for the graphics card?
@MDMZ · 2 months ago
I recommend at least 12GB of VRAM
@PercuSoundSystem · 4 months ago
Thanks a lot
@seosevilla · 1 month ago
Thx for the video. One question: is it possible to have different animations but with the same character that I'm designing? If I film my little reference videos to animate my character, can I use this character with those filmed sequences for a short film?
@MDMZ · 1 month ago
the best way to get the same character is to train a model on a set of images of that character
@seosevilla · 1 month ago
@MDMZ OK. Do you have or know of a good video tutorial for that? Many thanks
@GALAXIADELNORTE · 4 months ago
excellent 🥥
@kunalverma185 · 4 months ago
Is this an alternative to WarpFusion? I was going to buy that one; should I use this or WarpFusion?
@MDMZ · 4 months ago
I find this much more consistent; WarpFusion is getting better too
@azalaka22 · 27 days ago
great tutorial!!!! I wanna try it; where can I find the original video of the dancing man?
@MDMZ · 24 days ago
just added it in the description
@azalaka22 · 23 days ago
@MDMZ thank you, and it works... thank you...
@jemyt6466 · 3 months ago
hello sir, it says "CUDA out of memory". Can you please help me? I only have 6GB VRAM
@MDMZ · 3 months ago
unfortunately that's a little too low for this; check the pinned comment for some tips
@dollarproduction24 · 4 months ago
Does it need a dedicated GPU to run ComfyUI IPAdapter + HotshotXL? Or can it be used without one?
@handlenumber707 · 1 month ago
It sounds painful.
@user-wq1je7wk1r · 4 months ago
My graphics card is a 4070 Ti 12GB, and it takes 1.5 hours to generate a 6s-long video. Is this normal?
@MDMZ · 3 months ago
a bit too slow. What resolution are you generating at? Maybe you can try to lower it
@FUTJunction · 2 months ago
Hi, I keep getting a "failed to fetch" error tab. What does that mean? How can I fix it?
@withassuan · 10 days ago
Hi! "When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph." Any hints?
@MDMZ · 10 days ago
Yes, use the fixed workflow instead, it's in the description
@lenny_Videos · 4 months ago
Awesome video 🤩
@MukeshKumar-eo1vf · 2 months ago
When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph.
@MDMZ · 2 months ago
check the pinned comment, just shared a fix
@RizzlePrivate · 4 months ago
I got this error: "could not be loaded with cv", pointing to the image_encoder. After downloading the encoder recommended on the IPAdapter Plus page, I got it working. The link in the description points to a G model, while IPAdapter Plus is an H model. Not sure if this is important, but it seemed to be in my case.
@MDMZ · 4 months ago
thanks for sharing, I will look into it. I'm using the same exact files in the tutorial and it works fine for me
@ValeryIvanov · 2 months ago
Somebody, please advise what I should do with this error: "Error occurred when executing CheckpointLoaderSimple: 'model.diffusion_model.input_blocks.0.0.weight'"?
@jaroslavprokop4280 · 4 months ago
Thanks for the video! Tell me, is my 4070 video card with 12 gigabytes of memory suitable for this configuration? Because with your configuration the video memory is fully loaded, and processing of a 100-frame video stops at 5% progress.
@MDMZ · 4 months ago
12GB should be alright; try rendering at a lower resolution, or lower steps
@NEW-BLACKPINK · 3 months ago
Sir, help me, how do I fix this: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([8192, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 1024])
@NEW-BLACKPINK · 3 months ago
Reply me
@MDMZ · 3 months ago
Hi, please check the pinned comment
@rhyswynn · 3 months ago
I had a similar issue; it was because I didn't select the image_encoder in the Load CLIP Vision node
@user-pi5zy7ud4b · 5 days ago
Thank you for the excellent explanatory video. Due to my low PC specs, I got an error about a memory limitation and could not proceed any further. Are there any instructions or notes on how to use the Google Colab version of ComfyUI?
@MDMZ · 3 days ago
not sure about Google Colab, but you can run this on ThinkDiffusion (a paid online solution)
@menwiththemask · 4 months ago
Great tutorial man, but please next time tell us we need to install Git from the official page in the first place xD
@MDMZ · 4 months ago
oh, I didn't realize it was necessary to do that manually. May I know at which step you realized it and how you found out that you needed to install it? I will pin the solution for everyone else who runs into the same issue, thanks a lot!
@menwiththemask · 4 months ago
@MDMZ Hey there, I needed to install Git when I first ran cmd and pasted the link
@musicspartan9905 · 3 months ago
I thought I messed up on the first step, thanks bro
@AraShiNoMiwaKo · 4 months ago
I'm getting this error: "Error occurred when executing ControlNetApplyAdvanced: 'NoneType' object has no attribute 'copy'" and a bunch of other stuff. Has anyone had this problem?
@AraShiNoMiwaKo · 4 months ago
yeah, fuck it, I'm out. The requirements are high as fuck. You should point that out, dude.
@Xavi-Tenis · 1 month ago
I wonder why, or where, the ControlNet pose is? How can this get the tracking pose? Thanks to anybody who answers.
@Chernenko_Motion · 3 months ago
Thanks, this is an awesome tutorial. 😎 But I have a question: how did you make the glass man? I've already tried a lot of options, but I can't get such a polygonal glass person 🥲
@MDMZ · 3 months ago
the prompts are available for Patreon subscribers; for the glass one, you can use the "crystal" keyword
@sebbo1008 · 2 months ago
Well, it runs without any errors for me, but starting at the Video Combine node all it generates is noise. Fractions of the prompt are even visible, but only barely. I triple-checked all models and settings, but it's still like that. img2img with the same model is fine, and the ControlNet Preview Image seems fine. Any ideas?
@MDMZ · 2 months ago
this happened to me before, I solved it by reinstalling from scratch
@efarmogeskostas · 3 months ago
Thanks for the tutorial bro! Yet I can't run ComfyUI from the batch file. It shows an error about an old NVIDIA version and the CUDA driver not being compatible with PyTorch, or something like that. Can you give me a tip to solve this?
@MDMZ · 3 months ago
make sure your NVIDIA GPU driver is up to date; you might find more help here: discuss.pytorch.org/t/cuda-versioning-and-pytorch-compatibility/189777/9
@bevisualart · 19 days ago
How do you adapt this flow to the newer ComfyUI AnimateDiff loader? "When loading the graph, the following node types were not found: AV_IPAdapter. Nodes that have failed to load will show as red on the graph."
@MDMZ · 18 days ago
Hi, make sure to use the updated workflow (it's in the description)
@DreamerSour · 2 months ago
I tested your workflow and noticed that it works only with the ProtoVision checkpoint. Can you explain what unique specifics it has? And what other checkpoints work with this workflow?
@MDMZ · 2 months ago
make sure you use an SDXL checkpoint; I tested it with at least 5 models other than ProtoVision, shouldn't be an issue
@choboruin · 2 months ago
I remember you mentioned a cloud site I could use to run ComfyUI since my PC takes forever. What was that site? I couldn't find it.
@MDMZ · 2 months ago
ThinkDiffusion
@ryco_music · 3 months ago
I get this error when it gets to the sampler: Error occurred when executing KSamplerAdvanced: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead. Any idea what I should do?
@MDMZ · 3 months ago
hi, you can check the pinned comment
@shirogamestudios · 2 months ago
Where can I add a LoRA block? Thanks in advance.
@luisgregori3817 · 3 months ago
I have an RTX with 16GB, 80GB of RAM, and an i9 9900K. ComfyUI works great and is very fast when I just make pictures, but with this workflow it becomes very slow at the KSampler step and takes an hour to render. ComfyUI gets very laggy and freezes a lot at that point… do you know why?
@MDMZ · 3 months ago
that's unfortunate. It seems this is heavier on GPU VRAM, although 16 should be enough; you can try reducing the resolution, make sure your GPU driver is up to date, and make sure the GPU is not being taken over by another application
@edwardtranai · 1 month ago
What's the most effective way to change the background/scenes?
@sylvansheen8598 · 1 month ago
I also have the same question!
@NEW-BLACKPINK · 1 month ago
How can I fix this? "When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph."
@MDMZ · 1 month ago
there's a solution in the pinned comment
@NEW-BLACKPINK · 1 month ago
@MDMZ sir, I want to use my GPU for rendering in ComfyUI, but my GPU is only at 5 to 14% usage while RAM is at 80%
@Dharmatography · 1 month ago
nice one. How long did it take you to render? Why is it so slow on my setup in low-VRAM mode, although I have a good GPU (2080 Super)?
@MDMZ · 1 month ago
if I'm not mistaken, the 2080 Super has 8GB of VRAM? That's considered a little low for this; you need at least 12, and it won't be blazing fast even with 24GB
@Dharmatography · 21 days ago
@MDMZ thanks bro. Can you make a tutorial on that popular dancing rose/plants animation?
@luisgregori3817 · 4 months ago
Actually I have an i9 9900K and 16GB RAM, but I don't have a graphics card. Do I need to get one? I launched a ComfyUI video and it still isn't finished after 3 hours. How quick is it with a graphics card?
@MDMZ · 3 months ago
you definitely need one to run without issues; the speed depends on many factors
@denislavrov7766 · 2 months ago
Hi, how can I fix "When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph."?
@MDMZ · 2 months ago
there's a solution for this in the pinned comment
@taylorrowson3961 · 2 months ago
Maybe a dumb question, but is this at all possible on Mac machines? I have an M3 Pro with 36GB of shared RAM and would love to try this out
@MDMZ · 2 months ago
the installation process is different. Technically it works, but the power of an NVIDIA GPU is unmatched when it comes to AI processing
@stufigol · 2 months ago
Thanks for the step-by-step tutorial. I am almost there... I can't figure out the below error though: Error occurred when executing MiDaS-NormalMapPreprocessor: name 'midas' is not defined
@MDMZ · 2 months ago
can you share your workflow on Discord?
@Nibot2023 · 3 months ago
When using OpenPose and HED, does that lock you into not changing the style or look of a character? Your way seems more creative-friendly in design.
@MDMZ · 3 months ago
hmm, I'm not sure, I haven't tested that
@Nibot2023 · 3 months ago
@MDMZ *Edit: solved it. I did a git pull on the IPAdapter for an update, made an "ipadapter" folder in the ComfyUI/models area, and it worked. Awesome tutorial. Going back through and following along: for some reason I had the IPAdapter in the same spot but the node was undefined. What would be the workaround for that node to load ip-adapter-plus_sdxl_vit-h.safetensors?
@MDMZ · 3 months ago
@Nibot2023 Nice! thanks for sharing how you solved it
@amitkhadse7087 · 4 months ago
please help with the errors mentioned below; 1070 8GB GPU, and RAM is also 8GB
@Fixerdp · 4 months ago
I have a 3060 12GB; 5 times I got an error that there is not enough memory.
@MDMZ · 4 months ago
8GB VRAM might be a bit too low for this
@choboruin · 3 months ago
Great guide. My PC is fairly decent with a 3060 GPU, and it takes forever to make a video; anything to speed it up? Ty
@MDMZ · 3 months ago
update your GPU driver, and make sure your GPU is not being taken over by other software
@biggo1261 · 1 month ago
@MDMZ We can do it with a 3060? If so, why not a Colab T4!! What about a 3050 Ti with 4GB of VRAM?