
How to Create Morphing Animations | AnimateDiff ComfyUI IPIV Tutorial | Img2Vid

goshnii AI
4.5K subscribers
6K views

A walk-through of an organised method for using ComfyUI to create morphing animations, turning any image into a cinematic result.
Obtain my preferred tool - Topaz: topazlabs.com/...
IPIV's Morph Workflow: civitai.com/mo...
Motion Animations: civitai.com/po...
Helpful Videos:
• Create Morphing AI Ani...
• ComfyUI: Master Morphi...
Get Motion Array optical illusions: tinyurl.com/97...
Best Music & SFX for Creators: bit.ly/3TdAqIA (get 2 extra months free)
Free Downloads: goshnii.gumroa...
Disney Pixar Checkpoint Model: civitai.com/mo...
#animatediff #comfyui #ai #ipiv #img2vid #lcm #stablediffusion
*This description contains affiliate links, which means I earn a commission when someone makes a qualifying purchase. You won't pay anything extra, and it supports this channel.

Published: 2 Oct 2024

Comments: 40
@bearhead-ai 4 months ago
Nice, and a big thank you!!
@goshniiAI 4 months ago
Thank you for stopping by.
@PixelsVerwisselaar 1 month ago
Nice video, thank you. My video turned out a bit grey. Do you know what I'm doing wrong?
@goshniiAI 1 month ago
It might be due to the initial image settings; experimenting with different sampler settings can also help.
@lkey7758 1 month ago
Hi! Great video, great channel. I subscribed to you. If I don't touch the default workflow settings, everything works fine. But when I applied the settings from your video, using realistic images and a Juggernaut model, the output video was always muddy, sand-coloured and faded. So for now I have kept the default workflow settings; I need to look into them in more detail.
@goshniiAI 1 month ago
Thank you for the sub and your kind words! I'm glad to hear the default workflow settings are working well for you. When using realistic images with the Juggernaut model, dialling in the settings can be tricky, particularly for the colour output. I would suggest finding the right balance and testing different configurations to compare the results. Hopefully the correct settings will give you some great results.
@PYcifique 1 month ago
Hey, thank you a lot for this tutorial. The workflow works for me, except that the generated video is too fast and not smooth, as if there is no interpolation but just a rapid succession of images. Thanks in advance for your help.
@goshniiAI 18 days ago
Hello there, glad to hear the workflow is working for you! For the speed issue, check the QR-code motion video you might be using to influence the animation. Also, make sure the input images all match the same size and aspect ratio, either 1:1 or 9:16. I hope this helps.
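As a follow-up to the sizing advice above, here is a minimal Pillow sketch for cropping a folder of input images to one common size before loading them into the workflow. The folder names and the 768x768 target are illustrative assumptions, not values from the tutorial.

```python
# Sketch: centre-crop and resize every input image to a common size (assumed 1:1)
# so the morph workflow receives consistently sized images.
from pathlib import Path
from PIL import Image, ImageOps

TARGET = (768, 768)            # assumed 1:1 target; use e.g. (576, 1024) for 9:16
SRC = Path("inputs")           # hypothetical folder with the source images
DST = Path("inputs_resized")
DST.mkdir(exist_ok=True)

for img_path in sorted(SRC.glob("*.png")):
    img = Image.open(img_path).convert("RGB")
    # ImageOps.fit centre-crops to the target aspect ratio, then resizes.
    fitted = ImageOps.fit(img, TARGET, method=Image.LANCZOS)
    fitted.save(DST / img_path.name)
    print(f"{img_path.name}: {img.size} -> {fitted.size}")
```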
@benasido7621 28 days ago
Hey, can you post the prompt for the cartoon character here? Or point me to a website where I can get prompts like this?
@goshniiAI 17 days ago
Hello there, you can find the prompt right here: "3DMM, (Masterpiece, best quality:1.2), 3d render, Pixar style of ninja sub-zero character, in mortal combat video fighting game, wearing a ninja fight costume, detailed eyes, in a fight pose, standing on an icy road in a freezing storm, action scene, snow frost background, cold and detached atmosphere, falling snow flakes, frosty air, natural lighting." Checkpoint model: civitai.com/models/65203/disney-pixar-cartoon-type-a. LoRA used: civitai.com/models/73756?modelVersionId=78559
@suetologPlay 3 months ago
Where is the ip-adapter file? Where is it? I don't even have that folder.
@goshniiAI 3 months ago
Hello there, this means you have not yet installed the IPAdapter. You can learn how to install it here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YYIIkgvOZ3M.html
@suetologPlay 3 months ago
@goshniiAI I installed it through the Manager, and I do everything as you do in the video, but for some reason I have no IPAdapter folder and I get an error: "Error occurred when executing IPAdapterUnifiedLoader". What am I doing wrong? After installing the IPAdapter I restarted, updated and rebooted, and I still get the same error.
@suetologPlay 3 months ago
@goshniiAI I think I figured it out: the IPAdapter folder has to be created manually. I will try it now and then write back to you. :)
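For anyone hitting the same "IPAdapterUnifiedLoader" error, here is a minimal Python sketch of the manual fix described above: creating the missing models folder and fetching one SD 1.5 IPAdapter model via huggingface_hub. The ComfyUI path, repo id and filename are assumptions; check the ComfyUI_IPAdapter_plus documentation for the definitive list of required files.

```python
# Sketch: create ComfyUI's ipadapter model folder and drop one SD 1.5 model into it.
# The paths and filenames below are assumptions; verify them against the
# ComfyUI_IPAdapter_plus documentation before relying on them.
import os
import shutil
from huggingface_hub import hf_hub_download

COMFYUI_DIR = os.path.expanduser("~/ComfyUI")                      # assumed install path
IPADAPTER_DIR = os.path.join(COMFYUI_DIR, "models", "ipadapter")   # folder the loader looks in
os.makedirs(IPADAPTER_DIR, exist_ok=True)

# Assumed repo and filename for the SD 1.5 "plus" IPAdapter weights.
cached = hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/ip-adapter-plus_sd15.safetensors",
)
shutil.copy(cached, os.path.join(IPADAPTER_DIR, "ip-adapter-plus_sd15.safetensors"))
print("Copied model into:", IPADAPTER_DIR)
```

The IPAdapter nodes also need a CLIP Vision model in ComfyUI/models/clip_vision; the same download-and-copy pattern applies.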
@suetologPlay 3 months ago
@goshniiAI Thank you so much! It worked! Great lesson!
@goshniiAI 3 months ago
@suetologPlay Awesome!! I am also pleased to read about the solution, thank you for sharing the update.
@thedevo01 2 months ago
Thank you for the great video! I wish it were possible to add a batch of images at once and for the process to just morph them into each other sequentially. I have 30-50 images I want to do this with, not just 3.
@goshniiAI 2 months ago
I completely understand the need to process a larger batch of images. IPIV's workflow currently limits you to 4 images; however, you may be able to experiment with 5 or more by adjusting the workflow nodes. It's worth a shot to see what kind of outcomes you can achieve!
@tallantchou2613 4 months ago
Does this workflow work with AnimateDiff SDXL or Hotshot-XL? I have tried, but it failed.
@goshniiAI 4 months ago
The workflow performs best with SD 1.5 models. If SDXL or Hotshot failed, you could still use the workflow's upscale nodes.
@JOHNNYGATTZ 4 months ago
When I press Queue Prompt I only get black boxes at the bottom.
@goshniiAI 4 months ago
If the black boxes are far below the ComfyUI canvas, simply click Cancel next to them. Additionally, check that all of your nodes are up to date; a version mismatch may cause this kind of problem.
@itanrandel4552 3 months ago
Excuse me, master, is it possible to make a short animation in sequence with the reference images, or is there a better method for that?
@goshniiAI 3 months ago
Yes, that is achievable, but the workflow currently limits us to only 64 frames at a time, so I believe this could be done in batch sequences. However, a different workflow could make things much easier.
@itanrandel4552 3 months ago
@goshniiAI Thank you for your time, master.
@Injaznito1 4 months ago
Nice!! Thanks!! I tried it and it works great. I have a question though... is there any way to make the transitions and the overall video longer? I changed the batch from 96 to 120 but didn't notice a difference in the time for each image or the overall video length. Thanks!
@goshniiAI 4 months ago
Yes, it is a strong workflow, and you're correct. The default 96 frames produces the best outcomes from the workflow; unless you are certain of what you are doing, changing it may cause some difficulties.
@697_ 4 months ago
Increase the multiplier on RIFE and increase the frame rate. The most you can do without it looking bad is a 15x multiplier with 60 fps. That will give it a smooth 24 fps feel.
@goshniiAI 2 months ago
@697_ Thank you for the additional information.
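To make the relationship between the frame batch, the RIFE multiplier and the playback frame rate concrete, here is a small arithmetic sketch; the 96-frame batch and the example rates are illustrative assumptions, not values confirmed for this workflow.

```python
# Sketch: how frame count, RIFE multiplier and playback fps relate to clip length.
def clip_stats(generated_frames: int, rife_multiplier: int, out_fps: float):
    """Return (total frames after interpolation, duration in seconds)."""
    total = generated_frames * rife_multiplier
    return total, total / out_fps

# Assumed numbers for illustration: a 96-frame batch.
print(clip_stats(96, 1, 12))   # (96, 8.0)   8 s, choppy at 12 fps
print(clip_stats(96, 2, 24))   # (192, 8.0)  same 8 s, smoother at 24 fps
print(clip_stats(96, 5, 60))   # (480, 8.0)  same 8 s, very smooth at 60 fps
print(clip_stats(96, 2, 12))   # (192, 16.0) longer clip, but it plays as slow motion
```

Raising the multiplier and the output fps together keeps the duration the same while adding smoothness; raising only the multiplier stretches the clip into slow motion.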
@odonrutven001 4 months ago
I like your channel, thanks bro.
@goshniiAI 4 months ago
I appreciate you.
@GamingDaveUK 4 months ago
Depressing to see these still focusing on 1.5 models rather than SDXL. The difference in prompt cohesion and image quality between 1.5 and SDXL is night and day. Sadly, content creators focus on 1.5 all the time. Thank you for the video, and I know the time that goes into these, but please consider doing one for SDXL.
@goshniiAI 4 months ago
Thank you for your thoughtful feedback! The advancements in SDXL are indeed remarkable, and I appreciate your suggestion. I'll definitely consider creating SDXL-related videos to help everyone get the most out of its possibilities.
@GamingDaveUK 4 months ago
@goshniiAI Appreciated. At the minute 1.5 is in a bit of a feedback loop: 1) people use 1.5 more than SDXL because there are more guides for it; 2) content creators focus on 1.5 because more people use it, so they make guides for 1.5; 3) tool devs see that 1.5 is used more than SDXL, so they focus their energy on 1.5; 4) and we're back to 1). When we finally get SD3 I am hoping it breaks this loop... but I must admit, I doubt it.
@sudabadri7051 4 months ago
You could perhaps try changing this to a HotshotXL AnimateDiff and IPIV workflow. While SDXL has lots of benefits, pose recognition is not one of them; SD 1.5 is still viable for a few things, and since it is 512x512 it has lots of cool research models because training is easier at that resolution. Guides are also a part of that cycle.
@goshniiAI 4 months ago
@sudabadri7051 Entirely correct that there are advantages to each version of Stable Diffusion. While SDXL has many advantages, SD 1.5 is still more enjoyable, particularly for certain purposes such as pose recognition and 512x512 training. The guide cycle you mentioned is spot on, too.
@goshniiAI 4 months ago
@GamingDaveUK I'll do my best to create content for both versions to help bridge the gap. It's a difficult cycle to break, but the prospect of SD3 is interesting; hopefully the focus will shift and encourage more exploration with SDXL.