
ComfyUI - Live Portrait | Animate Character Face (Video to Video) 

CG TOP TIPS
17K subscribers
7K views

In this video, we first show how you can easily animate a portrait photo realistically using a simple video.
Note: To watch the tutorial on animating a face in a video using another video, please go to 03:44 in the video.
Note: Make sure to install Live Portrait from the publisher shadowcz007!
00:20 - Live Portrait (Video to Image)
03:44 - Live Portrait (Video to Video)
***********************************
Comfyui tutorial, Учебное пособие по Comfyui, Comfyui ट्यूटोरियल, Tutoriel Comfyui, Tutorial Comfyui, Comfyui 튜토리얼
Comfyui stable diffusion, Install comfyui, comfyui video, controlnet comfyui, comfyui animateddiff, comfyui sdxl, comfyui upscale, comfyui video to video, comfyui manager, comfyui inpainting, comfyui ipadapter, comfyui faceswap
***********************************
🤯 Get my FREE ComfyUI workflows: openart.ai/wor...
------------------------------------
🌍 SOCIAL
/ cgtoptips
📧 cg.top.tips@gmail.com
------------------------------------
#ComfyUI
#LivePortrait

Published: 24 Aug 2024
Comments: 55
@gregosfr · 1 month ago
Great video and workflows; please don't forget YouTube subtitles.
@CgTopTips · 1 month ago
Sure, soon the videos will have voiceovers to make them easier to understand :)
@yuzhang9052 · 2 days ago
Thanks, great tutorial. I've learned it!
@WiLDeveD · 1 month ago
Very useful tutorial, thanks.
@CgTopTips · 1 month ago
I'm glad it was useful.
@davimak4671 · 1 month ago
Thanks, bro.
@DeMaddin81 · 1 month ago
Hi. I had a look at the temporary files and noticed something: with your method, it seems that a complete pass with the facial expressions of the target video is made for EVERY frame of the source video. This means that at 30 fps, each second of the result generates 30 x 30 = 900 images (at 30 fps of the source video), but only 30 of them are needed; the other 870 are discarded. I understand that comparison images are needed for consistent movement, but wouldn't 5 comparison images be enough instead of the full second? If I render, say, 10 seconds of video at 30 fps, that is 10 * 30 * 30 = 9000 generated images for 300 result frames. That's crazy and takes a lot of time. Do you have a solution for this?
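The arithmetic in the comment above can be sketched as a small helper; the quadratic blow-up (result frames times driving frames) is an assumption drawn from the commenter's observation, not a confirmed property of the node.

```python
def generated_frames(result_seconds: float, fps: int, driving_seconds: float = 1.0) -> int:
    """Total images produced if each result frame re-renders the whole driving clip."""
    result_frames = int(result_seconds * fps)
    driving_frames = int(driving_seconds * fps)
    return result_frames * driving_frames

# 1 s of result at 30 fps with a 1 s driving clip: 30 * 30 = 900 images,
# of which only 30 are kept for the output.
print(generated_frames(1, 30))   # 900
# 10 s of result at 30 fps: 300 * 30 = 9000 images for 300 kept frames.
print(generated_frames(10, 30))  # 9000
```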
@ingeniosoleonalmeida5877 · 1 month ago
Very nice tutorials. Real-time camera to AI in ComfyUI!
@thehanspoon · 1 month ago
Thank you
@selenegarcia6321 · 11 days ago
Error occurred when executing LivePortraitProcess: LivePortraitProcess.process() missing 1 required positional argument: 'crop_info'. Can anybody help, please?
@daja74 · 1 month ago
Great work. One difference I see in my ComfyUI setup is that each frame requires the whole Live Portrait step to complete, so for a 50-frame video the workflow runs the Live Portrait step 50 times. Is that correct? In the YouTube video it looks like a single Live Portrait step renders the whole video, and I want to make sure I haven't messed something up. I did need to install your version of Live Portrait and MixLab directly from your Git repos rather than through the ComfyUI Manager.
@DarienLingstuyl · 18 days ago
I am trying LivePortrait Video in ComfyUI on Pinokio. There is one thing I don't know how to fix: if the original video has expressions like smiling, and the driving video smiles too, I get a really weird "joker" smile, as if the expressions of the original and the driving video are doubled. Is there a way to fix that?
@CgTopTips · 18 days ago
You need to pay attention to two things: 1. The facial expression in the first frame of both videos should be the same. For example, if the mouth is closed in one video, it should also be closed in the second video, and so on. 2. Live Portrait works better with source videos where the face doesn't change much.
@DarienLingstuyl · 18 days ago
@CgTopTips Thanks! I knew the first frame was important and read that it should be a standard expression, but the target video starts smiling, so I started the driving video smiling too and it got better. Only when the target smiles do I still get a slightly exaggerated smile. Thanks for your suggestion!
@LeePreston-t1d · 1 month ago
Anyone know if this can work on a Mac M3? I am currently receiving this error, if anyone knows how to help:
"Error occurred when executing LivePortraitVideoNode: Torch not compiled with CUDA enabled
File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Users/leepreston/Desktop/AI/ComfyUI/execution.py", line 65, in map_node_over_list
    results.append(getattr(obj, func)(**input_data_all))
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/live_portrait.py", line 468, in run
    live_portrait_pipeline = LivePortraitPipeline(
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_pipeline.py", line 67, in __init__
    self.live_portrait_wrapper: LivePortraitWrapper = LivePortraitWrapper(cfg=inference_cfg)
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/live_portrait_wrapper.py", line 29, in __init__
    self.appearance_feature_extractor = load_model(cfg.checkpoint_F, model_config, cfg.device_id, 'appearance_feature_extractor')
File "/Users/leepreston/Desktop/AI/ComfyUI/custom_nodes/comfyui-liveportrait/nodes/LivePortrait/src/utils/helper.py", line 99, in load_model
    model = AppearanceFeatureExtractor(**model_params).cuda(device)
File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in cuda
    return self._apply(lambda t: t.cuda(device))
File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 779, in _apply
    module._apply(fn)
File "/opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py", line 804, in _apply
    param_applied = fn(param)
File "/opt/homebrew/lib/python3.11/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
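The traceback above fails because the node calls `.cuda()` unconditionally, which cannot work on Apple Silicon where PyTorch has no CUDA build. A minimal sketch of the usual fix, assuming the node's code could be patched to select a backend at runtime (the node itself may not support MPS even with this change):

```python
import torch

def pick_device() -> torch.device:
    """Prefer CUDA, then Apple's MPS backend, then fall back to CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
# Use .to(device) instead of the hard-coded .cuda(device) in the traceback.
model = torch.nn.Linear(4, 2).to(device)
print(device.type)
```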
@SupermanRLSF · 12 days ago
Is there a way to get a live webcam feed into the first video section?
@CgTopTips · 12 days ago
Yes, for instructions you can find the tutorial video on YouTube: "Webcam Portrait Live".
@user-jh8zy7oy5b · 1 month ago
Tell me what the problem is: it takes a very long time to render, more than an hour, with no errors, and then I get this result. The output video has a black square instead of a head.
@CgTopTips · 1 month ago
A prolonged render time could be due to one of the following: 1. You are using the CPU instead of the GPU. 2. The program is downloading its required files for the first time (the render should not be lengthy on the second run). 3. Your settings, such as video duration, size, or the number of steps, are too high.
@timemirror_ · 1 month ago
Thanks!! I have an issue, btw: the "insightface" folder didn't appear in my "models" folder. I am sure I downloaded the nodes you mentioned at the beginning of the video. Maybe I'm doing something wrong. What do you think?
@CgTopTips · 1 month ago
Manually create that folder and put the model in it.
@user-ng3bi6xn2u · 1 month ago
I am getting the "No face detected in the source image" error while trying the video-to-video method.
@CgTopTips · 1 month ago
@user-ng3bi6xn2u The face must be clearly visible: the video quality should not be too low, and the character's head should be facing the camera.
@fatiheke · 1 month ago
I get an error: "LivePortraitVideoNode" is not installed.
@CgTopTips · 1 month ago
Always check the following points:
- Ensure you install the requirements for each custom node (pip install -r requirements.txt).
- Download the necessary models for each custom node.
- Verify that all custom nodes needed for the workflow are installed without issues (check through the Manager panel).
- Ensure there are no version conflicts between models; for example, if the checkpoint is SD1.5, the ControlNet should also be SD1.5.
- Always follow the installation steps for each custom node precisely through its GitHub page.
- The best way to understand an error message is the ComfyUI terminal panel. For example, sizes might not match, or a node's settings might not be selected correctly, and so on.
- You can copy the error message and search for a solution on Google.
Note: If the problem is still unresolved, please share a screenshot of the error along with your workflow via email so I can check it.
@user-pn6ey5dn4y · 1 month ago
Do you know a way to crop/resize the video to a square shape that will work in this workflow without distorting the original image? Usually I'd use Image Resize or Prepare Image For Clip Vision, but they don't work here because of the connectors.
@CgTopTips · 1 month ago
Use the ImageCrop node.
@user-pn6ey5dn4y · 1 month ago
@CgTopTips Thanks for the suggestion. Could you send a picture, please? I tried to connect "load video and segment" to two different ImageCrop nodes, but they don't connect. I can crop the video if I use a different load-video node, but then I can't connect it to the 'drive video' connector on the Live Portrait node. Thank you.
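Outside ComfyUI, the distortion-free square crop discussed above amounts to trimming the longer side around the center, which is the same idea a crop node applies per frame. A hypothetical sketch with Pillow (`center_square_crop` is an illustrative helper, not a ComfyUI node):

```python
from PIL import Image

def center_square_crop(img: Image.Image) -> Image.Image:
    """Crop to a centered square, trimming the longer side (no stretching)."""
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    return img.crop((left, top, left + side, top + side))

# A 1920x1080 frame becomes a 1080x1080 square with 420 px trimmed from each side.
frame = Image.new("RGB", (1920, 1080))
print(center_square_crop(frame).size)  # (1080, 1080)
```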
@DeMaddin81 · 1 month ago
Hi. Great video. Where do I get the source video? I mean the dancing woman.
@CgTopTips · 1 month ago
www.pexels.com/search/videos/dancing/
@DeMaddin81 · 1 month ago
@CgTopTips Thank you! 😀👍
@DeMaddin81 · 1 month ago
I noticed something: when I tried another dancer, her face was covered by her hand for a few frames. I immediately got an error message like "Face not recognized", the process aborted, and the entire result was discarded, so the whole rendering time was wasted. Do you have any idea how to make the tool continue rendering despite the error? Perhaps an additional node in ComfyUI that catches it?
@CgTopTips · 1 month ago
Unfortunately, in this method the face must be visible in all frames, and there is currently no solution for this issue!
@DeMaddin81 · 1 month ago
@CgTopTips THX!
@user-tn3nc1mz3q · 1 month ago
I get an error: (IMPORT FAILED) comfyui-liveportrait, so the Live Portrait for-video node can't work. It didn't fix itself automatically. What can I do?
@CGITanous · 16 days ago
I have the same issue; were you able to fix it?
@user-tn3nc1mz3q · 14 days ago
@CGITanous I then found another workflow that worked for me without using this node, but I can't remember where it was... the file is called liveportrait_video_example_02.json
@user-go5vl9rv4p · 1 month ago
Set notebook specs?
@inteligenciafutura · 1 month ago
I have never used the tool; can it be installed with Pinokio?
@CgTopTips · 1 month ago
You can install A1111 with Pinokio. ComfyUI is portable, and Live Portrait is a custom node that should be installed through ComfyUI.
@inteligenciafutura · 1 month ago
@CgTopTips (IMPORT FAILED) comfyui-liveportrait :(
@Alehantro · 1 month ago
Probably a stupid question, but I just downloaded ComfyUI and I can't see the MixLab, Manager, and Share tabs. Any ideas?
@CgTopTips · 1 month ago
The easiest way is to download the Manager (github.com/ltdrdata/ComfyUI-Manager/archive/refs/heads/main.zip) and extract it into the custom nodes folder. Search for and install MixLab through the Manager panel, or download it and extract it into the custom nodes folder.
@Alehantro · 1 month ago
@CgTopTips Thank you so much for that!
@Alehantro · 1 month ago
@CgTopTips I'm trying to add the LivePortrait models, but I can't see the "insightface" folder that you show.
@CgTopTips · 1 month ago
@Alehantro You can manually create it.
@Alehantro · 1 month ago
@CgTopTips Once again, thank you so much for both your answers and your tutorial! It worked!!!
@kiya573 · 1 month ago
Can I use this in Colab?
@CgTopTips · 1 month ago
Sorry, I have no information.
@mylittleheartscar · 1 month ago
What's the song, though?
@CgTopTips · 1 month ago
Which song? I used 3 songs in the video :)
@kneel.downnn · 1 month ago
What are your PC specs, btw?
@CgTopTips · 1 month ago
RTX 4060, 8GB VRAM