
Your personal Avatar from a single photo (2D/3D/Images/Animations/LipSync) 

Render Realm

Create a consistent avatar in ComfyUI from a single photo with just four rendering steps, using SDXL-Lightning.
This tutorial guides you through the whole process of creating high-quality avatars of yourself or any other person, using several IP-Adapters, LoRAs and ControlNets together with a fast 4-step SDXL-Lightning model.
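(Aside for readers who prefer scripting: the video builds this step entirely out of ComfyUI nodes, but the same idea — a 4-step SDXL-Lightning model guided by an IP-Adapter image of the reference photo — can be sketched with the diffusers library. The checkpoints, weights and parameters below are illustrative assumptions, not the exact setup from the workflow.)

import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from diffusers.utils import load_image

# Base SDXL pipeline; the Lightning 4-step LoRA distills it for few-step sampling.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ByteDance/SDXL-Lightning",
                       weight_name="sdxl_lightning_4step_lora.safetensors")
pipe.fuse_lora()
# Lightning models expect "trailing" timestep spacing and run without CFG.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing")

# The IP-Adapter transfers the identity of the reference photo into the generation.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.7)

reference = load_image("my_photo.jpg")   # the single input photo
image = pipe(
    prompt="portrait photo of a person, studio lighting, neutral background",
    ip_adapter_image=reference,
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("avatar.png")

In the actual workflow, the Juggernaut SDXL-Lightning checkpoint, FaceID IP-Adapters, LoRAs and ControlNets are wired together as ComfyUI nodes instead.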
We will then turn this avatar into a fully animated 3D-Model using Avaturn and Blender, bring it back to ComfyUI and animate it with AnimateDiff.
In the next step we will improve the quality of the animation even further with FaceFusion, and create a talking animation from a single image plus an audio track with DreamTalk.
In the final chapter I explain how to download the workflows and models. I've created a little Python script that automatically downloads all missing checkpoints and models used in my workflows and puts them into the right ComfyUI model folders.
I've tried to make this tutorial as comprehensive and practical as possible and hope it is useful to you. Only free tools are used, and everything should run on a PC as well as on a Mac, though you will need a reasonably powerful machine for some of the showcases if you want them in high resolution.
Have fun!
Chapters:
=========
00:00 Intro & Scope of this tutorial
00:45 Basic Workflow - creating an avatar from a single photo
08:39 Turning the avatar into a 3D-model with Avaturn
10:32 Preparing the facial input images for Avaturn in ComfyUI
12:26 Creating the animated 3D-avatar with the 3 facial images of our avatar
13:53 Import the 3D-avatar into Blender, set up a scene and render it
18:25 Create an animation of the avatar with AnimateDiff in ComfyUI, using the Blender output
22:36 Installing FaceFusion with the help of the Pinokio browser
23:33 Using FaceFusion to enhance the quality of our avatar-animation
25:51 Creating a lip-sync animation from a single image using Dreamtalk
26:32 Using the workflows and the custom download script, troubleshooting
Download Links:
===============
First Workflow - Creating your avatar in ComfyUI:
drive.google.com/file/d/178vg...
Second Workflow - Preparing the avatar images for 3D-Modeling with Avaturn:
drive.google.com/file/d/18hJ1...
Third Workflow - Animate the Avatar with AnimateDiff:
drive.google.com/file/d/1JyyG...
Images.zip - the workflow and ControlNet images and the input animation used in this tutorial:
drive.google.com/file/d/1Iwwm...
Avatar_Walk.blend - the Blender file used for the 3D-animation:
drive.google.com/file/d/1bF0U...
Download_Models.zip - Python Script + JSON file for downloading the models/checkpoints:
drive.google.com/file/d/1ACkS...
The use of the Python script is explained at timestamp 26:32 of this tutorial, and a Readme file is also included.
Unzip the get_models.py and models.json files into the ComfyUI/ComfyUI subfolder (one level ABOVE the models folder). Then open a command window there, type python get_models.py and hit ENTER. When asked for the filename, type models.json and hit ENTER again, then follow the instructions on the screen. (Mac users might need to type python3 get_models.py instead.)
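(The actual get_models.py and models.json are included in the ZIP above. Purely for illustration, a manifest-driven downloader of this kind can be as simple as the sketch below; the JSON fields and folder layout shown here are assumptions, not the real schema of the included files.)

import json, os, urllib.request

# Hypothetical manifest format: a list of {"url", "folder", "filename"} entries.
with open("models.json", "r", encoding="utf-8") as f:
    manifest = json.load(f)

for entry in manifest:
    # e.g. folder = "checkpoints", "loras", "controlnet", "ipadapter", ...
    target_dir = os.path.join("models", entry["folder"])
    os.makedirs(target_dir, exist_ok=True)
    target = os.path.join(target_dir, entry["filename"])
    if os.path.exists(target):
        print("Skipping (already present):", target)
        continue
    print("Downloading", entry["url"], "->", target)
    urllib.request.urlretrieve(entry["url"], target)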
If you encounter a workflow error with InsightFace, you may need to install the InsightFace components (timestamp 28:52).
Download the file insightface-0.7.3-cp311-cp311-win_amd64.whl from
github.com/Gourieff/Assets/tr...
into the ComfyUI update folder. Then open a command window there and execute the following command:
..\python_embeded\python.exe -m pip install insightface-0.7.3-cp311-cp311-win_amd64.whl
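Note: the command above assumes the Windows portable build of ComfyUI, where the embedded interpreter lives in ..\python_embeded relative to the update folder. If you run ComfyUI from a regular Python environment (an assumption, not covered in the video), installing InsightFace from PyPI into that environment may work instead:
python -m pip install insightface onnxruntime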
Other Useful Links:
===================
Avaturn Website for creating the 3D-Avatar:
avaturn.me
Juggernaut SDXL-Lightning model:
civitai.com/models/133005?mod...
Sketchfab, for downloading 3D-Scenes:
sketchfab.com/search?type=models
Get the Pinokio browser for running FaceFusion and DreamTalk (and many more AI apps):
pinokio.computer
Get ComfyUI:
github.com/comfyanonymous/Com...
Get Blender:
www.blender.org/download/
ElevenLabs, where I created the short voice sample for the avatar lip-sync:
elevenlabs.io/
How to create Controlnet Images/Animations in Blender:
• Animated ControlNets u...
(Well, you don't really need to watch my tutorial on this topic, because the ControlNet images needed are included in the Images.zip download above... but why not!)
#stablediffusion #comfyui #sdxl #animatediff #ipadapter #faceid #avaturn #blender #pinokio #facefusion #dreamtalk

Published: 12 Jul 2024

Comments (5):
=============
@yuvish00 (3 months ago):
Hi, after installing the missing nodes I am still getting some red nodes like "Prep Image For Insight Face" :(
@GoldmaniaBella (2 months ago):
Thanks for this detailed video. One question: what minimum GPU is needed for the complete workflow, and which of the individual steps need the most GPU RAM?
@GggggQqqqqq1234 (3 months ago):
GREAT VIDEO!!! THANK YOU.
@wagnerfreitas3261 (3 months ago):
Brilliant, thanks.
@becky8194 (3 months ago):
Great work!! Thanks! One question: what's the difference between the video generated from Blender and the one using ControlNet?