
Reimagine Any Image in ComfyUI 

How Do?
1.1K subscribers
15K views

This video provides a guide for recreating and "reimagining" any image using Unsampling and ControlNets in ComfyUI with Stable Diffusion.
Links
Custom Workflow: comfyworkflows.com/workflows/...
ControlNet Aux Preprocessors: github.com/Fannovel16/comfyui...
ComfyUI Noise Nodes: github.com/BlenderNeko/ComfyU...
Stable Diffusion 1.5: huggingface.co/runwayml/stabl...
Improved VAE: huggingface.co/stabilityai/sd...
ControlNet Models: huggingface.co/comfyanonymous...
Chapters
0:00 Overview
0:15 Installing Custom Nodes
1:28 Workflow Walkthrough
4:23 Recreating an Image
5:00 Reimagine an Image (Alter Subject)
7:20 Reimagine an Image (Alter Style)
9:18 Wrap Up
*Narration created using Elevenlabs' SpeechToSpeech Synthesis

Hobbies

Published: 3 Jul 2024

Comments: 104
@abdellahla6159 4 months ago
You can't imagine how much this tutorial has helped me. Thank you so much ❤
@breathandrelax4367 6 months ago
Keep it going! Pretty clear and smooth. I say thank you! 🙏
@edwhite207 6 months ago
Great workflow and excellent demo of it. I used your workflow to convert the cartoon back to a photo.
@HowDoTutorials 6 months ago
Thank you! I love how versatile it is. You can do a lot of different things by switching out preprocessors + controlnet models, adjusting the controlnet strength, and playing with the cfg/guidance. I have another video coming out soon that uses this workflow to recolor and restore old images. (Thanks to @uk3dcom for the idea.)
@NeonXXP 5 months ago
Brilliant, I'm new to ComfyUI and this is really interesting.
@snordtjr 4 months ago
Very cool, will try this out
@MultiSunix 3 months ago
This is great and helpful, thank you!
@PallaviChauhan91 5 months ago
Amazing workflow and exceptional teaching style. You explained your workflow really well; even a beginner like me understood it, and it worked on my images :)
@HowDoTutorials 5 months ago
Thanks for watching and for the kind words!
@aliyilmaz852 3 months ago
Thanks for the great explanation; I hope you make more videos like this.
@alecubudulecu 6 months ago
Pretty cool. Just stumbled on your channel and I LOVE how you explain things. Good job!
@HowDoTutorials 6 months ago
Thanks for watching!
@AgustinCaniglia1992 6 months ago
Wow, this is amazing
@jensenkung 6 months ago
Halfway through the video I thought it was just another workflow, but when you started removing the beard, you got me there... I never thought this workflow could be so powerful...
@HowDoTutorials 6 months ago
Thanks for sticking with it! I was pretty excited when I figured this out and it's been nice to share the discovery.
@GetawayFilms 1 month ago
After following along, I literally paused the video to comment. But I am a little late to the party. Anyway, when the beard actually disappeared, and you could see the stubble of a clean shave, I knew this was going to become the starting point of a great workflow
@mssuxmyass 4 months ago
I'm new to ComfyUI; workflows like this are the reason I switched from Automatic1111... now I just need a workflow to replace Deforum... Thank you for sharing this, though! Seeing others' work inspires me.
@pn4960 6 months ago
This is excellent! Thank you for sharing.
@HowDoTutorials 6 months ago
Happy to share. Thanks for watching!
@uk3dcom 6 months ago
I have been looking for a way to keep the original image and yet apply a style (I'm using IPAdapter rather than a prompt), and your workflow is helping me get there. In case you didn't realise, it makes a very good B/W-to-colour converter: add a B/W image, then add "color" to the destination prompt, and boom, a nice photo coloriser. Thanks for sharing. 😁
@HowDoTutorials 6 months ago
Happy you found it useful! I'll have to play around with the colorization a bit, that's a great idea!
@joelheath8456 6 months ago
Very good stuff, subbed and look forward to more videos!
@HowDoTutorials 6 months ago
Thanks Joel!
@joelheath8456 6 months ago
One thing I would add as a word of caution (to viewers like myself) is to make sure that you have added the correct location for the ControlNet models to the extra_model_paths.yaml if you are using 1.5 (and have used A1111 before Comfy). I was trying to use SDXL ControlNet models and had to suss it out for myself, as the 1.5 models weren't added properly! @@HowDoTutorials
@HowDoTutorials 6 months ago
Good call!
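To make the extra_model_paths.yaml caution above concrete, here is a minimal sketch (not from the video or the workflow) that lists what ComfyUI would pick up as ControlNet models from that file, so an SD1.5/SDXL mix-up is easy to spot before loading the workflow. It assumes PyYAML is installed, that the file sits in the ComfyUI root, and that it uses the usual base_path/controlnet keys from the bundled example template; adjust the path and keys for your install.

```python
# Sanity check for extra_model_paths.yaml: print the ControlNet folder(s)
# each section points at, plus the model files found there.
from pathlib import Path
import yaml

YAML_PATH = Path("ComfyUI/extra_model_paths.yaml")  # hypothetical location; adjust for your install

config = yaml.safe_load(YAML_PATH.read_text()) or {}
for section, entries in config.items():
    if not isinstance(entries, dict):
        continue
    controlnet_rel = entries.get("controlnet")
    if controlnet_rel is None:
        continue
    controlnet_dir = Path(entries.get("base_path", "")) / str(controlnet_rel).strip()
    print(f"[{section}] ControlNet folder: {controlnet_dir}")
    if controlnet_dir.is_dir():
        models = sorted(controlnet_dir.glob("*.safetensors")) + sorted(controlnet_dir.glob("*.pth"))
        for model in models:
            print("   ", model.name)  # confirm these match your checkpoint (SD1.5 vs SDXL)
    else:
        print("    (folder not found - double-check the path)")
```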
@fancypantzz 5 months ago
Great tutorial. Subscribed. I appreciate your clear diction and explanation.
@HowDoTutorials 5 months ago
Thank you!
@RuinDweller 3 months ago
After I discovered ComfyUI, my life changed forever. It has been a dream of mine for 5 years now to be able to run models and manipulate their latent spaces locally. ...But then I discovered just how hard it is for a noob like me to get a lot of these workflows working - at all - even after downloading and installing all of the required models, in the proper versions, with all of the nodes loaded and running together normally. This was one of about 3 that actually worked for me, and it is BY FAR my favorite one. I downloaded it as a "color restorer" and it works beautifully for that purpose, but I was so excited to see it featured in this video, because it already works for me! Now I can unlock its full potential, and it turns out all I needed were the proper prompts! THANK YOU so much for making these workflows and these video tutorials; I can't tell you how much you've helped me! If you ever decide to update any of this to utilize SDXL, I am so on that...
@HowDoTutorials 3 months ago
I loved reading this comment and I'm so happy I could help make this tech a bit more accessible. Here's a version of the "Reimagine" workflow updated for SDXL: comfyworkflows.com/workflows/4fc27d23-faf3-4997-a387-2dd81ed9bcd1 You'll also need these additional controlnets for SDXL: huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank128 Have fun and don't hesitate to reach out here if you run into any issues!
@RuinDweller 3 months ago
@@HowDoTutorials I thought I had already responded to this, but apparently I didn't! Anyway THANK YOU for posting the link to that workflow! It's running, but I can't get it to colorize any more, which was my main use for it. :( Oh well, it can still edit B/W images, and then I can colorize them in the other workflow, but I would love to be able to do both things in one. I can colorize things, but not people. I've tried every conceivable prompt. :(
@HowDoTutorials 3 months ago
@@RuinDweller I've been having trouble getting it to work as well. Seems there's something about SDXL that doesn't play with that use case quite as well. I'll keep at it and let you know if I figure something out.
@Radarhacke 4 months ago
Magic! Thank you! I added a HiRes Fix instead of your image upscaler, so I can upscale my low-res photos to much better quality.
@solidkundi 5 months ago
Thank you thank you thank you!!!
@leskuelbs9558 6 months ago
Very nice. I am interested in experimenting with this together with the BLIP node.
@HowDoTutorials 6 months ago
I’d be interested to see what you come up with. I was thinking it’d be nice to have images auto-described, but never got around to including it.
@neamedia 6 months ago
A custom Magnific & Krea alternative right in our local SD 💎 - thank you!
@icedzinnia 6 months ago
Useful for when I correct/modify generated images in a PSD and then need to apply Stable Diffusion metadata to the new concoction so I can generate again with the original control.
@strong8705 6 months ago
Some Manager updates, some manual installs, some reading field values off the video clip, and it worked.
@HowDoTutorials 6 months ago
Glad you got it to work! I'm planning on putting together a video to cover common troubleshooting tips that will hopefully help make things smoother in the future.
@bobwinberry 1 month ago
Thanks for your videos! They worked great, but now (due to updates?) this workflow no longer works; it seems to be lacking the BNK_Unsampler. Is there a workaround for this? I've tried, but aside from stumbling around, this is way over my head. Thanks for any help you might have, and thanks again for the videos - well done!
@8561 6 months ago
Awesome workflow and video!! One addition to the workflow that would be awesome would be some nodes that take the source image dimensions and can decrease (divide) the output image's pixels while keeping the aspect ratio. A lot of my images are over 1,024 x 1,024 px and it runs very slowly on my M2 Pro with 16 GB RAM. I'm sure these nodes exist; would you have any suggestions for where they would plug in?
@HowDoTutorials 6 months ago
Good idea! You can use the Image > Upscaling > Upscale Image By node to resize the image while maintaining the aspect ratio. Despite the node’s name, it can be used to reduce or increase the resolution.
@Blackkspot 5 months ago
I also have "AIO_Preprocessor failed to load". Do I need to install ControlNet as a whole before the Aux Preprocessors, or is that enough? I've just installed ComfyUI from scratch. I'm getting "ImportError: DLL load failed while importing cv2: The specified module could not be found." in the ComfyUI loader.
@HowDoTutorials 5 months ago
If you are running an N version of Windows, you may need to install the Windows Media Feature Pack. Someone else had a similar issue with A1111: www.reddit.com/r/StableDiffusion/comments/14sa3u0/ive_reinstalled_windows_and_all_dependencies/
@RonnieMirands 6 months ago
Man, thanks a lot for your time preparing and explaining this amazing tutorial! I just wanna ask you: what slider must I play with to get a less customized output?
@RonnieMirands 6 months ago
I meant an output more like the original, because I felt my output is not like yours; it's a much more stylized image.
@HowDoTutorials 6 months ago
Thank you for watching and for the kind words! You’ll want to adjust the cfg value in the KSampler (Advanced) node - the purple one just to the left of the Destination Image. A value of 1 works well for very subtle changes.
@RonnieMirands 6 months ago
@@HowDoTutorials wow, that was a fast reply. Thanks a lot again, I will try to play with those values :D
@3dcgphile 5 months ago
What is the technical difference between the latent output from the Unsampler and the latent output from a VAEEncode?
@HowDoTutorials 5 months ago
VAEEncode simply converts the image to latents. In order to do img2img you then need to add noise, which is random and not influenced by the base image. Unsampling essentially runs the sampler in reverse, generating an approximation of the noise that would create the image. Using that noise to generate an image with the same sampler used in unsampling then results in a much closer match to the original image, while also allowing much more flexibility since it's starting from pure noise.
@3dcgphile 5 months ago
@@HowDoTutorials I think I understand, now. Thank you very much for taking the time to explain that, and for making the video. Subscribed
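For anyone who wants to see the distinction explained above in miniature, here is a toy sketch in plain NumPy (an illustration, not the actual ComfyUI nodes or a real diffusion sampler): a trivially invertible "sampler" stands in for the real one, which makes it easy to see why noise recovered by unsampling can retrace the original image while freshly added random noise cannot.

```python
# Toy contrast: VAEEncode latent + random noise vs. unsampled (image-derived) noise.
import numpy as np

rng = np.random.default_rng(0)

def vae_encode(image):
    # Stand-in for VAEEncode: image -> latent (identity here).
    return image.astype(np.float32)

def unsample(latent, steps):
    # Run the toy sampler in reverse: recover the "noise" that would
    # have produced this latent. Deterministic and image-derived.
    x = latent.copy()
    for _ in range(steps):
        x = x * 1.1          # exact inverse of the sampling step below
    return x

def sample(noise, steps):
    # Forward sampling with the same toy sampler.
    x = noise.copy()
    for _ in range(steps):
        x = x / 1.1
    return x

image = rng.random((4, 4))
latent = vae_encode(image)

# img2img path: latent plus random noise that knows nothing about the image.
img2img_start = 0.2 * latent + 0.8 * rng.standard_normal(latent.shape)

# Unsampling path: "noise" reconstructed from the image itself.
unsampled_noise = unsample(latent, steps=10)
recreated = sample(unsampled_noise, steps=10)

print("random-noise start vs. latent:", np.abs(img2img_start - latent).max())  # large
print("resampled latent vs. latent:  ", np.abs(recreated - latent).max())      # ~0
```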
@siriusgamez2758 6 months ago
Is there any issue with the aux ControlNet preprocessors on Mac? I have issues installing them through the Manager.
@burnhardo 6 months ago
same
@HowDoTutorials 6 months ago
Sorry it’s giving you trouble. I unfortunately don’t have a ComfyUI capable Mac to test it myself, and I’m having trouble finding info on it, but I’ll keep looking into a solution.
@StrengthCoachFelix 6 months ago
Might you know why the unsampler runs very slowly? Could be I just need to restart the server but it's been 20 minutes and it's on step 13 :/
@StrengthCoachFelix 6 months ago
Figured it out; I was using a 4 MB photo.
@HowDoTutorials 6 months ago
Glad you got it working! The limit is different depending on the GPU being used, but usually best to keep the resolution to a max of 2048x2048.
@hc5843 6 months ago
Hi, I just downloaded the node, but my comfyui says that when loading this graph, the AIO preprocessor was not found. Could you tell me how to fix this ?
@HowDoTutorials 6 months ago
You'll need the ControlNet preprocessors. If you have ComfyUI Manager installed, you should be able to install them by opening up the Manager and clicking "Install Missing Nodes". Otherwise you can find the original repo here: github.com/Fannovel16/comfyui_controlnet_aux. If you get stuck, try checking this video out as well: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-guqwYQNWTuk.htmlsi=xMLTzqF-X8dDnN2i
@swannschilling474 6 months ago
Great tutorial!! What's that voice, Elevenlabs or another app?
@HowDoTutorials 6 months ago
Elevenlabs speech2speech. I just record the video with my normal voice and then convert it, so it keeps all the same pauses, inflections and such but I get this nice sounding voice with way better quality than my mic has.
@swannschilling474 6 months ago
@@HowDoTutorials Oh, that's a good idea, better than converting TTS... because yours is very good to listen to! ☺
@HowDoTutorials 6 months ago
Yeah I’ve found that, as good as TTS has gotten, it still sounds a little lifeless. This injects some life into it, but you still get a high quality voice/recording.
@Queenbeez786 6 months ago
Do you have a Discord or something? I have a few questions. Also, when I click Install Missing Nodes/Models there's a long list of things to install... should I install all of them?
@HowDoTutorials 6 months ago
No discord at the moment, but I may get one going soon. Currently working on a video that walks through the install process, including custom nodes, which should help make things clearer. The workflow only has a few custom nodes, so there shouldn’t be too much extra to install.
@HowDoTutorials 6 months ago
Hopefully this helps a bit: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-guqwYQNWTuk.html. Feel free to drop any questions you have in the comments.
@ericbarr734 6 months ago
Could you share the original image? I want to make sure I've got everything set up right, but when I pulled in your workflow it grabbed the cartoon image as the input image. I tried to convert that back to a realistic face, but it didn't go very well.
@HowDoTutorials 6 months ago
Apologies for the delayed response! Here's the original: drive.google.com/file/d/1zGFIT7D2QNV7RaI1s2Ztgm6tg-4Gp_2w/view?usp=sharing.
@ericbarr734 6 months ago
@@HowDoTutorials awesome thanks! The tutorial is great, I'm excited to try this stuff out!
@Queenbeez786 6 months ago
damn boy
@HowDoTutorials 6 months ago
😁
@julioluengo1128 5 months ago
I get stuck at KSamplerAdvanced with this error: 'ModuleList' object has no attribute '1'. It happens even if I execute your actual .json untouched. Any thoughts?
@allthingsai8166 5 months ago
Check whether you have all the necessary models locally and in the proper directories. Also, can you share the JSON?
@HowDoTutorials 5 months ago
After a bit of digging it looks like it may be one of the following: 1. Since ComfyUI and third-party models are constantly being developed, things can break at times, and you may have installed at a moment when something was broken. Try using ComfyUI Manager to update and see if that fixes things. OR 2. There's a mismatch between the ControlNet model and the checkpoint, such that one is for SDXL and the other is for 1.5. Double-check the checkpoint you're using and make sure it aligns with the ControlNets you have set.
@Moony_ultimate 6 months ago
Could this workflow be used to recreate the image and increase the resolution of an image? (Upscaler)
@HowDoTutorials 6 months ago
It can! You’ll just want to add an upscaler after the Destination Image. I actually have an updated version of this workflow that includes an upscaler that I’ll be sharing to comfyworkflows today. I’ll come back here and drop the link when it’s uploaded.
@HowDoTutorials 6 months ago
Here's the updated version with upscaling. I also changed the preprocessor nodes to make it easier to switch between preprocessors - just make sure you update the ControlNet model to match. comfyworkflows.com/workflows/94b32ebe-e2be-4f44-b341-bc4793fe4941?version=83c77f1d-d16b-41b2-b581-1fb8d596ed0d
@SixFt12 5 months ago
AIO_Preprocessor failed to load and failed to import when using the manager to install missing custom nodes. Is there an alternative for this? How do I download it if there's an alternative? Thanks for any help on this.
@HowDoTutorials 5 months ago
Seems like this is a common issue. I haven't gotten a chance to troubleshoot it yet, but here is something to try: rather than "Install Missing Custom Nodes", try going into "Install Custom Nodes", searching for "ComfyUI's ControlNet Auxiliary Preprocessors" and installing it directly.
@SixFt12 5 months ago
@@HowDoTutorials Thanks for the response. I did try that though and got the same errors after restarting.
@SixFt12 5 months ago
@@HowDoTutorials After much trial and error, I discovered ReActor Node for ComfyUI conflicts with ComfyUI's ControlNet Auxiliary Preprocessors. Once I disabled ReActor Node, it worked.
@bobbyboe 6 months ago
Very interesting approach, to unsample. I am looking for a way to prompt a figure into a given environment (image). Do you have an idea for me? For example, let's say you load a picture of a park with an empty bench and make the AI generate a person sitting on the bench by using a text prompt.
@HowDoTutorials 6 months ago
I’d say the best approach there would be inpainting. Combining with controlnet (like OpenPose) may yield even better results. Luckily enough, that’s actually one of the videos I’m working on and I should have it released this weekend some time.
@bobbyboe 6 months ago
@@HowDoTutorials thanks for your answer. I already have inpainting in my workflow... but for this task I was looking for something more "conceptual"... I would like the model to introduce my figure (prompt) "by itself" into my background... I don't want to position it exactly... so I hope for a figure that best fits the environment. I wonder if your unsampling method can help with it; I was also thinking about Revision or even IPAdapter... It is difficult to communicate what I mean... but in the hope that among ComfyUI creators we understand certain things, I would express it as: I don't want to inpaint the figure at a certain position, but to "inthink" it into the scene.
@HowDoTutorials 6 months ago
@@bobbyboe That's an interesting concept! I'll have to play around with it a bit and see if I can come up with anything. I think this workflow could, in theory, do something like that, but it might work better without the controlnets.
@gregb0t 6 months ago
I checked the unsampling, and it seems it doesn't work with SDXL. Do you know about that? I didn't find it anywhere. Perhaps it doesn't work only because of the ControlNet models, which are 1.5. Thanks for this video, you got a new follower :-)
@HowDoTutorials 6 months ago
Yeah this workflow is only set up for 1.5 based models. I did create a SDXL version, but the results were consistently worse than the 1.5 version so I never shared the workflow. Thanks for watching!
@TerrylolzBG 6 months ago
Where can I contact you for business inquiries? We're working on a project and your skillset might be a good fit.
@HowDoTutorials 6 months ago
I'd love to learn more. Feel free to send a message to howdotutorials@gmail.com.
@soljr9175 10 days ago
Your workflow link doesn't work. It would have been nice if you had included it on Hugging Face.
@photoelixir 6 months ago
Any idea why am I getting black images instead of the expected results?
@HowDoTutorials 6 months ago
It’s hard to pinpoint the cause exactly as it could be several different things. First thing I’d check is to make sure the photo resolution doesn’t have any dimensions smaller than 512 or larger than 2048 with 768-1024 being the sweet spot. If it keeps giving you trouble you can try following this guide to Install ComfyUI from Scratch ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-guqwYQNWTuk.html
@Kaleubs 6 months ago
I'm having the same issue, and changing the photo dimensions didn't help much. Appreciate your support and the new video you just posted @@HowDoTutorials
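As a practical follow-up to the resolution advice above, here is a small pre-flight sketch (not from the video) that checks a source image against the suggested range and, if needed, rescales it toward the sweet spot while keeping the aspect ratio. It assumes Pillow is installed; the file names are placeholders.

```python
# Pre-flight resolution check for source images before unsampling.
from PIL import Image

MIN_SIDE, MAX_SIDE = 512, 2048   # limits mentioned in the replies above
TARGET = 1024                    # aim the longest side at the sweet spot

def prepare(path_in, path_out):
    img = Image.open(path_in)
    w, h = img.size
    if MIN_SIDE <= min(w, h) and max(w, h) <= MAX_SIDE:
        print(f"{w}x{h} is already in range")
        img.save(path_out)
        return
    scale = TARGET / max(w, h)   # extreme aspect ratios may still need cropping
    new_size = (max(1, round(w * scale)), max(1, round(h * scale)))
    img.resize(new_size, Image.LANCZOS).save(path_out)
    print(f"resized {w}x{h} -> {new_size[0]}x{new_size[1]}")

prepare("input.png", "input_resized.png")  # placeholder file names
```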
@Andee... 6 months ago
ControlNets seem to deep-fry my image for some reason
@HowDoTutorials 6 months ago
I’ve noticed this can happen when image resolution is below 512. You might also try lowering the guidance a bit.
@introvert64978 6 months ago
How do I install ControlNet on Mac? Please, can anyone tell me or suggest a tutorial?
@HowDoTutorials 6 months ago
Installation on Mac isn't quite as easy, but it is possible (at least for Apple silicon-based Macs). Follow the instructions here to get going: github.com/comfyanonymous/ComfyUI#apple-mac-silicon. Best of luck!
@burnhardo 6 months ago
no problem with ComfyUI - just controlnet? @@HowDoTutorials
@entertainmentchannel9632 4 months ago
I get this error (AttributeError 16 KSamplerAdvanced): ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__ raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") AttributeError: 'ModuleList' object has no attribute '1'
@HowDoTutorials 4 months ago
It's likely you are attempting to use an SDXL checkpoint with SD1.5 control nets. To fix, either switch to a sd-1.5 based checkpoint or use control net models for SDXL. You can find links to the SDXL controlnets here: huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/controlnet_sdxl
@SumNumber 3 months ago
This is cool but it is just about impossible to see how you connected all these nodes together so it did not help me at all. :O)
@HowDoTutorials 3 months ago
Yeah I’ve been working on making things a little easier to parse going forward. There’s a link to the workflow in the description if you want to load it up and poke around a bit.
@goactivemedia 2 months ago
When I run this I get: "The operator 'aten::upsample_bicubic2d.out' is not currently implemented for the MPS device"?