ControlAltAI
Welcome to ControlAltAI. We provide tutorials on how to master the latest AI tools like Stable Diffusion, MidJourney, Blue Willow, ChatGPT, and everything AI, basically.

Our channel provides simple, practical tutorials and information that are easy to understand for everyone, whether you're a tech enthusiast, a developer, or someone curious about the latest advancements in AI.

We are committed to sharing our knowledge and expertise with our viewers. So if you're looking to stay informed on the latest AI tools and news and to expand your knowledge, subscribe to our channel.
ComfyUI: IP Adapter Clothing Style (Tutorial)
19:01
4 months ago
ComfyUI: Face Detailer (Workflow Tutorial)
27:16
5 months ago
ComfyUI: IP Adapter Workflows (Tutorial)
27:27
5 months ago
ComfyUI: Image to Line Art Workflow Tutorial
43:52
6 months ago
InvokeAI 3.30 for Stable Diffusion (Tutorial)
23:57
7 months ago
ComfyUI ControlNet Tutorial (Control LoRA)
39:29
7 months ago
ComfyUI: Upscale any AI Art or Photo (Workflow)
11:39
9 months ago
Comments
@ignaciofaria8628 14 hours ago
Anyone else struggling with the command python -m pip install inference==0.9.13: try using py -m pip install inference==0.9.13 instead.
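For context, a minimal sketch of the difference (the Windows py launcher resolves a registered Python install even when python is not on the PATH):

    :: fails if "python" is missing from PATH or points at the wrong install
    python -m pip install inference==0.9.13
    :: the py launcher locates a registered Python install directly
    py -m pip install inference==0.9.13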
@goodie2shoes 1 day ago
Amazing tutorial. I need a couple of viewings to take it all in because there is so much useful information!
@rei6477 1 day ago
I tried attention masking again, similar to what you showed in this video (not the same, because of the IP Adapter update), but when I generated a wide horizontal image with a mask applied to the center, I only got borders on the sides and the background didn't expand to fill the entire image size. Has this technique stopped working after an update, or could there be a mistake in my node setup? Would you mind checking this for me? 10:13
@controlaltai 1 day ago
Sure, email me the workflow and I will have a look: mail @ controlaltai . com (without spaces)
@rei6477 1 day ago
@@controlaltai Sorry, I was using an anime model (anima pencil), which is why it only output images with the background cropped out. When I switched to Juggernaut it worked correctly! Sorry for the hasty comment, and thank you for going out of your way to provide your email and offering to help.
@petrino 2 days ago
Error occurred when executing Yoloworld_ESAM_Zho: cannot import name 'packaging' from 'pkg_resources' (C:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu 1\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py)
@petrino 2 days ago
This is also after I followed this step. Command to install Inference: python -m pip install inference==0.9.13 and python -m pip install inference-gpu==0.9.13
@controlaltai 11 hours ago
I am looking at this. I don't think it's an inference problem; it's something to do with the Python version or the latest Comfy update. Will get back to you if I find a solution.
@theiwid24 3 days ago
Nice work, mate!
@itanrandel4552 3 days ago
Excuse me, do you have any tutorial on how to make a batch of multiple depth or softedge images per image?
@controlaltai 2 days ago
You just connect the Load Image node with the required preprocessors and the save nodes.
@nocnestudio7845 4 days ago
Great tutorials. Good job 💪... 28:12 is a very heavy operation. One frame takes forever...
@JustAI-fe9hh 4 days ago
Thank you for this wonderful video!
@stijnfastenaekels4035 5 days ago
Awesome tutorial, thanks! But I'm unable to find the Visual Area Composition custom node when I try to install it. Was it removed?
@controlaltai 5 days ago
Thanks, and no, you can find it here: github.com/Davemane42/ComfyUI_Dave_CustomNode
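For readers who also can't find it in the Manager, a manual install along these lines should work (a sketch, assuming a standard ComfyUI folder layout and git on the PATH):

    :: from the ComfyUI root folder
    cd custom_nodes
    git clone https://github.com/Davemane42/ComfyUI_Dave_CustomNode
    :: restart ComfyUI so the node registers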
@cyberspider78910 5 days ago
Brilliant, no-fuss work. Keep it up, bro. With this quality of tutorials, you will outgrow any major channel...
@Senpaix3 5 days ago
It's installing Optimum on mine, and it's been stuck for a while now. What should I do?
@divye.ruhela 8 days ago
Wow, the details are unreal! Trying this for sure and reporting back!
@sup2069 8 days ago
Mine only has 2 tabs. How do I enable the 3rd tab? It's missing.
@jamesharrison8156 8 days ago
A wonderful tutorial! I learned a lot more about ComfyUI. Thank you so much for taking the time to create this. Also, showing how you created the workflow and following along myself works much better as a learning tool.
@controlaltai 7 days ago
Thank you!!
@ai_gene 9 days ago
Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow but using IPAdapter FaceID to be able to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are Conditioning, while the outputs from IPAdapter FaceID are Model. How can I solve this problem? I would appreciate any help.
@controlaltai 9 days ago
Okay, but the area conditioning in this tutorial is not designed to work with IP Adapter; that's a very different workflow. We have not covered placing a specific person in the frame in a tutorial, but it involves masking in the person, then using IC-Light and a bunch of other things to adjust the lighting to the scene, processing it through sampling, and then changing the face again.
@ai_gene 9 days ago
@@controlaltai Thank you for your response! 😊 It would be great if you could create a tutorial on this topic. I'm trying to develop a workflow for generating thumbnails for videos. The main issue is that SD places the person's face in the center, but I would like to see the face on the side to leave space for other information on the thumbnail. Your tutorial was very helpful for composition, but now I need to figure out how to integrate a specific face. 😅
@controlaltai 9 days ago
Unfortunately, due to an agreement with the company that owns the InsightFace tech, I cannot publicly create any face-swapping tutorial for YouTube. Just search for ReActor and you should find plenty on YouTube. I am only restricted for public education, not for paid consultations or private workflows. (For this specific topic.)
@Noobinski 10 days ago
That was extremely helpful indeed. Thank you for showcasing how to do it. Not many do (or even know what they're talking about).
@controlaltai 10 days ago
Thank you!!
@user-ey3cm7lf1y 11 days ago
Thanks for the excellent video. However, I wonder why my BLIP Analyze Image node is different from the one in the video. Also, in my BLIP Loader there is no model named "caption". I already downloaded everything in the Requirements section.
@controlaltai 10 days ago
BLIP was recently updated. Just add the new BLIP node and model and use whatever it shows there. These things are normal. Ensure Comfy is updated to the latest version along with all custom nodes. Only the "caption" model will not work anymore.
@user-ey3cm7lf1y 10 days ago
@@controlaltai So I applied your workflow JSON, but when I click Queue Prompt I get an "allocate on device" error. However, if I check it and then click Queue Prompt again, it works fine without any errors. I searched for the "allocate on device" error related to ComfyUI, but my error log was different from the search results: it only mentions "allocate on device" without any mention of insufficient memory, and below that it shows the code, whereas other people's error logs mention insufficient memory. Despite this, could my error also be a memory issue?
@controlaltai 10 days ago
An "allocate on device" error means running out of VRAM or system RAM. If you can tell me your VRAM and system RAM, the size of the image you are trying to fix the face for, the box settings, and anything else you are trying to do, I can guide you on how to optimize the settings for your system specs.
@user-ey3cm7lf1y 10 days ago
@@controlaltai This is my system when I run the ComfyUI server:
Total VRAM 6140 MB, total RAM 16024 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4050 Laptop GPU : cudaMallocAsync
VAE dtype: torch.bfloat16
I'm using the Fix_Faces_Extra workflow JSON, and my images are JPG files under 1 MB. The process stops at the FaceDetailer node. I think I should optimize the FaceDetailer settings. Thanks
@controlaltai 10 days ago
Image size does not matter; resolution does. Change the FaceDetailer setting from 1024 or 768 to 256. That should work for you. Try that.
@user-pg9wy3qn4c 11 days ago
Does this method work with videos?
@controlaltai 10 days ago
It does indeed in my testing, but the workflow is way, way different. I took a plane take-off video and removed the plane completely, then reconstructed the video. I did not include it in the tutorial as it was becoming too long.
@AlanLeebr 11 days ago
With your method, is it possible to create a 15- or 30-second scene?
@controlaltai 10 days ago
With the clip extension method, yes indeed.
@user-ey3cm7lf1y 12 days ago
I tried to install inference==0.9.13, but I got an error. Should I downgrade my Python version to 3.11?
@controlaltai 12 days ago
I suggest you back up your environment, then downgrade. It won't work unless you're on 3.11.
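A minimal sketch of that downgrade path on Windows, assuming Python 3.11 is installed alongside the current version:

    :: list the Python versions the launcher can see
    py -0
    :: create and activate a fresh 3.11 virtual environment
    py -3.11 -m venv venv
    venv\Scripts\activate
    :: install inference into the 3.11 environment
    python -m pip install inference==0.9.13 inference-gpu==0.9.13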
@user-ey3cm7lf1y 11 days ago
@@controlaltai Thank you, I solved the problem on 3.11.
@swipesomething 12 days ago
3:37 After I installed the node, I had the error "cannot import name 'packaging' from 'pkg_resources'". I updated the inference and inference-gpu packages and it was working, so if anybody has the same error, try updating inference and inference-gpu.
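For reference, the fix described above, plus a commonly cited workaround (an assumption on my part: this import typically breaks because newer setuptools releases dropped the vendored packaging module from pkg_resources):

    :: update the packages that trip over pkg_resources
    python -m pip install --upgrade inference inference-gpu
    :: alternative workaround: pin setuptools to a version that
    :: still ships pkg_resources.packaging
    python -m pip install "setuptools<70"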
@ApexArtistX 13 days ago
Best tutorial... all the other shitubers sell their workflows on Patreon, greedy bastards...
@ApexArtistX 14 days ago
Too many processes, don't try it out. Wait for the final release... stupid CUDA 11 build.
@controlaltai 14 days ago
There is a better one released; planning a tutorial for that. It's more stable. It's called MusePose.
@ApexArtistX 13 days ago
@@controlaltai Thanks for the workflow tutorial... best YouTuber.
@Nrek_AI 14 days ago
You've given us so much info here. Thank you so much! I learned a lot.
@TuangDheandhanoo 15 days ago
There's too much info in the middle. You lost me when you were doing the upscale, downscale, and setting up the switches and such. I think setting things up neatly is nice, but it's personal preference, and in this case it's not about SUPIR at all. Let's say I only want to know about upscaling 2X: I would have to scrub your video back and forth to trace your switch connections and, oh yeah, where does that height and width go again???
@controlaltai 15 days ago
Well, watch the video right to the end, where the techniques are used, especially the cases where downscale and upscale are applied repeatedly to upscale a single photo. The switches were not added for personal preference only; you need them to show some of the SUPIR techniques. You can, however, skip ahead to the next section: don't add the switch, only the 2x upscale, and connect the height and width from the bottom to the upscale factor input, wherever the switch connects to.
@niemamnicka 15 days ago
In the current version of the web UI (v1.9.3-4), fixing xformers (from 13:00) using the "Xformers Command" breaks the environment, resulting in the popup windows again (entry point). I went back to the step where you remove the venv and the extension, then skipped installing xformers with the command in (venv) and instead added the --xformers argument to webui-user.bat; this installed the correct version for me after running it. The web UI starts without errors, and the versions as of today are: version: v1.9.3-4-g801b72b9 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2. Cheers
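For reference, the stock A1111 webui-user.bat with the flag the commenter describes looks roughly like this (a sketch; on first run the launcher then installs a matching xformers build):

    @echo off
    set PYTHON=
    set GIT=
    set VENV_DIR=
    :: let the launcher install and enable a matching xformers build
    set COMMANDLINE_ARGS=--xformers
    call webui.bat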
@Artishtic 16 days ago
Thank you for the object removal section
@Tecturon 16 days ago
1. In Load CLIP Vision you're loading SDXL\pytorch_model.bin. What model would that be as of today? The ComfyUI Manager does not show this model. I figured it should be CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors, which should fit the IPAdapter model ip-adapter_sdxl.safetensors. Is this correct?
2. Also, IPAdapter had an update rendering the "Apply IPAdapter" node deprecated. I'm using IPAdapter Advanced now, with weight_type ease in-out (found a comment by you concerning this), but what about the other parameters? combine_embeds=concat, start_at=0.0, end_at=1.0, embeds_scaling=V only?
Problem: the KSampler gets a lineart image from the ControlNet group (black lines, white background), but the result is always a black image, with no error message involved. Why would the image generation only create black images?
@controlaltai 16 days ago
For the ip-adapter_sdxl.safetensors model you should use CLIP-ViT-bigG-14 as the CLIP Vision model, as you correctly said. Black images mean a VRAM issue. For IPAdapter Advanced: weight type ease in-out, no other changes.
@Tecturon 11 days ago
@@controlaltai Thanks for your reply! I solved the presumed VRAM issue by switching from Windows to Linux (for that purpose). Getting my RX 6750 XT running was not trivial, but I finally got it working. I ran the exact workflow that failed under Windows and finally got a result. Thanks for pointing out the possible reason for the failure, as it indicated a configuration problem. Your hint motivated me to finally move to Linux for SD generations. It's faster, too.
@valorantacemiyimben 17 days ago
Hello, thanks a lot. Can we add makeup to a photo we supply ourselves? How can we do that?
@controlaltai 17 days ago
Yes, load it in image-to-image instead of text-to-image.
@markmanburns 17 days ago
Amazing tutorial. So much value from the time invested to watch this.
@pressrender_ 18 days ago
Hey guys! Is there a way to change the output name of the image via filename_prefix? I was able to do that with %KSampler.seed%, for example, but it worked only on the filename_prefix of the Video Combine node. I can't make it work with the Save Image node. I would love to have a custom name with the model + cfg + steps, or even better, a custom node that prints that information on the images, so I could read it without opening each image in Comfy. Thanks a lot!
@controlaltai 17 days ago
Try the save node from the WAS Node Suite.
@SilverEye91 19 days ago
Heads up! The Apply IPAdapter node no longer exists; it appears the replacement is now known as "IPAdapter Advanced". A lot of the other nodes have also changed names, including Load IPAdapter Model, which is now called "IPAdapter Model Loader". That one is easy to find, but be aware that searching for the same names as in this video may not work anymore.
@semenderoranak2603 19 days ago
When I download Animate Anyone Evolved through the Manager, I get an error saying "ImportError: DLL load failed while importing torch_directml_native" after restarting.
@stephaneramauge4550 19 days ago
Thanks for the video, but slow down, by half at least!!! It's really painful to pause and rewind to see where you're clicking. It's a tutorial, not a Formula One race!
@controlaltai 13 days ago
Thank you for the feedback. Will take it into consideration.
@SyamsQbattar 21 days ago
There is no Manager button on my ComfyUI, please help.
@controlaltai 20 days ago
You have to install the Manager manually.
@SyamsQbattar 20 days ago
@@controlaltai How do I install it?
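For reference, the ComfyUI-Manager README describes a manual install along these lines (a sketch, assuming git is installed and a standard ComfyUI layout):

    :: from the ComfyUI root folder
    cd custom_nodes
    git clone https://github.com/ltdrdata/ComfyUI-Manager
    :: restart ComfyUI; the Manager button then appears in the menu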
@user-mw4jm5fx3w 21 days ago
I use a tiles node, but the generated image is still in tiles and will not be merged into a single image. What is the reason for this? Is the size of the segmented image appropriate?
@controlaltai 21 days ago
Has the process completed at the end? What are your tile size and stride values, along with your VRAM and system RAM?
@user-mw4jm5fx3w 21 days ago
@@controlaltai 4090 graphics card, 64 GB memory. I uploaded a workflow video on my homepage: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ApgTbjfFsag.html Can you take a look and help me solve the problem? I have tried sizes over 1280x1280; SUPIR cannot repair those images. If an image is above 1280, how should we handle it? Thanks
@controlaltai 16 days ago
The video does not show the last part. Do one thing: send me your workflow, because obviously there is something wrong, and I need to check it on my system. Images above 1280x1280 should not be a problem; if you have 24 GB VRAM you can upscale to 2K from that resolution. mail @ controlaltai . com (without spaces)
@Epulsenow 21 days ago
Great video, but the models are pickle files with a warning on Hugging Face! Any suggestions?
@controlaltai 21 days ago
There was no warning at the time of making the video. I don't have any other source; the one listed in ComfyUI also redirects to the same link.
@0AD8 22 days ago
Hey man, I installed Kohya and finished training the model. How can I use it now? Do I install A1111, and how do I link the model that I just trained?
@controlaltai 22 days ago
After installing A1111, put it in the checkpoint or LoRA folder, depending on whether you trained a LoRA or a checkpoint.
@0AD8 21 days ago
@@controlaltai OK, thanks. So I just place the folder named "model" in the LoRA model folder, right?
@controlaltai 21 days ago
No. The trained model will be a file; I don't know what you trained. You have to put that file in the checkpoint or LoRA folder in A1111.
@0AD8 21 days ago
@@controlaltai I trained a LoRA, so I'm guessing I have to put the JSON source file in the LoRA folder? But it's only 5 KB, is that right? Excuse me for asking so many questions, I'm new to this 😅
@controlaltai 21 days ago
@0AD8 If you trained a LoRA, Kohya SS saves it as a .safetensors file. You have to put that file in the LoRA folder of A1111.
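A sketch of the file placement being described, with hypothetical paths (the output filename and install locations will differ per setup):

    :: copy the trained LoRA from Kohya's output folder into A1111's LoRA folder
    copy "C:\kohya_ss\outputs\my_lora.safetensors" "C:\stable-diffusion-webui\models\Lora\"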
@dxnxz53 22 days ago
Great video! Is the workflow available anywhere? :)
@controlaltai 22 days ago
Thank you!! The JSON is made available to paid channel members, but you can create the workflow yourself as well. Nothing is hidden in the video; everything needed to build it is shown.
@onebitterbit 23 days ago
Amazing and useful! Thanks!!!
@suthamchindaudom6410 23 days ago
I really like your line art image generation and many other clips. You are awesome. Is there a way ComfyUI can convert a realistic portrait photo to a painterly, watercolor, or anime style without distorting the person's facial features? Could you build such a workflow, please? I wonder how beautiful a painting ComfyUI can really make from a portrait.
@controlaltai 23 days ago
Thanks. You can use CosXL Edit for such effects without any complicated workflow: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mey08xsgybE.html
@root6572 23 days ago
Why are the model files for inpainting not in safetensors format?
@EveryBeardHasAStory 24 days ago
Think I might break into someone's house and steal their PC. My generations take ages, and given that my entire life is spent in front of a computer, I find it illogical that I shouldn't have one that can run anything worthy of note.
@controlaltai 22 days ago
Just upgrade your GPU; that should be fine. These new AI tools are very GPU-hungry.
@NgocNguyen-ze5yj 24 days ago
Any chance of a ComfyUI version of this tutorial? Thanks so much
@controlaltai 24 days ago
Here: ComfyUI: Face Detailer (Workflow Tutorial) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-_uaO7VOv3FA.html
@NgocNguyen-ze5yj 23 days ago
@@controlaltai Could you please help with changing the face?
@NgocNguyen-ze5yj 23 days ago
@@controlaltai Taking another face to replace the original one?
@controlaltai 23 days ago
@@NgocNguyen-ze5yj I cannot make a tutorial for that due to some restrictions, but there are some on YouTube; search for a ReActor Comfy tutorial.
@onebitterbit 24 days ago
Best YouTube tutorial for photos... Thank you very much!
@NgocNguyen-ze5yj 24 days ago
Do you have a Gmail address for contact? I have some short videos about the workflow; maybe you can help with detailed steps and a long tutorial? Thanks