Welcome to Control Alt AI. We provide tutorials on how to master the latest AI tools like Stable Diffusion, MidJourney, BlueWillow, ChatGPT, and basically everything AI.
Our channel provides simple, practical tutorials and information that are easy to understand for everyone, whether you're a tech enthusiast, a developer, or just someone curious about the latest advancements in AI.
We are committed to sharing our knowledge and expertise with our viewers. So if you're looking to stay informed on the latest AI tools and news and expand your knowledge, subscribe to our channel.
I tried attention masking again, similar to what you showed in this video (not the same because of the IP Adapter update), but when I generated a wide horizontal image with a mask applied to the center, I only got borders on the sides and the background didn't expand to fill the entire image size. Has this technique stopped working after an update, or could there be a mistake in my node setup? Would you mind checking this for me? 10:13
@@controlaltai Sorry, I was using an anime model (Anima Pencil), which is why it only output images with the background cropped out. When I switched to Juggernaut it worked correctly! Sorry for the hasty comment, and thank you for going out of your way to provide your email and offering to help.
Error occurred when executing Yoloworld_ESAM_Zho: cannot import name 'packaging' from 'pkg_resources' (C:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu 1\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pkg_resources\__init__.py)
This is also after I followed this step, the command to install Inference:
python -m pip install inference==0.9.13
python -m pip install inference-gpu==0.9.13
I am looking at this. I don't think it's an Inference problem; it's something to do with the Python version or the latest Comfy update. Will get back to you if I find a solution.
A wonderful tutorial! I learned a lot more about ComfyUI. Thank you so much for taking the time to create this. Also, showing how you created the workflow and following along myself works much better as a learning tool.
Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow but using IPAdapter FaceID to be able to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are Conditioning, while the outputs from IPAdapter FaceID are Model. How can I solve this problem? I would appreciate any help.
Okay, but the area conditioning in this tutorial is not designed to work with IP Adapter; that's a very different workflow. We haven't covered placing a specific person in the frame in a tutorial, but it involves masking in the person, then using IC-Light and a bunch of other things to adjust the lighting to the scene, processing it through sampling, and then changing the face again.
@@controlaltai Thank you for your response! 😊 It would be great if you could create a tutorial on this topic. I'm trying to develop a workflow for generating thumbnails for videos. The main issue is that SD places the person's face in the center, but I would like to see the face on the side to leave space for other information on the thumbnail. Your tutorial was very helpful for composition, but now I need to figure out how to integrate a specific face. 😅
Unfortunately, due to an agreement with the company that owns the InsightFace tech, I cannot publicly create any face-swapping tutorial for YouTube. Just search for ReActor and you should find plenty on YouTube. I am just restricted for public education, not paid consultations or private workflows. (For this specific topic.)
Thanks for the excellent video. However, I wonder why my BLIP Analyze Image node is different from the one in the video. Also, in my BLIP Loader there is no model named "caption". I already downloaded everything in the Requirements section.
BLIP was recently updated. Just add the new BLIP node and model and use whatever it shows there; these things are normal. Ensure Comfy is updated to the latest version along with all custom nodes. Only "caption" will not work anymore.
@@controlaltai So I already applied your workflow JSON, but when I click Queue Prompt I get an "allocate on device" error. However, if I check it and then click Queue Prompt again, it works fine without any errors. I searched for the "allocate on device" error related to ComfyUI, but my error log was different from the search results: it only mentions "allocate on device" without any mention of insufficient memory, and below that it shows the code, whereas other people's error logs mention insufficient memory. Despite this, could my error also be a memory issue?
The "allocate on device" error means running out of VRAM or system RAM. If you can tell me your VRAM and system RAM, the size of the image you are trying to fix the face for, the box settings, and anything else you are trying to do, I can guide you on how to optimize the settings for your system specs.
@@controlaltai This is my system when I run the ComfyUI server:
Total VRAM 6140 MB, total RAM 16024 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4050 Laptop GPU : cudaMallocAsync
VAE dtype: torch.bfloat16
I'm using the Fix_Faces_Extra workflow JSON, and my images are under 1 MB JPG files. The process stops at the FaceDetailer node, so I think I should optimize the FaceDetailer settings. Thanks
It does indeed in my testing, but the workflow is way different. I took a plane take-off video, removed the plane completely, and reconstructed the video. I did not include it in the tutorial as it was becoming too long.
3:37 After I installed the node, I got the error "cannot import name 'packaging' from 'pkg_resources'". I updated the inference and inference-gpu packages and it started working, so if anybody has the same error, try updating inference and inference-gpu.
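In case it helps, the update commands would be something like the following sketch, assuming the portable ComfyUI layout from the error message above (adjust the embedded-Python path for your own install):
rem run from the ComfyUI_windows_portable folder; upgrades the packages the node imports
python_embeded\python.exe -m pip install --upgrade inference
python_embeded\python.exe -m pip install --upgrade inference-gpu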
There's too much info in the middle. You lost me when you were doing the upscale, downscale, and setting up the switches and such. I think setting things up neatly is nice, but it's personal preference, and in this case it's not about SUPIR at all. Let's say I only want to know about upscaling 2x: I would have to scrub your video back and forth trying to follow your switch connections, and oh yeah, where does that height and width go again?
Well, watch the video right till the end, where the techniques are used, especially the cases where downscale and upscale are applied repeatedly to upscale a single photo. These switches were not added just for personal preference; to show some of the SUPIR techniques you need them. You can, however, skip that and just go to the next section. Don't add the switch, only the 2x upscale, and connect the height and width from the bottom to the upscale factor input, wherever the switch connects to.
In the current version of the web UI (v1.9.3-4), fixing xformers (from 13:00) using the "Xformers Command" breaks the environment, resulting in popup windows again (entry point errors). I went back to the step where you remove venv and the extension, then skipped installing xformers with the command in (venv) and instead added the --xformers argument to webui-user.bat - this installed the correct version for me after running it. The web UI starts without errors, and the current versions as of today are: version: v1.9.3-4-g801b72b9 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2 Cheers
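For anyone following along, a minimal sketch of the webui-user.bat edit described above (the other lines are the stock A1111 defaults; only the COMMANDLINE_ARGS line is the relevant change):
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem let the web UI install the matching xformers build itself on first launch
set COMMANDLINE_ARGS=--xformers
call webui.bat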
1. In Load CLIP Vision you're loading SDXL\pytorch_model.bin. What model would that be as of today? The ComfyUI Manager does not show this model. I figured it should be CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors, which should fit the IPAdapter model ip-adapter_sdxl.safetensors. Is this correct?
2. Also, IPAdapter had an update rendering the "Apply IPAdapter" node deprecated. I'm using IPAdapter Advanced now, with weight_type ease in-out (found a comment by you concerning this), but what about the other parameters? combine_embeds=concat, start_at=0.0, end_at=1.0, embeds_scaling=V only?
Problem: the KSampler gets a lineart image from the ControlNet group (black lines, white background), but the result is always a black image with no error message. Why would the image generation only create black images?
For the ip-adapter_sdxl.safetensors model you should use CLIP-ViT-bigG-14 as the CLIP Vision, as you correctly said. Black images mean an issue with VRAM. For IPAdapter Advanced, weight type ease in-out, no other changes.
@@controlaltai Thanks for your reply! I solved the assumed VRAM issue by switching from Windows to Linux (for that purpose). Getting my RX 6750 XT running was not trivial, but I finally got around that. I ran the exact workflow that failed under Win and got a result, finally. Thanks for pointing out the possible reason for failure, as it indicated a configuration problem. Your hint motivated me to finally move to Linux for SD generations. It's faster, too.
Hey guys! Is there a way to change the output name of the image via the filename_prefix? I was able to do that with %KSampler.seed%, for example, but it worked only in the filename_prefix of the Video Combine node; I can't make it work with the Save Image node. I would love to have a custom name with the model + cfg + steps, or even better, a custom node that prints that information on the images, so I could know it without opening each image in Comfy. Thanks a lot!
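If anyone wants to experiment, one pattern to try, assuming the Save Image node in your build supports the same %NodeTitle.widget% substitution that worked on Video Combine (the node titles below are examples and must match the titles in your own graph):
%CheckpointLoaderSimple.ckpt_name%_cfg%KSampler.cfg%_steps%KSampler.steps%
If it still only resolves on Video Combine, a custom save-image node that exposes these values in the filename is another option.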
Heads up! The Apply IPAdapter node no longer exists. It appears the replacement is now known as "IPAdapter Advanced". A lot of the other nodes have also changed names, including Load IPAdapter Model, which is now called "IPAdapter Model Loader". That's easy to find, but just be aware that searching for the same names as in this video may not work anymore.
When I download Animate Anyone Evolved through the Manager, I get an error saying "ImportError: DLL load failed while importing torch_directml_native" after restarting.
Thanks for the video, but slow down, at least by half! It's really painful to pause and rewind to see where you're clicking. It's a tutorial, not a Formula One race!
I use a tiles node, but the generated image is still in tiles and will not be merged into a single image. What is the reason for this? Is the size of the segmented image appropriate?
@@controlaltai 4090 graphics card, 64 GB memory. I uploaded a workflow video on my homepage: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ApgTbjfFsag.html Can you take a look and help me solve the problem? I have tried sizes over 1280 x 1280; SUPIR cannot repair the images. If the image is above 1280, how should it be handled? Thanks
The video does not show the last part. Do one thing: send me your workflow, because obviously there is something wrong and I need to check it on my system. Images above 1280 x 1280 should not be a problem; if you have 24 GB VRAM you can upscale to 2K from that resolution. mail @ controlaltai . com (without spaces).
Hey man, I installed Kohya and finished training the model. How can I use it now? Do I install A1111, and how do I link the model that I just trained?
@@controlaltai I trained a LoRA, so I'm guessing I have to put the JSON source file in the lora folder? But it's only 5 KB, is that right? Excuse me for asking so many questions, I'm new to this 😅
Thank you!! The JSON is made available for paid channel members, but you can create the workflow yourself as well. Nothing is hidden in the video; everything needed to build it is shown.
I really like your line art image generation and many other clips. You are awesome. Is there a way ComfyUI can convert a realistic portrait photo to a painterly, watercolor, or anime style without distorting the person's facial features? Could you build such a workflow, please? I wonder how beautiful a painting ComfyUI can really make from a portrait of a person.
Think I might break into someone's house and steal their PC. My generations take ages, and given that my entire life is spent in front of a computer, I find it illogical that I shouldn't have one that can run anything worthy of note.