My AI Force
Welcome to the future of art, a place where imagination meets AI!
Our channel is dedicated to demystifying the world of Stable Diffusion, Midjourney, and DALL-E through engaging tutorials.
Step into a realm where technology empowers your creativity, and every tutorial brings you closer to mastering the art of AI. Subscribe and paint your masterpiece with the brush of AI!
Forget SD3 Medium: These Models Are Better!
10:11
3 months ago
Test LoRAs with X/Y Plot in ComfyUI
6:44
5 months ago
Comments
@mediaschoolai 12 hours ago
Thanks!
@phenix5609 21 hours ago
Thanks, it works great for pictures under 2K (even if 2K takes time on my 3080 10GB). But could you work on a version of the workflow for low-memory cards (like my 10GB) that can output 4K pictures (3840px in width or height, depending on whether it's a portrait or landscape) without going OOM? That would be awesome. With the current workflow, anything above 2K, like something close to 3840px in W or H, doesn't even launch the KSampler process (I mean, I never see any %), even after 15 minutes of waiting. I'm using the GGUF version due to the limits of my graphics card.
@eromsetyb2524 23 hours ago
Looks like a nice workflow, but I'm not able to run it on Apple silicon, as I get this message: "VAEEncode convolution_overrideable not implemented. You are likely triggering this with tensor backend other than CPU/CUDA/MKLDNN, if this is intended, please use TORCH_LIBRARY_IMPL to override this function"
@music_news888 1 day ago
I use an RTX 4080 and it takes around half an hour to generate one image. Is that normal?
@sasansasani 1 day ago
I have an NVIDIA GeForce RTX 4070 Ti (total VRAM 12282 MB, total RAM 65277 MB) and I still get an "Allocation on device" error 🥲🥲🥲 Why?
@my-ai-force 23 hours ago
Thanks for your comment! In my workflow, I start with quite small images for upscaling. I use a default upscale factor of 7×, which works great for those tiny images. However, I totally understand that this can be too high for standard-size pictures, leading to graphics-memory issues. It's all about finding that perfect balance! If you have any tips or experiences with upscaling, I'd love to hear them!
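As a rough illustration of why a fixed 7× factor works for tiny inputs but overwhelms VRAM on standard ones, here is a small sketch. The 1024×1024 baseline and the "cost grows with pixel count" framing are illustrative assumptions, not measurements from the workflow:

```python
# Illustrative sketch (not the workflow's actual memory model): the pixel
# count of the upscaled image grows with the square of the upscale factor,
# which is why 7x is fine for tiny inputs but OOMs on standard ones.

def upscaled_size(width: int, height: int, factor: float) -> tuple:
    """Output resolution for a given upscale factor."""
    return int(width * factor), int(height * factor)

def relative_cost(width: int, height: int, factor: float) -> float:
    """Upscaled pixel count relative to an assumed 1024x1024 baseline."""
    w, h = upscaled_size(width, height, factor)
    return (w * h) / (1024 * 1024)

# A tiny 256x256 source at 7x -> 1792x1792, still under 2K per dimension.
print(upscaled_size(256, 256, 7))    # (1792, 1792)
# A standard 1024x1024 source at 7x -> 7168x7168: 49x the baseline pixels.
print(relative_cost(1024, 1024, 7))  # 49.0
```

Dropping the factor from 7 to 4 shrinks the cost quadratically: 16× the baseline instead of 49×.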
@97BuckeyeGuy 1 day ago
FYI: This workflow is only viable for upscales up to something less than 2k. If you go higher than that with this workflow, you're going to go OOM.
@taezonday 1 day ago
Can't load the VAE in the workflow. How do I unhide the selection? It just says "Load VAE" with no options.
@my-ai-force 23 hours ago
Click the small circle button on the node to expand it.
@jdesanti76 1 day ago
After many tests, I realized that the seed in the KSampler was fixed; changing it to random solved my problems.
@my-ai-force 23 hours ago
Thank you so much for your support and for being a part of this channel! Just to clarify, I use Florence 2 mainly for automatically generating prompts. If you're not getting the desired results, I recommend trying a larger model within Florence 2 to create more detailed prompts. Also, the 'ControlNet Conditioning Scale' on HuggingFace is directly related to the Strength of ControlNet in my workflow. If you have any questions or insights, feel free to share!
@jdesanti76 20 hours ago
@my-ai-force After many tests, I realized that the seed in the KSampler was fixed; changing it to random solved the problem. Thanks for the answer.
@wonder111 1 day ago
I also have an RTX 3090, and to avoid the initial out-of-memory error, I dropped the upscale factor from 7 to 4. This allowed the KSampler to start running. Fifteen minutes later, it's only 25% finished. The fans on my computer are going wild, and this is for an upscale that's only 4K. How could this possibly be worth the time, or the expense in electricity? You really need to fix this workflow. Thanks anyway.
@romanioamd5319 1 day ago
I have an RTX 3090 too; it took me a little over three hours to get through the whole process, and I had already halved the upscale factor. 😂😂😂
@xyzxyz324 1 day ago
I have an RTX 3090 (24GB VRAM) and 128GB system RAM, and I still get the error "torch.cuda.OutOfMemoryError: Allocation on device 0".
@gnsdgabriel 6 hours ago
Check the image size after resizing. It would be best to use another node for resizing, one that lets you define specific dimension values.
@mr.entezaee 1 day ago
Why is it so slow now? It takes ages to generate a quality picture! Is that normal?
@guajararock 1 day ago
Thank you :)
@my-ai-force 23 hours ago
No problem 😊
@captainpike3490 1 day ago
Great workflow! Thank you.
@my-ai-force 23 hours ago
Glad it helps 😊
@fixelheimer3726 1 day ago
Can it also work as a sharpener?
@fixelheimer3726 1 day ago
I've tried it, and yes it does :D Now another question came up: if I want to enter the prompt myself, without Florence, what's the simplest way to achieve that in your workflow?
@phenix5609 21 hours ago
@fixelheimer3726 Basically you need to: 1. Create a "CLIP Text Encode (Prompt)" node (if you don't know how, just double-click anywhere in ComfyUI and type the node name you're looking for). 2. Find the green "CLIP Text Encode (Prompt)" node near Florence, expand it (click the grey dot on the upper-left of the node), then click and drag the link going into its "clip" input over to the "clip" input of the node you created in step 1. 3. Click and drag the "CONDITIONING" output of that same new node to the input of the "FluxGuidance" node. Done. You can now type what you want to change in the base image into the "CLIP Text Encode (Prompt)" node you created, and it will appear in the result image.
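The rewiring those steps describe can be sketched in ComfyUI's API (JSON-style) workflow format. This is a hedged sketch, not the channel's actual workflow: the node IDs ("10", "11", "12"), the upstream loader reference, and the guidance value are placeholder assumptions; only the CLIPTextEncode and FluxGuidance class names and the ["node_id", output_index] link format come from standard ComfyUI.

```python
# Hypothetical fragment of a ComfyUI API-format graph showing a manual
# prompt node wired into FluxGuidance in place of Florence's output.
manual_prompt_graph = {
    "11": {  # the manually created prompt node (step 1)
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "your own prompt describing the image",
            "clip": ["10", 0],  # CLIP output of an assumed loader node "10" (step 2)
        },
    },
    "12": {  # FluxGuidance now receives the manual conditioning (step 3)
        "class_type": "FluxGuidance",
        "inputs": {
            "conditioning": ["11", 0],  # CONDITIONING output of node "11"
            "guidance": 3.5,            # placeholder value
        },
    },
}

# Sanity check: the guidance node reads from the manual prompt node.
assert manual_prompt_graph["12"]["inputs"]["conditioning"][0] == "11"
```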
@Dany-w3g 1 day ago
Wow, you're a genius! Thanks, bro!
@my-ai-force 23 hours ago
Thank you for the kind words.
@manipayami294 1 day ago
When I use the GGUF loader, the app crashes. Does anyone know how I should fix this problem?
@neotrinity99 2 days ago
Thanks for the tutorial!
@my-ai-force 23 hours ago
You're welcome!
@baheth3elmy16 2 days ago
Hey, another YouTuber is using your workflow as his: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Sqp9YxfGxFQ.html I commented to him that he is stealing your work, but he is deleting my comment.
@manipayami294 4 days ago
Can you do it with the Flux GGUF versions?
@my-ai-force 23 hours ago
In theory, yes.
@muhamadaliff4445 5 days ago
Still cannot download and install zho zho. Already installed, but it still says "zho zho biref" is missing. Please help.
@97BuckeyeGuy 6 days ago
Great workflow! Thank you
@my-ai-force 23 hours ago
You're so welcome!
@wellshotproductions6541 6 days ago
Awesome workflow and great video! Found it over on OpenArt, then made my way here! Keep it up. Subscribed!
@my-ai-force 23 hours ago
Awesome, thank you!
@Thawadioo 6 days ago
ComfyUI is giving me dizziness.
@baheth3elmy16 6 days ago
(SOLVED) Hi. Your Workflow has a problem with the Flux group number 4. The VAE Encoder returns an error "'NoneType' object is not subscriptable" I used both 17GB flux and 11GB flux. Can you please tell us what the problem might be? Edit: Problem solved: The problem was that I disabled the optional groups because I thought I would save VRAM. When I enabled them, the workflow worked.
@mohammadbaranteem3487 4 days ago
Hello my friend, I am Iranian and I do not have enough command of this. My problem is that it only works until the Flux stage. You managed to solve the problem, but I don't understand your advice. Can you explain with a photo?
@baheth3elmy16 2 days ago
@@mohammadbaranteem3487 The workflow is divided into four groups. Groups 2 and 4 are optional and usually you can disable them. But if you will use Group 3 which is the Flux enhancement over the SDXL then you must enable group 2. Otherwise, the Flux group and entire workflow won't work.
@baheth3elmy16 6 days ago
The flux model in your description is the wrong model. It is the 11gb model and it won't work in your workflow.
@baheth3elmy16 6 days ago
Thank you very much for the workflow, you've put lots of work in it.
@my-ai-force 23 hours ago
Glad to hear your kind words, thanks.
@leolis78 7 days ago
Your videos are very interesting. I tried the LoRA, but I noticed that I have to be careful with the strength. I think it should be kept at values that are not too high, complementing the anti-blur effect on the background through prompt engineering. Very good advice, thank you very much for sharing.
@my-ai-force 23 hours ago
Thanks again for your comment! I really appreciate your input and support.
@lukehancockvideo 7 days ago
Where do the images output to? They are not appearing in my ComfyUI Output folder.
@my-ai-force 23 hours ago
You can replace the ‘Preview Image’ node with a ‘Save Image’ node and the image will be saved.
@Dany-w3g 7 days ago
Cool video! I am also experimenting like you, and I noticed that short prompts have a very bad effect on the result; you need to describe things in detail, and ChatGPT is the best way to do that.
@my-ai-force 23 hours ago
Thank you for your comment! Your findings are incredibly useful, and I appreciate you sharing them. They really add value to our discussion!
@digitalface9055 7 days ago
Installing the missing nodes crashed my ComfyUI; it won't start anymore.
@cemilhaci2 7 days ago
Perfect
@superlucky4499 8 days ago
Thanks!
@my-ai-force 23 hours ago
Thank you so much for your generous support of this channel! Your contribution truly means a lot and helps me continue creating great content!
@superlucky4499 8 days ago
NICE!
@my-ai-force 23 hours ago
Thank you! Cheers!
@WasamiKirua 9 days ago
Thank you very much, great workflow!
@my-ai-force 8 days ago
Glad you like it!
@Macieks300 9 days ago
Thanks so much for this workflow.
@my-ai-force 8 days ago
Glad it was helpful!
@Klayhamn 9 days ago
flux?
@AInfectados 10 days ago
After installing the missing nodes, there is one that couldn't be found: *LayerColor: Brightness & Contrast*. How do I install it?
@my-ai-force 23 hours ago
You can try refreshing your browser or updating ComfyUI. Also, this node is not very important, so you can delete it.
@SteMax-d6z 10 days ago
Thank you! This workflow requires a lot of memory; my 16GB is full.
@my-ai-force 23 hours ago
My pleasure. You can try the GGUF version of Flux.
@delivery727 10 days ago
Does this work with Flux?
@MatthewWaltersHello 11 days ago
I find it makes the eyes look like googly eyes. How do I fix that?
@deadly_participant258 12 days ago
After following this video, Kohya no longer works.
@deadly_participant258 12 days ago
This generated nothing but static. I was looking to make my LoRAs better, not worse.
@maelstromvideo09 12 days ago
Try Differential Diffusion; it makes inpainting better, without most of this pain.
@Bpmf-g3u 12 days ago
I run Flux on MimicPC and it handles details quite well too; it made for a surprisingly good visual experience.
@spelgenoegen7001 13 days ago
Awesome! Everything works perfectly with diffusion_pytorch-model_promax.safetensors. Thanks!
@leo8032 14 days ago
Thanks a lot, it works!
@my-ai-force 10 days ago
Glad it helped
@deonix95 14 days ago
Error occurred when executing CheckpointLoaderSimple:
ERROR: Could not detect model type of: D:\Programs\SD\models/Stable-diffusion\flux1-dev-fp8.safetensors
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
@henroc481 12 days ago
same here
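For anyone hitting this: a "Could not detect model type" failure from CheckpointLoaderSimple commonly indicates a UNet-only .safetensors export with no bundled CLIP/VAE, which that loader cannot identify; loading such a file through a UNet/diffusion-model loader node is the usual remedy. As a hedged diagnostic sketch (the prefix names and layout assumptions below describe typical checkpoints, not this exact file), you can inspect which components a file actually bundles:

```python
# Hypothetical diagnostic helper: group tensor names by their top-level
# prefix to see which components (UNet, CLIP, VAE) a checkpoint bundles.
def summarize_keys(keys):
    """Count tensor names by top-level prefix, e.g. 'model', 'vae'."""
    counts = {}
    for key in keys:
        prefix = key.split(".", 1)[0]
        counts[prefix] = counts.get(prefix, 0) + 1
    return counts

# Reading only the safetensors header is cheap; no weights are loaded:
# from safetensors import safe_open
# with safe_open(r"D:\Programs\SD\models\Stable-diffusion\flux1-dev-fp8.safetensors",
#                framework="pt") as f:
#     print(summarize_keys(f.keys()))
# A full checkpoint typically shows UNet *and* CLIP/VAE key groups; a
# UNet-only file shows only diffusion-model keys.
```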
@วรายุทธชะชํา
I want to generate multiple sizes in one round. How can I do that, sir?
@JulioLlanosSuarez 14 days ago
+1
@mcdigitalargentina 15 days ago
Friend, great work! Subscribed to your channel. Thanks for sharing your work.
@happyme7055 15 days ago
Stunning!!!!! First working outpaint ever ;-) GJ! Two things would be useful, I guess... a negative prompt and an optional lineart ControlNet implementation...