Welcome to the future of art, a place where imagination meets AI! Our channel is dedicated to demystifying the world of Stable Diffusion, Midjourney, and DALL-E through engaging tutorials. Step into a realm where technology empowers your creativity, and every tutorial brings you closer to mastering the art of AI. Subscribe and paint your masterpiece with the brush of AI!
Thanks, it works great for pictures under 2K (even if 2K takes a while on my 3080 10GB), but could you work on a version of the workflow for low-memory cards (like my 10GB) that can output 4K pictures (around 3840px in width or height, depending on whether it's a portrait or a landscape) without hitting OOM? That would be awesome, because with the current workflow anything above 2K, like something close to 3840px in W or H, doesn't even launch the KSampler process (I mean, I never see any %), even after 15 minutes of waiting. I'm using the GGUF version due to the limit of my graphics card.
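In the meantime, for anyone else on a 10GB card: one common workaround (my assumption, not part of this workflow) is to swap the final VAE Decode for ComfyUI's built-in tiled version, which trades decode speed for a much smaller VRAM peak. A minimal sketch in ComfyUI's API (JSON) format written as a Python dict; the node IDs are invented, and the tile size is just a starting point to experiment with:

```python
# Minimal sketch: replacing VAEDecode with VAEDecodeTiled to cut peak VRAM.
# Node IDs ("4", "8", "9") are hypothetical placeholders for this example.
graph = {
    "9": {
        "class_type": "VAEDecodeTiled",
        "inputs": {
            "samples": ["8", 0],   # LATENT output of the KSampler
            "vae": ["4", 2],       # VAE output of the checkpoint loader
            "tile_size": 512,      # smaller tiles -> less VRAM, slower decode
        },
    },
}
```

In the UI this is just a matter of replacing the "VAE Decode" node with "VAE Decode (Tiled)" and reconnecting the same two inputs.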
Looks like a nice workflow, but I'm not able to run it on Apple Silicon, as I get this message: "VAEEncode convolution_overrideable not implemented. You are likely triggering this with tensor backend other than CPU/CUDA/MKLDNN, if this is intended, please use TORCH_LIBRARY_IMPL to override this function"
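In case it helps: that error comes from PyTorch's MPS backend missing an operator, and a commonly suggested workaround (my assumption, not something from the video) is to let PyTorch fall back to the CPU for unsupported ops. The environment variable has to be set before torch is imported:

```python
import os

# PYTORCH_ENABLE_MPS_FALLBACK must be set before torch is imported,
# e.g. in the shell that launches ComfyUI or at the very top of main.py.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

# Sanity check: MPS is available, and unsupported ops now fall back to CPU.
print(torch.backends.mps.is_available())
```

Equivalently, launch with `PYTORCH_ENABLE_MPS_FALLBACK=1 python main.py`. The affected ops run slower on the CPU, but you avoid the hard error.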
Thanks for your comment! In my workflow, I start with quite small images for upscaling. I use a default upscale factor of 7×, which works great for those tiny images. However, I totally understand that this can be too high for standard pictures, leading to graphics-memory issues. It’s all about finding that perfect balance! If you have any tips or experiences with upscaling, I’d love to hear them!
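To make the memory point concrete: the pixel count (and with it the VRAM pressure) grows with the square of the upscale factor, so 7× on an already normal-sized picture explodes quickly. A quick back-of-the-envelope check in plain Python; the numbers are purely illustrative:

```python
# Rough illustration: output pixels grow with the square of the upscale factor.
for base in (512, 1024):          # starting edge length in px
    for factor in (4, 7):         # upscale factors discussed in this thread
        edge = base * factor
        mpix = edge * edge / 1e6  # megapixels of a square result
        print(f"{base}px x{factor} -> {edge}px (~{mpix:.0f} MP)")
```

A 512px source at 7× lands at 3584px (~13 MP), which is manageable; a 1024px source at 7× is 7168px (~51 MP), which is where most cards choke.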
Thank you so much for your support and for being a part of this channel! Just to clarify, I use Florence 2 mainly for automatically generating prompts. If you're not getting the desired results, I recommend trying a larger model within Florence 2 to create more detailed prompts. Also, the 'ControlNet Conditioning Scale' on HuggingFace is directly related to the Strength of ControlNet in my workflow. If you have any questions or insights, feel free to share!
I also have an RTX 3090, and to avoid the initial out-of-memory error I dropped the upscale factor from 7 to 4. That allowed the KSampler to start running. Fifteen minutes later, it’s only 25% done. The fans on my computer are going wild, and all this for an upscale that’s only 4K. How could this possibly be worth the time, or the expense in electricity? You really need to fix this workflow. Thanks anyway.
I've tried it, and yes, it does :D Now another question came up: if I want to enter the prompt myself, without Florence, what's the simplest way to achieve this in your workflow?
@@fixelheimer3726 Basically, you need to:
1. Create a "CLIP Text Encode (Prompt)" node (if you don't know how, just double-click anywhere in ComfyUI and type what you're looking for).
2. Find the green "CLIP Text Encode (Prompt)" node near Florence and open it (click the grey dot on the upper-left side of the node). Drag a link from the "clip" input of that node (click and hold) to the "clip" input of the "CLIP Text Encode (Prompt)" node you created in step 1.
3. Take the "conditioning" output of your new node (click and drag on it) and attach it to the "FluxGuidance" node's input. Done.
You can now type what you want to modify in the base image into the "CLIP Text Encode (Prompt)" node you created, and it will appear in the result image.
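If it's easier to see the same rewiring as data: below is a hypothetical sketch in ComfyUI's API (JSON) format, written as a Python dict. The node IDs are invented; "CLIPTextEncode" and "FluxGuidance" are the built-in class names behind the nodes mentioned above, and ["5", 1] stands for whatever node currently supplies CLIP in the workflow:

```python
# Hypothetical sketch of the rewiring described above (node IDs invented).
graph = {
    "10": {  # your manual prompt, replacing the Florence-generated one
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "what you want to change in the base image",
            "clip": ["5", 1],           # same CLIP source the Florence encoder used
        },
    },
    "11": {
        "class_type": "FluxGuidance",
        "inputs": {
            "conditioning": ["10", 0],  # manual prompt now drives Flux guidance
            "guidance": 3.5,            # ComfyUI's default guidance value
        },
    },
}
```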
Hey, another YouTuber is passing your workflow off as his own: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Sqp9YxfGxFQ.html I commented on his video that he is stealing your work, but he keeps deleting my comment.
(SOLVED) Hi. Your workflow has a problem with Flux group number 4. The VAE Encode node returns the error "'NoneType' object is not subscriptable". I used both the 17GB Flux and the 11GB Flux. Can you please tell us what the problem might be? Edit: Problem solved. The problem was that I had disabled the optional groups because I thought I would save VRAM. When I enabled them, the workflow worked.
Hello my friend, I am Iranian and I don't have a good enough grasp of this. My problem is that it only works up to the Flux stage. You managed to solve the problem, but I don't understand your advice. Can you explain with a photo?
@@mohammadbaranteem3487 The workflow is divided into four groups. Groups 2 and 4 are optional, and usually you can disable them. But if you want to use Group 3, which is the Flux enhancement on top of the SDXL result, then you must enable Group 2. Otherwise the Flux group, and the entire workflow, won't work.
Your videos are very interesting. I had tried the LoRA, but I noticed I have to be careful with the strength. I think it can be left at values that aren't too high, and the anti-blur effect on the background can be complemented by engineering the prompt. Very good advice, thank you very much for sharing.
Cool video! I am also experimenting like you, and I've noticed that short prompts have a very bad effect on the result; you need to describe things in detail, and ChatGPT is the best way to do that.
Error occurred when executing CheckpointLoaderSimple:

ERROR: Could not detect model type of: D:\Programs\SD\models/Stable-diffusion\flux1-dev-fp8.safetensors

File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\nodes.py", line 539, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "D:\Programs\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
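That error usually means CheckpointLoaderSimple was handed a diffusion-model-only file: flux1-dev-fp8.safetensors (the transformer weights alone) has no CLIP or VAE baked in, so ComfyUI can't guess the model type. The usual fix is to put the file in models/unet (or models/diffusion_models) and use the dedicated loaders instead. A hedged sketch in API format as a Python dict; the file names are the commonly distributed ones and may differ on your disk:

```python
# Sketch: loading Flux with dedicated loaders instead of CheckpointLoaderSimple.
# File names are the commonly distributed ones; adjust to what you actually have.
graph = {
    "1": {
        "class_type": "UNETLoader",      # "Load Diffusion Model" in the UI
        "inputs": {
            "unet_name": "flux1-dev-fp8.safetensors",
            "weight_dtype": "fp8_e4m3fn",
        },
    },
    "2": {
        "class_type": "DualCLIPLoader",  # Flux needs both text encoders
        "inputs": {
            "clip_name1": "t5xxl_fp8_e4m3fn.safetensors",
            "clip_name2": "clip_l.safetensors",
            "type": "flux",
        },
    },
    "3": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "ae.safetensors"},
    },
}
```

Alternatively, an all-in-one Flux checkpoint (with CLIP and VAE included) does load through CheckpointLoaderSimple from the models/checkpoints folder.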
Stunning!!!!! The first working outpaint ever ;-) GJ! Two things would be useful, I guess: a negative prompt, and an optional lineart ControlNet implementation.