🔴 Stable Diffusion 3.5 Large & Turbo Models Just Released - ComfyUI Support 👉 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-PMxpmYp3N58.html 👉 Want to reach out? Join my Discord by clicking here - discord.gg/5hmB4N4JFc
I have fine-tuned the model and now I need to check how well it generates images given a related prompt. Can we use ComfyUI for this? I have the LoRA safetensors and config.json downloaded after fine-tuning it.
@@rabbit1259 If you created a Flux LoRA, it should work fine with this ComfyUI setup. Just make sure you download the workflow with the LoRA node connected from the description. Don't forget to place your LoRA in the lora folder inside the models directory.
After updating ComfyUI, the LoRA worked! I have a GTX 960 4GB, model schnell Q8_0.gguf. Results at 4 steps: 512x512 generated in 2 minutes, 768x768 in 2:40, and 1024x1024 in 4:20! On the processor alone (2x 2630v2) it would be 16 minutes; now it's 12. I didn't think it would work on such an old video card.
Fixes for the issues not mentioned in the video:
- remove the '|pysssss' string on line 143 of the workflow JSON file
- rename the 'diffusion_pytorch_model.safetensors' file you downloaded to 'flux_vae' before adding it to the vae folder
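For anyone who would rather script the '|pysssss' fix than hunt for line 143 by hand, here's a minimal Python sketch. It strips the suffix from every node type rather than relying on an exact line number; the workflow filename you pass in is your own, and the JSON layout (a top-level "nodes" list with "type" fields) is an assumption based on the standard ComfyUI workflow export format.

```python
import json

def strip_pysssss(path: str) -> None:
    """Remove the '|pysssss' suffix from node type names in a ComfyUI workflow JSON."""
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    # Assumes the standard exported-workflow layout: {"nodes": [{"type": ...}, ...]}
    for node in workflow.get("nodes", []):
        if isinstance(node.get("type"), str) and node["type"].endswith("|pysssss"):
            node["type"] = node["type"].removesuffix("|pysssss")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(workflow, f, indent=2)
```

Run it once against your downloaded workflow file (e.g. `strip_pysssss("workflow.json")`), then reload the workflow in ComfyUI.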
I renamed it but this is still happening: Prompt outputs failed validation VAELoader: - Value not in list: vae_name: 'flux_vae.safetensors' not in ['taesd', 'taesdxl', 'taesd3', 'taef1']
Because of my professional needs and Flux's steep learning curve, I had been using MimicPC to run Flux; it can load the workflow directly, I just download the Flux model, and it handles the details wonderfully. But after watching your video, running Flux on MimicPC finally feels different — I feel like I'm starting to get the hang of it.
Thanks for the video. I haven't tried those GGUF models yet (still downloading... ^^), but being able to choose the model that suits your PC config and your needs is very nice. PS: the GGUF nodes are now available in the ComfyUI Manager, so there's no longer any need to use the git clone command in PowerShell.
5 stars!! It works for me with an RTX 2050 4GB; it takes around 2 minutes with Schnell, which is a lot better than not working at all. Image quality is great as well.
Yeah, I was pretty early when I first got this installed. I don't think it was even available in the ComfyUI Manager at the time I made this video; if it was, I didn't install it through there. I believe you can now, which should be a lot easier.
Can't get past this error. I've got all requirements installed correctly.

Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/AI/ComfyUI/custom_nodes/ComfyUI-GGUF/nodes.py", line 130, in load_unet
  model = comfy.sd.load_diffusion_model_state_dict(
I completed it step by step according to your instructions, and the moment the image came out, I was surprised! I feel very accomplished, thank you! Since my graphics card is a GTX 1060 6GB, when I use this GGUF Q4 model it takes about 3 minutes to generate a picture, which is still very slow. If I want to generate a picture within 1 minute, is there a graphics card you'd recommend?
I'm happy you successfully installed the workflow. If you're looking to get a graphics card, I would recommend a used RTX 3090 (24GB VRAM), if you can find one. It would not only get you under a minute but also ensure you're set to run bigger image models faster, as well as other resource-intensive AI models like open-source LLMs. GTX cards are known to be quite a bit slower than RTX cards. If you're just looking for a cheaper upgrade from your 1060, you can go for the RTX 3060 (12GB VRAM) instead, which is a couple hundred dollars cheaper than the 3090 but has its limitations too.
Hi, thanks a lot for your video, very clear. However, when I start ComfyUI, I get the following error: Missing Node Types > LoraLoader|pysssss. Any idea how to solve this?
Is this only for Windows? I'm stuck at this command: ".\python_embeded\python.exe -s -m pip install -r .\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt". Trying to run it on Google Colab.
The portable ComfyUI package is only for Windows; you will need to manually install ComfyUI for Linux. On Colab, if you can navigate back to the main directory that hosts the "python_embeded" folder and run the command there, it could possibly work.
Is the NO LORA workflow correct? I'm seeing a connected Lora node in the workflow. If I did want to use a Lora with the Schnell Model, can I only use Loras that have Schnell in their name?
Yes, the no-LoRA workflow doesn't have its model node connected to the KSampler's model input, so the LoRA won't affect the generated image. If you want, you can disconnect the clip links from the LoRA node and reconnect the clip outputs of the DualCLIPLoader node directly to the CLIP encoder nodes. I believe you can use almost any LoRA with these Flux models; I've been using some SD LoRAs with the Flux dev GGUF model and they've worked with no issue. You can try it out and see if any errors pop up.
Thanks for the great tutorial!! Just curious: I'm using the Schnell version (Q4_0) as suggested in your vid, but I really can't generate at just 4 steps, as the output is quite blurry... so I'd like to know if it's possible to get nice, sharp results in this setup with just 4 steps. Thanks for any advice... P.S. at 20 steps it works really nicely!
I actually used the dev model in this video, as it provides better quality but indeed needs more than 4 steps to get good results. I tried improving my image quality with the schnell model but always ended up going back to dev. You can try using a merged dev and schnell model from Civitai to generate better 4-step images, or maybe add some schnell LoRAs to the workflow.
Thanks but I got an error: Warning: Missing Node Types When loading the graph, the following node types were not found: LoraLoader|pysssss Do you know how to fix this?
Another commenter had this same issue and his solution was to modify the workflow json file and remove the "|pysssss" in the model loader section. You can open the file in a notepad or vs code and see if it works for you as well.
I'm getting errors bro... when I press Queue Prompt:

C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py:79: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:212.)
  torch_tensor = torch.from_numpy(tensor.data) # mmap

How do I fix this? Thanks
What types of LoRAs can you use with the GGUF workflow? The Flux LoRAs I tried from Civitai (Flux.1 D) have no effect even when the trigger words are used.
Just an FYI: if you used my workflow, be aware that you have to connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node, to actually have the LoRAs take effect.
Thanks. If you're talking about image-to-image instead of text-to-image, you can use my recently dropped image-to-image workflow, which you can find in the description here - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-sbnMn8nMQgk.html. It comes with some upscaling nodes as well.
@@TheLocalLab Hi, when I try to use GGUF, I get this error: "Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'." have any idea how to solve this? thx
I have a LoRA of myself and I want to make myself a Studio Ghibli character. I've tried everything in my power but I still can't achieve it. I'm trying to combine my LoRA with a Studio Ghibli one I found on Civitai, both trained on the Flux dev base model, and I still can't generate myself as one! Any help?
There is a pretty important error in your workflow. You have to manually link the "Load Lora" Node to the "KSampler" via the model-link. Otherwise the Lora won't be applied.
I do understand what you mean but honestly I'd rather keep the use of Loras optional. Maybe I should've mentioned this in the video. If I'd attached the Lora node in the workflow, you would have to use it in order to generate or detach the node manually as well if you don't.
Thank you for the video.. Does anyone know the best way to use a 155H with 8GB integrated graphics, 16GB 7600MHz RAM, an NPU, and a very fast hard drive? It's the 14th-gen Zenbook. Thank you for any info.
Prompt outputs failed validation
DualCLIPLoader:
- Value not in list: clip_name1: 'clip-vit-large-patch14 .safetensors' not in ['model.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
LoraLoader:
- Value not in list: lora_name: '{'content': 'flux_realism_lora.safetensors', 'image': None, 'title': 'flux_realism_lora.safetensors'}' not in ['lora.safetensors']

Can anyone help me with this error :)
Did you download the correct clip models and place them in the clip folder? Click the arrows in the DualCLIPLoader node to make sure you can see them. If you can't select the models in the selector fields, then the model may not be in the clip folder.
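If the dropdown never shows the file you expect, a quick sanity check is to list the safetensors files in the clip folder yourself, since that's what ComfyUI populates the selector from. A minimal sketch; the `ComfyUI/models/clip` path in the usage comment is an assumption about your install location, so point it at your own folder.

```python
from pathlib import Path

def list_clip_models(models_dir: str) -> list[str]:
    """Return the .safetensors filenames ComfyUI would offer in the DualCLIPLoader dropdown."""
    return sorted(p.name for p in Path(models_dir).glob("*.safetensors"))

# Example (path is an assumption -- adjust to your install):
# print(list_clip_models("ComfyUI/models/clip"))
```

If the filename you typed into the workflow isn't in the printed list, either rename the file on disk or reselect the existing name in the node.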
The VAE file shown in your video is different from the one in the link, and the VAE file names in both links are the same. We can't put two files with the same name in the same folder. Can you please advise?
Yes, I changed the names of the original VAE files after I downloaded them. You can just rename them to something like flux vae schnell or dev; it should still work fine.
I just checked, and the VAE file is still available. Just go into the vae folder in the Files and versions section for either the dev or schnell model and download "diffusion_pytorch_model.safetensors". I renamed it to flux_vae when I downloaded it.
Prompt outputs failed validation
DualCLIPLoader:
- Value not in list: clip_name1: 'clip-vit-large-patch14 .safetensors' not in ['model.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
LoraLoader:
- Value not in list: lora_name: '{'content': 'flux_realism_lora.safetensors', 'image': None, 'title': 'flux_realism_lora.safetensors'}' not in ['lora.safetensors']

I am getting this error. I used the schnell model; what should I do?
@@mashrurmollick In the DualCLIPLoader node in the workflow, use the arrows to select 'model.safetensors' to use the model. Or, if you'd like, you can rename the file to 'clip-vit-large-patch14 .safetensors'.
Thanks man, I managed to overcome all the error messages. One thing I'm facing: when I press the "Queue Prompt" option, the green border that highlights the nodes jumps from one node to the next until it reaches the KSampler node, where it stays stuck. I reduced the number of steps from 20 to 4 and it's still stuck. Can you help me out?
@@mashrurmollick That's actually normal. The ksampler is the longest step in the process. That's where the model actually starts generating the image. Check your terminal when it reaches the ksampler node and watch it for a few minutes, you should see the progress bar increase as each step gets executed. Depending on your pc specs and model you use, this can happen fast or take a while.
Unfortunately this has no effect on a Mac. No speed increase at all, and I tried all the GGUF models. Any idea why? Or is it simply not designed to work on Macs?
No, GGUFs are also compatible with macOS, but there could be a variety of reasons why you're not seeing speed increases, especially with the lower quants. There's just not enough information to really tell.
Hey man... I've installed everything accordingly, but the Unet Loader (GGUF) gives me this error every time: Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'. I'm using the flux1-dev-Q6_K.gguf file. I tried a different workflow, same error... everything is updated.
Yeah I think you need to update your comfyUI. I would also look into installing the comfyUI manager to make updating and installing new nodes a breeze.
It's just... um... I am confused by the many model versions! I have a 2060 12GB and I'm using the NF4 model; it takes 90 seconds to generate a 1024x1024 image. I prefer quality over speed, though a little bit faster generation would definitely help, so which model should I choose, bro? AND please make a video with your own voice, man 🙂🙂👌👌
You can try the 6_K and 8_0 quants and see how the output quality compares with NF4. It's best to experiment to really find the sweet spot, especially since you can improve results with LoRAs, which is why I like the lower quants (4_0).
Connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node. Be advised that you will also need to make sure there's always a LoRA loaded to use the workflow. If you no longer want to use the LoRA, revert back to the default workflow.
Thx! But I get an error when trying to use it:

Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 130, in load_unet
  model = comfy.sd.load_diffusion_model_state_dict(
You have to update your ComfyUI: either through the ComfyUI Manager and a restart (recommended), a git pull via the command line, or just installing the latest version.
If a user can't download a model from HF and drag it into their models folder once it's complete, I'm not sure I would trust them to run wget commands in the correct directories. I wouldn't be surprised if the VAE model somehow ended up in the Python library packages folder lol.
Prompt outputs failed validation VAELoader: - Value not in list: vae_name: 'flux_vae.safetensors' not in ['diffusion_pytorch_model.safetensors', 'taesd', 'taesdxl', 'taesd3', 'taef1'] man so close...
Ok, two things. 1. Did you download the VAE file from Hugging Face? 2. If you did, change the name of the VAE file you downloaded to 'flux_vae.safetensors'.
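If you'd rather script the rename than do it by hand, a minimal Python sketch is below; the vae directory path you pass in is your own (e.g. ComfyUI's models/vae folder), and the source filename matches the Hugging Face download named in this thread.

```python
from pathlib import Path

def rename_vae(vae_dir: str) -> Path:
    """Rename the downloaded Flux VAE to the name the workflow expects."""
    src = Path(vae_dir) / "diffusion_pytorch_model.safetensors"
    dst = src.with_name("flux_vae.safetensors")
    src.rename(dst)  # raises FileNotFoundError if the download isn't there yet
    return dst

# Example (path is an assumption -- adjust to your install):
# rename_vae("ComfyUI/models/vae")
```

After renaming, reselect 'flux_vae.safetensors' in the VAELoader node so the validation error goes away.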
@@TheLocalLab Would you mind telling me the url to the correct one please? Pretty sure I have the right one, trying the renaming thing now. I think it's working, I hear my GPU fans firing up lol. Yep, it's working thank you so much.
"You have to manually link the "Load Lora" Node to the "KSampler" via the model-link" I don't have telepathy. How do I do this? The nodes don't match. And should I use the Lora Loader with the snake or without the snake? Remember, I can't read your minds.
It's all a learning process, my guy. I don't know what you meant by "the snake", but to link the nodes, simply click and drag the link line from the purple "model" dot on the UNet model loader to the purple "model" dot on the left of the LoRA loader node, then connect the purple "model" dot on the right of the LoRA node to the purple "model" dot on the left of the KSampler. It's easy as cake.
Well, it depends on the quant you use and the (regular) RAM you have, but anything over 3GB of VRAM can produce a 1024x1024 image with the right quant. You would probably just wait longer if you're using less VRAM.
With the Q4_0 schnell model it works, but not very well: I had to reduce the ControlNet strength below 0.55 to make it look correct. Above 0.65, a photo looked like an illustration with plastic faces. I will download the f16 dev model and test whether it works correctly with that. Edit: I tested with some non-schnell models (Q6_K, Q5_K_S, Q8_0, F16) and it works very, very well with all of them. The issue I had was only with the schnell versions.
Yeah during the video, I don't think I used a Lora. If you look at the Unet loader node, it was connected directly to the Ksampler skipping the Lora Loader node.
@@PunxTV123 Yeah I just updated the workflow with the lora node connected. Here's the google link - drive.google.com/file/d/1zznjgT4zvE9PTNitHPAXmd5N_oFArhn-/view?usp=sharing. I will also add this link to the description for later if needed. Let me know if its good.
19 hours straight with no sleep, food, or breaks, and not a single image, or even a glimpse at a UI for that matter. It's just internet dumpster diving for convoluted code snippets that don't work. Even the official code on AMD's website is broken, and forget troubleshooting, that's not possible. I'm so hungry, tired, and tilted that this text is like walking barefoot on Legos to my eyes. This 3090 is just sitting there looking at me like I would ever consider putting it into one of my systems; I don't care whether it would work if I did. Actually, I may just go give it the Office Space treatment with an Estwing hammer. That would make me feel much better, because this sadomasochistic Linux entropy is easily the most irritating thing I've ever dealt with in my life. Sorry I'm exploding on your page, I'm delirious, but no food or sleep until it works or I die trying. Well, time to wipe the partition and try again.
@@TheLocalLab I got it! And, as per standard operating procedure in my lovely life as a die-hard AMD fan, the road less traveled sucked really badly there for a bit, but I was able to teach myself (with some help from forums and my GPT's impeccable knack for web searches) a very large chunk of Linux terminal commands, Python, and this lovely utility called Docker. It's been like that since Y2K for me: every issue that comes up leads to a challenge, and by overcoming those challenges you get juicy XP gains. And yeah, AMD is a company at the end of the day, but at least they don't cross the line like Skynet and Intel (who is getting a nice slice of what-they-deserve pie). Either way, apologies for stumbling in here grumpy and delirious, and thank you, sincerely, for the vote of confidence. May your code never throw errors. Cheers!
On Apple Silicon (M1 Pro), no matter what, it always outputs green/blue/black boxes or small triangular shapes, like a '90s TV with no signal. I tested all the Q models and tried both CPU and GPU; it does not work... it is for Windows with an Nvidia GPU only.
When I press the download button for the Flux dev VAE file "diffusion_pytorch_model.safetensors", my browser's download progress bar says "diffusion_pytorch_model.txt file wasn't available on site".