
How To Run Flux Dev & Schnell GGUF Image Models With LoRAs Using ComfyUI - Workflow Included 

The Local Lab
2.2K subscribers
24K views

Published: 27 Oct 2024

Comments: 192
@TheLocalLab 2 months ago
🔴 Stable Diffusion 3.5 Large & Turbo Models Just Released - ComfyUI Support 👉 ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-PMxpmYp3N58.html 👉 Want to reach out? Join my Discord by clicking here - discord.gg/5hmB4N4JFc
@dhrubajyotipaul8204 2 months ago
This is amazing. Thanks! 🙂
@rabbit1259 1 month ago
I have fine-tuned the model and now I need to check how well it generates from a related prompt. Can I use ComfyUI for this? I have the LoRA safetensors and config.json downloaded after fine-tuning.
@TheLocalLab 1 month ago
@@rabbit1259 If you created a Flux LoRA, it should work fine with this ComfyUI setup. Just make sure you download the workflow with the LoRA node connected from the description. Don't forget to place your LoRA in the lora folder inside the models directory.
@zoo6062 2 months ago
After updating ComfyUI, the LoRA worked! I have a GTX 960 4GB, running the Schnell Q8_0.gguf model. Results, all at 4 steps: 512x512 generated in 2 minutes, 768x768 in 2:40, and 1024x1024 in 4:20. On the CPU alone (2x 2630v2) it used to be 16 minutes, now 12. I didn't think it would work on such an old video card.
@cyanideshep7288 2 months ago
THANK YOU!!!! This is the first tutorial that worked after searching for so long. Very clear and well put together :)
@derekwang5982 5 days ago
Thank you! Great tutorial.
@kirubeladamu4760 2 months ago
Fixes for the issues not mentioned in the video (a scripted version of the first fix follows below):
- remove the '|pysssss' string on line 143 of the workflow JSON file
- rename the 'diffusion_pytorch_model.safetensors' file you downloaded to 'flux_vae' before adding it to the vae folder
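For anyone who would rather script the first fix than hand-edit line 143, here is a minimal Python sketch; the workflow filename is hypothetical, so adjust it to whatever you downloaded:

```python
import json

path = "flux_gguf_workflow.json"  # hypothetical name of the downloaded workflow file
with open(path) as f:
    wf = json.load(f)

# "LoraLoader|pysssss" is the LoRA loader registered by the ComfyUI-Custom-Scripts
# extension; swapping in the stock "LoraLoader" type lets the graph load without
# that extension installed.
for node in wf.get("nodes", []):
    if node.get("type") == "LoraLoader|pysssss":
        node["type"] = "LoraLoader"

with open(path, "w") as f:
    json.dump(wf, f, indent=2)
```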
@super_kenil 2 months ago
The reason for renaming?
@cristianoazevedo8386 1 month ago
@@super_kenil I think it's because there are two different VAEs, one for dev and another for Schnell.
@kawa9694 1 month ago
Doesn't matter @@super_kenil
@henrismith7472 26 days ago
I renamed it but this is still happening:
Prompt outputs failed validation
VAELoader: - Value not in list: vae_name: 'flux_vae.safetensors' not in ['taesd', 'taesdxl', 'taesd3', 'taef1']
@Huang-uj9rt 2 months ago
Because of my professional needs and Flux's steep learning curve, I had been using mimicpc to run Flux; it can load the workflow directly, I only need to download the Flux model, and it handles the details wonderfully. But after watching your video, running Flux on mimicpc finally feels different: I feel like I'm starting to get the hang of it.
@lennoyl 2 months ago
Thanks for the video. I haven't tried those GGUF models yet (still downloading... ^^), but being able to choose a model that suits your PC config is very nice. PS: the GGUF nodes are now available in the ComfyUI Manager, so there's no more need to run the git clone command in PowerShell.
@DarioToledo 2 months ago
Thank you for the guide. I've actually seen Schnell improve from about 20 s/it to 15 s/it with GGUF on my 3050 Ti 4GB.
@FirstLast-ye7qo 2 months ago
5 stars!! It works for me on an RTX 2050 4GB; it takes around 2 minutes with Schnell, which is a lot better than not working at all. Image quality is great as well.
@weilinliang 2 months ago
Exactly what I'm looking for. You're the best! 👍 This is super helpful.
@自學成才 2 months ago
Thanks a lot!! This video really saved me. I'd been puzzling over this problem for a few days! Thank you very much!
@TrevorSullivan 2 months ago
Which Text-to-Speech model are you using to generate these videos? Sounds really similar to some others I've heard.
@HIMARS-M124 15 days ago
The best instructions, but it should be said that you first need to install the ComfyUI Manager.
@TheLocalLab 14 days ago
Yeah, I was pretty early when I first got this installed. I don't think it was even available in the ComfyUI Manager at the time I made this video; if it was, I didn't install it through there. I believe you can now, which should be a lot easier.
@DaniDani-zb4wd 2 months ago
I want to see a comparison: what is the drop in quality between versions?
@SebAnt 2 months ago
WOW, great intro to the latest!!
@cgdtb 1 month ago
Thanks
@casper508 2 months ago
Can't get past this error. I've got all requirements installed correctly.
Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 152, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 82, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/content/drive/MyDrive/AI/ComfyUI/execution.py", line 75, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/content/drive/MyDrive/AI/ComfyUI/custom_nodes/ComfyUI-GGUF/nodes.py", line 130, in load_unet: model = comfy.sd.load_diffusion_model_state_dict(
@TheLocalLab 2 months ago
Update your ComfyUI to the latest version.
@rickytamta87 2 months ago
It works... Thank you!!
@dongleo-zk4cd 1 month ago
I completed it step by step according to your instructions. The moment the image came out, I was surprised! I feel very accomplished, thank you! Since my graphics card is a GTX 1060 6GB, this GGUF Q4 model took about 3 minutes to generate a picture, which is still very slow. If I want to generate a picture within 1 minute, do you have any graphics card to recommend?
@TheLocalLab 1 month ago
I'm happy you successfully installed the workflow. If you're looking to get a graphics card, I would recommend a used RTX 3090 (24GB VRAM) if you can find one, which would not only get you under a minute but also set you up to run bigger image models faster, as well as other resource-intensive AI models like open-source LLMs. GTX cards are known to be quite a bit slower than RTX cards. If you're just looking for a cheaper upgrade for your 1060, instead of the 3090 you can go with the RTX 3060 (12GB VRAM), which is a couple hundred dollars cheaper than the 3090 but has its limitations too.
@dongleo-zk4cd 1 month ago
@@TheLocalLab Very helpful to me, thanks.😊
@Kapharnaum92 2 months ago
Hi, thanks a lot for your video. Very clear. However, when I start ComfyUI, I get the following error: Missing Node Types > LoraLoader|pysssss. Any idea how to solve this?
@TheLocalLab 2 months ago
Try updating your ComfyUI.
@Kapharnaum92 2 months ago
@@TheLocalLab I updated and it still didn't work. I then modified your JSON file and removed the "|pysssss" part, and it worked.
@TheLocalLab 2 months ago
Interesting, you're the only one who has told me this, but I'm glad it's working for you now. Enjoy.
@expaintz 2 months ago
Very cool intro to GGUF!
@EricTyler-b3v 1 month ago
Is this only for Windows? I'm stuck at this command: ".\python_embeded\python.exe -s -m pip install -r .\ComfyUI\custom_nodes\ComfyUI-GGUF\requirements.txt". I'm trying to run on Google Colab.
@TheLocalLab 1 month ago
The portable ComfyUI package is only for Windows; you will need to install Comfy manually on Linux. On Colab, if you can navigate back to the main directory that hosts the "python_embeded" folder and run the command there, it could possibly work.
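For Colab specifically, a hypothetical notebook cell along these lines should cover the same step as the Windows command, assuming ComfyUI was cloned into /content/ComfyUI (Colab uses the system Python, so there is no python_embeded folder):

```python
# Hypothetical Colab cell: clone the GGUF custom node and install its
# dependencies into the notebook's Python environment.
!git clone https://github.com/city96/ComfyUI-GGUF /content/ComfyUI/custom_nodes/ComfyUI-GGUF
!pip install -r /content/ComfyUI/custom_nodes/ComfyUI-GGUF/requirements.txt
```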
@Xammblu_Games 1 month ago
Is the no-LoRA workflow correct? I'm seeing a connected LoRA node in the workflow. If I did want to use a LoRA with the Schnell model, can I only use LoRAs that have Schnell in their name?
@TheLocalLab 1 month ago
Yes, the no-LoRA workflow doesn't have its model node connected to the KSampler's model input, so the LoRA won't affect the generated image. If you want, you can disconnect the clip inputs from the LoRA node and reconnect the DualCLIPLoader's clip outputs directly to the CLIP text encode nodes. I believe you can use almost any LoRA with these Flux models; I've been using some SD LoRAs with the Flux dev GGUF model and they've worked with no issue. Try it out and see if any errors pop up.
@Xammblu_Games 1 month ago
@@TheLocalLab Thank you so much!! My power supply can only support a GTX 1050Ti 4GB but it's working! All thanks to you!
@TheLocalLab 1 month ago
No problem man, have fun with it!
@bikgrow 1 month ago
Clear video
@haon2205 1 month ago
The opening music sounded like "Think About The Way" by Ice MC.
@liborbatek8938 1 month ago
Thanks for a great tutorial!! Just curious: I'm using the Schnell version (Q4_0) as suggested in your vid, but I really can't generate at just 4 steps as it's quite blurry... so I'd like to know if it's possible to get nice sharp results in this setup with just 4 steps. Thanks for any advice... P.S. at 20 steps it works really nicely!
@TheLocalLab 1 month ago
I actually used the dev model in this video, as it provides better quality but indeed needs more than 4 steps to get good results. I tried improving my image quality with the Schnell model but always ended up going back to dev. You can try using a merged dev-and-Schnell model from Civitai to generate better 4-step images, or maybe add some Schnell LoRAs to the workflow.
@cgdtb 1 month ago
Thanks...
@AInfectados 1 month ago
How do I control the strength of the LoRA? Should I modify the CLIP or the MODEL value?
@TheLocalLab 1 month ago
You can modify both, but I would focus more on the model strength value.
@AInfectados 1 month ago
@@TheLocalLab Thx.
@KITFC 2 months ago
Thanks, but I got an error: "Warning: Missing Node Types. When loading the graph, the following node types were not found: LoraLoader|pysssss". Do you know how to fix this?
@TheLocalLab 2 months ago
Another commenter had this same issue, and his solution was to modify the workflow JSON file and remove the "|pysssss" in the model loader section. You can open the file in Notepad or VS Code and see if it works for you as well.
@KITFC 2 months ago
@@TheLocalLab thanks it worked!
@wildflower401 15 days ago
It says I'm missing UnetLoaderGGUF? How do I get that? Please help!
@TheLocalLab 14 days ago
If you have the ComfyUI Manager installed, you can simply install the missing nodes through there. Just click and install the ones you're missing.
@defidigest9 1 month ago
I'm getting errors, bro... when I press Queue Prompt:
C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py:79: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:212.) torch_tensor = torch.from_numpy(tensor.data) # mmap
How do I fix this? Thanks
@TheLocalLab 1 month ago
Are you still able to generate images? I believe this is just a warning that can be safely ignored.
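For readers who find that warning noisy, the standard Python way to silence just this message is sketched below. Where to put it is an assumption; it would need to run at startup, e.g. near the top of ComfyUI's main.py:

```python
import warnings

# Suppress only the non-writable-NumPy-array warning quoted above. It is
# harmless here: the GGUF loader memory-maps the file read-only and never
# writes to the resulting tensor, so the undefined-behavior case the warning
# describes does not apply.
warnings.filterwarnings(
    "ignore",
    message="The given NumPy array is not writable",
    category=UserWarning,
)
```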
@rogersnelson7483 2 months ago
What type of LoRAs can you use with the GGUF workflow? The Flux LoRAs I tried from Civitai (Flux.1 D) have no effect even when the trigger words are used.
@TheLocalLab 2 months ago
Just an FYI: if you used my workflow, be aware that you have to connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node, for the LoRAs to actually take effect.
@alexhaba7503 1 month ago
How could we add an image as a parameter to the prompt? By the way, fantastic video. RTX 4060 Ti 16GB, Q8 model: about 20 seconds at 512x512 and 20 steps.
@TheLocalLab 1 month ago
Thanks. If you're talking about image-to-image instead of text-to-image, you can use my recently dropped image-to-image workflow, which you can find in the description here - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-sbnMn8nMQgk.html. It comes with some upscaling nodes as well.
@LinusBuyerTips 2 months ago
Thank you for the video. Which graphics card do you have, and which Flux model works fast on an RTX 4060 8GB?
@TheLocalLab 2 months ago
I have an RTX 4050 6GB and I run the Q4_0 dev model, which pumps out images in less than 1:30. With LoRAs the quality is even better.
@LinusBuyerTips 2 months ago
@@TheLocalLab I appreciate your response. Keep up the good work; can't wait to see more content.
@LinusBuyerTips 2 months ago
@@TheLocalLab Hi, when I try to use GGUF, I get this error: "Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'." Any idea how to solve this? Thx
@TheLocalLab 2 months ago
@@LinusBuyerTips Update your ComfyUI to the latest version.
@LinusBuyerTips 2 months ago
@@TheLocalLab Yes, I did that, but I still get the same error?
@JamesPound 2 months ago
The fp8 t5xxl model gives less coherence and detail. Try a fixed seed with fp16.
@johnnyapbeats 1 month ago
I have a LoRA of myself and I want to render myself as a Studio Ghibli character. I've tried everything in my power but I still can't achieve that. I'm trying to combine my LoRA and a Studio Ghibli one I found on Civitai, both trained on the Flux dev base model, and I still can't generate myself as one! Any help?
@TheLocalLab 1 month ago
Did you check to make sure the LoRA node in the workflow is connected correctly?
@johnnyapbeats 1 month ago
@@TheLocalLab Yeah, I have them daisy-chained. I really don't know what's left to do...
@TheLocalLab 1 month ago
@@johnnyapbeats Have you tested your LoRA with a different workflow?
@FrostyDelights 1 month ago
It seems like the drop in quality isn't worth it to me. Will the quality get better if I use the 23 GB model?
@TheLocalLab 1 month ago
You can try the NF4 and fp8 models and see how you like them, or use the full-precision model if you have the compute to run it locally.
@FrostyDelights 1 month ago
@@TheLocalLab I tried the full model and my PC exploded.
@FrostyDelights 1 month ago
@@TheLocalLab Ty for the reply, it really helped me!
@TheLocalLab 1 month ago
@@FrostyDelights hahahaha. I think the next best step down would be the fp8 model version. Give it a shot.
@HaiNguyen-qt6vc 2 months ago
Hello, can I use a LoRA externally?
@TheLocalLab 2 months ago
What do you mean? Are you asking if you can use a different LoRA from a different source?
@AcamBash 2 months ago
There is a pretty important error in your workflow. You have to manually link the "Load LoRA" node to the "KSampler" via the model link. Otherwise the LoRA won't be applied.
@TheLocalLab 2 months ago
I understand what you mean, but honestly I'd rather keep the use of LoRAs optional. Maybe I should've mentioned this in the video. If I'd attached the LoRA node in the workflow, you would have to either use it in order to generate or detach the node manually if you don't.
@AcamBash 2 months ago
@@TheLocalLab Okay, it's all good. Watching the video I thought you used a LoRA and wondered why it didn't work for me. I see you've included a hint now.
@TheLocalLab 2 months ago
Yes yes, I'll be sure to mention that again in the future. Hope you're enjoying these GGUFs.
@Hood_History_Club 2 months ago
@@AcamBash We all did. Not sure how to 'connect' the LoRA node to whatever, because the nodes don't match.
@schuss303 2 months ago
Thank you for the video. Does anyone know the best way to use the 155H with integrated graphics (8GB), 16GB 7600MHz RAM, an NPU, and a very fast drive? It's the 14th-gen Zenbook. Thank you for any info.
@leonv_photographySG 1 month ago
Hi, I'm getting a blurred image when I generate; not too sure what the issue is.
@TheLocalLab 1 month ago
Yeah, try not to change the CFG. The CFG should be 1; higher values cause blurred images.
@Vanity7k 2 months ago
Hi, mine's just outputting a black screen. I have a 4080. Has anyone had this issue?
@bishwarupbiswas4234 1 month ago
Prompt outputs failed validation
DualCLIPLoader: - Value not in list: clip_name1: 'clip-vit-large-patch14 .safetensors' not in ['model.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
LoraLoader: - Value not in list: lora_name: '{'content': 'flux_realism_lora.safetensors', 'image': None, 'title': 'flux_realism_lora.safetensors'}' not in ['lora.safetensors']
Can anyone help me with the error :)
@TheLocalLab 1 month ago
Did you download the correct CLIP models and place them in the clip folder? Click the arrows in the DualCLIP node to make sure you can see them. If you can't select the models in the selector fields, then the model may not be in the clip folder.
@slavazarkeov4600 11 days ago
@@TheLocalLab Works very well once I select the correct models. Thank you for the tutorial!
@anirudhsays1534 2 months ago
The VAE file shown in your video is different from the one in the link; also, the VAE file names in both links are the same, and we can't put two files with the same name in the same folder. Can you please advise?
@TheLocalLab 2 months ago
Yes, I changed the names of the original VAE files after I downloaded them. You can just rename them to something like flux_vae_schnell or flux_vae_dev; it should still work fine.
@SiddharthSingh-oy5bc 1 month ago
The Flux VAE file won't download from Hugging Face ("file not available on site"). Can someone please help with the file?
@TheLocalLab 1 month ago
I just checked; the VAE file is still available. Just go into the vae folder in the Files and versions section for either the dev or Schnell model and download "diffusion_pytorch_model.safetensors". I renamed it to flux_vae when I downloaded it.
@iBluSky 1 month ago
You need to sign in and agree to the terms popup, then you can download it.
@mashrurmollick 1 month ago
Prompt outputs failed validation
DualCLIPLoader: - Value not in list: clip_name1: 'clip-vit-large-patch14 .safetensors' not in ['model.safetensors', 't5xxl_fp8_e4m3fn.safetensors']
LoraLoader: - Value not in list: lora_name: '{'content': 'flux_realism_lora.safetensors', 'image': None, 'title': 'flux_realism_lora.safetensors'}' not in ['lora.safetensors']
I'm getting this error. I used the Schnell model; what should I do?
@TheLocalLab 1 month ago
Check your clip folder inside the models directory to make sure your clip-vit-large-patch14 .safetensors is inside.
@mashrurmollick 1 month ago
It is there with the name "model.safetensors"
@TheLocalLab 1 month ago
@@mashrurmollick In the DualCLIPLoader node in the workflow, use the arrows to select 'model.safetensors' to use the model. Or you can rename the file to 'clip-vit-large-patch14 .safetensors' if you like.
@mashrurmollick 1 month ago
Thanks man, I managed to get past all the error messages. One thing I'm facing: when I press the "Queue Prompt" option, the green border that highlights the nodes jumps from one node to the next, up until the "KSampler" node, where it remains stuck. I reduced the number of steps from 20 to 4 and it's still stuck. Can you help me out?
@TheLocalLab 1 month ago
@@mashrurmollick That's actually normal. The KSampler is the longest step in the process; that's where the model actually starts generating the image. Check your terminal when it reaches the KSampler node and watch it for a few minutes; you should see the progress bar increase as each step gets executed. Depending on your PC specs and the model you use, this can happen fast or take a while.
@newreleaseproductions9150 2 months ago
Can you make a tutorial on training your own models using AI Toolkit?
@antiplouc 2 months ago
Unfortunately this has no effect on a Mac. No speed increase at all, and I tried all the GGUF models. Any idea why? Or is it simply not designed to work on Macs?
@TheLocalLab 2 months ago
No, GGUFs are also compatible with macOS, but there could be a variety of reasons why you're not seeing speed increases, especially with lower quants. There's just not enough information to really tell.
@antiplouc 2 months ago
@@TheLocalLab What information do you need? I have a Mac Studio M2.
@nkalra0123 2 months ago
Why did you delete the workflow.json from Google Drive?
@TheLocalLab 2 months ago
You must still be using the old link. I added a new link due to some issues with the previous workflow. The link is in my description.
@antoniojoaocastrocostajuni8558 2 months ago
Can I use Python and diffusers to run this model from code, instead of ComfyUI?
@TheLocalLab 2 months ago
Well, the only two Python dependencies for the ComfyUI-GGUF extension node are gguf>=0.9.1 and numpy.
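To answer the question directly: recent diffusers releases can load these GGUF files without ComfyUI. A minimal sketch, assuming diffusers >= 0.32 with the gguf package installed, access to the gated FLUX.1-dev repo, and the city96 GGUF mirror (the prompt and the LoRA repo are just examples, not the video's exact setup):

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Load the quantized transformer straight from a GGUF file.
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_0.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# The base repo supplies the text encoders, VAE, and scheduler.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("XLabs-AI/flux-RealismLora")  # example Flux LoRA
pipe.enable_model_cpu_offload()  # offload idle submodels to keep VRAM low

image = pipe(
    "a cinematic photo of a lighthouse at dusk",
    num_inference_steps=20,
    guidance_scale=3.5,
).images[0]
image.save("flux_gguf.png")
```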
@gsudhanshu3342 2 months ago
Can you do a similar type of video for Forge?
@TheLocalLab 2 months ago
Could be a possibility in a future video.
@GenoG 2 months ago
@@TheLocalLab Me too please!! 😘
@alifrahman9447 2 months ago
Hey man... I've installed everything accordingly, but the Unet Loader (GGUF) gives me this error every time: "Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'". I'm using the flux1-dev-Q6_K.gguf file. Tried a different workflow, same error... everything is updated.
@superfeel1275 2 months ago
You don't have the latest Comfy version. In the Comfy folder, open cmd and run "git pull".
@TheLocalLab 2 months ago
Yeah, I think you need to update your ComfyUI. I would also look into installing the ComfyUI Manager to make updating and installing new nodes a breeze.
@alifrahman9447 2 months ago
@@TheLocalLab Already done it, man. Still the same error; can't find a solution. There's a pink border on the UnetLoader. Update: Thanks, it worked!
@alifrahman9447 2 months ago
@@superfeel1275 Thanks man, it worked. I had updated through the Manager, but when I updated using cmd, it worked 😊😊
@alifrahman9447 2 months ago
It's just... um... I'm confused by the many model versions! I have a 2060 12GB and I'm using the NF4 model; it takes 90 sec to generate a 1024x1024 image. So if I prefer quality over speed, well, slightly faster generation will definitely help, so which model should I choose, bro? AND please make videos with your own voice, man 🙂🙂👌👌
@TheLocalLab 2 months ago
You can try the Q6_K and Q8_0 quants and see how the output quality compares with the NF4. It's best to experiment to really find the sweet spot, especially since you can improve results with LoRAs, which is why I like the lower quants (Q4_0).
@alifrahman9447 2 months ago
@@TheLocalLab thanks man, gonna try both
@didichung4377 2 months ago
LoRA not working with this Flux GGUF version...
@TheLocalLab 2 months ago
Connect the GGUF model loader node to the LoRA node, then connect the LoRA node to the KSampler node. Be advised that you will also need to make sure there's always a LoRA loaded to use that workflow. If you no longer want to use the LoRA, revert back to the default workflow.
@darajan6 2 months ago
Hi, I wonder if a 3070 8GB card + 64GB RAM could run this workflow?
@TheLocalLab 2 months ago
My friend you can for sure run this and more with those specs. You should have no issue.
@xD3NN15x 2 months ago
Thx! But I get an error when trying to use it:
Error occurred when executing UnetLoaderGGUF: module 'comfy.sd' has no attribute 'load_diffusion_model_state_dict'
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute: output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data: return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list: results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\anden\Downloads\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 130, in load_unet: model = comfy.sd.load_diffusion_model_state_dict(
@TheLocalLab 2 months ago
You have to update your ComfyUI, either through the ComfyUI Manager and a restart (recommended), a git pull via the command line, or just installing the latest version.
@falconbmstutorials6496 2 months ago
Would love to see a list of wget commands instead of a list of websites... for a beginner this is too confusing.
@TheLocalLab 2 months ago
If a user can't download a model from HF and drag it into their models folder once it's complete, I'm not sure I would trust them to run wget commands in the correct directories. I wouldn't be surprised if the VAE model somehow ended up in the Python library packages folder, lol.
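That said, for readers who do want a scripted download, here is a sketch using huggingface_hub instead of wget. The repo IDs and target folders are assumptions based on the setup shown in the video; adjust the paths to your own install, and run it from the directory containing ComfyUI/:

```python
from huggingface_hub import hf_hub_download

# Fetch the Q4_0 dev quant into ComfyUI's unet folder.
hf_hub_download(
    repo_id="city96/FLUX.1-dev-gguf",
    filename="flux1-dev-Q4_0.gguf",
    local_dir="ComfyUI/models/unet",
)

# Fetch the fp8 T5 text encoder into the clip folder.
hf_hub_download(
    repo_id="comfyanonymous/flux_text_encoders",
    filename="t5xxl_fp8_e4m3fn.safetensors",
    local_dir="ComfyUI/models/clip",
)
```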
@Blucee-w3k 2 months ago
One job and it fails... The flux_vae you have is not in the description, or you renamed it; either way it doesn't work.
@droidJV 2 months ago
It's in the description; he just renamed it on his computer. It's the file called "diffusion_pytorch_model.safetensors".
@Xplo8E 2 months ago
I have an Nvidia 3050 4GB, does it run?
@TheLocalLab 2 months ago
You should be able to run one of the GGUF quants for sure.
@erans 2 months ago
1.70 it/s (around 30 seconds) per 512x512 generation on an RTX 3060 Ti.
@henrismith7472 26 days ago
Prompt outputs failed validation
VAELoader: - Value not in list: vae_name: 'flux_vae.safetensors' not in ['diffusion_pytorch_model.safetensors', 'taesd', 'taesdxl', 'taesd3', 'taef1']
Man, so close...
@TheLocalLab 26 days ago
OK, two things: 1. Did you download the VAE file from Hugging Face? 2. If you did, change the name of the VAE file you downloaded to "flux_vae.safetensors".
@henrismith7472 26 days ago
@@TheLocalLab Would you mind telling me the URL of the correct one, please? Pretty sure I have the right one; trying the renaming thing now. I think it's working, I hear my GPU fans firing up, lol. Yep, it's working, thank you so much.
@TheLocalLab 26 days ago
@@henrismith7472 You should find all the links in the description.
@Hood_History_Club 2 months ago
"You have to manually link the 'Load Lora' node to the 'KSampler' via the model link." I don't have telepathy. How do I do this? The nodes don't match. And should I use the LoRA Loader with the snake or without the snake? Remember, I can't read your minds.
@TheLocalLab 2 months ago
It's all a learning process, my guy. I don't know what you meant by "the snake", but to link the nodes, simply click and drag a link from the purple "MODEL" output of the UNet model loader to the purple "model" input on the left of the LoRA loader node, then connect the purple "MODEL" output on the right of the LoRA node to the purple "model" input on the left of the KSampler. It's easy as cake.
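If clicking and dragging is hard to follow, you can also verify the wiring programmatically. Below is a sketch that prints every MODEL connection in the workflow JSON, assuming ComfyUI's usual LiteGraph export format (the filename is hypothetical); with the LoRA active you should see UnetLoaderGGUF feeding LoraLoader feeding KSampler:

```python
import json

with open("flux_gguf_workflow.json") as f:  # hypothetical workflow filename
    wf = json.load(f)

# Map node IDs to their types so the printout is readable.
nodes = {n["id"]: n["type"] for n in wf["nodes"]}

# Each link is [link_id, source_node, source_slot, target_node, target_slot, type].
for _, src, _, dst, _, ltype in wf["links"]:
    if ltype == "MODEL":
        print(f"{nodes[src]} --MODEL--> {nodes[dst]}")
```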
@AmerikaMeraklisi-yr2xe 2 months ago
How much GPU RAM do I need for 1024x1024 px?
@TheLocalLab 2 months ago
Well, it depends on the quant you use and the (regular) RAM you have, but anything over 3GB of VRAM can produce a 1024x1024 image with the right quant. You would probably just wait longer if you're using less VRAM.
@TrevorSullivan 2 months ago
The photo of President Trump with a rifle is awesome! Nice one! 😉
@KlausMingo 2 months ago
AI is moving so fast, every day there's something new, it's hard to keep up and try everything.
@1lllllllll1 2 months ago
There’s an AI that’ll keep up with progress and distill it all for you to consume once a week.
@spiritform111 1 month ago
Hm... for some reason it generates white noise.
@spiritform111 1 month ago
Working now... idk what I did, lol.
@didichung4377 2 months ago
Missing LoRA loader nodes right here...
@TheLocalLab 2 months ago
Look closer, the LoRA node is included in the workflow towards the bottom left.
@Enigmo1 2 months ago
@@TheLocalLab It's not connected to the KSampler, so you're not getting any results out of it.
@alex.nolasco 2 months ago
I assume the XLabs ControlNet is incompatible?
@lennoyl 2 months ago
With the Q4_0 Schnell model it works, but not very well: I had to reduce the ControlNet strength below 0.55 to make it look correct; above 0.65, a photo looked like an illustration with plastic faces. I will download the f16 dev model to test whether it works correctly with it. Edit: I tested some non-Schnell models (Q6_K, Q5_K_S, Q8_0, F16) and it works very, very well with all of them. The issue I had was only with the Schnell versions.
@mrsam2822 1 month ago
On a Mac M2 16GB: 512x512 generated in 7 minutes! 😅
@TheLocalLab 1 month ago
No GPU? Also, which GGUF quant did you use?
@brunozarrabe7122 2 months ago
I'm just getting a black box instead of an image
@PunxTV123 2 months ago
I got the same result as your image without using a LoRA.
@TheLocalLab 2 months ago
Yeah, during the video I don't think I used a LoRA. If you look at the Unet loader node, it was connected directly to the KSampler, skipping the LoRA Loader node.
@PunxTV123 2 months ago
@@TheLocalLab Do you have the new workflow? Sorry, I'm new to ComfyUI; I don't know what to connect and where.
@TheLocalLab 2 months ago
@@PunxTV123 Yeah, I just updated the workflow with the LoRA node connected. Here's the Google link - drive.google.com/file/d/1zznjgT4zvE9PTNitHPAXmd5N_oFArhn-/view?usp=sharing. I will also add this link to the description for later if needed. Let me know if it's good.
@PunxTV123 2 months ago
@@TheLocalLab It works now, thanks.
@Blucee-w3k 2 months ago
Where is the VAE???
@oszi7058 2 months ago
I only get blue pixels.
@casper508 2 months ago
They just won't let us stick to one setup... lol
@ceeespee2204 2 months ago
19 hours straight with no sleep, food, or breaks, and not a single image, or a glimpse of a UI for that matter. It's just internet dumpster diving for convoluted code snippets that don't work. Even the official code on AMD's website is broken, and forget troubleshooting, that's not possible. I'm so hungry, tired, and tilted that this text is like walking barefoot on Legos to my eyes. This 3090 is just sitting there looking at me like I would ever consider putting it into one of my systems; I don't care whether it would work if I did or not. Actually, I may just go give it the Office Space treatment with an Estwing hammer. That would make me feel much better, because this sadomasochistic Linux entropy is easily the most irritating thing I've ever dealt with in my life. Sorry I'm exploding on your page, I'm delirious, but no food or sleep until it works or I die trying. Well, time to wipe the partition and try again.
@TheLocalLab 2 months ago
I'm rooting for you buddy!
@Ashutoshprusty 2 months ago
Use Miniconda to make your life simple.
@ceeespee2204 2 months ago
@@TheLocalLab I got it! And as per standard operating procedure in my lovely life as a die-hard AMD fan, the road less traveled sucked really badly there for a bit, but I was able to teach myself (with some help from forums and my GPT2's impeccable knack for web searches) a very large chunk of Linux terminal commands, Python, and this lovely utility called Docker. It's been like that since Y2K for me: every issue that comes up leads to a challenge, and by overcoming those challenges you get juicy XP gains. And yeah, AMD is a company at the end of the day, but at least they don't cross the line like Skynet and Intel (who is getting a nice slice of what-they-deserve pie). Either way, apologies for stumbling in here grumpy and delirious, and thank you, sincerely, for the vote of confidence. May your code never throw errors. Cheers!
@zdrive8692 2 months ago
On Apple Silicon (M1 Pro), no matter what, it always outputs green/blue/black boxes or small triangular shapes, like a 90s TV with no signal. I tested all the Q models with both CPU and GPU and it does not work... it is for Windows with an Nvidia GPU only.
@mashrurmollick 1 month ago
diffusion_pytorch_model.txt: "file wasn't available on site". Can someone please help me?
@TheLocalLab 1 month ago
The file is diffusion_pytorch_model.safetensors, if you're talking about the Flux VAE. It's not a text file; I just used that for demonstration purposes.
@mashrurmollick 1 month ago
@@TheLocalLab Yes, I'm talking about the VAE file of the Flux dev model.
@TheLocalLab 1 month ago
@@mashrurmollick Yes, then you can simply download the .safetensors file from either the dev or Schnell model page.
@mashrurmollick 1 month ago
When I press the download button for the Flux dev VAE file "diffusion_pytorch_model.safetensors", my browser's download progress bar says "diffusion_pytorch_model.txt file wasn't available on site".
@VirtualDarKness 4 days ago
@@mashrurmollick You need to sign in and accept the terms.