@1:43
* Requires Nvidia graphics card
* Minimum 12GB VRAM
* At least 32GB system RAM
Not so fast:
* Can be run on CPUs and AMD GPUs too (though it's painfully slow)
* GPUs with 8GB VRAM can also be used (tested with an Nvidia 3060)
* Even 16GB system RAM is enough (but make sure you're using FP8 mode)
Not the most ideal setup, but it gets the job done.
I have an RTX 3080 12GB + 32GB RAM. I'm gonna fire this up because FLUX is damn good. I'm just wondering if we can make images with our own character references. Maybe not yet?
@@bigbrotherr I'm not sure, but if you use your Flux gens with a ComfyUI node for inpainting or face swapping, would that work? I've barely tried Comfy, only a few weeks ago, and I'm still on a1111 :)
It works on a 3050 4GB with 32GB RAM; I had to use the Schnell FP8 model at a smaller resolution (768px) to get it to around 1 min 40 sec per image. The big benefit of this model is that it gives good results, no deformed humans, so the long generation time is worth it.
You realize it's a game changer after generating a couple of images. The prompt is understood very well in Flux, which didn't happen before: hands with 5 fingers, natural poses, crazy crazy stuff. Can't wait for more updates.
The quality is great, but I don't see it becoming very popular until it can be used with ControlNets and IP-Adapter. It doesn't recognize negative prompts either. Until then, SDXL and SD1.5 will continue to dominate.
For weight dtype: if you're on a 24GB VRAM card, you should use the default instead of fp8 for higher quality. You didn't say anything about that, so I thought I'd bring it up. Good tutorial vid for newbies otherwise. Sizes up to 2 megapixels (1920x1080, for instance) work.
@@neoneil9377 idk what to tell ya... close all other programs? I literally have nothing open but Edge & ComfyUI and it works flawlessly on my 3090; it all loads properly into VRAM at fp16.
I use the Flux Schnell model with my RTX 3060 12GB and it generates just fine. The team behind Black Forest worked on Stable Diffusion and Midjourney, so they really know their stuff.
Just for the record: I've tried this on my home build with just a GTX 1070 (8GB VRAM) and 32GB system RAM, and I'm getting a 1024x1024 image in about a minute, WITH Photoshop running in the background and two browsers open.
I use the Schnell version of the Flux model in FP16 mode on my 4080, and a 1024px square image is generated in only 8 seconds. I don't consider that too slow, especially for a first release. The Dev model needs about 28 seconds, though, and that one might be slow for some. But the difference in quality is not huge, so I'm using the Schnell model in the end. Even the difference between FP8 and FP16 modes is not so great, so systems with less than 32GB RAM are also fine.
@@context_eidolon_music just asking because when I used MJ, it was incredibly slow at peak times. I cancelled it because I can't rely on an online generator. As slow as Flux is for you, do you think the image quality is worth it?
I don't understand. When I try to queue a prompt after doing all the other steps, the terminal drops back to (env) (base) H:\Research\pino\api\comfyui.git\app> and the browser disconnects. Help
The workflow does not open with a "Load Diffusion Model" node. It has a UNETLoader node, which doesn't work if I am using one of the safetensors models...
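If it helps: UNETLoader and "Load Diffusion Model" are the same node as far as I can tell; newer ComfyUI builds just renamed it, so updating ComfyUI should make your workflow match the video. It loads the Flux .safetensors files from models\unet, not models\checkpoints, so the file has to be placed there. Assuming the default portable layout, something like:

    :: hypothetical paths, adjust to your install
    move flux1-dev.safetensors ComfyUI\models\unet\

then hit Refresh in the ComfyUI menu so the dropdown rescans the folder.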
I have one AI model already made with RenderNet. I'm using that face model in ComfyUI with the Epic Realism model and ReActor face swap to create different images: same face, different places. So is there any option to use Flux Dev to do the same thing? I searched but I couldn't find a way to use my model's face to create pictures. I'm starting to think SD is still the better option.
Hello, when are you going to launch the "Ultimate Guide to AI Digital Model/Influencer on Stable Diffusion ComfyUI (Beginner Friendly)" course? I am waiting for it. Can you tell me the exact launch date?
0:13 => "...another company was working in silence." Good one, bro. Same people, but they needed to start a new company because they are being sued by Getty and it's looking like that lawsuit will end Stable Diffusion. Flux.1 is basically Stable Diffusion 3.1 with more model data.
How is some generic pose such a great showcase for this model, just wondering? Yes, the model can understand the meaning of what you're looking for quite well. But most models can already generate these super generic stock-photo poses; what they fail at are the not-so-common poses, and even worse, most have so far been very limited with view angles and different FOVs, let alone open mouths + teeth, crossed hands and such, because those images weren't in the training data set (or didn't make it through filtering), or the model simply can't handle details that small. That said, this model definitely doesn't mangle poses and seems way better with hands; I saw some pretty consistent and varied yoga poses on Reddit (not sure which of these models), but anyway, way more consistent than any of the SD models.
I cannot download ae.sft, but I can find ae.safetensors, so I renamed it to ae.stf, and now I'm getting an error:
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'ae.sft' not in ['taesd', 'taesdxl', 'taesd3']
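For anyone hitting this: it's a filename mismatch. The workflow expects 'ae.sft', but the file got renamed to 'ae.stf' (letters swapped), and ComfyUI doesn't pick up .stf files at all, which is why only the built-in taesd entries show in the list. Assuming the default portable layout, fixing the name and hitting Refresh should do it:

    cd ComfyUI\models\vae
    ren ae.stf ae.sft

Leaving the file as ae.safetensors and just re-selecting it in the VAELoader dropdown should also work on current builds.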
Do you know how to install this in Stability Matrix? The Models folder, which is shared with the other packages in Stability Matrix, does not have a Unet folder like regular ComfyUI does.
What about negative prompts, to avoid constantly getting supermodels with tons of make-up on their faces? Also, only 1 out of 10 pictures for me has correct limbs and hands. Stable Diffusion won't be buried for a very long time. I tried this prompt and all I get is garbage: "an astronaut sitting on a rock on moon and playing guitar. in the background the rising earth wich is in an atomic world war."
Isn't the prompt a bit crazy? AI seems to suffer when you give it too many details (sitting on a rock + playing guitar). I would try only sitting on a rock, or only playing guitar, for better results.
I have a GTX 1080 Ti with 11GB VRAM. I am running the Flux models with the FP8 T5 at 1024x1024 resolution, at around 25 s/it. Quite slow (SDXL/SD3/Kolors by comparison are around 2-3 s/it), but it isn't unreasonable, tbh: given that I am using the full 23GB Flux models (tried both Schnell and Dev) with only 11GB VRAM, it is to be expected. Edit: The point of my comment was that even on 11GB VRAM I am able to run the models. I also only have 32GB system RAM, which is probably pushing the limits, as it completely saturates my VRAM, but it works with less than 12GB.
SAI is definitely finished, but SDXL is going to stay for a while. The problem with Flux.1 is that the Dev and Schnell versions are distilled models from their API-only Pro model, meaning traditional fine-tuning methods are not going to work. So at best, it's a Midjourney alternative with few community fine-tunes and LoRAs, unlike SD1.5 and SDXL.
Thank you for making the tutorial video. This is my first time installing ComfyUI to test Flux.1. I followed all the instructions and installed everything, but when I click on 'run_cpu.bat' (or 'run_nvidia_gpu.bat'), sometimes the ComfyUI screen appears and sometimes it doesn't. When it doesn't appear, the last command line shown is as below, and when I press any key, the window just closes and nothing happens. I've tried many times but still can't open it. I don't know what this error is. Could you please guide me on how to fix it? Thank you very much. "D:\ComfyUI\ComfyUI_windows_portable>pause Press any key to continue..."
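The actual error is printed right above that "pause" line, but the window closes before you can read it. If you run the same command from a terminal you opened yourself, the output stays on screen; this is roughly what run_cpu.bat does under the hood (assuming the default portable layout):

    cd /d D:\ComfyUI\ComfyUI_windows_portable
    .\python_embeded\python.exe -s ComfyUI\main.py --cpu

Whatever it prints just before exiting is the thing to search for.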
I see that the Schnell model is under the Apache 2.0 license. Isn't that completely free for commercial use? I am confused because they wrote that Schnell is tailored for local development and personal use. Do you know what these actually mean?
@@bentontramell I've tried wide-angle shots and I've tried to give it a focal length, but it just won't work. Something else I've never been able to do is get someone lit only by a single light source, like a flame: Flux just adds lights everywhere. And without negative prompts it's even harder. But it's a V1 and it's quite amazing. Looking forward to V2.
I have a 3080 Ti with 12GB VRAM and 32GB system RAM; for some reason it crashes at "GOT PROMPT". I tried the FP8 version. Other Flux checkpoints give me the same error, but other models work. I downloaded the latest version of standalone ComfyUI.
Thanks for the video, but I don't know why I get this error: "module 'torch' has no attribute 'float8_e4m3fn'". I updated Comfy and even updated my torch version after getting this error, but still no go. Comfy works for everything else, but not Flux :( 128GB RAM, 24GB VRAM, Windows 10, Python 3.10
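torch.float8_e4m3fn only exists in PyTorch 2.1 and newer, so this error usually means ComfyUI is running against an older torch than the one you upgraded (a different venv, or the portable build's embedded Python). A quick way to check which torch Comfy actually sees, assuming the portable build:

    .\python_embeded\python.exe -c "import torch; print(torch.__version__, hasattr(torch, 'float8_e4m3fn'))"

If that prints False, upgrade torch inside that same environment rather than your system Python.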
@@MrGenius2 Thanks, but I did. The only thing I can think of is that some of my models are in the Automatic1111 folders and I added them through the extra_model_paths.yaml, but I placed all the Flux models in Comfy. Which file specifically do you mean? I placed them all.
Whenever I have issues with something new in ComfyUI, I just create an additional ComfyUI portable install. I tried installing Kolors on my first ComfyUI and had issues, so I went to my third iteration of ComfyUI portable, which had way less stuff installed. It worked.
I got the same error. I then tried using the FP16 version of the clip and that worked (though it took like 25 minutes on my rig, which doesn't have as much RAM as yours).
@@Aiconomist Thank you so much for your help. What do I do with this file? It's the last step's folder download: workflow-flux-simple-workflow-schnell-40OkdaB23J2TMTXHmxxu-reverentelusarca-openart
@@InterestingLifeTravels On the official "Flux Examples" webpage you have two pre-generated images (an anime girl and some bottle on the table). Just drag and drop one of these images on top of your ComfyUI, depending on whether you're using Schnell or Dev model. Then the default ComfyUI workflow for your model will show up.
@@InterestingLifeTravels I figured it out: in the ComfyUI interface, click on Load, then select that workflow file (.json), and the UI will look like it does in this video.
Nothing happens after clicking Queue Prompt; the green circle stays on the first box where you enter the model. The CMD says:
got prompt
model_type FLOW
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
RTX 3060 Ti (8GB, I think?). I don't mind waiting a bit longer, but nothing happens.
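On 8GB cards the first load can take a very long time while the ~23GB model spills over into system RAM, so it may be swapping rather than actually hung. If it never gets past the loader node, starting ComfyUI with the low-VRAM flag is worth a try (portable build shown; adjust for your setup):

    .\python_embeded\python.exe -s ComfyUI\main.py --lowvram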
Well, isn't that group at least partially former Stability AI developers? You talk like this was some unknown group of devs. And I wouldn't care about their licenses as long as they are not transparent about their data sets and how they acquired those images (I have no idea about that, though; maybe there is some info out there).
I have downloaded all the files and put them in their right places (IMHO), but when I start the ComfyUI server I just get my usual SD nodes, nothing about Flux.1?? I have reset Chrome etc. too. When I try to Load, nothing is shown to select. I wonder what I have done wrong, please??
Hmmm - then I (re-)installed everything using the Manager (models 311++). Refreshed, and now I get something when I try to load (flux1-dev... and flux1-schnell...), but trying to load one of these I just get: "Unable to find workflow in flux-schnell ... etc"??
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'vaeae.sft' not in ['FluxVAE.safetensors', 'ae.sft', 'taesd', 'taesdxl']
DualCLIPLoader:
- Value not in list: clip_name1: 't5xxl_fp8_e4m3fn.safetensors' not in ['clip_l.safetensors', 't5xxl_fp16.safetensors']
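Both failures mean a node is pointing at a filename that isn't in the matching models folder: the VAE field says 'vaeae.sft' instead of 'ae.sft', and the fp8 T5 encoder isn't in models\clip (only the fp16 one is, going by that list). Re-select valid files in both dropdowns, or check what ComfyUI can actually see, assuming the default layout:

    dir ComfyUI\models\vae
    dir ComfyUI\models\clip

The dropdowns only offer exactly what shows up there, after a Refresh.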