Then make a video teaching how to install ComfyUI on Ubuntu Linux! I saw that you did it for Stable Diffusion, but I really wanted to install ComfyUI on Ubuntu!
The video is sped up. It took me 1 minute and 32 seconds to generate an image with 20 steps in ComfyUI, and 2 minutes and 3 seconds for an image, also with 20 steps, in webUI. My GPU is an RX 550 4GB.
🙂 There was a lot of trial and error when I tried to install ComfyUI without any guidance. I downloaded everything and installed it through the terminal, and somehow it worked... even though it takes up a lot of storage.
First of all, your guide is very clean and easy to follow. I followed all your steps and it all worked: Stable Diffusion opens up in the webpage just like in the video. But when I tested it with a random prompt, it cannot generate, and the error is this: "SafetensorError: device privateuseone:0 is invalid". I tried to download 2 different models but the error is the same. Any idea? I checked online but couldn't find anything.
Sorry, but this video will not work for you. The installation method for Nvidia GPUs is different. You do not need to use torch-directml. The standard torch with CUDA is what you need. Unfortunately, I do not have any tutorial for Nvidia cards yet.
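For reference only, the Nvidia route usually looks something like this (a sketch assuming a CUDA 12.1 build; check pytorch.org for the index that matches your driver):

    # Inside the activated venv: the CUDA build of PyTorch instead of torch-directml
    pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121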
Yes, if you close the terminal session, you will have to do all that to use it again, but everything will already be downloaded and configured, so it only takes a few seconds.
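A typical relaunch looks something like this (a sketch assuming the venv sits next to the ComfyUI folder, as in the video):

    # Open the terminal in the ComfyUI folder, reactivate the venv, and launch
    ..\venv\Scripts\Activate.ps1
    python main.py --directml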
If you are using ComfyUI, download the model and place it in the ComfyUI > models > checkpoints folder. If you are using webUI, place it in the stable-diffusion-webui-directml > models > Stable-diffusion folder. After that, just reload the page in the browser and select the new model.
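For example, in PowerShell (the model file name here is just a placeholder; adjust the paths to match where you installed everything):

    # ComfyUI
    Move-Item .\someModel.safetensors .\ComfyUI\models\checkpoints\
    # webUI
    Move-Item .\someModel.safetensors .\stable-diffusion-webui-directml\models\Stable-diffusion\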
Windows DirectML does not manage memory very well at the moment. On Linux, I was able to use the "--normalvram" argument perfectly and obtained much better performance. Generating an image with exactly the same parameters (seed, LoRA, model...) on Linux took about 145 seconds, while on Windows it took 201 seconds.
Thank you. Other people have already asked me about ZLUDA, but I haven't tested it yet. As soon as I have time, I will test it and make a video with ZLUDA on Windows with ComfyUI and webUI.
I recommend using the "--normalvram" parameter for cards with VRAM between 6GB and 12GB. For cards with a higher capacity, such as 16GB or 24GB, use the "--highvram" parameter. This will ensure that the models are loaded in the GPU memory and will accelerate the generation process.
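For example, assuming the launch command from the video (keep your other flags as they are):

    # 6GB to 12GB cards
    python main.py --directml --normalvram
    # 16GB or more
    python main.py --directml --highvram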
Hi there, I installed as you showed in the tutorial. I have a 7900 XTX, and when I start the queue my VRAM gets to about 20GB usage, but it still says fallback to CPU. I get about 3-4 it/s, and after finishing the process my VRAM is still full at 20GB. What's happening here? It still says fallback to CPU while executing.
I used ComfyUI. While executing, the graphics card goes up to around 80% GPU usage. The VRAM stays at 20GB at all times after the first image, and after the second it goes even higher. I still only get about 3-4 it/s. I used the standard parameters from the basic layout with my own prompt. If I queue 3 batches with 40 steps, it will even drop to 1.5-2.6 it/s.
Sorry. You switched the units and confused me. If you are getting 3-4 iterations per second, it seems correct to me. The VRAM is getting full because machine learning with AMD on Windows is a bit limited, and DirectML does not manage VRAM very well. Do not expect performance similar to NVIDIA cards. If I am not mistaken, a high-end AMD card will have a third of the performance of a high-end Nvidia 4000 Series card in machine learning.
@@LinuxMadeEZ I remember the same performance on other people's 6900 XT. Is there really not that big of a difference between the 6xxx and 7xxx cards? Would it be a big difference switching to Linux?
About the performance between the 6000 and 7000 series, according to AMD: "Radeon™ 7000 series GPUs feature more than 2x higher AI performance per Compute Unit (CU) compared to the previous generation." If this is true, I have no way to test it because the most recent card I have access to is an RX 6600. Regarding Linux, when using it, you will have much better VRAM management, allowing you to use more complex models and workflows. In my case, I was able to enable some more optimizations and experienced a considerable performance gain. However, this was on a very limited GPU.
Thanks for the great video. When I try to generate an image, it is not using the GPU at all, just the CPU. I have a 6650 XT. When running Comfy, I get this at the start:

    Using directml with device:
    Total VRAM 1024 MB, total RAM 16333 MB
    pytorch version: 2.3.1+cpu
    Set vram state to: LOW_VRAM
    Device: privateuseone

How do I get it to use the GPU? Instead it has the Device as "privateuseone". I did some googling but have come up blank so far. Thanks for any help!
@@LinuxMadeEZ Thanks for the reply. I did the directml install command and there were no errors. Then I noticed, as I was watching your video, that your GPU was listed as privateuseone also. I checked my GPU and it was working off and on, hitting 99% then dropping. It took about 193 seconds to make an image. One thing is that I have 8 gigs of VRAM but it only shows 1 gig (just like in your video). It errors out unless I use the lowvram parameter.
Usually, the VRAM is completely used. You can check this in the task manager. Torch reports 1GB of VRAM because DirectML does not manage memory very well. The only alternatives for AMD cards would be to use ZLUDA, which I do not recommend because, in my case, I had many more crashes, or use Linux, which officially supports the full AMD ROCm.
@@LinuxMadeEZ It happens to me too, this is what I get from the terminal:

    (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> pip install numpy
    Requirement already satisfied: numpy in c:\users\avile\onedrive\documentos\stablediffusion\venv\lib\site-packages (2.1.1)
    [notice] A new release of pip available: 22.2.1 -> 24.2
    [notice] To update, run: python.exe -m pip install --upgrade pip
    (venv) PS C:\Users\avile\OneDrive\Documentos\StableDiffusion\ComfyUI> python main.py --directml --use-split-cross-attention --normalvram
    A module that was compiled using NumPy 1.x cannot be run in NumPy 2.1.1 as it may crash.
    To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0.
    Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
    If you are a user of the module, the easiest solution will be to downgrade to 'numpy
It looks like there is an error with your git installation. Please check if you followed the installation instructions exactly as shown in the video. If not, try reinstalling it and then restart your computer.
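To check whether git itself is working, you can try this (Git.Git is the official winget package id):

    # Should print a version number
    git --version
    # If it does not, reinstall and restart the computer
    winget install --id Git.Git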
@@LinuxMadeEZ The problem is that the git clone download is interrupted. I can't download at all; it's constantly interrupted. I downloaded it on another machine, then copied it to the machine with the GPU. But now I have the next problem: I can't run webui-user.bat because the user name on the machine from which I copied the git clone is different, and the path to python is different too. Do you know by any chance in which of the repository's files I can correct the path to python? This is getting silly.
It's so frustrating. I'm on a 6700 XT 12GB and followed your steps with the same install results, but when I generate a 512x512 image with Queue Prompt, the GPU does not work at all: 100% of RAM, no GPU use, CPU at 5%. It took five minutes to generate the first time, six minutes the second, again six, and so on. I put normalvram or lowvram, same result; ComfyUI doesn't touch my GPU. T_T
Activate the venv and run the command "python" or "py"; this will start a Python console (interactive mode). Then run "import torch", and then check if torch is seeing your GPU with the command "torch.cuda.get_device_name(0)". Does this command return the name of your GPU?
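The same check as a one-liner, if you prefer (run it inside the activated venv):

    python -c "import torch; print(torch.cuda.get_device_name(0))"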
@@LinuxMadeEZ
    venv\Lib\site-packages\torch\cuda\__init__.py", line 414, in get_device_name
        return get_device_properties(device).name
    raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
Somehow you ended up installing the CPU-only version of PyTorch. Activate the venv and try running the command "pip install torch-directml". Then, repeat the steps mentioned in my previous comment and see if anything changes.
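If it helps, the full sequence looks something like this (removing the CPU-only build first is just my assumption; adjust the venv path to your setup):

    .\venv\Scripts\Activate.ps1
    pip uninstall -y torch
    pip install torch-directml
    # Optional check: torch_directml ships a device_name helper, if I am not mistaken
    python -c "import torch_directml; print(torch_directml.device_name(0))"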
Sure, several people told me about Zluda. I did some research, and I'm going to test if there is any gain in performance or stability. If there is, I'll make a video as soon as possible.
You are correct, sorry. I have updated the description with a safer way to enable script execution. There is also a command provided to revert the configuration that is shown in the video.
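For anyone else wondering, the revert looks something like this (a sketch assuming script execution was enabled with Set-ExecutionPolicy for the current user, as in the video):

    # Puts the CurrentUser scope back to its default
    Set-ExecutionPolicy -ExecutionPolicy Undefined -Scope CurrentUser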
@@LinuxMadeEZ Thank you. I'm very bad at scripting, but I know a little about security. How do you use the command you suggested to allow only the scripts in this tutorial?
Open the terminal in the folder that contains the file you want to enable for execution: for webUI, that is the folder with "webui-user.bat". Right-click inside it, open the terminal, and run the command "Unblock-File -Path .\webui-user.bat", without the quotes.
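If you want to do it for every script in the folder at once, something like this should work (a sketch; -File skips folders):

    # Unblocks only the downloaded files in this folder and its subfolders, nothing system-wide
    Get-ChildItem -Recurse -File | Unblock-File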
Please help:

    Traceback (most recent call last):
      File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\main.py", line 87, in <module>
        import comfy.utils
      File "C:\Users\*redacted*\Documents\Stable Diffusion\ComfyUI\comfy\utils.py", line 20, in <module>
        import torch
      File "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 148, in <module>
        raise err
    OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\*redacted*\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
First, install the "App Installer" from the Microsoft Store. Then restart your computer and run the following command (the inner quotes around the --override value are required):

    winget install --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64"
    Requested to load AutoencoderKL
    Loading 1 new model
    loaded partially 64.0 63.99990463256836 0
    !!! Exception during processing !!! Numpy is not available
    Traceback (most recent call last):
      File "F:\stable diffusion\comfyUI\execution.py", line 317, in execute
        output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
      File "F:\stable diffusion\comfyUI\execution.py", line 192, in get_output_data
        return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
      File "F:\stable diffusion\comfyUI\execution.py", line 169, in _map_node_over_list
        process_inputs(input_dict, i)
      File "F:\stable diffusion\comfyUI\execution.py", line 158, in process_inputs
        results.append(getattr(obj, func)(**inputs))
      File "F:\stable diffusion\comfyUI\nodes.py", line 1497, in save_images
        i = 255. * image.cpu().numpy()
    RuntimeError: Numpy is not available
    Prompt executed in 124.74 seconds
    (venv) PS F:\stable diffusion\comfyUI>
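A likely fix, judging by the traceback: this is the NumPy 2.x incompatibility mentioned in the warning earlier in the thread, so pinning NumPy below 2 inside the activated venv should resolve it (a sketch):

    pip install "numpy<2"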