[English Translation] The dependencies around xformers and gradio have been upgraded, and this seems to be causing some trouble. AI Art JAPAN is looking into how to deal with it. Thank you very much.
The following workaround will let you start the app and generate images for the time being. Some behavior is still wrong (pressing the download button downloads an .htm file), but it is usable for now.

1. Resolve the inconsistency between the xformers and PyTorch versions
Open a command prompt, cd to the DiffBIR installation folder, and run venv\Scripts\activate to enter the virtual environment.
pip uninstall torch torchvision torchaudio
(removes PyTorch once)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
(replaces it with the PyTorch build that xformers expects)

2. Upgrade to the latest version of gradio
pip install --upgrade gradio

3. Fix the source-related errors (three are shown, and they prevent the app from starting)
Back up "gradio_diffbir.py" under the installation folder, then edit it.
Line 148: change
input_image = gr.Image(source="upload", type="pil")
to
input_image = gr.Image(type="pil")
Line 149: change
run_button = gr.Button(label="Run")
to
run_button = gr.Button(value="Run")
Line 168: change
result_gallery = gr.Gallery(label="Output", show_label="False", elem_id="gallery").style(grid=2, height="auto")
to
result_gallery = gr.Gallery(label="Output", show_label=False, elem_id="gallery")

The inconsistency seems to have been caused by an upgrade of gradio, which controls the UI; with these changes it starts and works for now. However, pressing the download button for a generated image downloads an image.htm file instead of a .png file, so until the problem is fully resolved, right-click the generated image and save it from there.
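For reference, here is a minimal sketch (not the actual DiffBIR UI) of how the three edited components fit together under a recent gradio release. The handler run_diffbir is a hypothetical stand-in for the real inference function:

import gradio as gr

def run_diffbir(image):
    # placeholder: the real app would run DiffBIR inference here
    return [image]

with gr.Blocks() as demo:
    # 'source=' was removed from gr.Image in newer gradio releases
    input_image = gr.Image(type="pil")
    # gr.Button takes the label text via 'value=', not 'label='
    run_button = gr.Button(value="Run")
    # .style(grid=..., height=...) no longer exists; in recent versions the
    # grid can be set with a 'columns=' keyword on the component instead
    result_gallery = gr.Gallery(label="Output", show_label=False, elem_id="gallery")
    run_button.click(run_diffbir, inputs=input_image, outputs=result_gallery)

demo.launch()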
An error was encountered while executing gradio_diffbir.py after the environment was set up: TypeError: Image.__init__() got an unexpected keyword argument 'source'. Is there a problem with Pillow's version? Sorry, my English is not good, so this is machine translated; I hope you can understand my description.
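To narrow down which package is responsible, a quick check like this sketch can be run inside the same venv (both libraries expose __version__):

import gradio as gr
import PIL

# if gradio reports a 4.x version here, the 'source=' keyword on gr.Image is the
# likely culprit (it was removed in newer gradio releases), not Pillow
print(gr.__version__)
print(PIL.__version__)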
Thanks for watching my video. Unfortunately, since ControlNet is not available at this time, I cannot specify detailed layouts and so on. We have to make do with LoRAs that strongly enforce layout and composition, or with text prompts. This is one reason why TensorRT is not very popular: TensorRT is a feature for people who have a specific LoRA they want to use and want to generate a large number of images while accepting a certain amount of randomness.
[How to deal with cuDNN errors] *Corrected October 22, 2023 (the folder was wrong)
After building the environment, four error popups about dynamic link libraries appear when starting the webUI.
Get the latest version of cuDNN and install it (NVIDIA registration is required):
developer.nvidia.com/cudnn
Obtain cuDNN v8.9.5 (September 12th, 2023), for CUDA 11.x.
From the "bin" folder inside the download, take
cudnn_adv_infer64_8.dll
cudnn_adv_train64_8.dll
cudnn_cnn_infer64_8.dll
cudnn_cnn_train64_8.dll
cudnn_ops_infer64_8.dll
cudnn_ops_train64_8.dll
cudnn64_8.dll
and copy them over the files in
\venv\Lib\site-packages\torch\lib
under the webUI installation folder, after backing up the original files to another folder.
The errors are resolved when you start the webUI.
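Below is a minimal sketch of the backup-and-overwrite step. The two paths are assumptions; point them at your actual extracted cuDNN "bin" folder and your webUI install folder before running.

import shutil
from pathlib import Path

cudnn_bin = Path(r"C:\path\to\cudnn\bin")                               # extracted cuDNN download (assumed path)
torch_lib = Path(r"C:\path\to\webui\venv\Lib\site-packages\torch\lib")  # webUI venv (assumed path)
backup = torch_lib.parent / "lib_backup"
backup.mkdir(exist_ok=True)

dlls = [
    "cudnn_adv_infer64_8.dll", "cudnn_adv_train64_8.dll",
    "cudnn_cnn_infer64_8.dll", "cudnn_cnn_train64_8.dll",
    "cudnn_ops_infer64_8.dll", "cudnn_ops_train64_8.dll",
    "cudnn64_8.dll",
]

for name in dlls:
    original = torch_lib / name
    if original.exists():
        shutil.copy2(original, backup / name)   # back up the DLL shipped with torch
    shutil.copy2(cudnn_bin / name, original)    # overwrite with the cuDNN 8.9.5 DLL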
Excuse me, but could you be more specific about how to create the "installation packages"? And would this work in a conda environment created with python 3.9? I'm completely new here. You are an excellent youtuber, thank you anyway.
Sorry for the late reply. You are probably in a Windows environment. The official installation procedure assumes Linux, and the triton package cannot normally be installed on Windows. A "wheel" package is provided, but unfortunately it gives the following error in a conda python 3.9 environment:
ERROR: triton-2.0.0-cp310-cp310-win_amd64.whl is not a supported wheel on this platform.
The environment in which we have confirmed operation is python 3.10, not conda. Python 3.11 does not seem to work either, so if you want it to run, you will have to build your environment to match exactly. I suppose you would have to let python 3.10 coexist with conda, being careful not to disrupt the existing conda environment... I do not recommend this, as it is quite difficult.
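As a quick sanity check (just a sketch): the wheel named above is tagged cp310/win_amd64, so the interpreter that runs pip must be 64-bit Python 3.10 on Windows, which is why a 3.9 environment rejects it.

import platform
import sys

print(sys.version_info[:2])                    # must be (3, 10) for a cp310 wheel
print(platform.system(), platform.machine())   # expect Windows, AMD64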