
Clothes Swapping Made Easy! ComfyUI 

Creator Brew - Experimenting & Sharing
Subscribe 2K
17K views

Swap like a pro: clothes, hair, and anything else you can imagine. ComfyUI, IPAdapter + Segment Anything will make this task a breeze!
This video is short and easy to follow. Try it on your own; the more nodes you build yourself, the more you will be able to stitch together into your own creative workflows. For a shortcut, here is the workflow: download the image, then drag it into ComfyUI to import the workflow:
github.com/cre...
(or from here)
comfyworkflows...
Main Model:
sd_xl_base_1.0.safetensors
IPAdapter Model:
ip-adapter_sdxl_vit-h.safetensors
CLIP Vision:
SD1.5\model.safetensors
SAM Model Loader (Segment Anything):
sam_vit_h (2.56GB)
GroundingDINO Model Loader:
GroundingDINO_SwinT_OGC (694MB)
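
For reference, here is where these files typically live in a ComfyUI install. Treat the exact folder names as assumptions: they vary by ComfyUI and IPAdapter Plus version, and older IPAdapter builds read models from ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models instead (the location mentioned later in the comments):

    ComfyUI/
      models/
        checkpoints/     sd_xl_base_1.0.safetensors
        clip_vision/     model.safetensors   (the SD1.5 image encoder linked in the comments)
        ipadapter/       ip-adapter_sdxl_vit-h.safetensors
        sams/            sam_vit_h           (the Segment Anything node can auto-download this)
        grounding-dino/  GroundingDINO_SwinT_OGC   (likewise fetched on first run)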

Published: Sep 28, 2024

Comments: 87
@tunghoang4161 5 months ago
Love it, this led me to ComfyUI.
@creatorbrew 5 months ago
ComfyUI is fun, great that you'll get to try it!
@alecubudulecu 6 months ago
Loved this. Thank you!
@zaraarmalk1084 1 month ago
I am unable to find sd1.5/model.safetensors, can you please provide a link to where the safetensors file is hosted?
@HosseinAhmadi-x3n 7 months ago
I can't find the CLIP Vision model "SD1.5\model.safetensors", can you share its Hugging Face page, please?
@creatorbrew 7 months ago
huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors
@davidwang6541 7 months ago
How can I directly use your workflow, or can you give me a customized version if I pay? Is it possible to ask a model photo to hold a specific product, like a new-brand thermos cup trained from a dataset with various model poses? Can that be done with ComfyUI?
@creatorbrew 7 months ago
Hi - there are two approaches that you can combine with the technique in the video: in-painting from a reference, or LoRA training. Or forcing an image via IPAdapter or the diffusion itself. I think LoRA training would be the first good step to take. A LoRA forces the visual by feeding in a limited number of images to refine a model (SDXL) so it gives back a matching result. What's the best way to connect with you to talk about this further? I want to make sure I'm 100% understanding your need.
@davidwang6541 7 months ago
@@creatorbrew I have tried Leonardo AI dataset training for a product, but the result is deformed and it can't be held by a model. I have no idea how to connect the two sides, so I think only ComfyUI looks promising with its accurate control, but I don't understand it at all. It's somewhat complicated for me to tweak, and it takes a lot of time to learn. If you can help, I would like to pay and get the result directly. Sincerely.
@creatorbrew 7 months ago
How can I contact you? And did you look at my LoRA training video yet?
@sirjcbcbsh 1 month ago
I'm confused about which SAM Model Loader and GroundingDINO Model Loader I should choose... I just picked them by chance.
@sirjcbcbsh 1 month ago
What GPU are you using? I'm using an RTX 4070 Ti Super, but I still get: "Warning: Ran out of memory when regular VAE encoding, retrying with tiled VAE encoding. Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding."
@valorantacemiyimben 2 months ago
Hello, I am getting the following error, how can I fix it? "When loading the graph, the following node types were not found: IPAdapterApplyEncoded. Nodes that have failed to load will show as red on the graph."
@Avalon19511 2 months ago
The IPAdapter Apply node has been replaced with IPAdapter Advanced, but the problem there is the image connection, which I am running into.
@ckhmod 10 hours ago
Is there a method to use a brush to paint your own mask?
@DwaynePaisleyMarshall 4 months ago
I now also get "Error occurred when executing VAEEncodeForInpaint: expected scalar type Byte but found Float".
@creatorbrew 4 months ago
Hi -
1. Download the portable version of ComfyUI, and test it by running the default workflow that shows up.
2. Install the Manager.
3. Update ComfyUI.
4. Download the workflow.
5. Go to the Manager and install the missing nodes.
6. Restart ComfyUI manually (check to see if there are any node conflicts).
7. Download the models and place them in the models folder.
8. Restart ComfyUI (or click the refresh button) to have those models listed.
9. Start out simple: 1024x1024 images for the person and the clothes.
@DwaynePaisleyMarshall 4 months ago
@@creatorbrew I'm on a Mac
@creatorbrew 4 months ago
@@DwaynePaisleyMarshall If I remember correctly, that error happens when the wrong model is used (the only other reason would be the wrong node).
@klopp6308 4 months ago
Hey thx for the amazing video! Just a simple question: how do the VAE encoder and decoder work without a VAE? For me, the encoder and the decoder don't work like they did in your video, so the image with the mask superimposed doesn't show up.
@BriO853 2 months ago
I think he's using a model with a baked-in VAE, so it's not loaded by a separate node.
@claudioestevez61 7 months ago
Like number 100 👍
@DwaynePaisleyMarshall 4 months ago
Is anyone else having trouble with the Apply IPAdapter node? I can't seem to find it in IPAdapter.
@creatorbrew 4 months ago
Did you download the workflow, go to the Manager, add the missing nodes, and restart Comfy? As long as the provided workflow works, you can reproduce it on your own.
@DwaynePaisleyMarshall 4 months ago
@@creatorbrew Yep, I did all of this, there just isn't an equivalent node for Apply IPAdapter, unless IPA has had an update?
@ChAzR89 3 months ago
@@creatorbrew Seems like there is no "Apply IPAdapter" node in IPAdapter Plus, which is the package it should come with.
@Avalon19511 2 months ago
I think you need to update the tutorial, because IPAdapter Apply is no longer used; it has been replaced with IPAdapter Advanced, and the settings are different.
@creatorbrew 2 months ago
You can download the original IPAdapters from git. Or, with a little rework, you can rewire the new IPAdapters into the flow (swap them in). How much should I help? Always a question; part of learning is working through a minor bump, since three years from now who knows what the exact adapters will be, but the workflow concept will be the same. That said, I can see that a new tutorial would accommodate anyone wishing for a one-click solution.
@jasonyu8020 5 months ago
They are not the same jacket....WHY?
@creatorbrew 5 months ago
Why = AI isn't Photoshop, and it has to dream up what you are asking for. However, with the right prompts and control you can force the randomness into a closer space. The strongest way is to train a LoRA on about 10 images, which will be a strong guide to getting a close result: Train ComfyUI Lora On Your Own Art! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-5PtLQSFrU38.html
@ogamaniuk 5 months ago
Is it possible to make the jacket look exactly as it is in the source image?
@creatorbrew 5 months ago
LoRA training; it requires multiple images. See Train ComfyUI Lora On Your Own Art! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-5PtLQSFrU38.html and train on the jacket instead of a character as in that video.
@pedroquintanilla 6 months ago
I get red IPADAPTER and CLIP VISION nodes and I don't know how to solve this problem, could you help me with it?
@creatorbrew 6 months ago
RED = those nodes are not installed. See the other response about installing missing nodes.
@Adesigner-in9mn 6 months ago
Hey, I keep getting this error, any info on a fix?
Error occurred when executing VAEEncodeForInpaint: expected scalar type Byte but found Float
File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/workspace/ComfyUI/nodes.py", line 360, in encode
mask_erosion = torch.clamp(torch.nn.functional.conv2d(mask.round(), kernel_tensor, padding=padding), 0, 1)
@creatorbrew 6 months ago
Hi - have you tried the following: download the new portable version of ComfyUI (that's the self-contained folder with no installs), then install the ComfyUI Manager, update "all" of ComfyUI and the nodes, then follow the workflow.
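
For anyone hitting the same "Byte but found Float" message: the traceback above points at a dtype mismatch in the mask-erosion step. The mask reaches VAEEncodeForInpaint as an 8-bit (Byte) tensor while the erosion kernel is float32, and conv2d refuses to mix the two. A minimal sketch of the mismatch and the cast that resolves it (hypothetical shapes; updating ComfyUI, as suggested above, should apply the equivalent fix for you):

    import torch
    import torch.nn.functional as F

    mask = torch.randint(0, 2, (1, 1, 64, 64), dtype=torch.uint8)  # mask arrived as Byte
    kernel = torch.ones(1, 1, 3, 3)                                # float32 erosion kernel

    # F.conv2d(mask, kernel, padding=1)  # raises: expected scalar type Byte but found Float

    mask = mask.float()                  # cast the mask to float32 before the convolution
    eroded = torch.clamp(F.conv2d(mask.round(), kernel, padding=1), 0, 1)
    print(eroded.shape)                  # torch.Size([1, 1, 64, 64])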
@zengze4858 8 months ago
Hello, uploader. I have two questions for you. First, what format should the original image of the clothes be in? Second, the workflow you shared can only produce preliminary images. These images are very rough. How can I optimize them using ComfyUI?
@creatorbrew 8 months ago
Hi - PNGs are good to start with, to avoid the JPG compression blocks. This demo workflow is the simple level, to show how to create a task and link things together so you can keep on growing. For further refining within ComfyUI, the next step would be linking in an upscaler and a detailer node. My source image of the person isn't that hi-res; starting with better source material will help.
The other way to go is to daisy-chain the mask process shown in the video: 1) take the person as the final output and run Segment Anything with a prompt of "person" or "character", feed that to 2) an invert mask to knock out the background, 3) connect that chain to an inpaint VAE, and 4) drop in a background (image or via prompt). This will get rid of the stray pixel smudges around the person. Then, to focus back on the jacket, copy and paste the jacket workflow for masking and rerun it with the newly cleaned-up version of the person; this could help eliminate some of the rough pixels around the wrist. What I have been doing is breaking the outfit into pieces and combining those masks together, with a focus on each area.
Also, for something quick, I take the general output and do retouching in Photoshop (or photopea.com if you want a free image editor), using ComfyUI to get the general position/pose of the person correct with the current outfit and then correcting imperfections on the pixel layer there. ComfyUI, Automatic1111, and Photoshop are tools that are part of a workflow; sometimes it is "quicker" to dip into a few tools to get refined output than to try to use just one. In theory, I could keep using masking and inpainting to wire together a big network of nodes that automates the whole process; I've done this when creating kiosk experiences with a front end on Stable Diffusion. But if it is for personal or production art, using each tool's strengths to save time is the way to go.
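
To make the "invert mask" step above concrete: ComfyUI masks are just float tensors in [0, 1], and an InvertMask node simply computes 1 - mask. A toy PyTorch sketch (illustrative values only):

    import torch

    person_mask = torch.zeros(1, 512, 512)    # ComfyUI mask layout: (batch, height, width)
    person_mask[:, 128:384, 128:384] = 1.0    # pretend Segment Anything matched "person" here

    background_mask = 1.0 - person_mask       # what InvertMask produces
    print(background_mask.mean().item())      # 0.75: everything except the person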
@zengze4858 8 months ago
Thank you for your answer. I have learned how to remove the background from a long image and make it transparent, so it can be used as the original image for the clothing. Then I tried to get a high-definition optimized image through SD upscaling, but the effect was not ideal. I tried many combinations, and it seems impossible to get the ideal image with only ComfyUI. Do I really need to use several programs together to get the ideal image? Is there any way to get a very good image with only ComfyUI? @@creatorbrew
@SlappyMarsden 7 months ago
What folder is the file ip-adapter_sdxl_vit-h.safetensors supposed to be in? Everything points to my A1111 install, but the Load IPAdapter Model node keeps failing. The file is located in my C:\AI\StableDiffusion\stable-diffusion-webui\extensions\IP-Adapter folder... any ideas? Thanks
@creatorbrew 7 months ago
Within the ComfyUI custom_nodes IPAdapter models folder (e.g. ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/models).
@jamesyang4026 7 months ago
Hi! I tried what you did but the results weren't really good. Is there a way to get in touch so we can discuss, maybe? I appreciate your video!
@creatorbrew 7 months ago
Start a Reddit thread and link to it here, then we can chat.
@sunnytomy 7 months ago
Hi, I just tested the workflow but didn't get good results compared to yours. I noticed the garment image has a gray background; does that mean we need to remove the background of the clothing image before using the IPAdapter? Thanks for this inspiring workflow.
@creatorbrew 7 months ago
Hi, can you share your result via GitHub or some other place? I'm about to make another tutorial with little tweaks to improve it. But if you have a business reason (other than hobby), reach out to me directly. What I share on RU-vid are the "starts" of things, as I perfect the final outcome for clients.
@sunnytomy 6 months ago
@@creatorbrew Thanks, yes, I would love to share some of the bad results. Are there other ways than GitHub that I can share the output pictures with you? Cheers.
@creatorbrew 6 months ago
@@sunnytomy Make a Reddit thread in the Stable Diffusion area, then drop the link back here.
@chanansiegel834 6 months ago
I am getting "SystemError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)".
@creatorbrew 6 months ago
It could be the usual: 1) update ComfyUI, 2) update the nodes, 3) make sure the models are the correct versions. Which node does it fail at?
@talhadar5038 8 months ago
I followed your video, however I am getting this error: "Error occurred when executing KSampler: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half, key.dtype: float, and value.dtype: float instead." My models are: clip_vision: SD1.5/pytorch_model.bin, ipadapter: ip-adapter-plus-face_sdxl_vit-h.safetensors, checkpoint: sd_xl_base_1.0.safetensors.
EDIT: It's working now. Do you have an older 10XX Nvidia card? If so, the dtype mismatch might need --force-fp16 in the launch options.
@creatorbrew 8 months ago
Hi - if you post your workflow to the same URL where I posted mine, I can take a look. A difference I see: I didn't use the plus-face version, I used this model: ip-adapter_sdxl_vit-h.safetensors. To troubleshoot: hold down the CTRL key and drag a selection around the top nodes, then press CTRL+M (this deactivates those nodes). Click on the Load Checkpoint node and press CTRL+M to reactivate it. See if the bottom part of the workflow works. If it does, that means it is one of the models specified in the top one (probably ip-adapter_sdxl_vit-h.safetensors) as a guess.
@talhadar5038 8 months ago
@@creatorbrew I mistyped. I actually used ip-adapter_sdxl_vit_h.safetensors. Can you tell me which CLIP Vision model you are using? It's definitely a model mismatch, according to my research.
@creatorbrew 8 months ago
It's the SD 1.5 version of model.safetensors. Based on my browser search history, I believe I downloaded it from here: huggingface.co/h94/IP-Adapter/blob/main/models/image_encoder/model.safetensors
@djivanoff13 8 months ago
Attach the workflow for uploading so that we don't have to do it manually! Please.
@creatorbrew 8 months ago
comfyworkflows.com/workflows/7e9b0e9f-e012-4f2d-ac7b-989bd8589fd1 - this one is simple, and one way to burn a task into our minds is to try it out :)
@djivanoff13 8 months ago
@@creatorbrew Error occurred when executing GroundingDinoSAMSegment (segment anything): How can I fix this?
@creatorbrew 8 months ago
Are you running on CPU or GPU? Which GPU? The usual cause is mixing the wrong models. Here is the list of models I'm using:
Main Model: sd_xl_base_1.0.safetensors
IPAdapter Model: ip-adapter_sdxl_vit-h.safetensors
CLIP Vision: SD1.5\model.safetensors
SAM Model Loader (Segment Anything): sam_vit_h (2.56GB)
GroundingDINO Model Loader: GroundingDINO_SwinT_OGC (694MB)
@djivanoff13 8 months ago
@@creatorbrew RTX 3060 12GB
@creatorbrew 8 months ago
@@djivanoff13 That's enough VRAM, so it's probably a matter of getting the models listed in the description/this reply from Hugging Face.
@renwar_G 7 months ago
You're a G, bruv
@creatorbrew 7 months ago
😊
@Spinaster 8 months ago
Thank you for sharing the workflow, but the result is not the same as yours; it changes the jacket to something completely different from the loaded jacket image. I followed all your steps and loaded the same models, except for ip-adapter_sd15.safetensors; it seems that something is missing in your tutorial. Also, I don't understand how the masked latent would be interpreted by the IPAdapter... how will it recognize the area to fill? The KSampler is 10 times slower than usual, even after all the models are loaded in memory (16GB GTX card). Should I use an inpainting version of the SDXL model? Thanks.
@creatorbrew 8 months ago
Hi, do you have a way to share your result/workflow? Yesterday, on a different project, I ran the same workflow on a 2060 and on a 4050 and got a different visual result, even though everything was fixed and used the same models and images. Segment Anything can be time-consuming the first time it runs; after that the mask is created and then the rest of the nodes can be adjusted. You can also try inpainting on your image directly and then bypass the Segment Anything node.
@creatorbrew 8 months ago
The latent acts as the dream; since so much information is given about what the KSampler will dream up, the only place left to be creative is the masked-out area. The IPAdapter pushes the interpretation of the new jacket forward into the KSampler, where the latent of the inpainted jacket (via Segment Anything) meets up with the model to create the effect.
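
Roughly speaking, the noise mask tells the sampler which latents it may change: the unmasked latents are kept from the original image and only the masked region is re-dreamed. A toy illustration of that blend (not the real sampler math, just the masking idea):

    import torch

    latent = torch.randn(1, 4, 64, 64)              # encoded 512x512 image (SD latents are 1/8 scale)
    mask = torch.zeros(1, 1, 64, 64)
    mask[..., 16:48, 16:48] = 1.0                   # jacket region from Segment Anything

    dreamed = torch.randn_like(latent)              # stand-in for the sampler's output
    result = latent * (1 - mask) + dreamed * mask   # person stays put, jacket gets re-imagined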
@utkarshaggarwal6057 2 months ago
Doesn't work, bro.
@shashanksrivastava8638 7 months ago
I can't use Segment Anything on my system... please help me out.
@creatorbrew 7 months ago
What is the issue you are experiencing? Have you 1) updated your ComfyUI and 2) downloaded the models?
@HosseinAhmadi-x3n 7 months ago
It is GOLD!!! Virtual try-on for free
@abellos 3 months ago
I can't find the Apply IPAdapter node, and I downloaded the model ip-adapter_sdxl_vit-h.safetensors, but when I start the workflow I get "undefined" in the Load IPAdapter Model node. As with every Python script, none of it works on Comfy either.
@SamhainBaucogna 3 months ago
Thanks, very useful, it works. One question: wouldn't it be possible to create the mask by hand? Would it be more convenient? Greetings, great tutorial, explained in a simple and effective way.
@creatorbrew 3 months ago
Hi, thank you for watching and enjoying. Yes, you can use the image and then wire in the mask as an image to skip Segment Anything; it would simplify things. Segment Anything is better suited for when you want your images to be batched and don't have the time to mask.
@SamhainBaucogna 3 months ago
@@creatorbrew Thank you for your very kind reply, best regards.
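
If you do wire in a hand-painted mask, the conversion is simple: load the painted black-and-white image and take one channel as the mask (this is what a Convert Image to Mask node does). A rough sketch of the underlying operation, assuming ComfyUI's (batch, height, width, channels) float image layout:

    import torch

    painted = torch.rand(1, 512, 512, 3)   # stand-in for the loaded RGB image of the painted mask
    mask = painted[..., 0]                 # take the red channel -> (batch, height, width) mask
    mask = (mask > 0.5).float()            # optional: snap soft gray strokes to hard 0/1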
@ИльяКачеловский-й1д 5 months ago
Which folder should I paste the downloaded files into?
@creatorbrew 4 months ago
Which files? The image is dragged to the ComfyUI interface (you probably knew that); the models can be placed in the Stable Diffusion models folder.
@fintech1378 8 months ago
awesome
@SasukeGER 6 months ago
The clothes are all blurry for me, or placed in the center and not covering the arms... any tips?
@pedroquintanilla 6 months ago
Error occurred when executing IPAdapterModelLoader: Error while deserializing header: HeaderTooLarge
File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 593, in load_ipadapter_model
model = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "E:\IA\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\safetensors\torch.py", line 308, in load_file
with safe_open(filename, framework="pt", device=device) as f:
@creatorbrew 6 months ago
1. Do you have ComfyUI Manager installed? If so, click the Manager button, and when it is done updating, click Update All (check the terminal window). Exit ComfyUI and restart.
2. If you don't have ComfyUI Manager, download the ComfyUI portable build, install the ComfyUI Manager, then click the Manager button and install the missing nodes.
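
A "HeaderTooLarge" error from safetensors usually means the file on disk is not actually a safetensors file, often a partial download or an HTML error page saved under the model's name. Before reinstalling anything, a quick sanity check (the path here is hypothetical; point it at your own file):

    from safetensors import safe_open  # the same library ComfyUI uses to load the model

    path = "models/ipadapter/ip-adapter_sdxl_vit-h.safetensors"  # adjust to your install
    try:
        with safe_open(path, framework="pt") as f:
            print("File looks valid; tensors:", len(list(f.keys())))
    except Exception as err:
        print("File is likely corrupt or incomplete; re-download it:", err)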
@ysy69 3 months ago
Thanks for this. Too bad we cannot replicate the exact jacket, only a close approximation, due to the nature of diffusion models.
@creatorbrew 2 months ago
Training a LoRA on the jacket would help keep consistency. Or use a photo of the jacket and inpaint to switch people.
@ysy69 2 months ago
@@creatorbrew We have hundreds of items, and unfortunately training a LoRA on each would not be feasible compared to traditional methods.
@yuanzhouli6983 7 months ago
Hi, can you offer some suggestions on swapping a logo on a T-shirt? It should be easier than swapping clothes. I mean, if a model is standing in side view, can we somehow swap the big logo on the T-shirt that the model wears?
@creatorbrew 7 months ago
Does the Segment Anything node pick up the logo, such as by typing in the word "logo" or "square" or whatever it looks like? In the end, it is inpainting that will do it; this workflow automates the inpainting by creating the mask of the jacket automatically. Do you have a picture of the model/logo anywhere to share?
@SayedSaadmanArefin 8 months ago
I'm getting this error: "Error occurred when executing KSampler: Query/Key/Value should all have the same dtype. query.dtype: torch.float16, key.dtype: torch.float32, value.dtype: torch.float32". Please help.
@creatorbrew 8 months ago
What is the size of your graphics card's memory? What is the length of your video? Check to see if the models match the names; there are low and high versions of some models. An alternative approach is to launch ComfyUI with the command-line switch that runs everything on the CPU (the --cpu flag). Also, check the image sizes: start with everything at 512x512, then bump up to 1024x1024 and go up from there.
@creatorbrew 8 months ago
... a way to troubleshoot is to look at the start of the video, when it is still two separate workflows. Try each workflow alone; the one that doesn't work has the wrong model (maybe). You can wire it like in the video, then select the nodes that you don't want to run and press CTRL+M on the keyboard to deactivate them. Then reactivate them and deactivate the other workflow. Hopefully that will give an answer.