
SDXS: 90 millisecond renders with comfyUI 

Stephan Tual
4.8K subscribers
3.5K views

Came for a config file, stayed for the hundreds of cute animals.
Workflow & Models for download (or Plug&Play cloud) at:
flowt.ai/community/sdxs-0.09-...
▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 Introduction to the Kittens
01:09 Autoqueuing Bears
02:31 Running comfyUI in verbose mode
03:00 Editing the Yaml
04:09 Running the clean instance
04:45 Re-testing - 100 bears in 12 seconds!
05:05 Absolute madness begins
06:25 Before and after comparison
▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: / discord
All socials: linktr.ee/stephantual
Hire Actual Aliens: www.ursium.ai/
▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
github.com/IDKiro/SDXS
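
The file edited in the "Editing the Yaml" chapter is presumably ComfyUI's extra_model_paths.yaml, which lets a clean instance reuse models from an existing install instead of duplicating them. A rough sketch following the stock example file's layout — every path here is invented:

```yaml
# Hypothetical extra_model_paths.yaml for a clean ComfyUI instance.
# Keys mirror the bundled extra_model_paths.yaml.example; adjust
# base_path to wherever your shared model repository actually lives.
comfyui:
    base_path: D:/shared_models/
    checkpoints: checkpoints
    vae: vae
    loras: loras
```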

Science

Published: 1 Jul 2024

Comments: 28
@stephantual · 2 months ago
My thoughts: can't wait to play with SDXS more and push it to its limits. The 1024px model isn't out yet (this is the earlier 512px version they released), and the image quality is evidently so-so, but hey, 90 milliseconds per image is pretty cool. I think Comfy can't handle it: even a plain install caps at 100 frames per minute, when the sampler should theoretically do 100fps. The save-to-disk time in particular is painful, even on a RAID 0 array made of 2x T700 PCIe 5 NVMe drives. I hope we can collectively find a solution, because this stuff is fun!
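
The gap described here is stark once you do the arithmetic; a quick sketch using the comment's own numbers:

```python
# Throughput gap from the comment above: the sampler's theoretical
# speed vs. what a plain ComfyUI install delivers end-to-end.
theoretical_fps = 100          # what the SDXS sampler should manage
observed_per_minute = 100      # frames per minute from a plain install
observed_fps = observed_per_minute / 60

slowdown = theoretical_fps / observed_fps
print(f"~{slowdown:.0f}x overhead outside the sampler")
```

In other words, roughly 98% of the wall-clock time is going to everything around the sampler — decode, preview, and the save-to-disk path the comment calls out.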
@kallamamran · 2 months ago
Still don't know why I would want to create 100 kinda-bad images per second 🤔 It will be interesting to see where this is going, though.
@industrialvectors · 2 months ago
Because of video rendering.
@carstenli · 2 months ago
Also, realtime audio-reactive visualization comes to mind.
@stephantual · 2 months ago
Yup, quality isn't great, no doubt. Like others said, the use case is video rendering, low-diffusion passes during video upscales, magic mirrors, prompt travelling powered by an LLM (nodeGPT), images changing to the beat, etc. 😼 If you want better quality, your best bet is using LoRAs and the usual tricks of the trade (prompt weighting and so on). Cheers!
@EduardsRuzga · 2 months ago
If we can later use those images for upscaling, or as structural image2image input toward better quality and detail, it's useful. It's kinda like searching for ideas to send to a bigger, slower, more expensive model to refine.
@TheGiovany82 · 2 months ago
Thanks for the tips! You should have a look inside the input folder: it contains every image used for img2img and accumulates many files that can be deleted to free up space. Also, many custom nodes ship demo PNGs and workflows which can be erased.
@stephantual · 2 months ago
😅 Just looked at mine 😅 - let's just say I just gained a lot of disk space heheh. Thanks! 👽
@kalicromatico · 2 months ago
wooooooo
@TheGiovany82 · 2 months ago
Thank you ☺️
@stephantual · 2 months ago
You're welcome 😊👽
@peterrossc · 2 months ago
Top tip for model storage: you can use symlinks inside your models folder. I use that to keep a big ol' model repository on a different drive, and even split my models between SSD and HDD storage. For example, if I'm using a model a lot, I can just transfer it to my SSD model folder, and because it's all symlinked, it just works.
@weirdscix · 2 months ago
It should be noted that symlinks only work if they are a subdirectory. For example, if you symlinked the whole checkpoint folder, then whenever you updated ComfyUI, git would replace any symlinks with the actual folder. That's why I have subdirectories for SDXL, SD15, etc.
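
The subdirectory caveat above can be sketched as a toy layout in temp directories — all paths here are invented for the demo, and on Windows, creating symlinks may require Developer Mode or admin rights (or `mklink /D` from an elevated prompt):

```python
# Toy version of the symlinked-models layout described above:
# a shared model repository on one drive, linked into ComfyUI.
import os
import tempfile

repo = tempfile.mkdtemp()    # stands in for the big model repository
comfy = tempfile.mkdtemp()   # stands in for the ComfyUI install
os.makedirs(os.path.join(repo, "SDXL"))
os.makedirs(os.path.join(comfy, "models", "checkpoints"))

# Link a *subdirectory*, not the whole checkpoints folder, so a
# ComfyUI git update can't replace the link with a real folder.
link = os.path.join(comfy, "models", "checkpoints", "SDXL")
os.symlink(os.path.join(repo, "SDXL"), link, target_is_directory=True)

print(os.path.islink(link))  # True - ComfyUI just sees a normal folder
```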
@NotThatOlivia · 2 months ago
Yes, but since that was trained on 512x512, quality is nah - but for videos, this can be the solution!
@stephantual · 2 months ago
Agreed :)
@clearstoryimaging · 2 months ago
Thanks Stephan. FYI - I'm not seeing the workflow on Flowt.
@stephantual · 2 months ago
It turns out it got taken down: it's so fast it was deemed a DDoS threat :) I guess I should be flattered hehe 😅 I'm trying to re-home it, but CF is down at the moment. I'll find a solution :)
@stephantual · 2 months ago
OK, fixed - it's back up. To make sure we don't create a flood, I've switched "save image" to "VHS" so it makes a video :)
@daviddiehn5176 · 2 months ago
Hello, I need your help. I am using an RTX A5000 on RunPod (24GB VRAM, 22GB RAM) but I am still not able to use the SUPIR nodes. It always fails (unexpected error) while trying to load the SUPIR Model Loader. I have seen people with worse GPUs still using it, so I am a bit clueless. I know you used an RTX 3090 with the same VRAM - how did you get it to work?
@stephantual · 2 months ago
I used a 4090, but yeah, it works on anything with at least 8GB VRAM. The trick is to keep the image small enough and to understand that SUPIR is not an upscaler, but a form of (very fancy) ControlNet for SDXL. Once you get that part right (feed only ~1024px into SUPIR, then use Lanczos3 to upscale the output), it works. I have a video coming up going through all that; it's going to take me a week to record, but I think it will be useful 👽
@TR-707 · 2 months ago
Wow, 10ms WITH saving images is pretty insane. I went down the rabbit hole to get realtime video, but the preview slows stuff down, the browser slows stuff down, and the NVIDIA tensor stuff was a waste of time btw, tried that too. 1024 is pretty slow and 512 looks kinda fail. I think I ended up with an LCM LoRA on top of a turbo/lightning-derived model as the fastest, or just 1-step LCM with ~0 CFG, which looked hilarious since it produced OK images but pretty much ignored the prompt.
@stephantual · 2 months ago
Yup, sounds about right! 👽 I did further tests and it doesn't support ControlNets (as they'd have to be retrained) or IPAdapter, while LoRAs are hit and miss. I think, however, it signifies something shifting in our little community, and we're seeing some pretty innovative stuff. I'm working with others to see if we can get ComfyUI to render much faster; I had some decent results by switching the VAE decode to TAESD and disabling most of the fluff rendering on the web GUI. I'm also working on an alternative, will post when ready (the cluster is emitting smoke, is that normal? 😅)
@voxyloids8723 · 2 months ago
Can I install on G: and point the path to C:?
@stephantual · 2 months ago
Yup, yup - and it also works with external drives (but keep the drive plugged in, evidently). 👽
@amkkart · 2 months ago
Does this also work with animation?
@stephantual · 2 months ago
Currently working on something like that - great minds think alike! 👽
@Konrad162 · 2 months ago
With 512x512... it is not surprising. And a 4090...
@stephantual · 2 months ago
Yup, they only released the 512 for now, but they have a 1024 coming up. As for the 4090, yeah, I feel you - it was a big investment for me (given my channel brings in exactly 0 dollars in revenue hahah 😅)