
SDXL 1.0 blows away Stable Diffusion 1.5. And here is the testing to prove it. 

SiliconThaumaturgy
4.9K subscribers
23K views

Published: Oct 25, 2024

Comments: 95
@LiLGWaez 1 year ago
I JUST discovered ur channel, and instantly had to subscribe. This channel feels like an absolute GOLDMINE. Thank you so much for covering all of this stuff, it feels like ive stumbled upon something amazing. I love how enjoyable ur narration is too. Hahaha. Keep it up man. I'll be watching ur videos on hands after this. Ah those pesky hands. One day i'll master them. Hope u have a great day dude.
@Modioman69 1 year ago
I highly enjoy your style, explanations and the methods you use for your content, and you deserve massively higher viewer counts/subs. You're a gem in a community of oversaturated content. Keep up the great work. I do still feel 1.5 is way ahead until SDXL gets more trained models, which will definitely surpass 1.5 based on everything you explained here. I will wait to try until then. Excellent video.
@CaptDabbs 5 months ago
Dude, I just jumped in around Xmas '23 with an 8GB 3060, and now it's 500 gigs of stuff. You just gave the best little darn talk I've seen so far. Subbed.
@makebritaingreatagain2613 9 months ago
0:05 I prefer the one on the left. It looks way more interesting.
@pon1 1 year ago
Subscribed, best comparison of them all so far!
@3diva01 1 year ago
My PC struggles hard core with SDXL, even with 12 GB vram. Also I've been keeping an eye on the new images that people have been producing with SDXL and haven't yet seen a huge increase in the quality of the images compared to some of the best 1.5 models. So I'm sticking with SD 1.5 for now. Hopefully by the time the community makes great looking models for SDXL someone will also find a way to make it run better in Automatic1111.
@WoodenCreationz 1 year ago
Agree! Running 16gbs of Ram and it sucked down all my ram and just locks up. Ripped it out and back to the drawing board.
@3diva01 1 year ago
@@WoodenCreationz Yeah, it's definitely not worth using right now, IMO. Maybe in a couple of months it will run smoother and have better models to work with. For now I'm definitely sticking with 1.5 as I'm getting great results with it that look much better than what I've been able to get out of SDXL.
@Dmitrii-q6p 1 year ago
Guys, read the docs sometimes. --medvram will fix all the issues; also check the settings to reduce VRAM usage. There is a lot of stuff you can do to reduce VRAM.
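For readers hitting the same wall: --medvram and --lowvram are launch flags for the AUTOMATIC1111 web UI (added to COMMANDLINE_ARGS in webui-user), not settings inside the UI. For anyone scripting SDXL with the Hugging Face diffusers library instead, a minimal sketch of the roughly equivalent memory-saving switches follows; this is not the video's workflow, and the model ID and prompt are only illustrative.

```python
# Minimal sketch: memory-saving options when running SDXL through diffusers,
# a rough analogue of A1111's --medvram. Assumes diffusers, transformers and
# a CUDA build of torch are installed.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # fp16 roughly halves VRAM vs fp32
    variant="fp16",
    use_safetensors=True,
).to("cuda")

pipe.enable_attention_slicing()  # compute attention in smaller chunks
pipe.enable_vae_slicing()        # decode latents slice by slice

image = pipe("a lighthouse at dawn", num_inference_steps=30).images[0]
image.save("lighthouse.png")
```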
@marcinszuszkiewicz 1 year ago
Thanks for your very informative review; subscribed
@flisbonwlove 1 year ago
Nice review mate !!
@moki123g 1 year ago
While XL 1.0 does look great, I think I am going to hold off for a while and let people work their magic on it. I subscribed, and thanks for doing these tests!
@LagiohX3 1 year ago
LoRAs not working on it is such a negative; it's starting from zero again.
@WifeWantsAWizard 1 year ago
Thanks for the update. A few notes... (3:00) Or maybe they were just trying to pad their stats by adding together two parameter counts. (3:19) In that diagram, there are inputs for "Prompt 1" and "Prompt 2". But, the user only enters one prompt, right? So, should we trust people who can't put together an accurate flow chart or are we actually doubling up our side of the workflow? (3:48) So, actually it's two **products** in one: text-to-image and image-to-image--NOT a new way of doing business with "two different models", right? (4:59) A "whopping" 12GB? Starfield is 15.48 GB--and that's just one game. Newegg has 1TB SSDs for $59. (7:29) Why did you swap colors between slides? (10:00) HAND score? What objective mathematical evaluation does THAT use?
@vvidover 1 year ago
Darn good breakdown. Well done.
@mistraelify 10 months ago
I really want to tell anyone (and the author) reading my comment that SD 1.5 has its own advantages over SDXL. Also, you cannot compare your SD 1.5 prompts and seeds with SDXL; that's bad practice. They're completely different in terms of how you ask the models and how the prompts are processed. Yes, you have more room to work with; yes, you can get more consistent results without needing to upscale; yes, you can add more prompt information and be more precise. But it also has its downsides. Many concepts died with SDXL, many LoRAs need compatible models, weights are very difficult to handle to get what you want, and prompts need to be more precise for your model to really achieve something decent. For rendering basic prompts with a specific art style and LoRAs it's good, but going further without training makes it very difficult to achieve what you want. Besides that, very good video; just wanted to clarify: NO, SD 1.5 is NOT wiped out by SDXL at all!
@genin69 1 year ago
Awesome information, thanks for the hard work and nerdy deep dives. I'll look at SDXL in about 4 months; at the moment it's just no good at all. No creativity between generated seeds: mostly the same composition after doing about 40-odd renders on a single prompt. In SD 1.5 I would get incredibly diverse images with wild, imaginative results that always blow my mind. It's like buying a camera: never buy the first model. Always wait at least 6 months to a year and get version 2.
@RedmotionGames 1 year ago
Thanks for this video. Much more useful info than other people seem to be giving, re file sizes, etc. Sub'd. Will try Comfy (A1111 giving out of memory errors)
@blitzar8443 1 year ago
Thanks for the info. I might get back into SD after people have some more time to train SDXL models to experiment with it myself.
@ai_and_gaming 1 year ago
Fantastic video!
@haobanggeng 1 year ago
Thanks for your info, best explanation. I have a question: how do you compare SD 1.5 and SDXL in terms of image size? I want to know why SD 1.5 takes more time than SD 2.1 at more than 1 megapixel.
@Bericbone 1 year ago
The refiner is NOT meant for img2img. It's meant to interpret leftover noise from the base model. That means you stop the generation before the image is done generating and pass the leftover latent to the refiner to complete the image. If you use the refiner for img2img you are going to get inferior results. Also, it's not meant for higher resolutions than what the image is generated at; you should not use it for upscaling. Auto1111 has not currently implemented correct use of the refiner.
@BIG_PASTA 1 year ago
Thanks for the info! Is there a webui/colab type option out there to use it correctly?
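On the question above: the base-to-refiner handoff described in the comment was available at the time through the Hugging Face diffusers library, outside the web UIs. The sketch below is a minimal illustration of that pattern, not the video's workflow; the 80/20 split, step count and prompt are illustrative.

```python
# Minimal sketch of the "stop the base early, let the refiner finish the
# remaining noise" handoff, rather than running the refiner as plain img2img.
# Assumes diffusers and a GPU with enough VRAM for both models.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share weights to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a cinematic photo of an astronaut riding a horse"

# Base handles the first 80% of the noise schedule and hands over latents.
latents = base(
    prompt=prompt, num_inference_steps=40,
    denoising_end=0.8, output_type="latent",
).images

# Refiner picks up exactly where the base stopped.
image = refiner(
    prompt=prompt, num_inference_steps=40,
    denoising_start=0.8, image=latents,
).images[0]
image.save("astronaut.png")
```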
@Sheevlord 1 year ago
Thanks for the comprehensive explanation! I was worried that my 8 GB GPU would prevent me from trying SDXL, but it looks like it will be just barely enough. I should give it a try.
@vallejomach6721 1 year ago
Didn't work for me using A1111. Caused BSOD a couple of times. Had it generate an image a couple of times but then sending to img2img and changing to the refiner model either failed and reverted to the base model, thus didn't work, or crashed. First image from start up took about 7 or 8 minutes to load the model and then the actual image generation for a single image took about 10 minutes. Far too slow and painful to use for me. Comfy UI may work better but that'll be for another day to look at for me.
@Sheevlord 1 year ago
@@vallejomach6721 Dang, that sounds rough.
@LagiohX3 1 year ago
@@Sheevlord I have a 3090 but only 16 GB RAM (had to sell RAM, had 64 GB), and it makes my PC freeze when there is no more RAM, just from switching to the model. I managed to try it once but I will have to wait till my new RAM arrives.
@Sheevlord 1 year ago
@@LagiohX3 It's a silly question, but do you have a swap partition or swap file enabled?
@mistermcluvin2425 1 year ago
Thanks! Great information. I just started using sdxl yesterday and it's very impressive. The vram requirements are crazy tho. I wonder how this will affect its widespread adoption?
@siliconthaumaturgy7593 1 year ago
Right now 8GB seems like a lot of VRAM compared to 4GB for SD 1.5. But keep in mind that Stable Diffusion 1 required 10GB of VRAM when it was originally released. Things have been getting more and more efficient.
@ikariameriks 1 year ago
@@siliconthaumaturgy7593 I hope so, because SDXL struggles on 12 GB VRAM as of now. Could get fixed in a few days though. And it's not XL itself but the many programs that run it.
@Axherion 1 year ago
@@siliconthaumaturgy7593 So you think even 4 GB will be enough to use that version of XL?
@ericneo2 1 year ago
I don't understand why these models can only use VRAM and not system memory. If I have a server with 1024 GB of system memory, why can I not use it? Why are we limited to VRAM only?
@GooseAlarm 11 months ago
I have the same question. :/
@ericneo2 11 months ago
@@GooseAlarm GDDR6 only has 2 memory channels; most servers have 4-8. It just feels like a manufactured problem to sell more expensive GPUs.
@Kaucukovnik666 11 months ago
@@ericneo2 My thoughts exactly. Feels like AI is just filling the void left by crypto. "Hey, crypto is crashing and nearly all games aim at the weakest current console's graphical capabilities; we need to utilize all those overkill-specced GPUs somehow. Why not, say, throw machine learning at them? And call it AI, 'cause it sounds way cooler!" "Raytracing" is selling stupidly powerful (and power-hungry) GPUs to gamers, and "AI" is doing the same for tinkerers. Not actual artists, really; those don't need (or even want) a high-resolution output straight from a text prompt.
In particular, Auto1111 seems especially "efficient" at consuming resources. ComfyUI generates 1024x1504 images for me (after zero configuration) while A1111 eats up all my VRAM just trying to load a model, no matter the setup. It doesn't even account for all the memory used; its numbers don't add up to the total memory available, but it needs more anyway. Anyone who brings up an issue gets "dude, you need a better GPU" responses, and soon after the bug report gets silently closed as inactive. Even in obvious cases like it always claiming it needs exactly 20MB more. It doesn't affect 24+GB card owners, and they can at least feel good about their purchase: it wasn't overkill for bragging rights, it was a necessity!
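On the question above about using system memory: generation can spill into system RAM, at a speed cost. Libraries such as Hugging Face diffusers can park model weights in CPU memory and stream them to the GPU on demand. A minimal sketch follows, assuming diffusers with accelerate installed; the model ID and prompt are only illustrative and this is not the video's setup.

```python
# Minimal sketch: letting SDXL lean on system RAM instead of holding
# everything in VRAM. Idle submodels live in CPU memory and are moved to
# the GPU only when needed; slower per image, but much lower peak VRAM.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
)

# Option 1: whole submodels (text encoders, UNet, VAE) shuttle between RAM and VRAM.
pipe.enable_model_cpu_offload()

# Option 2 (slowest, lowest VRAM): offload layer by layer instead.
# pipe.enable_sequential_cpu_offload()

image = pipe("a watercolor fox in a forest", num_inference_steps=30).images[0]
image.save("fox.png")
```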
@coreyhughes1456 1 year ago
For now 1.5 is still better, faster, and easier to use. Hoping to be proven wrong in the near future.
@TheBobo203 1 year ago
interesting to see our favourite models with 2.5x more precision
@jdnaveen321 1 year ago
I have a 3080 Ti, i5 12600K and 32 GB DDR5 RAM, yet SDXL model loading alone takes quite a long time and generating with it takes ages. Deleted the models and gonna stay with 1.5 for now, since SDXL has a long way to go.
@krystiankrysti1396 1 year ago
Do you plan to do a study on which SD 1.5 models handle the highest resolutions? I tested some at 600x1200 and some pass and some fail; a lot of the "best" or most-downloaded models fail.
@Chris3s 1 year ago
Do you know if I should switch from Auto1111 to Invoke 3.0 (it also now has nodes), or just switch to ComfyUI (for SD 1.5)? Heard Comfy uses less VRAM; how easy is it to use ControlNet there? A comparison video between those might be interesting (all 3 using nodes, with Auto1111 using the ComfyUI plugin).
@Dmitrii-q6p 1 year ago
Should you eat with a fork or a spoon? Who the hell knows. It depends.
@Steamrick 1 year ago
One thing I've noticed is that in prompts describing *two* subjects doing something (for example, mother and son building a sand castle), SDXL blows SD 1.5 away by so much that there's no possible comparison. SD 1.5 can compensate by using Regional Prompter (or comparable), but it's basically incapable of doing it natively.
@ДанилЧірва 1 year ago
Hi, awesome comparison, but I have a question. Is it okay that in ComfyUI on a 12 GB 3060 a 1024x1024 image generates in 20-30s, but if I change the prompt the time increases to 60-100s per generation, and that happens only if the refiner is connected? That is a huge problem when I want to experiment with prompts (I am using the official workflow).
@FusionDeveloper 1 year ago
Try the Photon model for SD 1.5 at 1024x1024. It's amazing.
@generalawareness101 1 year ago
I just found that one and agree, but it doesn't do something I prompted, so I switched back to a 2.1 model that does. Shocked at the quality I was getting, though, for what it did give me.
@RyokoChanGamer 1 year ago
I gave up using SDXL 1.0 on Automatic1111. I tried for hours surrounded by errors until, after staying up all night, I finally managed to make it work. However, it is very slow and uses EVERYTHING my PC has (16 GB of RAM, 20 GB of VRAM (12 from the card + 8 shared), half of the CPU, and 100% disk usage with the paging file). I click to generate the image and then just sit there watching, because it's impossible to use or even move the mouse. Even if image generation does not take that long (about 40s at 1024x1024), it is not practical to use. I tried it through ComfyUI and it works well there, but I don't like its interface. I prefer to wait and follow the evolution, improvements and optimizations until I can try to use it in Automatic1111 again.
@TheBobo203 1 year ago
The Python process takes 20-40 GB of RAM using SDXL on my PC.
@RyokoChanGamer 1 year ago
@@TheBobo203 😱🫡
@mistertitanic33 1 year ago
I'm running into the same issue. I'm so bummed because I really wanted to use Automatic1111, but I guess I may have to use Comfy. I'm thinking about just waiting a few months and taking the time to learn the software with smaller models until SDXL becomes more performant.
@RyokoChanGamer 1 year ago
@@mistertitanic33 I was testing some prompts and generating some images in ComfyUI, and the generated images had much lower quality than the ones I generated in Automatic1111. I used exactly the same prompts, negative and positive (exactly the same, copied and pasted), CFG, samplers, steps, etc. In Automatic1111 the images were much prettier. I don't know if I was doing something wrong, but I don't think so; in both Automatic1111 and ComfyUI I was using the most basic setup, fresh installations of the webui and ComfyUI.
@mistertitanic33 1 year ago
@@RyokoChanGamer Well, I'm definitely gonna wait until I can get it to work on Auto. Btw, what are your specs? I'm running an RTX 2070 and 16 GB RAM. I have xformers on, but that doesn't seem to be enough.
@MrSongib 1 year ago
I want to try a fine-tuned SD 1.5 model and then use the refiner in img2img, or the refiner at low res and then img2img for high res; some people have already tried this. (Seems faster and seems fun.)
@DmitryPokrovsky 1 year ago
Great!
@Seany06 1 year ago
I'm running it on 8 GB with base and refiner. Works fine, but I'm probably not gonna be able to use ControlNet when it arrives unless things get further optimized, hopefully.
@ywueeee 1 year ago
Can you make a video on all the text encoders available and see which one can be used in img2img to get the best prompt from the image?
@marcus_ohreallyus 1 year ago
I don't know... maybe I'm doing it wrong, but I'm pretty good with 1.5, and I tried SDXL recently and thought it looks over-stylized.
@TheShadiya 1 year ago
Updated model that is better than its predecessor? wow!
@Axherion 1 year ago
I have a 3050 Ti. What do you think, should I stay with 1.5 or go to XL?
@igorthelight 1 year ago
You will struggle with SDXL. Stay on SD 1.5 for now. And start saving for an RTX 4070 Ti (16 GB version) or at least an RTX 3060 (12 GB version).
@Axherion 1 year ago
@@igorthelight For the laptop version, right? Until I save some money. XL will be much better and will also be free, right? 🤔
@igorthelight 1 year ago
@@Axherion While you have a not-so-powerful PC, Stable Diffusion 1.5 would be your choice ;-) You may try SDXL, but most likely it will not work or will work very slowly. Both are free and open source. Both can be run locally (on your PC instead of from some remote website).
@Difdauf 1 year ago
I don't think this comparison is totally fair. We forgot to mention that loading SDXL can freeze your computer for 15 minutes. That thing is greedy for far more than just VRAM. It isn't exactly "fun" to use.
@diyaaelhak 1 year ago
Would you believe me if I told you I watched your (entire:1) videos in one sitting? And yes, you give us pure [valuable|information:0.5], ((so thank you)). By the way, I am confused about some things: what is Stable Diffusion, is it a model or a technology that deals with models? Do other AIs use the same SD or do they have their own? Why are you comparing SDXL with DreamShaper and DarkSushi? I am literally confused, and Google has no answers for the basics.
@zafiralpstv8004 1 year ago
SDXL 1.0 is even better
@yokipop9467 1 year ago
Why does SD never have a version 3 🤣 They always change the name... SD 2.1 to SDXL 1.0... back to 1 again.
@LLCinema22 1 year ago
I don't know about parameters, but what everybody says about SDXL regarding fingers and humans that are not close up is true: it sucks. Can't understand why #midjourney is always years ahead.
@coloryvr 1 year ago
Big FANX for that great Video! ...so....Just one Question: Can I run Deforum on SDXL?
@АлексейОвсянников-ь6р 10 months ago
Cba watching this vid with this amount of ads.
@warlord76i 1 year ago
Well... eats memory like a hungry dinosaur
@abline11 9 months ago
SDXL is still hopeless at photo realism even with the latest models. I’ve given up with it now.
@jasemali1987 10 months ago
Again, the CFG effect is neglected in your video; too bad.
@flareonspotify 1 year ago
I hate that they make the image blurry on purpose to create depth; it looks awful.
@igorthelight 1 year ago
Add "blurred background" as a negative prompt.
@erics7004 1 year ago
I have 4 GB VRAM and I can run SDXL 1.0 at 1024x1024; 4 minutes for a single image, but it's worth it.
@panyzhal 1 year ago
How much RAM? I have 32 GB and 12 GB VRAM, and it consumes everything when generating.
@alienrenders 1 year ago
@@panyzhal I have 11 GB on a 1080 Ti and it takes 1 minute to generate. I noticed that using lowvram or medvram made it run out of memory or made it extremely slow, so don't use those VRAM settings in A1111.
@panyzhal 1 year ago
@@alienrenders Oh thanks, maybe that's it; I'll check and comment the results.
@panyzhal 1 year ago
@@alienrenders It worked; it's at the limit of RAM and VRAM, but it can make an image in 20s. Just opening the model took too long, around 10 minutes.
@zizyip6203 1 year ago
ComfyUI is a joke. And what I want SDXL 1.0 to do, it can't. It's a complete joke compared to 1.5.
@FlowerPower3000 9 months ago
SDXL epic fail...