1000% FASTER Stable Diffusion in ONE STEP!

Sebastian Kamph · 150K subscribers
96K views · Published 26 Sep 2024

Comments: 265
@leecoghlan1674 · 10 months ago
You've made my day, no more waiting 30 mins on my potato PC for a generation. Thank you so much.
@CoconutPete · 7 months ago
I installed it but must have done something wrong, as the quality seems poorer... back to the drawing board lol
@tungstentaco495 · 10 months ago
As others have mentioned, not using this LCM LoRA at full strength helps if you are having issues with messy/distorted images. I'm getting pretty good results setting the LCM at 0.5 with 16 steps. Still really fast, but with better-looking generations. I also recommend trying this if you are having issues with the LCM while using models and LoRAs trained on a particular subject.
@haggler40 · 10 months ago
One issue is that it makes AnimateDiff not work well, since AnimateDiff usually needs more steps, like 25-30, to get good motion. Just wanted to put that out there; it does work with AnimateDiff, though.
@alsoeris · 8 months ago
How do you change the strength, if it's not in the prompt?
@tungstentaco495 · 8 months ago
@alsoeris In Automatic1111, when the LCM LoRA is used in the prompt, it would look something like `<lora:lcm_sd15:0.5>` for half strength, `<lora:lcm_sd15:1>` for full strength, `<lora:lcm_sd15:0.2>` for 20% strength, etc.
@marlysilva2816 · 10 months ago
Sebastian, I really like your videos and your simple way of explaining things. Could you create a tutorial, or recommend a video, for Stable Diffusion or ComfyUI on how to insert a generated object into other scenes, i.e. generate the same element in different scenes? For example, I generated the design of a new bottle and the prompt gave me a perfect result; now I want to create an image of this same bottle in a scene from different angles or in different poses (like a new photo of someone holding the bottle of juice, for example). It would be very interesting to have this type of video.
@Dzynerr · 10 months ago
Sometimes you give us quite the gems from the industry. Your research and knowledge-sharing are highly appreciated.
@sebastiankamph · 10 months ago
Thank you kindly! 🌟😊
@davewxc · 10 months ago
Tip for experimentation: use it like a regular LoRA and play with the weight. Some custom models that give horrible colors at weight 1 will actually work better at 0.7.
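The weight play described above scales the LoRA's low-rank update before it is added to the model's base weights. A toy sketch of that idea (made-up 2x2 matrices, not real model weights; `apply_lora` is a hypothetical helper, and real pipelines do this per layer):

```python
def apply_lora(base, up, down, weight):
    """Add a low-rank LoRA update (up @ down), scaled by `weight`
    (the number after the colon in <lora:name:0.7>), to a base
    weight matrix. All matrices are plain lists of lists."""
    rows, cols, rank = len(base), len(base[0]), len(down)
    return [[base[i][j] + weight * sum(up[i][k] * down[k][j] for k in range(rank))
             for j in range(cols)]
            for i in range(rows)]

base = [[1.0, 0.0], [0.0, 1.0]]   # original layer weights (toy)
up   = [[1.0], [2.0]]             # rank-1 LoRA factors (toy)
down = [[0.5, 0.5]]

print(apply_lora(base, up, down, 1.0))  # full strength -> [[1.5, 0.5], [1.0, 2.0]]
print(apply_lora(base, up, down, 0.7))  # damped update, as suggested above
```

At weight 0 the model is untouched; lowering the weight from 1 simply shrinks how far the LoRA pulls the weights away from the base model, which is why it can tame color shifts on custom checkpoints.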
@sebastiankamph · 10 months ago
Great tip!
@KrakenCMT · 10 months ago
I've discovered the same. Also increasing steps to home in on the right quality. Maybe not a 1000% increase, but 500% is still pretty good :) Even going all the way down to 0.1 will let some models work much better and still get the speed increase.
@cyberprompt · 10 months ago
Yes, I'd feel more comfortable using the standard LoRA syntax instead of this black-box method from the dropdown. Same with my saved styles. Anyone know how to see them again, and not just the tabs to add them? (Please don't mention styles.csv; that's where I edit them.)
@jonathaningram8157 · 10 months ago
It doesn't appear under the regular LoRA network for me. I can just choose it from the dropdown menu.
@wilsonicsnet · 8 months ago
Thanks for the tip. I've seen my anime models get really dim after applying LCM.
@joppemontezinos2092 · 9 months ago
I am also using an RTX 4090 setup and I gotta say I don't see much of a speed difference. However, finding out about the comparison capabilities made it so much easier to choose which model to use based on what I wanted to create. Thank you for the info.
@joppemontezinos2092 · 9 months ago
It may also be noted I was doing about 80 sampling steps at an upscale value of 2.3.
@memb. · 9 months ago
@joppemontezinos2092 You're supposed to use 4 to 10 sampling steps AND CFG 1 to 3. It's very fast and yields good results, and it's honestly a godsend for mass-producing images. You can make 100+ images SO FAST that you can just pick the best one and hires-fix it with a better config to get the absolute best of the best results.
@user-cute371 · 2 months ago
SAME
@irotom13 · 10 months ago
I made the same grid as in the video with 8 sampling steps for two cases: 1) with this LoRA and 2) withOUT it / None. The time to generate is basically the same (actually, without this LoRA it's 10 seconds faster), so the speed depends on the sampling steps rather than the LoRA. Quality depends on the sampler, but there are some VERY good results without this LoRA at all for the same sampling steps. I can't see much difference in either speed or quality if the right sampler is used.
@sebastiankamph · 10 months ago
The point of using this LoRA & sampler is that you can achieve results in 8 steps that you might otherwise need 25 or more steps for with other samplers. For the best quality, I'd recommend the Comfy route using the LCM sampler together with that LoRA, as A1111 with another sampler is more of a half-measure atm.
@petec737 · 10 months ago
@sebastiankamph Let's be honest, nobody uses LCM if they are looking for the best quality. The only people using LCM are the ones with old PCs who want to have some fun poking out a couple of 512x512, still-unusable images. On any high-end graphics card, 8 steps vs 25 steps is only a 1-second difference, no matter the model or sampler used, so something like LCM makes no sense for professional users.
@sinisterin5832 · 8 months ago
My not-so-"potato PC" and my impatience thank you very much; I am your fan. I already passed the information on to my brother; I'm sure he will be happy too.
@sebastiankamph · 8 months ago
Thanks for sharing!
@duskairable · 10 months ago
I've tried this with my ancient GTX 970 😂. Generating a 512x768, CFG 7, 30-step image usually takes 42 seconds. With LCM it takes only 7 seconds, and the result is comparatively good 👍
@jibcot8541 · 10 months ago
You should be able to do it in 4-8 steps with LCM; my 3090 can make a 512x512 image in 0.25 seconds.
@eukaryote-prime · 10 months ago
980 Ti user here. I feel your pain.
@TheMaxvin · 10 months ago
I tried a GTX 1080 Ti generating 768x768, CFG 8, 30 steps: with or without LCM, the same result, 30 sec :(
@petec737 · 10 months ago
@jibcot8541 Which 100% looks like trash and is totally unusable. Not sure what's up with people wanting to brag about being able to generate tiny (512x512 px) low-quality images in a second.
@mehmetonurlu · 9 months ago
I'm wondering what would happen if I used this with a Vega 8. Hope it helps.
@UHDking · 2 months ago
I am a big fan of yours. Thanks for sharing knowledge in easy-to-follow language, with everything explained in detail, not like other channels just repeating information that sometimes isn't fully useful. Your stuff is good. You've got my like, my sub, and a long-time follower. I am one of you, an AI researcher. Thanks very much.
@sebastiankamph · 2 months ago
So nice of you!
@UHDking · 2 months ago
@sebastiankamph Thanks man. I meant it from the heart, and I've benefited a couple of times from your videos. Good job sharing info like a champ.
@bankenichi · 10 months ago
Duuuude, I've been using the SDXL one for a few days and it's a game-changer. Didn't know there was one for 1.5, awesome!
@sebastiankamph · 10 months ago
Sweet! How have you been liking it for SDXL?
@bankenichi · 10 months ago
@sebastiankamph It's been amazing, honestly: an order of magnitude faster on my 1080, going from 20+ minutes with hires fix to about 1.5-3 minutes using LCM. I was trying it out with 1.5 yesterday and it's great too; I went from about 3 minutes to just 30 seconds. It honestly makes the experience much more enjoyable for me, being able to see this kind of improvement.
@timhagen1426 · 10 months ago
Doesn't work.
@rycrex7986 · 4 months ago
Just started a week ago and I've been loving it. Switching to Comfy.
@VooDooEf · 10 months ago
Damn, this is the best SD video this year. I can't believe how fast you can work with it now! Nvidia can toss their TensorRT extension in the bin!
@2008spoonman · 8 months ago
FYI: install the AnimateDiff extension in A1111; this will automatically install the LCM sampler.
@Mowgi · 10 months ago
LCMs are what we call Rice Crispy Treats in Australia. Used to love it when Mum put them in my lunch box for school 🤣
@marhensa · 10 months ago
I found the picture quality is worse ONLY when applied to custom SDXL models; when applied to vanilla SDXL or SDXL SSD-1B, it's roughly on par in quality yet SUPER FAST!!! (Tested on ComfyUI, LCM SSD-1B, LCM sampler, 8 steps.)
@taiconan8857 · 10 months ago
Useful info, thanks! Unfortunately, in my case I'm often on custom checkpoints, but the methodology could be instrumental in making future iterations faster. 👏🤩
@marhensa · 10 months ago
@taiconan8857 Yeah, surely it's doable for helping AnimateDiff, which needs many frames to generate.
@taiconan8857 · 10 months ago
@marhensa OH! I HADN'T EVEN CONSIDERED THAT YET! You're totally right! I'ma definitely need to revisit this when I'm at that stage. 👌😲
@aegisgfx · 10 months ago
Wow, so instead of creating a hundred images every day that nobody cares about, I can create 10,000 images a day that nobody cares about. Fantastic!!!
@politicalpatterns · 10 months ago
Why are you so salty over this? It's a tool that some people use in their workflow. 😂
@alderdean6112 · 10 months ago
The SDXL LoRA does not seem to work for me. My RTX 3060 with 12 GB VRAM gets 100% loaded and freezes the whole system for several seconds on each iteration, and the resulting images are usually a jumble of pixels. The SD1.5 LoRA, however, does seem to somewhat accelerate things for SD1.5-trained models.
@CoconutPete · 7 months ago
Update: I wasn't able to get it to work, then found a post on Reddit which suggested deleting the "cache.json" file in the webui directory. I renamed mine to cache2.json (just in case), and sure enough the Lora tab was showing ssd-1b in it, and I noticed speed improvements. Must be a bug of some sort, as the cache.json file showed up again and everything seems to be working.
@sebastiankamph · 7 months ago
Happy you got it working!
@hjjubnh · 10 months ago
In A1111 I don't see any difference in speed; the results are just worse.
@maikelkat1726 · 7 months ago
Thanks, but it doesn't make it faster... it's the same speed, 3-4 secs for SDXL with or without the LoRA... any ideas why? I have an old RTX 3090, 8 GB.
@ScorgeRudess · 10 months ago
This is amazing!!! Thanks!
@sebastiankamph · 10 months ago
Glad you like it! 😊🌟
@ulamss5 · 10 months ago
Thanks for the mega grid comparison. Most of the comparisons so far are probably using DPM++ 2M Karras, the long-time best performer, which is seemingly terrible with LCM. I'll let the community do a few more evaluations of samplers and CFG before switching over.
@Christian-iu3lo · 10 months ago
Lmao, this crashes the crap out of my AMD card. I have a 7800 XT and it immediately steals all of my VRAM, which forces me to restart.
@april11729_ · 7 months ago
My god, it works!!!! Thank you!!
@sebastiankamph · 7 months ago
Enjoy!
@biggestmattfan28 · 3 months ago
Do you know how to make it faster for Pony Diffusion? I don't think this works for Pony models.
@daan3898 · 10 months ago
Thanks for the research, will try it out!! :)
@sebastiankamph · 10 months ago
Hope you like it!
@markusblandus · 10 months ago
Any chance you can show how the live webcam setup can be done? Thanks!
@sebastiankamph · 10 months ago
For the quickest answer, I'd guide you to my Discord to ask kiksu himself.
@ovworkshop3105 · 10 months ago
It actually works very well for creating small samples and then upscaling them with img2img; even SDXL is quick.
@sebastiankamph · 10 months ago
Interesting approach!
@spiritsplice · 8 months ago
Vladmandic (SD.Next) can't even see the files. They won't show up in the list after dropping them in the folder and restarting.
@BlueSentinel-o1r · 8 months ago
After my first generation, the following generations are much slower. Any idea why this happens and how to avoid it?
@DJVibeDubstep · 8 months ago
I'm using the DirectML version because I have an AMD card, so I have to use my CPU, and it's PAINFULLY slow. Will this help with that, or is it only for those using GPUs? I actually have a really decent GPU (RX 5700 XT), but I sadly can't use it since SD hardly supports AMD.
@LinkL337 · 8 months ago
Did you try it? I have an RX 7800 XT and the same problem. I'm looking for options to improve rendering performance. AMD released a video with a tutorial, but I haven't tried that yet.
@DJVibeDubstep · 8 months ago
@LinkL337 I have not; I just sucked it up and am using the painfully slow CPU route lol. I spent 7+ hours trying all kinds of things, though, and nothing worked. I literally have to use my CPU, it seems.
@DerXavia · 10 months ago
It's even slower for me and looks much, much worse using XL.
@clay6440 · 4 months ago
Your link for Civitai is no longer working.
@gorge.p96 · 10 months ago
Cool video. Thank you.
@TheSparkoi · 4 months ago
Hey, do you think we can get more than 0.7 frames per second rendering only 500x500 with a 4090?
@zahrajp2223 · 8 months ago
How can I use it with Fooocus?
@sidejike438 · 5 months ago
I already did the --xformers edit. Can I still use this LoRA, or would the quality of images be affected?
@stanTrX · 9 months ago
Thanks, but mine is still very, very, very slow... what else can I do?
@CoconutPete · 7 months ago
I'm confused trying to get this working with SSD-1B. I downloaded it, put it in the correct folder, renamed it, and it shows in the add-network-to-prompt dropdown, but so far I notice no improvements and the quality seems poor. I keep seeing something about diffusers but am not sure what that's all about. Going back to the drawing board lol
@micbab-vg2mu · 10 months ago
Amazing!!! Thank you :)
@sebastiankamph · 10 months ago
You're very welcome! 🌟😊
@matthallett4126 · 10 months ago
I've got a 4090 as well, and I cannot reproduce your results in A1111. Will keep trying.
@sebastiankamph · 10 months ago
I am running with SDP memory optimization. Similar speed increase to xformers.
@N-DOP · 10 months ago
Is there also a way to enhance performance for img2img generations? I selected the LoRA and adjusted the steps and the CFG scale, but the render time is still the same, if not worse. Please help :'D
@palax73 · 10 months ago
Thanks bro!
@sebastiankamph · 10 months ago
You bet!
@olvaddeepfake · 10 months ago
I don't have the option to add the LoRA setting to the UI.
@athenalong · 10 months ago
HAHAHA 😅 I honestly look forward to the dad jokes 🤣 Even if I don't have time to watch the entire video when I first see it, I will watch until the joke and then come back later 😆👏🏾
@sebastiankamph · 10 months ago
Hah, glad to hear it! And great that you're coming back too 😅😁
@khalifarmili1256 · 1 month ago
Can this work on SD 3?
@tuhinbiswas98 · 9 months ago
Will this work with Intel Arc?
@kahosin890 · 6 months ago
How did you get that UI?
@cyberprompt · 10 months ago
Oh, and @sebastiankamph... I almost always laugh at your jokes, even if my wife hates it when I tell them to her. Said the facial hair one to her yesterday, because I DON'T like facial hair and she knows that! :)
@sebastiankamph · 10 months ago
Hah, I love it! Keep spreading the dad jokes for everyone to enjoy 😊🌟
@intelligenceservices · 9 months ago
I have a 3060 12 GB GPU and was getting VRAM errors with this workflow on XL; the process was rerouted to the CPU, taking 50-70 seconds. I suspected my VRAM was being squatted on by orphan processes; after a reboot it's now working the way you describe. Thanks.
@stableArtAI · 4 months ago
OK, first run through the video and I'm very confused: what is the one step that makes it 1000% faster??? Download "1" file?? You started downloading several files and now I'm so lost...
@andreassteinbrecher458 · 10 months ago
Hey :) Did the KSampler change with the last update? I get errors on all my AnimateDiff workflows since I updated all of ComfyUI. Error occurred when executing KSampler: local variable 'motion_module' referenced before assignment
@sebastiankamph · 10 months ago
Hmmmmm, good question 🤔
@keymaker.3d · 10 months ago
Me, too!
@andreassteinbrecher458 · 10 months ago
Today I did another UPDATE ALL in ComfyUI, and now AnimateDiff is working fine again :)
@keymaker.3d · 10 months ago
@andreassteinbrecher458 Yes, 'UPDATE ALL' is the key.
@flareonspotify · 10 months ago
I have an M1 MacBook Air with 16 GB unified memory; I wonder how it would run on it.
@ADZIOO · 10 months ago
Not working for SDXL. Always bad quality. Should it also be 8 steps / CFG 1 for SDXL?
@sebastiankamph · 10 months ago
Works great for me with the LCM sampler. Not well without it.
@ADZIOO · 10 months ago
@sebastiankamph Okay then, now I know. I'm on A1111; there's still no patch with the LCM sampler. At least 1.5 works with Euler a.
@2008spoonman · 8 months ago
@ADZIOO Install the AnimateDiff extension; this will automatically install the LCM sampler.
@claudiox2183 · 8 months ago
Thank you! It works nicely, both in A1111 and Comfy. But I have a rookie question: I can't save the Comfy workflow explained in the video with the LoRA loader node installed. If I save it as a .JSON file or PNG image, it doesn't reload...
@ortizgab · 10 months ago
Hi! Thanks for the lessons, they are great!!! I can't set the sampling steps below 20... Am I missing something?
@donschannel9310 · 9 months ago
Mine is not even generating any pics.
@ComplexTagret · 10 months ago
And how do you manage the weight of the LoRA in that upper menu? If you add the LoRA to the prompt field, it's possible to manage it as `<lora:name:weight>`.
@MatichekYoutube · 10 months ago
Testing LCM on Stable Diffusion: it seems that LCM img2img and vid2vid have an error. TypeError: slice indices must be integers or None or have an __index__ method
@jordanbrock4142 · 6 months ago
I'm kinda new, but isn't it a problem if I have to use this LoRA? I mean, I can only use one LoRA at a time, right? And if I'm using this one, it means I can't use another, which sort of defeats the purpose...
@zuriel4783 · 5 months ago
You can use as many LoRAs at a time as you like. There could possibly be a limit that I'm not aware of, but I know for sure you can use at least 4 or 5 at a time.
@타오바오-h8l · 9 months ago
Thanks as always! I have an off-topic question: is there any way to make Stable Diffusion show not people but only clothes? I put "no human", "no girl", etc. in the negative prompt and it still shows people.
@_trashcode · 10 months ago
I would like to find a way to use this with Deforum and ControlNet. Does anybody have an idea how to make it work in Automatic1111?
@unowenwasholo · 10 months ago
This is WILD! This ecosystem continues to boggle the mind. There's certainly some amount of "too good to be true" in here, such as the LoRA not playing nice with a lot of samplers, but cool nonetheless. Btw, a couple of things I would have liked discussed / to see: how this performs with common current settings (i.e. higher steps ~20 / CFG ~5), and on other models, even if just SD1.5/SDXL-based ones. Even just 15-30 seconds showing a good model vs a bad model that you've found. Of course, there's always the whole "try it in your workflow to see how it is for you"; it would just be nice to know if I can expect this to work outside of vanilla SD.
@povang · 10 months ago
Not optimized for A1111 yet. I'm using a custom checkpoint, A1111, 1.5, same settings as in the video. I'm on a 1080 Ti; generation speeds are faster, but the image quality is worse.
@victorvaltchev42 · 10 months ago
Great video. What I don't get is why the CFG needs to be so low.
@PerChristianFrankplads · 10 months ago
Will this work on Apple silicon like the M1?
@sebastiankamph · 10 months ago
Actually, Apple M1 reportedly saw the biggest speed improvements (10x). I haven't tested it myself, but the claims seem to be solid.
@_trashcode · 10 months ago
You mentioned AnimateDiff? How can you use LCM with AnimateDiff? Great video, btw.
@FearfulEntertainment · 7 months ago
Does having A1111 installed on an HDD or SSD matter?
@scarekrow1264 · 6 months ago
Absolutely; an SSD is way faster.
@ferluisch · 9 months ago
How much faster is it really? A comparison would be nice. Also, could this be used with the new TensorRT?
@dastpaster · 10 months ago
Strange, I did everything you said, but it took 7 seconds longer to generate.
Without the LoRA: cinematic, techwear car. Steps: 30, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7, Seed: 4128880464, Size: 1024x1024, Model hash: 74dda471cc, Model: realvisxlV20_v20Bakedvae, Version: v1.6.0-400-gf0f100e6. Time taken: 17.2 sec.
With the LoRA: cinematic, techwear car. Steps: 30, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7, Seed: 4128880464, Size: 1024x1024, Model hash: 74dda471cc, Model: realvisxlV20_v20Bakedvae, Lora hashes: "lcm-lora-sdxl: 2fa7e8e56b09", Version: v1.6.0-400-gf0f100e6. Time taken: 24.1 sec.
I tried it with another sampler and got a 2-second gain. Apparently it doesn't work well enough on all samplers.
@sebastiankamph · 10 months ago
You need to use 8 steps and preferably the LCM sampler.
@NamikMamedov · 10 months ago
How do you make a combined image like yours, with all the generation results in one table by method and sampler?
@sebastiankamph · 10 months ago
XYZ plot, in the Script dropdown at the bottom. You can see my settings in the video.
@rakibislam6918 · 10 months ago
How do I add the cinematic styles file?
@henrischomacker6097 · 10 months ago
Hmm... why is it working for you and not for a lot of us in Automatic1111?
* Downloaded and renamed both LoRAs and put them into the Lora directory
* Enabled sd_lora in the User Interface options in the main UI
* Reloaded the UI
* Updated Automatic1111 completely, with all extensions
* Restarted Automatic1111 (ORIGINAL)
* The LCM LoRAs do NOT appear in the Lora tab gallery, only in the unusable dropdown list (if you have a lot of LoRAs)
* Tried all my models AND samplers for 1.5 and XL, all with really bad results at 8 sampling steps
My options in the main UI (like the "Add network to prompt" dropdown) are shown in the left column under CFG scale, seed, etc. Are you using a different version of Automatic1111, or is there something else that has to be enabled that a lot of us maybe don't have?
@jonathaningram8157 · 10 months ago
I also get very bad results.
@fjccommish · 6 months ago
I used the LCM sampler in A1111; the results were awful.
@solidkundi · 8 months ago
Can you use it with Turbo XL?
@neuraldee · 10 months ago
Thanks, but it's not working on a Mac M2 :(
@metanulski · 10 months ago
I am confused; my pictures look worse using this :-(
@sebastiankamph · 10 months ago
Make sure to use the LCM sampler in Comfy for best results.
@metanulski · 10 months ago
@sebastiankamph I used Auto1111. I put the 1.5 LoRA in the lora folder, loaded a 1.5 model, added the LoRA to the prompt, and set the steps to 8 with Euler. The result looks worse than without the LoRA.
@metanulski · 10 months ago
I did not use the LoRA dropdown like you did. Is this a must?
@sebastiankamph · 10 months ago
@metanulski Not at all, just an easy way of using it. But it limits the use of weights.
@metanulski · 10 months ago
@sebastiankamph Thanks, will try again today. :-)
@AndyHTu · 10 months ago
Does this trick only work with the DreamShaper model, or would it work on any model?
@SupremacyGamesYT · 10 months ago
I assumed this video would be about TensorRT in A1111. What's going on with that, is it out yet? I've taken a break from AI since March.
@davoodice · 3 months ago
Unfortunately, nothing changed for me.
@juschu85 · 10 months ago
The video title is wrong: 10 times faster is 900% faster. The percentage is always 100 points lower than you would intuitively expect from the factor, just like 50% more is 1.5 times as much and 100% more is 2 times as much.
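The factor-to-percent arithmetic in this correction is easy to sketch; a tiny helper (hypothetical, just for illustration):

```python
def percent_faster(speedup_factor):
    """Convert an 'N times as fast' factor into a 'percent faster' figure."""
    return (speedup_factor - 1) * 100

print(percent_faster(10))   # 10x as fast -> 900 (% faster), not 1000
print(percent_faster(1.5))  # 1.5x as fast -> 50.0 (% faster)
print(percent_faster(2))    # 2x as fast -> 100 (% faster)
```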
@Jammy1up · 10 months ago
Well, 900% doesn't sound as cool. Probably not worth nitpicking here; it's not like it's misleading or anything lol.
@LewGiDi · 10 months ago
@Jammy1up In a Krita tutorial I saw 'edit 900x faster' in the thumbnail.
@amafuji · 10 months ago
900% faster = 1000% as fast
@ronbere · 10 months ago
As always... 😂
@arnavkumar7970 · 10 months ago
He is probably Asian.
@MisterWealth · 10 months ago
But does this work with SDXL?
@maestromikz · 9 months ago
Will this work on a Mac M1?
@consig1iere294 · 10 months ago
I am super confused. When I go to download the LCM model for SDXL, are we downloading the "pytorch_lora_weights.safetensors" file? I did that and used it as a LoRA, and it got stuck! I am using an RTX 4090.
@sebastiankamph · 10 months ago
Yes! One for 1.5 and one for SDXL. Rename them so you know which is which, and put them in the Loras folder.
@bladechild2449 · 10 months ago
I played with it for a while and decided the quality was vastly subpar compared to what you'd get using better samplers and schedulers.
@sebastiankamph · 10 months ago
I would indeed say it's a trade-off, though I wouldn't call it vastly subpar with the LCM sampler and some fine-tuned settings. This is a good step in the right direction. If we had bashed Stable Diffusion on day 1, we wouldn't be where we are today. This is a fantastic step forward, and these ideas can be developed further!
@BabylonBaller · 10 months ago
That was my hunch as well. No point in being able to generate a ton of garbage images just for bragging rights.
@sebastiankamph · 10 months ago
@BabylonBaller The images you can get with the LCM LoRA and sampler are in no way garbage. Run it in Comfy today and you'll probably be amazed by the results at that speed.
@BabylonBaller · 10 months ago
@sebastiankamph Cool. Will check it out.
@wholeness · 10 months ago
It can only get better from here; even an idiot could see that. Haven't you learned anything?
@nermal93 · 10 months ago
Is this working with img2img?
@sebastiankamph · 10 months ago
I don't see why not 😊
@sinanisler1 · 10 months ago
SDXL doesn't work, not sure why. Probably need the latest pips. Will test again later.
@sebastiankamph · 10 months ago
You need the LCM sampler for that.
@jonathaningram8157 · 10 months ago
It gives me trash results with a lot of noise no matter what sampler I choose, even with a CFG...
@Ekkivok · 10 months ago
Hmmm, for SDXL the result is a total mess :D It's like the CFG is at 30 and steps at 1 xD
@sebastiankamph · 10 months ago
Did you use the LCM sampler? Without it, it's not great.
@jibcot8541 · 10 months ago
It does work for SDXL; use the "Euler a" sampler, CFG 1-2, and 4-8 steps.
@Ekkivok · 10 months ago
@sebastiankamph Yes, I activated sd_lora in A1111, since I use SDXL on A1111, and I tried... and... it was a massacre. I use 1.5 with Vlad (SD.Next), but there's a problem: sd_lora is not appearing :/
@Ekkivok · 10 months ago
@jibcot8541 I already used those settings; same problem...
@jonasstare1551 · 10 months ago
For extra speed, set CFG to 1.0. At that value the negative prompt is irrelevant and will be ignored, making generation much faster.
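The speed-up described above comes from how classifier-free guidance (CFG) works: each sampling step blends two model predictions, one for the prompt and one for the negative prompt. At scale 1.0 the negative-prompt term cancels out, so a UI can skip that second model pass entirely. A numeric sketch with toy values (made-up numbers, not real model outputs):

```python
def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: blend the unconditional (negative-prompt)
    prediction with the conditional one. At scale 1.0 the uncond term cancels."""
    return uncond + scale * (cond - uncond)

# Toy per-pixel predictions (chosen to be exact in binary floating point):
uncond, cond = 0.25, 0.75

print(cfg_combine(uncond, cond, 1.0))  # 0.75 == cond: the negative-prompt pass
                                       # contributes nothing and can be skipped
print(cfg_combine(uncond, cond, 7.5))  # 4.0: typical CFG amplifies the difference
```

Skipping the unconditional pass roughly halves the model evaluations per step, which is where the extra speed comes from.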
@sebastiankamph · 10 months ago
Vroooooooom! What speeds are you getting?
@DerXavia · 10 months ago
That makes the quality even worse lol.
@ginglyst · 10 months ago
@DerXavia Ssssht, don't mention the quality; it's all about speeeeeeeed now. To be serious: the community will find out eventually that to get the same quality, we'll end up back at the old settings.
@jonasstare1551 · 10 months ago
@sebastiankamph Just over 20 it/s :) It's really nice to be able to generate a bunch of images, pick a nice one, and then use img2img or ControlNet to refine it.
@amafuji · 10 months ago
If you use 8-12 steps you can get better quality than 20 steps of Euler. Each step is also faster.
@thegreatujo · 10 months ago
How do I make the interface look like yours? At the top, where you select the model/checkpoint, you have two more dropdowns to the right, called SD_VAE and Add Network to Prompt. If somebody other than the video creator has the answer, feel free to reply.
@drabodows · 10 months ago
Watch the video; he shows you how...
@xyzxyz324 · 10 months ago
01:38 - 01:57
@ragnarmarnikulasson3626 · 10 months ago
Tried this with SDXL with no good results; SD v1.5 worked great, though. Any ideas? I was using sd_xl_base_1.0.safetensors [31e35c80fc] with the lcm-lora-sdxl on a Mac M1, if that makes any difference.
@ragnarmarnikulasson3626 · 10 months ago
Figured it out. I forgot to turn up the resolution :D lol
@ywueeee · 10 months ago
You should have shown the animations with some high upscale.
@alekmoth · 10 months ago
I'm not seeing this effect on a MacBook Pro. Yes, I get a speed increase from doing 8 steps instead of 20, and yes, the image has better quality, but the low CFG scale means I get a high-quality image that isn't what I asked for. I'm not seeing any improvement in it/s.
@sebastiankamph · 10 months ago
It/s won't change.
@neonaaat6850 · 9 months ago
Nice voice.
@Wunderpuuuus · 10 months ago
I'm seeing a lot of ComfyUI and Automatic1111. Is there an advantage to using one over the other? Is one better at "A" and the other at "B"?
@jonathaningram8157 · 10 months ago
It's a very different philosophy. I would recommend Automatic1111 for beginners and also for flexibility. ComfyUI, in my opinion, is more specialized, but you don't have as much creative power (inpainting, for instance, is quite annoying to set up). I tried ComfyUI and I'm back on Automatic1111; it gives me the best results (also, I kind of lost my node setup for ComfyUI and it's a pain to redo).
@Wunderpuuuus · 10 months ago
@jonathaningram8157 Thank you! I've also been using Automatic1111, but I saw so many videos for ComfyUI that I thought I'd ask. Thanks for the response!
@Krougher · 10 months ago
Works in ComfyUI but not in A1111.
@the_smad · 10 months ago
Need to try this on my GTX 1060. Yesterday, with xformers and medvram, it took 30 minutes to do a single image with SDXL and no refiner.
@sebastiankamph · 10 months ago
Let me know what speed improvements you get 😊