
AMAZING SD Models - And how to get the MOST out of them! 

Olivio Sarikas
233K subscribers
108K views

The BEST models for Stable Diffusion, and how to get the MOST out of them. This video is a deep dive into how and why these models work so well. We'll look at Clip Skip, models, LoRAs, negative embeddings, weights, steps and CFG scale to get the best results.
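For reference, the weight and LoRA syntax discussed in the video follows the AUTOMATIC1111 conventions, roughly as below. This is a sketch: `some_lora` is a placeholder name, and the text after each `#` is an explanatory annotation, not part of the prompt.

```
masterpiece, (detailed face:1.2), portrait   # (token:w) scales attention on token by w
<lora:some_lora:0.7>                         # load a LoRA at 70% strength
Negative prompt: bad-hands-5                 # embeddings trigger by their filename
```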
#### Links from the Video ####
1000 Prompts in 1 Click: • 1000 Prompts in 1 Clic...
Negative Embeddings: huggingface.co...
Realistic Vision: civitai.com/mo...
Rev Animated: civitai.com/mo...
Deliberate: civitai.com/mo...
Ghost Mix: civitai.com/mo...
Phoenixdress Lora: civitai.com/mo...
840000 VAE: huggingface.co...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoff...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
Support my Channel:
/ @oliviosarikas
Subscribe to my Newsletter for FREE: oliviotutorial...
How to get started with Midjourney: • Midjourney AI - FIRST ...
Midjourney Settings explained: • Midjourney Settings Ex...
Best Midjourney Resources: • 😍 Midjourney BEST Reso...
Make better Midjourney Prompts: • Make BETTER Prompts - ...
My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
My Affinity Photo Creative Packs: gumroad.com/sa...
My Patreon Page: / sarikas
All my Social Media Accounts: linktr.ee/oliv...

Published: 1 Oct 2024

Comments: 199
@OlivioSarikas 1 year ago
#### Links from the Video #### 1000 Prompts in 1 Click: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-bQK5diN59NA.html Negative Embeddings: huggingface.co/nolanaatama/embeddings/tree/main Realistic Vision: civitai.com/models/4201?modelVersionId=6987 Rev Animated: civitai.com/models/7371/rev-animated Deliberate: civitai.com/models/4823/deliberate Ghost Mix: civitai.com/models/36520/ghostmix Phoenixdress Lora: civitai.com/models/48584/phoenixdress 840000 VAE: huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main #### Join and Support me #### Buy me a Coffee: www.buymeacoffee.com/oliviotutorials Join my Facebook Group: facebook.com/groups/theairevolution Join my Discord Group: discord.gg/XKAk7GUzAW
@letsplaymultiview 1 year ago
Almost all of the negative embeddings are pickle Tensors - are they safe to use?
@SouthbayCreations 1 year ago
Great video, thank you!! A video on training Loras and Models would be fantastic!!
@Elwaves2925 1 year ago
I agree, along with training TIs as well. There are three ways to do this from what I can tell; for two of them the process runs for me but never actually produces anything even vaguely close. I've tried multiple datasets, settings and tutorials (everyone disagrees with each other), so a more 'complete' guide would be great.
@HermitagePrisoner 1 year ago
Absolutely. I'd vote for a series of tutorials, please :)
@OlivioSarikas 1 year ago
I will do one soon :)
@Elwaves2925 1 year ago
@@OlivioSarikas Thank you kindly.
@SouthbayCreations 1 year ago
@@OlivioSarikas Fantastic!! Thank you!!
@ahsenia 1 year ago
Thanks for the tricks Olivio! Also, it would be super cool if we could get an updated LoRA training tutorial because everything has changed. ❤
@JohnEliot1978 1 year ago
yes this! updated lora training would be awesome
@HestoySeghuro 1 year ago
The only thing that's changed is DreamArtist+; kohya is the same, AFAIK.
@wakegary 1 year ago
Same. Would love this. Thanks for this vid, too
@PhilippSeven 1 year ago
I love the rev style too, but I hate always fighting with the “standard rev faces”.
@HCforLife1 1 year ago
I actually don't like many aspects of these models: they are overtrained on some assets, which is an issue with most models. I have the same problem with my own. It just takes a ton of time to label images accurately. I'm working on my own project now, which will take weeks; the main goal is to not overtrain the model.
@ryaven 1 year ago
Yes, please tell us how to train a LoRA, and how to use someone else's LoRA. Thank you.
@BobFudgee 1 year ago
May I recommend chapters on the video? They would help tremendously! But still a great video.
@AB-wf8ek 1 year ago
Phoenix is pronounced FEE-nicks. Great video, thanks!
@davidvigil8697 1 year ago
Olivio, kohya_ss no longer uses PowerShell and I'm getting a weird error when training a LoRA. Would it be possible to make a new video about how to create LoRAs?
@devonandersson300 1 year ago
Why is everybody sticking to SD 1.5? The censoring? I feel mostly limited by the low 512x512 generation resolution, even if it can be upscaled and/or tweaked somewhat higher with a hires fix. Usually I use 768x1152 (not upscaled, hires strength / denoise at 0.55) in InvokeAI. Anything higher increases the chance of twinning too much.
@sznikers 1 year ago
You can often push to 768px and then, if you upscale carefully, end up with a 2000px image. SD 1.5 has all the models, embeddings, LoRAs and other add-ons. Without those community projects you would be left with just a toy that generates random cute images.
@silverbullet126 1 year ago
A video on how to train one's own LoRAs would be cool :) Otherwise, as always, a great video packed with good info :)
@zendao7967 1 year ago
+1
@Rick-wm6zm 1 year ago
@@zendao7967 he already has a video on this from a few months ago.
@zendao7967 1 year ago
@@Rick-wm6zm A lot has changed since, and the guide he made is not very detailed.
@JohnEliot1978 1 year ago
yes plz
@ryaven 1 year ago
Yeah, I approve of this, with the latest tutorial.
@Not4Talent_AI 1 year ago
I didn't really understand what clip skip does; I thought it just re-formatted the prompt second-hand so it adapted better to the model.
@OlivioSarikas 1 year ago
You are right, I should have explained that a bit more. Basically, a prompt has 12 layers of depth of meaning, and Clip Skip 2 skips one of these layers. So it will be less deep in its understanding of what the words could mean, but that can lead to better results with models that use Clip Skip. That's why you need to check the model page to see if that is the case.
@tripleheadedmonkey6613 1 year ago
Pretty sure that Clip Skip literally skips some of the processing steps early in generation. This gives it less initial detail to work with, but allows it to fill in more of those details in further processing based on the existing details. In other words, the higher the Clip Skip value, the more generic the resulting image will look instead of following your prompt. But at lower values it actually compensates for minute hallucinations and keeps it on track.
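To make the discussion above concrete: the way Clip Skip is commonly described, the CLIP text encoder is a stack of 12 transformer layers, and Clip Skip N means "take the hidden states from the Nth layer counting from the end" (N=1 being the usual final output). Here is a toy Python sketch of that selection rule; it is an illustration only, not the real CLIP implementation.

```python
# Toy illustration of the Clip Skip selection rule.
# The "layers" here are placeholder strings standing in for the real
# per-layer hidden states of the CLIP text encoder.

NUM_LAYERS = 12

def encode_with_clip_skip(hidden_states_per_layer, clip_skip=1):
    """Return the hidden states Clip Skip would hand to the U-Net.

    clip_skip=1 -> last layer (default), clip_skip=2 -> second-to-last, etc.
    """
    assert 1 <= clip_skip <= len(hidden_states_per_layer)
    return hidden_states_per_layer[-clip_skip]

# Fake per-layer outputs, labeled by layer index for clarity.
layers = [f"layer_{i}_output" for i in range(1, NUM_LAYERS + 1)]

print(encode_with_clip_skip(layers, clip_skip=1))  # layer_12_output
print(encode_with_clip_skip(layers, clip_skip=2))  # layer_11_output
```

With Clip Skip 2, the prompt conditioning comes from a slightly earlier, less specialized representation, which matches why models trained that way expect the same setting at generation time.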
@Not4Talent_AI 1 year ago
@@OlivioSarikas @Triple Headed Monkey I see!! Thanks a lot!
@stuartclements6243 1 year ago
I'm just getting started in Stable Diffusion, and while this was a lot of information, I'm excited to try it all out!
@OlivioSarikas 1 year ago
Thank you. Have a go at them; these tricks will give you really nice results with very little effort.
@Sebastian78120 1 year ago
A LoRA training video would be very useful for making the same face in different poses/situations. Thank you for the tips!
@theReal_Truth_XL 1 year ago
What is clip skip? And where do I get it?
@initial3d998 1 year ago
Yes, I would really like to know how to train a proper LoRA: what kinds of example photos do you need most, different angles and expressions for a human? Different details for buildings? How to add better tags, etc.? Thank you so much, you have been helping me a lot. Thanks.
@FutonGama 1 year ago
I like your videos, I'm glad you got mentioned on ReVAnimated.
@OlivioSarikas 1 year ago
thank you
@Steamrick 1 year ago
He uses it for basically every video; it would be highly surprising if it didn't feature here...
@YaBoySL 1 year ago
You are on a dang roll, sir! I'm just getting into SD and it feels like every time I have a new question, you have an answer for it that you uploaded that very day! I'm testing out new models and just discovered Deliberate. I wasn't sure about it, came to YouTube, and here you are lol
@gamersgold4984 1 year ago
Yes, please show us how LoRAs work, Olivio.
@jonascale 1 year ago
I love your vids. And yes, please would you do a training vid? I would love to see how people train models and how they fine-tune them to get the results they do. Thanks again!
@ВикторияКузьмина-м6з
WTF is going on with the YouTube interface?
@OlivioSarikas 1 year ago
what do you mean?
@gohan2091 1 year ago
I thought Textual Inversion negative embeddings should have the < > brackets around the name? When clicking with the mouse they go into the negative prompt as just the name, without the angle brackets. For example, I would expect it to say <bad-hands-5> instead of bad-hands-5. Anyone know?
@AiNomadArt 1 year ago
Stable Diffusion will be renamed OlivioDiffusion soon, I think. I do have problems generating images with LoRAs; some LoRAs really lower the image quality badly. I tried different VAEs but it doesn't help much. Is it because the LoRAs were trained on a checkpoint model that conflicts with the model I use?
@zeuszl1566 1 year ago
Can you make a tutorial for training your own model and implementing Realistic Vision? I'm not too sure how to train my own model other than using the Dreambooth Colab and then using it in A1111, but I don't know how to use that with Realistic Vision, since it's also a downloaded model and I can only load one model at a time in A1111.
@sirmiluch6856 1 year ago
Best models are mostly down to preference; I don't like most of these 2.5D semi-realistic and realistic models. Btw, I'd like to know what exactly clip skip is doing. I know that many models work best set to clip skip 2, but what exactly does it do in the backend? Same for hypernetworks: what exactly are hypernetworks? I have some hypernetworks that add certain visual styles to backgrounds etc., but what exactly is this? Btw 2, for my usage (I mostly do anime art) DPM++ SDE Karras works absolutely the best (for details and the content itself). Btw 3, CFG in my experience is best in the 6-9 range.
@gumilangindra 1 year ago
Thank you for all of your videos, they're really helpful for me. I'm just curious whether there is any effect from using prompts like camera type and lens, for example "photographed with a Sony Alpha a7 III camera with a Sony FE 16-35mm f/2.8 GM lens". Does it have an effect? When I tried, it didn't have much effect, but maybe you have tried it before. Thanks.
@ex0stasis72 1 year ago
Do you have a video explaining how I can speed up opening the "Extra Networks" UI in Automatic1111? I have probably over 100 various embeddings and loras installed, and when I open the "Extra Networks" UI, it spends about 45 seconds processing something. In the console, it spams "No Image data blocks found." for each one. I just want to have it cache all that once, and let me manually refresh it whenever I add to it, not every time I start Automatic1111.
@ex0stasis72 1 year ago
I prefer to set my CFG scale to 3.5, and to compensate for such a low CFG scale, I crank the steps up to anywhere between 80 and 120.
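As background for the CFG discussions in this thread: each sampling step blends an unconditional noise prediction with a prompt-conditioned one, and the CFG scale controls how hard the result is pushed toward the prompt. The sketch below uses toy numbers standing in for the two predictions; it illustrates the classifier-free guidance formula, not a real model.

```python
# Classifier-free guidance: guided = uncond + cfg * (cond - uncond).
# Toy 2-element "predictions" for illustration.

def apply_cfg(uncond_pred, cond_pred, cfg_scale):
    """Blend unconditional and prompt-conditioned predictions.

    cfg_scale=1.0 returns the conditioned prediction unchanged;
    higher values push further in the direction of the prompt.
    """
    return [u + cfg_scale * (c - u) for u, c in zip(uncond_pred, cond_pred)]

uncond = [0.0, 0.2]   # toy "empty prompt" noise prediction
cond   = [0.4, 0.1]   # toy "with prompt" noise prediction

print(apply_cfg(uncond, cond, 1.0))   # just the conditioned prediction
print(apply_cfg(uncond, cond, 7.0))   # pushed much harder toward the prompt
```

This is also why very high CFG values can overshoot into artifacts (and why extra steps can help smooth that out), while low values like 3-4 stay closer to what the model would do on its own.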
@BalrogBait 1 year ago
The "With Lora" and "No Lora" images in this video are rather pointless. I'm not doubting the utility of making a Lora for a specific purpose, but the examples shown here can be easily achieved without training a Lora and instead setting your (randomized) variation seed strength to 5-10%.
@graphiydesign 1 year ago
I'm just starting to learn. Which PC or Mac should I use for the best results? I was thinking a Mac mini M2 Pro… any suggestions? Thanks for your tutorial…
@wykydytron 1 year ago
It's all fun and games until you realize you have 50+ checkpoints, over 100 LoRAs and the same number of embeddings... Seriously, there should be some A1111 addon that lists recommended settings and keywords. I get that we can press "i" when selecting a LoRA to see its most-used prompts, but it's not a very effective way. Great video btw; I didn't know I need more iterations when setting the CFG scale higher. Also, depending on what I want to achieve, I've found that going as low as CFG 3-4 can produce absolutely amazing results with some checkpoints. I feel it's very useful when we've made a mess of the prompts or when we want to do something abstract, like a steam locomotive flying in space. The AI just didn't want to do that, so low CFG and img2img with an image of an actual spacecraft finally convinced the AI that trains can be used as spaceships 🤣
@smarthalayla6397 1 year ago
Is this software an open source project? If so, why isn't there a portable version of it? Instead of installing, simply double-click and it works.
@NamikMamedov 1 year ago
Can I store my model files on another local drive? And how do I set the path to them?
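One common way to do this in AUTOMATIC1111 is to point the model folders at another drive via command-line flags in webui-user.bat. This is a sketch: the flag names below exist in recent webui versions but may vary, so check the webui's --help output, and the D:\ paths are made-up examples.

```shell
rem Sketch for AUTOMATIC1111's webui-user.bat.
rem Flag names current as of recent versions; confirm with "webui.bat --help".
rem The D:\ paths are example locations, not defaults.
set COMMANDLINE_ARGS=--ckpt-dir "D:\sd\models\Stable-diffusion" --lora-dir "D:\sd\models\Lora" --embeddings-dir "D:\sd\embeddings" --vae-dir "D:\sd\models\VAE"
```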
@SylvainSangla 1 year ago
I see that Stable Diffusion (and other AIs) are now mostly used to create anime/manga style images. Sorry for being so ignorant, but what do you guys do with these images? I mean, is it just for fun, or do you plan to create and commercialise whole mangas? I mostly use SD to create psychedelic or creepy landscapes, and I'm beginning to struggle to find new models and tools for this purpose; everything is about manga girls wearing bikinis...
@quantumaquarius 1 year ago
I keep getting "This site can't be reached" when I run Stable Diffusion. It works for a bit, then I get that message; I have no idea why.
@grandwizardnoticer8975 1 year ago
Deliberate is my current fave, but it does like to put hands in pockets that shouldn't exist! Space suit? Victorian Dress? 😅
@AraShiNoMiwaKo 1 year ago
Amazing info; definitely interested in the LoRA training.
@BradMurphy0 1 year ago
PLEASE, I have been struggling to train a LoRA. A friend wants me to turn her into, as she says, 'an AI art so I can be in all the pictures'... and I just can't figure out how to do this. I have a bit over 100 pics and, using BLIP, I have them tagged, but I just can't figure it out.
@Smudgie 1 year ago
Is it possible to save all settings once you have found a good balance between models, LoRAs, prompts, negative embeddings etc.?
@ghostsquadme 8 months ago
With all the new changes in the last year, can you make another video with your new favorite models, loras, and embeddings?
@Puppetmaster2005 1 year ago
1:53 😏 Very interesting prompt you have there, Olivio. What were you trying to achieve hmmmm? hahaha
@carbendi 1 year ago
Some models have trigger words; what are they used for?
@yolamontalvan9502 1 year ago
Why do they have crossed eyes? All the samples on YouTube from other channels have crossed eyes.
@Seethesmall 1 year ago
Hi Olivio! Your video is great; I learned a lot! And you have a photography background, which is very unique!
@johnesfioravante1828 1 year ago
Sure, we want a video explaining LoRA training!!! Please!!!
@imsystem 1 year ago
@Olivio Sarikas they already added the link to this video to the Rev Animated page
@vickkyyy 1 year ago
Thank you so much for this info, I really needed it to improve my skills. Once again, thank you so much ❤
@Icewind007 1 year ago
Very interested in Loras! I am constantly looking for new things to bring a consistency to my image sets.
@JohnEliot1978 1 year ago
+1
@senteixTrader 1 month ago
Awesome! Can you make an update of this video with recent versions of those models?
@АлександрГерасимов-с3щ
LoRA weights can also be changed with Ctrl+Up/Down.
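For context on what that weight number actually scales: a LoRA stores a low-rank update that gets added to the base model's weights, and the strength value multiplies that update. The sketch below uses tiny made-up 2x2 matrices, not a real layer, purely to illustrate the scaling.

```python
# Toy illustration of LoRA weight scaling:
# W_effective = W + weight * (up @ down), where up/down are low-rank factors.

def matmul(A, B):
    """Plain-Python matrix multiply for the tiny example matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def apply_lora(W, down, up, weight):
    """Return the base weights W with the scaled LoRA delta added."""
    delta = matmul(up, down)
    return [[w + weight * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W    = [[1.0, 0.0], [0.0, 1.0]]   # base weight (identity, for clarity)
down = [[1.0, 0.0]]               # rank-1 "down" projection (1x2)
up   = [[0.5], [0.0]]             # rank-1 "up" projection (2x1)

print(apply_lora(W, down, up, 1.0))  # full-strength LoRA applied
print(apply_lora(W, down, up, 0.0))  # weight 0: base model unchanged
```

This is why a weight of 0.8 gives a milder version of the LoRA's effect than 1.0, and why stacking several strong LoRAs can push weights far from what the base model was trained with.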
@MrFedemoral 1 year ago
Hey Olivio, which tutorial should I check to better understand the use of embeddings?
@emrahonemli 1 year ago
Please use dark mode on your browser. Super helpful tutorial; thanks a lot, by the way.
@milestrombley1466 1 year ago
I like Rev Animated, but it took up most of my storage space on my g drive.
@paulodonovanmusic 1 year ago
Great tips, thanks. Training video please! :)
@HermitagePrisoner 1 year ago
Extra useful for beginners, and also helpful for those who just don't have much time to dig in themselves. Thanks!
@OlivioSarikas 1 year ago
Thank you
@Sakur.aiMusic 1 year ago
If you click on the little "i" icon on the LoRAs and such, it will tell you what the keywords are.
@earthequalsmissingcurvesqu9359
There is so, so much more you can do with LoRAs; this info here barely scratches the surface.
@generalawareness101 1 year ago
Sad that these are all for 1.5 when 2.1 has some fantastic models as well.
@allin6262 1 year ago
Can you add these to the Photoshop Stable Diffusion plugin?
@grandtitan7967 1 year ago
I could really use a LoRA training video, as it seems most of the videos are outdated.
@booker210 1 year ago
I just want to copy a face onto one of the fantasy models... is there an easy way to do this?
@JohnChildressURock 1 year ago
I would love to know how to train my own models and create LoRAs.
@zeblacktiger 1 year ago
also i love xcelsior for girls :D :D with lora detail :D also ^^ or low ra
@jimrobinson4755 11 months ago
The link for Deliberate is now a 404.
@AndyHTu 1 year ago
Yes sir! Please make the LoRA videos :) I watched through the old ones, and since then it has been updated, so I'm too dumb to improvise lol.
@worldofgames2000 1 year ago
You are amazing! Thank you!
@KlausRosenberg-et2xv 1 year ago
Can I use Stable Diffusion online?
@jm8186 1 year ago
My fav videos thank you!! 🎉
@Adam-kx9gi 1 year ago
Share those LoRAs you made on Civitai.
@BillRob 1 year ago
Bro it feels like you make the same video every day lol
@marsgamecolony 1 year ago
sd_vae, CLIP_stop_at_last_layers
@tatianaz1237 1 year ago
I love you 😁☺, such a useful channel.
@astroni8803 1 year ago
Thank you so much !
@Steamrick 1 year ago
Hey Olivio, if you're gonna watermark your coffee link everywhere, can you find it in a higher resolution? That pixel mash is hurting my eyes...
@OlivioSarikas 1 year ago
i have the link also in my video description ;)
@GurpsSingh01 1 year ago
Your best vid so far 👌👌👌
@gawni1612 1 year ago
Dude, they can't all be the best.
@Maisonier 1 year ago
Can I train each model with 4 or 5 faces, locally on an RTX 3090? Is it possible to put 2 trained faces in 1 generated picture? Thank you. I'm new to this; I'm still learning and watching all your videos.
@stanleywtang 1 year ago
No ChilloutMix, fail!
@snowflakeandyou5983 1 year ago
Thanks for the video 🎉 I'm new to the Stable Diffusion web UI, so I'm not really familiar with it and don't really understand what each term there means. If anyone can suggest a website or YouTube video for learning the terms used in the A1111 web UI, it would be really appreciated.
@Seethesmall 1 year ago
Olivio, can I ask you a question? Can you tell me which Stable Diffusion models, LoRAs and checkpoints can be used for business? I think a lot of people are very confused, and afraid, about this.
@johnmenezes2031 1 year ago
A LoRA for style (I have seen your video on people/your face) would be very valuable, thanks. Great stuff, as always.
@huevonesunltd 1 year ago
Any recommendations for models/merges to make hyperrealistic cosplay images that work with LoRAs trained on anime models?
@jcvijr 1 year ago
Excellent video, thank you very much. It is practically a quick tutorial for AUTOMATIC1111 and the use of models.
@praneshchow 1 year ago
When I saw clip skip 1 and 2, I was just shocked by what it created. Oh, man!!!!
@myronkoch 1 year ago
If there's a normal AND an inpaint model, are they pretty much the same, but the inpaint one more thorough? Would it be OK to just get the inpaint versions and use them for everything, or are they specialized for inpainting tasks only?
@date2077 1 year ago
I want a big photorealistic model with guys and boys 20-25 years old
@OlivioSarikas 1 year ago
Each of these models can generate that!
@Pauluz_The_Web_Gnome 1 year ago
Hi Olivio, is there a way to separate your positive and negative embeddings, so that you can choose from 2 categories?
@JonathanScruggs 1 year ago
You can rename the files, like Positive_Name... and Negative_Name... That way they end up separated when sorted alphabetically.
@Comic_Book_Creator 1 year ago
Hi, thanks for another great video. May I ask if you have a tutorial that shows how to install all of this from the web UI? I had one, but can't find it anymore; it listed all these models and installed them with one click.
@mufeedco 1 year ago
Great video. I saw that you still haven't updated to the latest version of Automatic1111.
@AnTiBoDyNL 1 year ago
Your videos are easy to understand as a beginner to Stable Diffusion; thanks a lot for your helpful videos!
@scottownbey9340 1 year ago
Olivio, you had me at "Did you want to see LoRA training?" Hell yes!!
@ai_Rhapsody 1 year ago
This is a very helpful video. Thank you.
@ahtoshkaa 1 year ago
Thank you! I finally learned how to do "Clip skip"! I couldn't find it on my own.
@SayLY-i3e 1 year ago
Hi, thank you for your videos. I've had a little problem for two or three days with Stable Diffusion 1.5. First, I couldn't open it; with some research I installed this and it works again: set COMMANDLINE_ARGS=--disable-safe-unpickle. But for 2 or 3 days, when I generate images, the quality of the face is not great while the rest of the body is great (hair, body, clothes). Could you tell me why? (I haven't changed my settings or my model in the meantime.) I generate my images with your method: sampling steps 20, then hires fix: 20, denoising strength: 0.5 (2) and Extras: 4xultrasharp (2). Thank you very much in advance.
@baddealrage 1 year ago
I'm just playing around with these tools, so my advice might be wrong, but for me adding the arg "--precision full" helped a lot with detail sharpness. Also, in automatic1111 you have a checkbox to fix faces; it's not perfect, but usually it gives better symmetry. Most of the time, I will img2img the faces with a different model with more steps, because some models require more steps to finish the face details. Hope it was helpful. :)
@swisste5704 1 year ago
@@baddealrage Oh, but be careful using that --precision full flag. If you don't know the difference between 16-bit and 32-bit floating point numbers, just know the smaller one uses less memory. For VRAM-constrained setups (like mine), it could potentially increase image generation time. But in my experience (having a Quadro card), 16-bit floating anything results in worse performance. (For reference, I get about 8 seconds for one iteration, or 1 step per 8 seconds. 2GB of VRAM will do that to you.) And that img2img tip requires some learning, but it's a pretty powerful tool if you want to get those details just right!
@Aviator-ce1hl 1 year ago
I actually get better results most of the time using DPM Adaptive, but it is a bit slower than the others.
@Jfreek5050 1 year ago
Would you be willing to release the model trained on brutalist architecture? I would love to see what could be crafted from it.
@JesFinkJensen 1 year ago
For photorealism I prefer DDIM.
@Aguiraz 1 year ago
Hey, yes, please do teach us, when you have time, how to train models/LoRAs and the difference between each approach!
@resh6701 1 year ago
I strongly recommend the extensions Civitai Helper and/or Civitai Shortcut.
@SouthbayCreations 1 year ago
What is that?? A Chrome extension??
@OlivioSarikas 1 year ago
Cool, thank you. I will check that out
@codadragon 1 year ago
I noticed when you hover over CFG that it gives a brief description of what it does and how it affects the generation of your image. I don't have that in A1111 when I hover over things; is there a setting I'm missing?
@Braulio_Cyberyisus 1 year ago
Can you make a video about regional prompting?