
Offset Noise for Stable Diffusion: Model, LORA and Embeddings 

Olivio Sarikas
230K subscribers
94K views

Offset Noise for Stable Diffusion brings the ability to have dark scenes with bright areas. Create a much higher dynamic range. Dramatic light can now be easily achieved. Is this the Midjourney killer that brings amazing results and expressive light to all of your images? The Illuminati Diffusion v1.1 model is trained on Stable Diffusion 2.1 and delivers very high quality. It also works with three negative embeddings: Nfixer, Nartfixer and Nrealfixer. Nfixer improves general coherence and is always recommended. Nrealfixer makes the colors and light more photorealistic. Nartfixer creates more artistic coherence in your render. The epi_noiseoffset LORA is trained on Stable Diffusion 1.5 and will work with any SD 1.5 model. This makes it very easy to bring Offset Noise to your AI art, and no negative embeddings are required.
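For context, the underlying technique (from the Cross Labs blog post linked below) is a one-line change to the training noise: on top of the usual per-pixel Gaussian noise, add a small constant offset per sample and channel, so the model can learn image-wide brightness instead of being forced to balance light and dark. A minimal sketch (the 0.1 strength follows the blog post; NumPy is used here for a dependency-free illustration, the original uses torch.randn):

```python
import numpy as np

def offset_noise(batch, channels, height, width, strength=0.1, seed=None):
    """Per-pixel Gaussian noise plus a per-(sample, channel) constant offset.

    The offset has shape (batch, channels, 1, 1) and broadcasts over all
    pixels of a channel, shifting that channel's overall brightness.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((batch, channels, height, width))
    offset = rng.standard_normal((batch, channels, 1, 1))
    return noise + strength * offset

n = offset_noise(4, 4, 64, 64, strength=0.1, seed=0)
print(n.shape)  # (4, 4, 64, 64)
```

With strength 0.0 this reduces to ordinary diffusion noise; the nonzero offset term is what lets a model finetuned this way produce mostly-dark or mostly-bright images.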
😍WIN A Nvidia RTX 3080Ti !!!!!😍
Sign up for free to the Nvidia GTC Conference with this link for a chance to win a RTX 3080Ti
nvda.ws/3D7nNY4
#### Links from the Video ####
civitai.com/models/11193/illuminati-diffusion-v11
civitai.com/models/13941/epinoiseoffset
twitter.com/cac0e/status/1630389962129432578
twitter.com/camenduru/status/1630053866027794434
www.crosslabs.org/blog/diffusion-with-offset-noise
#### Join me ####
Join my Discord: discord.gg/XKAk7GUzAW
Join my Facebook Group: facebook.com/groups/theairevolution
Support my Channel:
/ @oliviosarikas
Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
How to get started with Midjourney: • Midjourney AI - FIRST ...
Midjourney Settings explained: • Midjourney Settings Ex...
Best Midjourney Resources: • 😍 Midjourney BEST Reso...
Make better Midjourney Prompts: • Make BETTER Prompts - ...
My Affinity Photo Creative Packs: gumroad.com/sarikasat
My Patreon Page: / sarikas
All my Social Media Accounts: linktr.ee/oliviotutorials

Hobby

Published: 27 Feb 2023

Comments: 153
@Orangesnake221 · a year ago
Mind blowing how quick everything is developing. Great video as always!
@OlivioSarikas · a year ago
Thank you :)
@pluckypluckster · a year ago
yea imo there are lots of things out there now that are illegal about how these apps are developing, but it's growing at such a rate, and apps upon apps are building off these illegal foundational settings, that eventually the greed factor will win out and the owners of these apps (Midjourney) might just have to merge with copyright holders (think Getty Images or whatever) or pay minor fees (in the millions) to legitimize their process. The legal process is waaay behind the birth of all these new companies. You can get away with it right now and make $$$$ before you get caught. "It's better to ask forgiveness than ask permission" is probably the corrupt, greedy mindset of what these people are thinking.
@frankschannel2642 · a year ago
Wow. Can't wait to experiment with this on the weekend. Thanks for the knowledge drop, Olivio!
@douchymcdouche169 · a year ago
Correction: the embeddings folder is "stable-diffusion-webui\embeddings", not "stable-diffusion-webui\models\embeddings".
@OlivioSarikas · a year ago
right! sorry i said that wrong
@nokta7373 · a year ago
Gonna try this right away, thank you Olivio.
@riccardogiovanetti · a year ago
Always very informative, clear and concise. Thank you Olivio!
@OlivioSarikas · a year ago
Thank you :)
@n8wn8wn8w · a year ago
awesome videos as always. Thank you 🙏🙏
@spockjones1521 · a year ago
Thanks for continuing to cover all these lightning speed developments!!
@OlivioSarikas · a year ago
you are welcome :)
@KadayiPolokov · a year ago
Great work as ever with the presenting Olivio 👋
@misterm6677 · a year ago
I want to give a correction regarding noise offset, as a trial-and-error tester: it has been shown that using two LORAs trained with noise offset will result in the noise offsets basically being multiplied with each other; it messes with noise wavelengths. The model is more able to learn long-wavelength (image-wide) components like contrast. If two LoRAs have fairly different datasets in terms of contrast, the clash would be exacerbated if noise offset was used. Identical datasets would cause the learned brightness/darkness to get exaggerated. You can use noise offset with only one LORA, but using noise offset with two LORAs and combining them will result in this issue. The one thing I will recommend noise offset for is a style LORA, with the rest of your character LORAs not being trained with it.
@OlivioSarikas · a year ago
Interesting, thank you
@misterm6677 · a year ago
@OlivioSarikas However, merging the noise offset model that was provided in the blog post with another model will yield better results: offset noise model + your model + SD 1.5, interpolation 1.0, and there's less chance of negative effects, unless you plan on merging that noise-offset version of your model with another model. It's just better to do your personal merge first and then do the offset noise model method.
@h20dancing18 · a year ago
looks awesome
@pixelinitrate · a year ago
Great video, again!
@ryry9780 · a year ago
Ooooh awesome!
@heiko4297 · a year ago
Never ever was I able to create such beautiful images to start with xD
@OlivioSarikas · a year ago
now you can :)
@heiko4297 · a year ago
@OlivioSarikas I really have to dig deeper into that field. Have to watch more of your videos :D
@OlivioSarikas · a year ago
@heiko4297 watch them all #pokemonslogan 😅
@heiko4297 · a year ago
@OlivioSarikas I gotta watch em all!
@theiadesignlab · a year ago
nice info thanks!!!
@serta5727 · a year ago
Offset Noise Rocks ❤🤓
@OlivioSarikas · a year ago
Totally nooooice nooooise ;)
@gu9838 · a year ago
some nice updates from this thing, might have to look into it!
@MrMsschwing · a year ago
mind blowing stuff on a daily basis... I wonder where we will be in 12 months 😱😱🤯🤯🤯
@OlivioSarikas · a year ago
Me too. Probably already playing with Captain Picard in the Holodeck ;)
@serta5727 · a year ago
Offset Noise is so simple and cool
@OlivioSarikas · a year ago
the rule of cool :)
@RhysAndSuns · a year ago
is the fourier transformation so simple? 🤣
@HB-kl5ik · a year ago
Thanks to theovercomer8 who implemented it :). Then everyone came to know about it and applied it in their respective trainers.
@nic-ori · a year ago
Thanks.
@amj2048 · a year ago
this is awesome news, thanks for sharing!
@OlivioSarikas · a year ago
you are welcome :)
@iKNuDDeL · a year ago
Great video and thanks for mentioning the Lora. You also could have mentioned the weighting, which for Loras is also essential ;)
@EllenVaman · a year ago
what a Legend :)
@godned74 · a year ago
I find I can achieve all the above using the basic model simply by adjusting my prompt, resolution, steps, cfg and finding the right seed. It's all in how you set up your prompt and negative prompt. Best settings: VRAM usage polls per second during generation: set to 0. Move face restoration model from VRAM into RAM after processing: on. Move VAE and CLIP to RAM when training if possible (saves VRAM): on. Turn on pin_memory for DataLoader (makes training slightly faster but can increase memory usage): on. SD VAE: off or 0. Checkpoints to cache in RAM: 0.
@michaelli7000 · a year ago
Nice one, now it does look like sth from MJ 😀
@milestrombley1466 · a year ago
Man, there are so many text-to-image applications to use!
@Puckerization · a year ago
Great video, thanks. The only problem with the Illuminati textual inversions is they can really change the original; they can act like a seed. (I like some of the results though)... Whereas the epi_noiseoffset LORA only affects the lighting.
@OlivioSarikas · a year ago
yes, makes sense. It's a different model. I will look into changing just the lighting with a lora, that sounds cool
@76abbath · a year ago
Let's go!
@OlivioSarikas · a year ago
go go go!
@TheGalacticIndian · a year ago
As always a very informative video! I just signed up for the NVIDIA conference. It would be cool to win a 3080Ti 😍 I wish you luck!🍀
@ShawnFumo · a year ago
Nice video! Just one correction: I think it isn't really about dynamic range per se. Usually lack of dynamic range would mean either losing details at the ends of the range (lost detail in the shadows, or the highlights are blown out), or the image is just low contrast without anything actually bright or dark. What has been happening with Stable Diffusion is a weird thing that doesn't have an analogue in real photography. Basically the light and dark in the image have to balance out almost perfectly. A low contrast image might fit that description, but one with high dynamic range could fit it as well.

The place the problem comes in is if you have an image that should be mostly bright or mostly dark. Like say you have a snowy landscape with a white cat on it. That would normally be an overall bright image, but SD won't allow it. So it may make things grayer, but it also might keep it bright white and then add a bunch of dark rocks in shadow on one half of the image. The same goes for a night shot. It might make it grayish, but it also might keep part of the image dark and then happen to have some bright neon lights in the foreground taking up some space. This also meant it was impossible to just have a small logo made of black lines on a white background. It'd either have to add a black border around the edges of the image or put the logo on gray instead, since otherwise the amount of white would be too much compared to the small black lines.

So the offset noise eliminates this need for overall balance in the image, letting there be a mostly dark or mostly bright image. Basically "high key" and "low key" photography. And since those styles are pretty common for cool looking images, it ended up being a big limitation. What's really interesting about all of this to me is that the AI ended up being very creative about how it fulfilled this limitation. So much so that no one even noticed it was happening until recently!
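The balance described in the comment above can be seen numerically: plain Gaussian noise has a per-image mean very close to zero, so the training targets never deviate much in overall brightness, while offset noise widens exactly that distribution. A quick illustration (the shapes and the 0.1 strength are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
images, pixels = 1000, 64 * 64

plain = rng.standard_normal((images, pixels))
offset = plain + 0.1 * rng.standard_normal((images, 1))  # one shift per image

# Spread of per-image means: about 1/sqrt(pixels) for plain noise,
# dominated by the 0.1-strength offset term once it is added.
print(plain.mean(axis=1).std())   # ~0.016
print(offset.mean(axis=1).std())  # ~0.10
```

In other words, with offset noise the model actually sees samples whose overall brightness differs, which is what makes "high key" and "low key" outputs learnable.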
@artoke84 · a year ago
very useful info!
@okachobe1 · a year ago
Thanks for the info!
@AC-zv3fx · a year ago
Great explanation! Needs to be pinned
@OlivioSarikas · a year ago
Thank you, that's really great info!
@vincentcarlucci1259 · a year ago
Excellent clear explanation!
@Airbender131090 · a year ago
Well it can. You can make a grey fill image and use it in image2image with denoising at about 0.9 and you get better contrast
@OlivioSarikas · a year ago
on a regular SD 1.5 model?
@venpetrov · a year ago
So basically after each of your videos everything else seems sooo "last week" technology, and you update us so we don't lag behind too much 😂.
@EditArtDesign · a year ago
In which folder should the models and auxiliary configurations be located??
@martinteadrinkereklund4285 · a year ago
There is a minor difference using the yaml file compared to not. But I guess it's better to use it, as this is the intended setup.
@Vlow52 · a year ago
Definitely not a v5 killer, but SD has much more bedevilment in terms of usability. We shall wait for SDXL
@carpepoulet4943 · a year ago
your channel is excellent, the best of the bevy of AI channels that have developed recently, kudos sir, and many thanks.
@EditArtDesign · a year ago
What are these additions like lora offset and epi_noise for?? If I understand correctly they are needed for darker images, but how does this affect the Illuminati diffusion model, since it already gives contrasting photos?? And where should these additions go, how do you work with them??
@JimYeo · a year ago
Great review again. Just an FYI, there's a better place to put the Lora model: in the extra extensions folder. You can then stack multiple Lora models on top of each other with different weights, as well as not having to call the Lora model in the text. There's a slider function that acts to increase or decrease the Lora model.
@iKNuDDeL · a year ago
That's an extension; automatic has built-in Lora usage, so the explanation is correct until you install the extra networks extension.
@99errorcode-sparrow · a year ago
I love your videos and can't wait to get my hands on some of the tools you showcase on the channel. So I'm thinking of getting an ASUS 2060 8GB to get into using AI tools. My old AMD card doesn't support any AI tool and my budget is a bit low. Any advice from some experts is appreciated guys. thanks!
@Wtfukker · a year ago
just a heads up. my 2080s 8gb is doing 768s very fast.. so for now we're good :)
@OlivioSarikas · a year ago
look into online benchmarks and ask other users on discord and reddit about their results to get the best bang for the buck
@glitter_fart · a year ago
if you haven't already bought it, look into using datacenter tesla cards; p40's are very nice once you have it all set up (some installs might be easier than others depending on your setup)
@IbisFernandez · 8 months ago
what if i want the opposite: less darkness, less highlights? Looking to generate skin textures for 3d models.
@MrPlasmo · a year ago
we are coming for you MJ
@OlivioSarikas · a year ago
and in big leaps too :)
@soren-1184 · a year ago
Can this be used with Deforum?
@ayokrezy · a year ago
Can you pls make a video on how to train a style in stable diffusion, like Spider-Man: Into the Spider-Verse, Arcane, etc. There are only videos about how to train a character, but no videos for style 🥲
@tripleheadedmonkey6613 · a year ago
It's called Dynamic Range. FYI. Great video! Thanks for all the work!
@OlivioSarikas · a year ago
What did I say in the video? pretty sure I say dynamic range - also, thank you
@tripleheadedmonkey6613 · a year ago
@OlivioSarikas I don't think you specifically said the phrase. You called it Range and Dynamics separately though and I just didn't make the connection. Thanks for all the effort.
@tripleheadedmonkey6613 · a year ago
@OlivioSarikas Also there is a typo in the description: you've called it "Ragen" instead of "Range" lol.
@tripleheadedmonkey6613 · a year ago
And while I have your attention, check this video out from Mickmumpitz. If you've not already seen what he is doing with ControlNet masks and Blender, it's absolutely incredible stuff and adds a new layer of realism and life-like depth to the work that is possible. /watch?v=0tFe9dashgI
@ExzertVR · a year ago
Quick question. I just set up the Kohya folder for Lora because I wanted to use these Lora models. Did I even have to do that? Do these Lora models work without that, and is that Kohya folder just for training?
@RohithMusic · a year ago
Can I use these embeddings in ComfyUI?
@iamtheoaa · a year ago
you do have an epic beard!
@OlivioSarikas · a year ago
Needs some more color though ;)
@waynelai354 · a year ago
Big reveal soon. It's AI generated!
@Vincent-mx4rk · a year ago
I wish there was a good way to dreambooth my face on this.
@OlivioSarikas · a year ago
maybe with the lora i show at the end and a dreambooth that is already trained on you
@adarwinterdror7245 · a year ago
I didn't understand how to install the embeddings. In what folder should those 3 files go?
@duubhs · a year ago
I think I'll train a RAW / log model, so I can color grade images myself...
@springheeledjackofthegurdi2117
are there any 1.5 versions for the embeddings? as I can't get v2 embeddings to not get skipped on load up
@OlivioSarikas · a year ago
The negative embeddings are specifically for this model
@Bizarriada · a year ago
Please a tutorial for people that use Google Colab
@Xenocyde3000 · a year ago
Illuminati does not work in InvokeAI because it does not actually support SD 2.1 models other than the base one just yet.
@OlivioSarikas · a year ago
oh! i didn't know that. strange. I thought if it supports the base model it would work with other SD 2.x models too
@theplayerformerlyknownasmo3711
Midjourney is social. Me and my friends all have subscriptions and we have a private server where we enjoy hours of entertainment bringing our creative, funny, and downright wrong ideas to life with a few commands and some words. Been like this for months now.
@zeynepzap · a year ago
Olivio thank you for your great videos. I'm looking for something like facial expression transfer. If you know of any other one that exists besides Thin-Plate-Spline-Motion-Model, can you please consider making a tutorial about it
@OlivioSarikas · a year ago
have you tried controlnet? it's really good with the canny model, but i could imagine that with a scribble and just a crop of the face you could create any face expression. Or a 3D model input
@zeynepzap · a year ago
@OlivioSarikas Yeah I tried it, it doesn't give me anything clear, so much blur or a mix of 2 faces even though I have the negative prompts; also I'm using image to image for consistent-character expression
@OlivioSarikas · a year ago
@zeynepzap how do you get two faces with canny active? You sure you enabled controlnet? Also it doesn't work with 2.1, only 1.5 so far
@zeynepzap · a year ago
@OlivioSarikas Yes it was active. Two faces on top of each other was weird. I activated ControlNet. Without it I can get an output close to my input, but with it enabled everything goes weird. You can see my process. I'm trying to add a link to my comment to an image website, but I'm guessing YouTube deletes my reply because of it. Where can I send it to you?
@OlivioSarikas · a year ago
@zeynepzap can you send some images in my discord group? please tag me in the messages. i would like to see that.
@brunolopezmosconi8430 · a year ago
I'm trying to make a consistent character for comics. With MJ it's almost impossible; I found a lot of stuff that could work well in SD. The problem is my laptop... is there a way to use this amazing stuff in the cloud or in an app?
@iKNuDDeL · a year ago
Google Colab
@user-ms2is2qs9g · a year ago
do it for 1.5; for many it simply does not work, there is not enough memory.
@OlivioSarikas · a year ago
😍WIN A Nvidia RTX 3080Ti !!!!!😍 Sign up for free to the Nvidia GTC Conference with this link for a chance to win a RTX 3080Ti nvda.ws/3D7nNY4
#### Links from the Video ####
civitai.com/models/11193/illuminati-diffusion-v11
civitai.com/models/13941/epinoiseoffset
twitter.com/cac0e/status/1630389962129432578
twitter.com/camenduru/status/1630053866027794434
www.crosslabs.org/blog/diffusion-with-offset-noise
#### Join me ####
Join my Discord: discord.gg/XKAk7GUzAW
Join my Facebook Group: facebook.com/groups/theairevolution
@TomiTom1234 · a year ago
@LeonardoTheAlien Me neither
@OlivioSarikas · a year ago
@LeonardoTheAlien that's a special giveaway Nvidia is doing for my channel
@venpetrov · a year ago
done! 🤞
@flkadjsfklajfkl · a month ago
Nfixer, Nartfixer and Nrealfixer cause especially faces to be glitchy and artifacted. Any ideas why, or how to fix this?
@ArmandoGuevaraCamargo · a year ago
Do you know something to fix hands? 👍
@sock501 · a year ago
ControlNet's new Guidance Start feature is what you want
@OlivioSarikas · a year ago
I will do a video on that soon
@paulocoronado2376 · a year ago
I didn't download the .yaml file and it worked the same 🤔
@TerryThayer · a year ago
where can i find stable diffusion 2.1 for automatic1111, and how-to-install info? Illuminati Diffusion isn't on civitai anymore? the link doesn't work
@GerardoHdz23 · a year ago
Why do some instructions in the prompt have parentheses, and even double parentheses? I have seen several people do it but I cannot find any documentation on the matter
@joakimkling1072 · 7 months ago
parentheses are about weight
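For reference: in the Automatic1111 web UI, each pair of parentheses multiplies the enclosed tokens' attention weight by 1.1 (square brackets divide by 1.1), and `(word:1.3)` sets the weight explicitly. A tiny helper to see what nesting works out to (the function name is just for illustration):

```python
def paren_weight(depth: int) -> float:
    """Effective attention weight for `depth` nested parentheses in A1111."""
    return 1.1 ** depth

# (word) -> 1.1, ((word)) -> 1.21, (((word))) -> 1.331
for d in range(4):
    print(d, round(paren_weight(d), 3))
```

So `((word))` is the same as writing `(word:1.21)` explicitly.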
@littleprincessnene · a year ago
dumb question: how do I get LORA and how do I install LORA?
@DenLe222 · a year ago
Where is the link for Offset Noise? How can I try it?
@PaintDotSquare · a year ago
How can I use SD 2.1 with ControlNet?
@OlivioSarikas · a year ago
doesn't work yet
@MacS7n · a year ago
First
@OlivioSarikas · a year ago
Yay! You win a heart like :)
@jeffdavis5196 · a year ago
Custom LORA models killed Midjourney for me some time ago. This offset noise looks like it gives me even less reason to use Midjourney.
@Comic_Book_Creator · a year ago
some images, like the black woman with glasses, are stock images
@OlivioSarikas · a year ago
yes, because i'm talking about photography at that point in the video.
@VaibhavShewale · a year ago
lol my system is so weak that i can't even install the diffusion, it just hangs my system
@nomorejustice · a year ago
thank you daddy
@jozsefmihalyi2818 · a year ago
others tend to do it by giving the link of what they are talking about. Why don't you do this? Secret?
@hogstarful · a year ago
I'm getting a NaN error, why is that
@Whoisthisg · a year ago
doesn't Midjourney use Stable Diffusion?
@OlivioSarikas · a year ago
not anymore
@TheWuCepticon1981 · a year ago
No way of using this in Colab?
@OlivioSarikas · a year ago
Just use the Automatic1111 Colab and upload this model into it
@displacegamer1379 · a year ago
The reason why Stable Diffusion hasn't beaten Midjourney is that Stable Diffusion doesn't do non-people images as well as Midjourney does. People are getting better at modeling people for Stable Diffusion, but images that aren't people-related still can't even get close to what Midjourney does.
@OlivioSarikas · a year ago
hm.... interesting point
@HavoJavo · a year ago
@OlivioSarikas We actually spent a lot of time/effort finetuning non-person images for our 3dkx v2 model :)
@Ssquire11 · a year ago
SD seems hard to set up. MJ will reign so long as it stays easier to use.
@dominiclynx8886 · a year ago
If it's hard to set up but worth it to get the same results as MJ, then it's time to switch up.
@KevinSandy2 · a year ago
Lost. I can't follow any of this. It's now out of my reach.
@pluckypluckster · a year ago
i wonder if AI will go the way of DA and ban using artist names in prompts, opting more to ~describe the style~ you want. You can't copyright style, but using artist names to get the style (@6:10 in the prompt behind you) might be something in the future, or a path anyway that I can see becoming a legal issue, much like all the AI apps that clearly used high quality copyrighted works to train on. but for the meantime who cares... apparently. :D
@pluckypluckster · a year ago
cause really these apps aren't trained on style, they are trained on artist names. If you type the same style an artist uses in place of the artist name you will get a totally different image. yes there are finer quality details than that, but the general idea holds true.
@ChadGauthier · a year ago
Ugh, just tell us what's the best
@OlivioSarikas · a year ago
depends on what you need ;)
@yeastydynasty · a year ago
Why I keep using Midjourney is its rendering on their systems. I'd have to shell out thousands upon thousands of dollars on GPUs to produce detailed renders in under a minute. The con is Midjourney doesn't deliver exactly what you as an artist want.
@fpvx3922 · 9 months ago
"This resource has been removed by its owner. It is no longer available"
@relaxandlearn7996 · a year ago
the model is down because Midjourney doesn't want someone to compete with them
@jendabekCZ · a year ago
There will be a "killer" every few months ...
@saintcyberchaos265 · a year ago
not light, contrast. Higher contrast in images simulates our eyes' adjustment to various light intensities in real life (for example: a darker coloured background gets darker around bright parts) and images look more realistic even though they are actually sort of not
@mybakwaaschannel · a year ago
mistake at 4:57: you said the models folder then the embeddings folder.. but the embeddings folder is in the main directory, not inside the models folder
@Wtfukker · a year ago
can ya people just create one mega model diffuser whateva, and call it a day? :P can't keep up with all this shit
@OlivioSarikas · a year ago
Let us have some fun before we go back to the "one fits all" solutions like Lightroom or Facebook ;)