
ULTIMATE Upscale for SLOW GPUs - Fast Workflow, High Quality, A1111 

Olivio Sarikas · 233K subscribers · 117K views

Published: 25 Sep 2024

Comments: 174
@BenjiPhoto · 1 year ago
I like the 'Nickleback' upscaling model. It feels more natural and gives a photo feel.
@Pahiro · 1 year ago
You're very calm for the game changer that this is. I've created two amazing dual screen wallpapers today and a high quality mobile wallpaper because of this. Incredible!
@dominicjan · 1 year ago
yeah, saying that this is a game-changer is underselling it. this stuff is bonkers
@orangehatmusic225 · 1 year ago
lol just wait till you all see what's next then.. heh your brains might explode. better prepare yourselves.
@megal0maniac · 1 month ago
This works so well!!! I didn't have the ESRGAN folder, so I created one in the path shown and it worked like a charm! Some of the settings shown have since been removed, so I paid them no mind. I use MeinaUnreal with 0.2 denoise, ESRGAN 4x Anime6B and the seams fix. I will play around some more and update if I find a better result. Thank you so much again, Olivio!!
@gasia112 · 1 year ago
Your work reminds me of the Office handshake meme. Even though I trained this model, I didn't realize it could be used in this way.🤣 Thank you for your video as always!
@stibo · 1 year ago
I've learned from my research that using exactly the same prompt and removing most of the negative prompt gives you by far the best results.
@blizado3675 · 1 year ago
Need to test that, thanks for this hint.
@stibo · 1 year ago
@@blizado3675 Let me know if it works. I'm curious whether that only works on my images. :)
@ronaldp7573 · 1 year ago
Great delivery, no fluff, overall an underrated channel. Keep it up brother, you are a pillar of the AI art community
@dominicjan · 1 year ago
This upscale method is just what I have been looking for. Now I don't need to buy an expensive computer to do stuff. Wow.
@vonpheusarts6948 · 1 year ago
Love you man! Tried this on my AMD GPU and it works great! Super clear and it kinda fixes some bad anatomies on my images too! This is a great help thanks a lot! You're always the best!
@Lansolot · 1 year ago
I found out about this 3 days ago and have spent so much of my free time in A1111. You can upscale to a level of detail the eye cannot even see in real life. When upscaling over 1800px, I recommend gradually increasing your denoising and SD resolution (512 to 1024). This allows it to add more detail; if you don't, things will begin to get blocky. (I do not use ControlNet Tile in my workflow.)
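The ramp-up this comment describes can be sketched as a small helper. This is purely illustrative: `upscale_schedule`, the 1800 px threshold behaviour and the ramp endpoints are assumptions based on the commenter's rule of thumb, not anything from the video.

```python
def upscale_schedule(target_px: int) -> dict:
    """Hypothetical helper for the commenter's advice: past ~1800 px,
    gradually raise denoising strength and the tile size (512 -> 1024)
    so new detail can be generated instead of the result going blocky."""
    if target_px <= 1800:
        return {"tile_size": 512, "denoise": 0.2}
    # Scale both linearly between 1800 px and an assumed 4096 px ceiling.
    t = min((target_px - 1800) / (4096 - 1800), 1.0)
    return {
        "tile_size": int(512 + t * 512) // 64 * 64,  # keep a multiple of 64
        "denoise": round(0.2 + t * 0.15, 2),          # ramp 0.20 -> 0.35
    }

print(upscale_schedule(1024))
print(upscale_schedule(3840))
```

The exact numbers matter less than the shape: both knobs increase together with the target resolution.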
@Chrono.. · 1 year ago
Do you think it's better to upscale images without the ControlNet's Tiles model?
@hypnotic852 · 1 year ago
I tried your method, but the AI is adding faces and warping the skin of my subject; how did you mitigate that? My prompt also only included quality tags and nothing pertaining to my subject.
@divtag1967 · 1 year ago
@@hypnotic852 Change your sampling method to DPM++ 2S a Karras; this will most likely fix 95% of your problems with warped skin and strange details. Also, for the best result, do not jump straight to 4x upscaling. Go slow and upscale 2x at a time; this keeps the small details from the original image much better. I only use SD upscale and 4x-UltraSharp.
@lithium534 · 1 year ago
@@hypnotic852 If the option mentioned below does not help, use ControlNet. That will almost 100% work.
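The "2x at a time" advice from this thread amounts to planning a chain of intermediate sizes instead of one 4x jump. A minimal sketch (the `plan_passes` helper is hypothetical):

```python
def plan_passes(src: int, target: int) -> list[int]:
    """Sketch of the '2x at a time' advice: compute the chain of
    intermediate widths rather than jumping straight to the target."""
    sizes = []
    w = src
    while w < target:
        w = min(w * 2, target)  # never overshoot the target size
        sizes.append(w)
    return sizes

print(plan_passes(512, 2048))  # two 2x passes instead of one 4x pass
```

Each pass gives the sampler a source image that is already close to the working resolution, which is why small details survive better.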
@Lowbow0 · 1 year ago
But guys, does this all also work in batch? "Upscale animations"??
@betodeth · 1 year ago
I want to thank you, Olivio; the hard work you put in, the quality of your videos and, best of all, the way you explain are exceptional. The community needs you now. Please make a guide on how to install and use Deforum; the trend is getting bigger and we need to move into video. That's the future of generative AI.
@S4SA93 · 1 year ago
Okay, I LOVE his videos a lot too, they help so much; he's like the best YouTuber for this right now. But come on, that doesn't look like "hard work and quality videos". I mean, he is basically sitting in his Stable Diffusion UI explaining the buttons.
@ArielTavori · 1 year ago
Note that depending on your settings and model, in some cases with other ControlNets I've gotten great results even at 8 samples, which can speed things up dramatically, especially in the concept stage!
@noxious_2.06 · 1 year ago
0:15 hahahaha xD In French we could say "la bonne cam", "les bons tuyaux" or "du lourd" hahaha. Btw, thanks for all your videos and tutorials! They are very clear and easy to understand :D
@blacksage81 · 1 year ago
This process is indeed 100% worth it for me. I'm living life on Toaster Status so the more I can rope in my CPU into the process the better.
@OlivioSarikas · 1 year ago
#### Links from the Video ####
Ultimate Upscale Extension: github.com/Coyote-A/ultimate-upscale-for-automatic1111
Ultrasharp Model: upscale.wiki/wiki/Model_Database
Ghostmix Model: civitai.com/models/36520/ghostmix
PhoenixDress Lora: civitai.com/models/48584/phoenixdress
#### Videos from this Video ####
ULTRA Sharp Upscales: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-A6dQPMy_tHY.html
Amazing SD Models: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ezNDCWhv4pQ.html
Controlnet 1.1 Install: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zrGLEgGFJY4.html
@v-ia · 1 year ago
(wrong link for extension)
@v-ia · 1 year ago
/Coyote-A/ultimate-upscale-for-automatic1111
@OlivioSarikas · 1 year ago
@@v-ia Thank you! I fixed it. That was the link from the video I wanted to do before this ;)
@hellblazerjj · 1 year ago
Best AI Art tutorials on the Internet hands down. Thanks so much for all of the help.❤
@BabylonBaller · 1 year ago
Bruh, u da man! So many golden nuggets. Stay blessed brother
@callagain77 · 1 year ago
Awesome!!! The upscaling not only worked wonderfully, it also improved inpaint additions, such as a necklace, longer sleeves, a bracelet and a ring, to perfection on my very first try. Thank you!🙂
@MHumanoid · 1 year ago
Thanks for sharing, especially the ControlNet part. I'm getting significantly better upscales this way
@kofteburger · 1 year ago
To my surprise, this works with the DirectML version on Radeon cards. Tried it with my RX 6700 10 GB. Worked without a flaw.
@Maisonier · 1 year ago
Your videos are amazing! Thank you so much for sharing this.
@blizado3675 · 1 year ago
The ears often also need a fix; you can see that very well on the last image. So pay attention not only to the eyes but also to the ears, and revise them.
@pathet11cc · 1 year ago
Battery life depends directly on you. If you need battery life, there's no point in getting the ROG Ally instead of the Deck. In The Witcher, at 30 fps with FSR and matching TDP settings, the Deck easily lasts about 3.5 hours.
@baxter987 · 1 year ago
Olivio is always one day behind Sebastian Kamph. If Sebastian releases something, expect Olivio's take shortly thereafter!
@Pahiro · 1 year ago
Still, I love Olivio's delivery.
@CoreDump07 · 1 year ago
Your videos are very clear. Very nice work.
@morganandreason · 1 year ago
Thank you for this very lucid and pedagogical tutorial.
@evantspurrell · 1 year ago
Thank you, your videos are very well put together and highly watchable
@macronomicus · 1 year ago
Nicely done, so many tips & tricks mixed in, all very helpful. Cheers! I was already using this method but wasn't quite getting the best results as often as I can now.
@thomassynths · 1 year ago
Results vary. Most of the time, I get artifacts due to inconsistencies between tile information. So for like 10% of the time, it looks amazing and the other 90% has serious inconsistencies.
@liakong · 1 year ago
I get the same result as you: severe tiling.
@chansantawisuk · 1 year ago
What denoising strength do you guys use? Set it to the lowest, e.g. 0.1, and see if it helps. In short, it keeps the AI from changing your image and sticks to the original. ControlNet also helps, as it keeps the AI from altering the image too much, although I do not use it.
@TheDocPixel · 10 months ago
@@chansantawisuk Except it doesn't add detail and just enlarges the image, which then looks like an illustration.
@chansantawisuk · 10 months ago
@@TheDocPixel Yeah, that is true. I don't always use denoising strength 0.1, only when I got what I wanted and just have to enlarge the image.
@neverend9302 · 1 year ago
Thanks for doing these. I have been working on night time cities with all of the little office lights and it isn't very forgiving.
@johiny · 1 year ago
Amazing video, Olivio!!!
@leebz · 1 year ago
Very insightful, thank you.
@djwhispers3157 · 1 year ago
Thanks for this, awesome video.
@BloodRedKittie · 1 year ago
Thank you so much for your Tutorials!
@DJHUNTERELDEBASTADOR · 11 months ago
Great stuff, Olivio!!! Thank you very much.
@farfletched · 1 year ago
Been loving your content. Finally subbed! Keep it up, amazing info.
@OlivioSarikas · 1 year ago
Thank you, and welcome to my channel.
@Med-ou7we · 1 year ago
It's really working guys...
@alwayswhiningjew · 8 months ago
I advise you to set the padding to 160; it gives a much better result and you will get rid of strange artifacts. Also, the "ControlNet is more important" control mode is a must-have.
@danieldroidser3275 · 1 year ago
Amazing results. Super detailed and with lots of ornaments. Nice to see such a nice composition. Mind sharing the model and the prompts (positive and negative) you used?
@danny6855 · 1 year ago
Thanks a lot, bro!
@RonnieMirands · 1 year ago
It's important to always keep the seed at a value other than -1; otherwise the tiles will change.
@CustomGraphixYT · 1 year ago
You should give Remacri a try; way better than UltraSharp IMO, the results speak for themselves. Speed is faster and the output looks way better.
@ddra9446 · 1 year ago
Thank you very much!
@MrSongib · 1 year ago
😊Noice😊 Ty sir and Have a nice day.
@PhilippSeven · 1 year ago
Finally, ControlNet with Tile is working as it should! Btw, using the Tile model with a "one pass" upscale is useless; I tested it.
@OlivioSarikas · 1 year ago
What do you mean by "one pass"?
@PhilippSeven · 1 year ago
@@OlivioSarikas I mean when you upscale in img2img without the script, for example a 512x512 image to 1024x1024. In that case there is no point in using this model in ControlNet. To preserve the details you can use "depth" or "canny" instead.
@rickarroyo · 1 year ago
Hi Olivio, at 06:11, why turn on "seams fix" if the type is set to "none"?
@Barzing0 · 1 year ago
After multiple (a lot of multiple) tests, I advise pushing the Ultimate SD Upscale width to the max (1024) to lower the impact of seams, and you can enable Tiled Diffusion (diffuser only) to have more control over denoising (no upscale).
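The reasoning behind pushing the tile width to 1024 can be made concrete by counting tiles and the internal seams between them. This is a back-of-the-envelope sketch: `tile_grid` is a hypothetical helper, and real Ultimate SD Upscale tiles also overlap by the padding amount.

```python
import math

def tile_grid(width: int, height: int, tile: int):
    """Count tiles and internal seams for a simple non-overlapping
    grid. Larger tiles -> fewer seams, at the cost of more VRAM per
    diffusion pass."""
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    seams = (cols - 1) * rows + (rows - 1) * cols
    return cols * rows, seams

print(tile_grid(4096, 4096, 512))   # many tiles, many seams
print(tile_grid(4096, 4096, 1024))  # a quarter of the tiles, far fewer seams
```

Quadrupling the tile area cuts the seam count by well over half, which is exactly why the bigger tile size lowers the seam impact.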
@Ryuuko3 · 1 year ago
Thank you
@ksiobrga · 1 year ago
Very nice!
@miclasher8884 · 1 year ago
I want to add that you can use the built-in script (SD upscale). Also, you can try adding only "detailed" to the prompt and using the Heun sampler.
@NotThatOlivia · 1 year ago
Awesome tutorial!!!
@OlivioSarikas · 1 year ago
Thank you :)
@NotThatOlivia · 1 year ago
@@OlivioSarikas No, this time: TY!
@AorSoda · 9 months ago
Wow, this works with a GTX 1650, thanks.
@treksis · 1 year ago
again thank you...!
@sylvainh2o · 1 year ago
Wow this room looks amazing with the lights and everything! Do you have a video talking about your setup?
@blizado3675 · 1 year ago
Uhm, that is a green screen; the background is also an AI-generated image, which makes a lot of sense given the topic of his channel.
@ddra9446 · 1 year ago
Here is a suggestion from me: if you think the steps are too high and you cannot wait, or your PC cannot handle it, just decrease the denoising strength to around 0.1 or 0.2 and the steps will be reduced significantly.
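The speedup this suggestion relies on comes from how img2img samples: it only runs roughly steps × denoising-strength of the scheduled steps. This is assumed typical A1111 behaviour; the exact rounding varies by sampler and version.

```python
import math

def effective_steps(steps: int, denoise: float) -> int:
    """Rough sketch: img2img skips the early part of the schedule,
    so the work done is about steps * denoising_strength."""
    return max(1, math.ceil(steps * denoise))

print(effective_steps(50, 0.75))  # most of the schedule runs
print(effective_steps(50, 0.2))   # only a fifth of the steps
```

So dropping denoise from 0.75 to 0.2 cuts the sampling work by nearly 4x, on top of keeping the image closer to the original.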
@NateWilliams-e3w · 1 year ago
Have you done a video on the Block Weight extension? I find it really cool how you can control LoRA interactions with the image by telling it to only affect the start of the image generation, or the middle, or the end, or every other step. It would be cool to see you play around with this extension if you don't have a video on it already. It was in the Available list of extensions in A1111. Btw, the Ultimate SD Upscale is in the Available list of "officially supported" extensions as well; you can just install it from there.
@DeSibyl · 1 year ago
Every time I do this in img2img, the console writes out for each tile render: "could not find upscaler named , using None as a fallback." I am using the R-ESRGAN 4x Anime6B one, and I don't think I get this issue when using the high-res fix. I'll try a different upscaler and see if it still gives that error. The process doesn't get interrupted at all and it does actually finish the image, but I'm not sure whether it actually ran it through the upscaler model or whatever.
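A hedged guess at what produces that console message: the upscaler is looked up by name, and an empty or mismatched name silently falls back to "None", i.e. no ESRGAN pass at all. The helper below is a reconstruction for illustration, not the extension's actual code.

```python
def pick_upscaler(name: str, available: list[str]) -> str:
    """Hypothetical reconstruction of the name lookup behind the
    'could not find upscaler named X, using None as a fallback'
    warning: an unknown or empty name means no upscaler is applied."""
    if name in available:
        return name
    print(f"could not find upscaler named {name}, using None as a fallback.")
    return "None"

installed = ["None", "Lanczos", "R-ESRGAN 4x+ Anime6B"]
print(pick_upscaler("R-ESRGAN 4x+ Anime6B", installed))  # exact match: found
print(pick_upscaler("r-esrgan 4x anime6b", installed))   # mismatch: fallback
```

If this reconstruction holds, the image really is finished without the upscaler model, which would explain why the run completes anyway.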
@EllenVaman · 1 year ago
You da best!!!!! :)
@FrizFroz · 1 year ago
I wonder how this works in tandem with Tiled Diffusion/Tiled VAE, and if it's at all worth the effort to run both together with ControlNet Tile and the SD upscaler? Will it increase the quality even further?
@karikaturdigital6123 · 1 year ago
As for me, I use the third-party application Topaz Photo AI, which can be more detailed and faster.
@teambellavsteamalice · 1 year ago
Awesome video again! I do wonder, is there a way to deconstruct an image into various elements: background, body shape, clothing, face? IIRC, with a LoRA you can train your own model so you have a consistent face from any angle. If you like a certain item or piece of clothing, could you do the same for that? Maybe save it in a wide multi-pose image (front, 45°, side and the other side if not symmetric) with all the PNG info attached, like you had here? I've seen you split images into "tiles", not squares but with auto-detection of the borders of certain areas. Could you use this to "tag" each part of the image as background, props, model, face, etc., and then use ControlNet to reference the right part and gain consistency? Maybe then things like eye color or the embroidery wouldn't change with upscaling. This would give greater control if you want to tweak an image, turn a model slightly or change the expression. Even better if you could change outfits and swap props in and out (like jewelry).
@isarapp6262 · 1 year ago
That's very nice, but Euler should still be "Euler", not "Youler"! He was Swiss.
@yiluwididreaming6732 · 9 months ago
Why Euler? It tends to change things from the original image; that's why the eyes were botched in the first take. Try DPM++ 2M Karras instead: 20 steps, denoise 0.2 or lower. Consistent results (and faster!). Efficient workflow. Nuff said.
@afccceb · 1 year ago
It's kinda slow to process one image, but the results are amazing.
@nomad11811 · 1 year ago
There is often the problem of seams, with obvious seams appearing at the edges of each tile...
@marcdevinci893 · 1 year ago
How would this compare to upscaling in Gigapixel AI in terms of detail? I would think this gives better results (of course slightly different, since it's generating details).
@BF-non · 1 year ago
This is better; Gigapixel does not really add that much detail.
@cyberhard · 1 year ago
I don't understand how this is for "slow" GPUs. You gave a workflow but don't describe how it benefits a "slow" GPU. What is a "slow" GPU versus a GPU with less than x GB of RAM?
@RonnieMirands · 1 year ago
Because before, you couldn't upscale with a cheap, low-memory GPU; now you can, because this operation is done in tiles, so you don't load everything into VRAM. I tested it and I could upscale 10x! Before that tile operator, that was impossible.
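Why tiling sidesteps VRAM limits can be sketched with rough numbers: SD diffuses a latent at 1/8 the pixel resolution, and the UNet's activation memory grows with that latent area, so one 512 px tile costs a tiny fraction of a full 4096 px pass. The per-element byte constant below is made up purely for illustration.

```python
def rough_latent_mb(width: int, height: int) -> float:
    """Very rough sketch: latent is 1/8 the pixel resolution, and
    activation memory scales with latent area. The 200 bytes/element
    figure is an illustrative assumption, not a measured value."""
    lat_w, lat_h = width // 8, height // 8
    return lat_w * lat_h * 200 / 1024**2  # MB

print(round(rough_latent_mb(4096, 4096), 1))  # whole frame in one pass
print(round(rough_latent_mb(512, 512), 1))    # a single 512 px tile
```

The absolute numbers are invented, but the ratio is real: a 4096 px frame has 64x the latent area of a 512 px tile, so peak memory per diffusion pass drops by that factor when tiling.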
@RonnieMirands · 1 year ago
@Alita You have to show a screenshot; from words alone there's no way to tell what's wrong :(
@RoboMagician · 1 year ago
I followed the instructions to upscale. It took me 12 minutes, but the image looked like every tile/mini-section did a generation of its own, and the overall picture looks deformed! What can I do to change that?
@rikofdp9459 · 1 year ago
I think you should use ControlNet Tile; otherwise it will look messed up.
@eyevenear · 1 year ago
My 3090 Ti read the title and instantly concluded: "NOT TODAY". Context: I recently upgraded from a 10-year-old 980. The best decision ever made, if you need it for work.
@Steamrick · 1 year ago
ControlNet has updated compared to the version you have (1.1.144 now). The downsampling slider is gone, and the preview is incredibly low-res and blurry?! It seems basically broken right now.
@a7medetman150 · 8 months ago
Cool!! I have a problem: with models larger than 4 GB, it tells me it can't load the model because of the size?! Anyone have the same error? I have 4 GB of VRAM, by the way.
@lithium534 · 1 year ago
Great videos as always. The title is clear about what to expect and makes it easy to look up later, and the step-by-step explanation is great for beginners and non-beginners alike. My question is about generating a normal image, not an upscale. I have custom models, and I notice that if I stop the generation at 70-80%, the image looks more like the subject. Is there a command or a setting to have the generation stop before the end? Thank you in advance, and looking forward to the next one.
@xvi1921 · 1 year ago
Yes, there is! It's called Clip Skip. Go to your Settings tab, then User Interface, then look for the quick settings list. There, make sure you have the following text: sd_model_checkpoint, CLIP_stop_at_last_layers. Then click Apply settings at the top and restart your UI (and maybe the SD program too, if that doesn't work). At the top of your SD window, next to your checkpoints, you should see the Clip Skip control. The default setting is 1, which stops image generation at the last layer as it normally would. If you set it to 2, it stops image generation one layer sooner, etc.
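For anyone scripting this instead of clicking through the UI: the same setting can be overridden per request through the A1111 web API's `override_settings` field. This is a sketch; the actual server call is omitted, and the field names are taken from the web UI's documented `/sdapi/v1/txt2img` payload.

```python
# Hedged sketch of a txt2img API payload that sets Clip Skip per
# request via override_settings, mirroring the UI steps above.
payload = {
    "prompt": "portrait, detailed",
    "steps": 20,
    # Same internal setting name as in the quick settings list:
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # Clip Skip = 2
}
print(payload["override_settings"])
```

The override applies only to that request, which is handy for A/B comparing Clip Skip 1 vs 2 without touching global settings.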
@lithium534 · 1 year ago
@@xvi1921 Thank you for the reply. From my tests it should be the same, but it's not. You can see right at the start of the generation that the image is different depending on the Clip Skip. But by interrupting the generation before the end, you are able to stop the AI from adding detail. This is also different than having fewer steps.
@stevenlu7324 · 1 year ago
I'm using this technique and finding that my 24 GB of VRAM isn't enough for directly inpainting messed-up face features inside 5x-upscaled (2560x3840) images. It seems that even though it only needs to repaint a small area, it's still doing something on the whole frame. I'll have to see if I can find what's causing this.
@guilhermecastro3671 · 1 year ago
I'm having the same problem. Did you find a solution?
@relaxation_ambience · 1 year ago
@OlivioSarikas I tried deleting everything from the positive prompt, but on an 8K image I get terrible tiling. I also tried other methods, and every time on 8K images I got very bad tiling. Did you try it with 8K images?
@noritsueco · 1 year ago
I get this error after I installed xformers, and I already reinstalled everything but it keeps appearing: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
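For context on that NansException: float16 can only represent values up to about 65504, which is why the suggested fixes (--no-half, or upcasting cross-attention to float32) work by forcing higher precision. The Python stdlib can demonstrate the overflow boundary; the `fits_in_float16` helper is just for illustration.

```python
import struct

def fits_in_float16(x: float) -> bool:
    """Check whether a value can be packed as IEEE 754 half precision
    (struct format code 'e'). Values past ~65504 can't be, which in
    half-precision model math turns into inf and then NaN."""
    try:
        struct.pack("<e", x)
        return True
    except OverflowError:
        return False

print(fits_in_float16(65504.0))  # largest normal float16 value
print(fits_in_float16(70000.0))  # overflows: the half-precision failure mode
```

Cards or drivers that mishandle half precision hit this same wall inside the UNet, producing the all-NaN tensor the exception reports.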
@Gust52 · 9 months ago
I can't find the ESRGAN folder in models; there's no folder with that name.
@Nitrowing1500 · 1 year ago
You deleted the ESRGAN.pth from its folder? I tried this, and upscaling gives errors in the console.
@fidelcrisis · 1 year ago
Facing an error that says "TypeError: unhashable type: 'slice'" while using this Ultimate Upscaler and ControlNet tiling together. Any idea about this?
@semantickascadesyt · 1 year ago
How fast could a workflow like this be for someone with, say, a 1060 6GB?
@dthSinthoras · 1 year ago
Following this, I run out of memory (OutOfMemoryError: CUDA out of memory. Tried to allocate 9.96 GiB (GPU 0; 24.00 GiB total capacity; 13.27 GiB already allocated; 8.09 GiB free; 13.29 GiB reserved in total by PyTorch)) when going over 4K. Any idea what could have gone wrong? I use the settings from the video.
@PawFromTheBroons · 1 year ago
In one of your latest videos you used SD Upscale instead of Ultimate Upscale. Could you do a video on why one instead of the other?
@JiraMelaj · 1 year ago
That ESRGAN 4x-UltraSharp has a license that is both non-commercial and attribution-required. Any alternatives that aren't so restrictive?
@francoisneko · 1 year ago
I would love to be able to use this workflow in ComfyUI; unfortunately, I am not able to recreate it.
@orirune3079 · 1 year ago
When I do this Ultimate Upscale, I often end up with some of the tiles in the upscaled image coming out black. Does anyone know how to fix this?
@Robstercraw · 1 year ago
The problem with tiles is that they show in the images.
@ppn7 · 1 year ago
Can you tell me what exactly the second and third examples are? Is the second one using the upscale script with 4x-UltraSharp only at scale 4, and the third one using it with ControlNet Tile?
@Rasukix · 1 year ago
12:03 When you say "alternatively", does this mean a different method from the previous one?
@Omfghellokitty · 1 year ago
I'm having a hard time getting the images to look the same after the upscale. The quality of the images is really high, though.
@Rasukix · 1 year ago
Could I technically use high-res fix pre-generation and then send the output to img2img and do the final method on top? So high-res fix 4x, and then 4x again after?
@bakuryuu1009 · 1 year ago
When I try to use tile upscale, I get an error.
@miguelarce6489 · 1 year ago
So can we say this is something like the stitching in JAX diffusion?
@AiNomadArt · 1 year ago
I'm using a laptop RTX 3060 graphics card with 6 GB of VRAM; every time I run the upscaler it throws CUDA out of memory 😅. Is that normal because my VRAM is not enough to run the upscaler?
@yuvenilofficial · 6 months ago
The ESRGAN folder is no longer available; where should the file go?
@megal0maniac · 1 month ago
Just create the folder in the path he shows.
@Niiwastaken · 1 year ago
No matter what I do, I get pretty bad tiling :/
@MilesBellas · 1 year ago
Pan and zoom in SD Next?
@ppn7 · 1 year ago
I don't understand which is better: directly upscaling with Ultimate at 4x, or img2img 2x and then Ultimate 2x?
@tvitcvrkutovic · 8 months ago
Can I upscale Midjourney-created images this way?
@TheMagista88 · 9 months ago
Hi, I don't have Restore Faces on mine :/
@lookimnotracistbut5695 · 1 year ago
I accidentally did this with 0.75 denoise and it turned every tile into its own image. The preview looked amazing, but then sadly I got a VAE exception :(
@Jordan-my5gq · 1 year ago
Add the --no-half-vae argument to your .bat file.
@lookimnotracistbut5695 · 1 year ago
@@Jordan-my5gq Nah, it's just behaving weirdly sometimes. I can generate up to 200 different batches, but then suddenly, without changing anything, I get the exception. Then (again without changing anything), if I try again, it works.
@dk_2405 · 1 year ago
Can I just simply cut and paste my Web UI folder to the new drive?
@RedshireX · 1 year ago
It works, but it leaves purple blotches in some areas.
@N1ghtR1der666 · 1 year ago
Hey, could you possibly do a video on creating consistent characters in different poses?
@dacentafielda12 · 1 year ago
Use a name. Any name. I just make one up.
@N1ghtR1der666 · 1 year ago
@@dacentafielda12 Do you mean that if I put, for example, "John" in the prompt, it will generate similar-looking people each time?
@dacentafielda12 · 1 year ago
@@N1ghtR1der666 Yes, but add a last name as well. Like John Johnson.