You're very calm for how much of a game changer this is. I've created two amazing dual-screen wallpapers today and a high-quality mobile wallpaper because of this. Incredible!
This works so well!!! I didn't have the ESRGAN folder, so I created one in the path shown and it worked like a charm! Some of the settings shown have since been removed, so I paid them no mind. I use MeinaUnreal with 0.2 denoise, ESRGAN 4x Anime6B, and the seams fix. I will play around some more and update if I find a better result. Thank you so much again, Olivio!!
Your work reminds me of the Office handshake meme. Even though I trained this model, I didn't realize it could be used in this way.🤣 Thank you for your video as always!
Love you, man! Tried this on my AMD GPU and it works great! Super clear, and it kind of fixes some bad anatomy in my images too! This is a great help, thanks a lot! You're always the best!
I found out about this 3 days ago and have spent so much of my free time in A1111 since. You can upscale to a level of detail the eye can't even see in real life. When upscaling past 1800px, I recommend gradually increasing your denoising strength and SD resolution (from 512 up to 1024). This allows it to add more detail; if you don't, things will begin to get blocky. (I don't use ControlNet Tile in my workflow.)
I tried your method, but the AI is adding faces and warping the skin of my subject. How did you mitigate that? My prompt also only included quality tags and nothing pertaining to my subject.
@@hypnotic852 Change your sampling method to DPM++ 2S a Karras; that will most likely fix 95% of your problems with warped skin and strange details. Also, for the best result, don't jump straight to 4x upscaling. Go slow and upscale 2x at a time; this preserves the small details from the original image much better. I only use SD Upscale and 4x-UltraSharp.
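(For anyone who wants to script this stepwise workflow: below is a rough, hypothetical sketch driving A1111's img2img API with Python. It assumes the webui was launched with --api on the default local port, and a plain img2img resize stands in for the tiled upscale script, whose script_args I won't guess at. The 2x-per-pass loop and the rising denoise values mirror the advice above.)

```python
# Hypothetical sketch of the "2x at a time, raise denoise as you go"
# workflow via the A1111 API (start the webui with --api; localhost assumed).
import base64, requests

BASE = "http://127.0.0.1:7860"

def img2img_pass(image_b64: str, width: int, height: int, denoise: float) -> str:
    payload = {
        "init_images": [image_b64],
        "prompt": "masterpiece, best quality",  # quality tags only
        "denoising_strength": denoise,
        "width": width,
        "height": height,
        "steps": 20,
    }
    r = requests.post(f"{BASE}/sdapi/v1/img2img", json=payload)
    r.raise_for_status()
    return r.json()["images"][0]  # base64-encoded result

img = base64.b64encode(open("512.png", "rb").read()).decode()
w = h = 512
for denoise in (0.2, 0.3):   # gentler first pass, more detail later
    w, h = w * 2, h * 2      # 2x per pass instead of jumping straight to 4x
    img = img2img_pass(img, w, h, denoise)

open("2048.png", "wb").write(base64.b64decode(img))
```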
Thank you, Olivio. The hard work you put in, the quality of your videos, and above all the way you explain things are exceptional. The community needs you now. Please make a guide on how to install and use Deforum; the trend is getting bigger and we need to move into video. That's the future of generative AI.
Okay, I LOVE his videos a lot too, they help so much; he's like the best YouTuber for this right now. But come on, that doesn't look like "hard work and quality videos". I mean, he's basically sitting in his Stable Diffusion UI explaining the buttons to you.
Note that depending on your settings and model, in some cases with other ControlNets I've gotten great results even at 8 samples, which can speed things up dramatically, especially in the concept stage!
0:15 hahahaha xD In French we could say "la bonne cam" (the good stuff), "les bons tuyaux" (the good tips), or "du lourd" (heavy-hitting stuff), hahaha. Btw, thanks for all your videos and tutorials! They are very clear and easy to understand :D
Awesome!!! The upscaling not only worked wonderfully, it also improved inpainted additions, such as a necklace, longer sleeves, a bracelet, and a ring, to perfection on my very first try. Thank you! 🙂
The ears often also need a fix; you can see that very well in the last image. So pay attention not only to the eyes but also to the ears when revising.
Battery life depends directly on you. If you need battery life, there's no point in getting the ROG Ally instead of the Deck. In The Witcher at 30 fps, with FSR and the corresponding TDP settings, the Deck easily lasts around 3.5 hours.
Nicely done, so many tips & tricks mixed in, all very helpful. Cheers! I was already using this method but wasn't quite getting the best results as often as I can now.
Results vary. Most of the time I get artifacts due to inconsistencies between tile information. So about 10% of the time it looks amazing, and the other 90% has serious inconsistencies.
What denoising strength do you all use? Try it at the lowest setting, e.g. 0.1, and see if it helps. In short, it keeps the AI from changing your image and makes it stick to the original. ControlNet also helps, since it keeps the AI from altering the image too much, although I don't use it.
I advise you to set the padding to 160; it gives a much better result and gets rid of strange artifacts. Also, "ControlNet is more important" is a must-have setting.
Amazing results. Super detailed and with lots of ornaments. Great to see such a nice composition. Mind sharing the model and the prompts (positive and negative) you used?
@@OlivioSarikas I mean when you upscale in img2img without the script, for example from a 512x512 image to 1024x1024. In that case there is no point in using this model in ControlNet. To preserve the details, you can use "depth" or "canny" instead.
After multiple (a lot of multiple) tests, I advise pushing the Ultimate SD Upscale tile width to the max (1024) to lower the impact of seams, and you can enable Tiled Diffusion (diffuser only) to have more control over denoising (no upscale).
Here is a suggestion from me: if you think the steps are too high and you can't wait, or your PC can't handle it, just decrease the denoising strength to around 0.1 or 0.2 and the number of steps actually run will be reduced significantly.
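(A quick illustration of why this works: by default, A1111's img2img only executes roughly steps x denoising_strength actual sampling steps, unless the "do exactly the amount of steps" setting is enabled. A minimal Python sketch of that arithmetic, assuming the default behavior:)

```python
# Approximate img2img step math in A1111 (default behavior): lowering
# denoise directly cuts how many sampling steps actually execute.
def effective_steps(steps: int, denoise: float) -> int:
    return int(min(denoise, 0.999) * steps)

for d in (0.75, 0.4, 0.2, 0.1):
    print(f"denoise {d:.2f}: ~{effective_steps(50, d)} of 50 steps run")
```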
Have you done a video on the Block Weight extension? I find it really cool how you can control how LoRAs interact with the image by telling them to only affect the start of the image generation, or the middle, or the end, or every other step. Would be cool to see you play around with this extension if you don't have a video on it already. It was in the Available list of extensions in A1111. Btw, Ultimate SD Upscale is in the Available list of "officially supported" extensions as well; you can just install it from there.
Every time I do this in img2img, the console prints for each tile render: "could not find upscaler named , using None as a fallback." I'm using the R-ESRGAN 4x+ Anime6B one, and I don't think I get this issue when using the hires fix. I'll try a different upscaler and see if it still gives that error. The process doesn't get interrupted at all and it does actually finish the image, but I'm not sure whether it actually ran it through the upscaler model or not.
I wonder how this works in tandem with Tiled Diffusion/Tiled VAE, and whether it's worth the effort to run both together with ControlNet Tile and SD Upscale. Will it increase the quality even further?
Awesome video again! I do wonder, is there a way to deconstruct an image into its various elements: background, body shape, clothing, face? IIRC, with a LoRA you can train your own model so you get a consistent face from any angle. If you like a certain item or piece of clothing, could you do the same for that? Maybe save it in a wide multi-pose image (front, 45°, side, and the other side if not symmetric) with all the PNG info attached, like you had here?

I've seen you split images into "tiles", not squares but with auto-detection of the borders of certain areas. Could you use this to "tag" each part of the image as background, props, model, face, etc., and then use ControlNet to reference the right part and gain consistency? Maybe then things like eye color or embroidery wouldn't change with upscaling. This would give greater control if you want to tweak an image, turn a model slightly, or change the expression. Even better if you could change outfits and swap props in and out (like jewelry).
Why Euler a? It tends to change things from the original image; that's why the eyes were botched in the first take. Try DPM++ 2M Karras instead, 20 steps, denoise 0.2 or lower. Consistent results (and faster!). Efficient workflow. Nuff said.
How would this compare to upscaling in Gigapixel AI in terms of detail? I would think this gives better results (slightly different ones, of course, since it's generating detail).
I don't understand how this is for "slow" GPUs. You gave a workflow but didn't describe how it benefits a "slow" GPU. What counts as a "slow" GPU versus a GPU with less than x GB of VRAM?
Because you couldn't upscale before with a "cheap, low-memory GPU", and now you can: this operation is done in "tiles", so you don't load everything into VRAM at once. I tested it and could upscale 10x! Before the Tile operator, that was impossible.
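(To make the tiling idea concrete, here's a minimal Pillow-only sketch, not the extension's actual code: the expensive model step only ever sees one tile-sized crop, so peak memory depends on the tile size, not on the final output resolution.)

```python
# Minimal sketch (Pillow only, NOT the extension's real code) of why
# tiled upscaling bounds memory: each pass touches one tile-sized crop.
from PIL import Image

TILE = 512  # process the image in 512x512 crops

def process_in_tiles(img: Image.Image, scale: int = 2) -> Image.Image:
    out = Image.new("RGB", (img.width * scale, img.height * scale))
    for top in range(0, img.height, TILE):
        for left in range(0, img.width, TILE):
            box = (left, top, min(left + TILE, img.width), min(top + TILE, img.height))
            tile = img.crop(box)
            # Stand-in for the expensive SD/ESRGAN step, which would only
            # ever see this one small tile, never the full-resolution image.
            upscaled = tile.resize((tile.width * scale, tile.height * scale),
                                   Image.Resampling.LANCZOS)
            out.paste(upscaled, (left * scale, top * scale))
    return out

result = process_in_tiles(Image.open("input.png"))
result.save("output.png")
```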
I followed the instructions to upscale. It took me 12 minutes, but the image looks like every tile/mini-section did a generation of its own, and the overall picture looks deformed! What can I do to change that?
My 3090 Ti read the title and instantly concluded: "NOT TODAY". Context: I recently upgraded from a 10-year-old 980. Best decision ever made, if you need it for work.
ControlNet has updated compared to the version you have (1.1.144 now). The downsampling slider is gone, and the preview is incredibly low-res and blurry?! It seems to be basically broken right now.
Cool!! I have a problem: with models bigger than 4 GB it tells me it can't load the model, maybe because of the size?! Anyone have the same error? I have 4 GB of VRAM, by the way.
Great videos as always. The title is clear about what to expect and makes it easy for me to look up later, and the step-by-step explanation is great for beginners and non-beginners alike. My question is about generating a normal image, not an upscale. I have custom models, and I notice that if I stop the generation at 70-80%, the image looks more like the subject. Is there a command or a setting to have the generation stop before the end? Thank you in advance, and looking forward to the next one.
Yes, there is! It's called Clip Skip. Go to your Settings tab, then User Interface, then look for the quicksettings list. In there, make sure you have the following text: sd_model_checkpoint, CLIP_stop_at_last_layers. Then click Apply settings at the top and restart your UI (and maybe the SD program too, if that doesn't work). At the top of your SD window, next to your checkpoints, you should see the Clip Skip control. The default setting is 1, which stops the image generation at the last layer as it normally would. If you set it to 2, it stops one layer sooner, etc.
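(If you'd rather set this from a script than the UI, here's a rough sketch using the webui's built-in API; it requires launching with --api and assumes the default local address. CLIP_stop_at_last_layers is the same key as in the quicksettings list above.)

```python
# Rough sketch: setting Clip Skip through the A1111 webui API instead of
# the UI (requires launching the webui with --api; default address assumed).
import requests

BASE = "http://127.0.0.1:7860"

# Persistently set Clip Skip to 2 (same key as in the quicksettings list).
requests.post(f"{BASE}/sdapi/v1/options",
              json={"CLIP_stop_at_last_layers": 2}).raise_for_status()

# Or override it for a single generation only.
payload = {
    "prompt": "a portrait photo",
    "steps": 20,
    "override_settings": {"CLIP_stop_at_last_layers": 2},
}
requests.post(f"{BASE}/sdapi/v1/txt2img", json=payload).raise_for_status()
```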
@@xvi1921 Thank you for the reply. From my tests it should be the same, but it's not. You can see right at the start of the generation that the image is different depending on the clip skip. But by interrupting the generation before the end, you are able to stop the AI from adding detail. This is also different than simply using fewer steps.
I'm using this technique and finding that my 24 GB of VRAM isn't enough for directly inpainting messed-up facial features inside 5x-upscaled (2560x3840) images. It seems like even though it only needs to repaint a small area, it's still doing something across the whole frame. I'll have to see if I can find what's causing this.
@OlivioSarikas I tried deleting everything from the positive prompt, but on 8K images I get terrible tiling. I also tried other methods, and every time I got very bad tiling on 8K images. Did you try it with 8K images?
I get this error after downloading xformers, and I've already reinstalled everything but it keeps appearing: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
I'm facing an error that says "TypeError: unhashable type: 'slice'" while using the Ultimate SD Upscaler and ControlNet tiling together. Any idea about this?
Following this, I run out of memory (OutOfMemoryError: CUDA out of memory. Tried to allocate 9.96 GiB (GPU 0; 24.00 GiB total capacity; 13.27 GiB already allocated; 8.09 GiB free; 13.29 GiB reserved in total by PyTorch)) when going over 4K. Any idea what could have gone wrong? I'm using the same settings as in the video.
Can you tell me what exactly the second and third examples are? The second one uses only the upscale script with 4x-UltraSharp at scale 4, and the third one uses ControlNet Tile as well?
Could I technically use the hires fix at generation time and then send the output to img2img and run this method on top? So hires fix 4x, and then 4x again after?
I'm using a laptop RTX 3060 with 6 GB of VRAM, and every time I run the upscaler it gives me CUDA out of memory 😅. Is that normal because my VRAM isn't enough to run the upscaler?
I accidentally did this with 0.75 denoise and it turned every tile into its own image. The preview looked amazing, but then sadly I got a VAE exception :(
@@Jordan-my5gq Nah, it's just behaving weirdly sometimes. I can generate up to 200 different batches, but then suddenly, without changing anything, I get the exception. Then (again without changing anything), if I try again, it works.