Upscalers Roundup + Full Workflow - LDSR, Ultimate SD, Models, HiRes Fix, Latent Upscale + Topaz

Stephan Tual
5K subscribers
13K views
Published: 28 Sep 2024

Comments: 107
@stephantual 7 months ago
Now go upscale something and show us the results! 👽
@TheCinefotografiando 7 months ago
@stephantual could it be used with SVD or Animatediff final videos?
@stephantual 7 months ago
@TheCinefotografiando The pixel-based stuff (Topaz, specific pixel models adapted to the type of frame you would feed it) could do it, since the output image will just be an upscaled version without any changes. Latent-based solutions would lose temporal consistency. There are also video-friendly algos for this type of thing. It's a good idea! Might include it in the SVD and AD videos. Cheers! 👽
@Kentel_AI 7 months ago
I must be the one who missed that moment. :)
@coreyhughes1456 1 month ago
Is a basic latent upscale going to add more detail than upscaling with something like Foolhardy before a second pass?
@tartwinkler1711 7 months ago
I have been waiting for an excellent roundup like this! Great video, thanks for taking the time to test these and share the results.
@stephantual 7 months ago
And thank YOU for watching! 👽
7 months ago
I appreciate the level of detail and analysis you put into this comparison. Also, thanks for pointing me to FastStone.
@stephantual 7 months ago
Yeah, FastStone is great and free! Thanks for commenting! 👽
@Injaznito1 7 months ago
You are SO FUNNY! Ok, ok, I subscribed with notifications. 🤣🤣🤣. Oh, good info!!
@stephantual 7 months ago
Thank you! These videos are fun to make! 👽
@autonomousreviews2521 6 months ago
Looks like you put a lot of time into this! Thank you for sharing, great info.
@ExpanderDJ 19 days ago
Thanks for the in-depth review.
@wellshotproductions6541 7 months ago
I always look forward to your videos. The amount of work you do for these is bananas. I always learn a lot. Thanks!
@stephantual 7 months ago
Thank you ever so much for your support - I'm so glad they're helpful! Working on the next one as we speak 👽
@robadams2451 7 months ago
An interesting roundup. I think the method depends on your need. I mostly use latent upscale with a lowish denoise, but my intent is not photographic, and I use a tiled decode, which works well. Mostly I build part of the upscale into my final refining pass. This results in a slightly soft generation, which I then upscale using a model. I didn't like Ultimate Upscale: too much trouble to get good results, and it does nasty smoothing mixed with aggressive sharpening.
@stephantual 7 months ago
Thank you! Yes, you are 💯 correct, it does depend on need. Right tool for the job! The SIAX models can also help with compression artifacts (for those wanting to work with digital photos, not just AI content), and there are so many options (thousands!) to choose from depending on the use case. Cheers! 👽
@guygrenfell-dexter9273 4 months ago
Amazing! I need more of these comparison vids
@rifz42 7 months ago
I didn't know you could use bookmarks! Thanks! :)
@stephantual 7 months ago
Yeah it's pretty neat. Makes life a lot easier 👽
@LuxElliott 7 months ago
Great video and thank you for creating it.
@stephantual 7 months ago
Thanks for watching! The comfy community is growing fast! 👽
@a.akacic 7 months ago
haha nice, instant sub for sticking to no paywalling xD
@stephantual 7 months ago
For sure! Pay walls will be zapped with alien lasers 👽
@ultimategolfarchives4746 7 months ago
Amazing breakdown, man. Did you try the iterative upscaler?
@stephantual 7 months ago
Yup that's the second one :)
@ultimategolfarchives4746 7 months ago
Ultimate SD Upscale isn't iterative, it's only doing one pass on the image. I'm referring to some other nodes in the Impact Pack that use iterative upscaling and work great as well.
@stephantual 7 months ago
@tegolfarchives4746 I have the Impact Pack, and it's fantastic, but I haven't reviewed it. On the plus side, that gives me something else to do. On the minus side, all iterative upscales working on latents work the same, and all iterative upscales in pixel space work the same. It's a trick to save VRAM. I can respect the hustle, but if you have 24GB, just upscale to the final size 👽 Edit: to be clear, USD is in the video, but that's not iterative indeed. The second one at 3:40 is iterative; it comes with ComfyUI (and works in pixel space despite the confusing name).
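The VRAM trade-off described here can be made concrete: iterating a small factor just splits one big jump into several passes, with the same total scale factor either way. A minimal pure-Python sketch (helper name is illustrative, not from any node pack):

```python
import math

def passes_needed(start_px: int, target_px: int, per_pass: float = 1.5) -> int:
    """How many iterative upscale passes it takes to go from start_px to
    target_px when each pass multiplies the resolution by per_pass.
    Illustrates why iterating is a memory trick, not a different algorithm:
    smaller per-pass factors mean smaller intermediate tensors, more passes."""
    return math.ceil(math.log(target_px / start_px) / math.log(per_pass))

# 1024 -> 4096 in 1.5x steps takes 4 passes; with enough VRAM a
# single 4x pass covers the same ground in one go.
print(passes_needed(1024, 4096, 1.5))
print(passes_needed(1024, 4096, 4.0))
```

This is why, on a 24GB card, skipping the iteration and upscaling straight to the final size gives an equivalent result faster.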
@MichauxJHyatt 7 months ago
What about ControlNet-assisted hires fix? Some call it an enhanced version of the hires fix that yields better results. I typically don't use it as my main upscaler, but only as a second-pass 1.5x+ upscale-and-fix before the main upscaler. But I suppose it wouldn't be an ultimate upscale, which I get. Great video either way!
@stephantual 7 months ago
Thanks for commenting! I have a video which is a Krea/Magnific clone in the pipeline (there's more to those tools, but that's roughly what they are), so don't worry, this one was for 1:1 upscales only 👽. Thank you!!! 🛸
@greypsyche5255 6 months ago
3:16 Your hires KSampler has denoise set at 1. You cannot do that, otherwise the resulting image will look completely different; set it to 0.5 or 0.4.
@swannschilling474 7 months ago
Great content and very nicely presented!! I really like your style!!
@zaileron 2 months ago
The highres-fix script for the Efficient Nodes works beautifully, but it's definitely not appropriate to use hires fix to 4x an image without ControlNet. It'll easily get between 1k and 2k for me, though.
@lawrence9239 5 months ago
Learned a lot from you! Great work!!!
@nlmnx5763 3 months ago
Really well done, man, thank you so much 💙
@luman1109 7 months ago
bro you make good videos
@stephantual 7 months ago
Takes a long time, but it's engaging and I enjoy sharing! Cheers! 👽
@TR-707 7 months ago
One note about SD Upscale and samplers in general - imo Karras and dpm* will always wash stuff out. Euler and DDIM samplers (sometimes Heun and Heun 2) are my go-tos for retaining noise, and normal, sgm_uniform and ddim_uniform for schedulers
@stephantual 7 months ago
I should have added: I love that washed-out look. DPM++ 3 in particular with Copax Timeless stands as my current favorite, but evidently it's a matter of taste ;) 👽
@StéphaneMondoloni 7 months ago
Thanks for this interesting video. Regarding generating portraits and faces, have you considered using a Face Detailer in your workflow just before the upscale? It gives mind-blowing results on my side.
@stephantual 7 months ago
Yeah, Face Detailer changes the image (obviously), so due to popular demand the next tut is basically exactly that: latent upscaler + face detailer, IPAdapter, ControlNets and so on :) Thank you! 👽
@razvanmatt 7 months ago
Thank you very much for doing this.
@stephantual 7 months ago
My pleasure! 👽👽
@michail_777 7 months ago
I agree that for processing real photos with real people, at the moment it's better to just use Photoshop. I mean, to make sure there's no change. And Topaz. But for my work with AnimateDiff I'm using High-Res Fix right now.
@stephantual 7 months ago
AD is a different beast altogether; temporal consistency becomes a new factor in that case. Try CCSR with AD, it's really dope. 👽
@michail_777 7 months ago
Thanks. I tried CCSR, but it is a very long process. AD also works very well with masks, but there is a limitation with mask quality. Unfortunately, at the moment we cannot refine the quality of the mask in Comfy as it can be done, for example, in DaVinci or RunWay. Also, lately I started using "Apply IPAdapter from Encoder" + "Prepare Image for Clip Vision" + 4 input images (for character and background) instead of the usual "IPAdapter", and it works too. Have a great generation!
@VArlaud 7 months ago
Great channel, great comparison and great accent ;)
@stephantual 7 months ago
Go, les Bleus! 🎉
@Bhushanmilkhe 7 months ago
Very informative video, bro. I searched all of YouTube but didn't find a suitable video for me until now. Make one using Automatic1111 too if you can
@stephantual 7 months ago
That's interesting - I might extend to other platforms, yes! 👽
@ronnykhalil 6 months ago
so helpfulllll
@stephantual 6 months ago
Glad it helped!
@AIWarper 7 months ago
You're missing arguably one of the best, if not the best: CCSR upscaling
@stephantual 7 months ago
I love Kijai (he's one of my favorite devs), but even he will tell you it wouldn't be too wise to upscale hundreds of images using CCSR for a tutorial on upscaling 😄 Great set of nodes, by the way. I'll need to make videos for experts in the future, it looks like! 👍👽
@ulamss5 4 months ago
Topaz tends to make swirls from details, and makes thin features like fingers look like smudged watercolor streaks. It's not worth it at all; I wouldn't use it even if it were free.
@PinGuiNPL 7 months ago
Ufff... I like your channel and your "AI news videos", but this one is surely not your best... HiResFix doesn't work - whaaaaat the....? A 1.2x-1.5x hires fix after the 1st sampler can rescue hundreds of generations. Where is something about proper noise injection before upscale, something about CCSR, or even iterative upscale in latent and pixel space... If we talk about upscaling, we have to talk about _uniform schedulers, tiled SEGS, SDXL-to-SD15 tiled or Canny ControlNet... and so on... Even some of your examples (like SD Upscale) are not set up right. I understand that many "new" Comfy users need fast videos about everything, but it helps no one if the results are heavily incomplete and partly wrong. Maybe next time ;)
@stephantual 7 months ago
Thanks for the constructive feedback! I should have put in the intro that the goal was a 1:1 representation of the original, as opposed to a Krea/Magnific clone (which I have a second video on). Thanks for the feedback, and I'll spend more time on the next one! Cheers! 👽
@cyril1111 7 months ago
Where are the comparisons with StableSR or CCSR? Guessing you haven't tried all the latest upscale techniques - also a very good upscale model: 4x_NKMD-Siax_200k
@stephantual 7 months ago
Ah, a fellow model connoisseur! I gave those a pass for the video because if you're an expert, well, you probably don't need my video; CCSR doesn't run well on everything, while SIAX has niche use cases (it's good, though). I batch-generated for days on a 4090 and found that, if I was to be completely unbiased (read: I didn't check the titles of the images), at 2x or lower I couldn't tell the difference anymore on most non-specialized models, and at 4x the specialized ones did better than their counterparts, but that's a) on average only, and b) it would be unrealistic to try to find the 'right' model for any given image due to the randomness of some outputs. I'm not accounting for compression artifacts either. You get to the point where the renders take forever, with nodes coming out of everywhere, and you can't compare well because you still have to run "the slow ones"; a 'fair' comparison should use non-AI inputs, but that defeated the purpose, as I imagined 99% of the viewers would use AI content only 👽. Hope this explains my logic for the video! Thank you for watching!
@zerohcrows 7 months ago
Can't find the "Automatic CFG channels multipliers" node for some reason. Disabled all the others with only it enabled, and that didn't work. Uninstalled everything (including the Manager); that didn't work either. Any idea what's going on? Edit: Reinstalled ComfyUI entirely and now I'm getting an issue with "Primitive integer [Crystools]" as well...
@stephantual 7 months ago
Hi! Yes, Comfy with tons of nodes can have conflicts, or an update can break things without warning. Post the Crystools issue on github.com/crystian/ComfyUI-Crystools/issues and I'll check it against my own install 👽
@syntheticdelirium 5 months ago
"Automatic CFG channels multipliers" is missing, and the only search result for it is this workflow. What is it?
@Snydenthur 7 months ago
I use hires fix on Forge all the time and I've never gotten such results. I did try ComfyUI at some point and hires fix there ended up being awful, so I think it's just a ComfyUI thing or something.
@francisv1021 7 months ago
Yes, me too. I've been in love with Forge since I started using it two weeks ago, and the hires fix has been working marvels for me, even with just the included upscalers, such as DAT x4. That plus the Tile ControlNet is, I believe, a very respectable combo, because I could get a good level of photorealism
@stephantual 7 months ago
Thank you for your comment. I think I should have pointed out that the goal was a 1:1 reproduction of the image, not just 'any upscale, and small changes don't matter'. Any changed attribute is an instant fail here. Evidently public opinion is super divided on the matter, so my next tut and workflow is exactly the opposite :) 👽👽
@WhySoBroke 7 months ago
Very cool!! I think most creative users would be interested in the Magnific-style workflow, since that's what makes latent generation fun! Really looking forward to that video!
@stephantual 7 months ago
Thank you for commenting! Yes, this video clearly demonstrated the need for a Krea/Magnific-type clone in Comfy! Working on it as we speak! 👽
7 months ago
You kinda lost me right at the beginning when you claim "highres fix" doesn't work... Well, if you keep your KSampler with denoise set to 1.0... yeah, OF COURSE it doesn't work: you are telling your sampler to completely ignore the original image and mostly just make something new from the prompt, and the model can't handle making images twice or four times its standard size (512px for SD1.5, 1024px for SDXL) and starts repeating subjects. THAT is why it doesn't work. If using it with a latent upscaler before it, set the KSampler to 0.5 (you know, how it's shown in the ComfyUI dev example) and, while you will get some changes to the image, it won't be anything close to that mess you showed in your example. For closer resemblance to the original image, you use an upscaler model and a KSampler with values under 0.3.
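The denoise value the commenters keep coming back to is effectively the fraction of the noise schedule that gets re-run on the input, which is why 1.0 discards the source image entirely. A rough sketch of that relationship, following the diffusers-style img2img convention (the function name is illustrative):

```python
def effective_steps(num_inference_steps: int, denoise: float) -> int:
    """Sampler steps actually executed in an img2img / hires-fix pass.
    denoise=1.0 re-noises the input completely (a fresh generation from
    the prompt); a low denoise keeps most of the source image's structure
    and only refines details."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be in [0, 1]")
    return min(int(num_inference_steps * denoise), num_inference_steps)

for d in (1.0, 0.5, 0.3):
    print(d, effective_steps(30, d))
```

So at denoise 1.0 a 30-step hires pass runs all 30 steps from pure noise, while 0.5 runs only the last 15, anchored to the upscaled input.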
@stephantual 7 months ago
Thank you for taking the time to comment! I understand your reaction; I simply wanted to introduce (parts of) the audience to the concept. We're very lucky our little community is growing fast, and we're seeing a lot of people from VFX/CGI and photography use ComfyUI - that's great! It also introduces Comfy to an audience that doesn't want the image to be changed - at all - for example, in one of my videos I pointed out that even batch generation can introduce some unwanted noise (not seed randomization, but random noise from the samplers not found in A1111, even with --deterministic enabled). This may not look like a big deal - in fact it could be a 'wanted' feature to some, but not to all - for example, take a quick look at this discussion on the Topaz forums where members are up in arms about something as innocuous as introducing LDSR to the product: community.topazlabs.com/t/hope-topaz-add-ldsr-stable-diffusion-img2img-upscale-model-concept-look-alike/40435. I have nothing 'against' it if that's the direction someone wants to take, but I also wouldn't want people to end up frustrated with the tool. In addition, I'll publish a 'Magnific/Krea' clone workflow, since I know these can be fun but can be quite costly on cloud services. I hope this answers your concern. Thank you! 👽
@binarybrian 7 months ago
Thanks for pointing that out. I thought I was being dense and missing something about latent upscale. It's certainly not worthless: a small 1.25x first-pass latent upscale at 0.5 denoise can be very helpful at "automagically" fixing small detail flaws (eyes, fingers, jewelry, etc.) before upscaling in pixel space, and can even sometimes "restore" faces without needing that extra dedicated node/step.
@stephantual 7 months ago
@binarybrian I have an upcoming Magnific/Krea workflow in Comfy you will like, then! Cheers! 👽
@MilesBellas 6 months ago
Why not make a few ComfyUI videos!?😊👍😀
@user-pc7ef5sb6x 3 months ago
I don't use hires fix either. I use img2img in Auto1111, put denoising strength at .3-.4, then keep dragging the image back over until enough details are added. The only problem is that I have to empty the prompt, because the image can get too washed out.
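The "keep dragging the image back over" approach is just a feedback loop: each output becomes the next input at a low denoising strength, so detail accumulates without the composition drifting. A minimal sketch, where `img2img` is a stand-in for any backend (Auto1111's img2img tab, a ComfyUI KSampler, etc.):

```python
from typing import Callable, List, Tuple

def iterative_img2img(image, passes: int, strength: float,
                      img2img: Callable) -> Tuple[object, List]:
    """Repeated low-denoise img2img refinement. Keeping strength well
    under 0.5 preserves the source image; each pass only adds detail.
    `img2img` is a hypothetical callable standing in for a real backend."""
    assert 0.0 < strength <= 0.5, "keep strength low to preserve the image"
    history = [image]
    for _ in range(passes):
        image = img2img(image, strength)
        history.append(image)
    return image, history

# Demo with a stand-in backend that just tags the image once per pass.
final, history = iterative_img2img("base", 3, 0.35,
                                   lambda img, s: img + "+detail")
print(final)         # base+detail+detail+detail
print(len(history))  # 4 (original plus three refined versions)
```

Keeping the history around mirrors the manual workflow: you can stop and pick whichever pass looked best before the image started washing out.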
@clearstoryimaging 7 months ago
Amazing video! Thanks for all the work you put into it. Looking forward to seeing how you tackle recreating what's happening in Magnific and Krea's upscale-and-enhance in a future video. I can get close with Ultimate SD, but to really add specific details across an image requires prompt control that is almost necessary on a per-tile basis, particularly for complex scenes with a large variety of subjects, like a landscape for instance. I pay $99 a month for Magnific and use it professionally. I have a love-hate relationship with it. The more I try to emulate it in Comfy, the more I understand how taxing it is on my GPU (4090) and where the server cost goes for processing, especially when you get above 4K.
@stephantual 7 months ago
Absolutely, I have a Krea license too. It's important to set expectations right, given these companies have multi-million-dollar budgets; you're entirely correct! Thank you!
@benoitguitard2887 6 months ago
Thanks. I'm interested in your Topaz settings, because I started to use ComfyUI instead of Topaz for upscaling existing low-resolution old photos (to train a LoRA): with Topaz, faces are totally smooth, like everyone is twenty years old with the face restore option, and the consistency between inside and outside the mask wasn't good. I tried to tune the values, but I wasn't very successful. Artifacts are killing the LoRA training.
@stephantual 6 months ago
Yeah, that's a good point. If you don't have a reference you can't train a LoRA, and if you can't train a LoRA you can't rebuild the face. It's a catch-22. I've been thinking about using multiple photos, passing them through SUPIR (just the face), then creating a composite to FaceID or faceswap onto the upscaled/fixed version. But... yeah, it's a super niche use case. I feel your pain :(
@flisbonwlove 7 months ago
Nice one Stephan! 👏👏 Keep the good work! 🙌
@stephantual 7 months ago
Will never stop! 👽 I'm stuck in the mothership anyways 😂
@uk3dcom 7 months ago
Thanks for your work here, but I have to say I'm surprised at the poor quality of the results you both start with and end with. I haven't seen these kinds of results for quite some time, even using the XL Turbo models. The iterative and noise injection upscaling I now use have excellent results, and I'm simply following other people's tutorials, both in ComfyUI and Forge.
@stephantual 7 months ago
As mentioned in another comment, I think I should have pointed out that the goal was a 1:1 reproduction of the image, not just 'any upscale, and small changes don't matter'. Any changed attribute is an instant fail here. Evidently public opinion is super divided on the matter, so my next tut and workflow is exactly the opposite :) 👽👽
@uk3dcom 7 months ago
Hi @stephantual, I agree that an upscaler means different things to different people. For photographers, I'm guessing that content, composition and colour have to remain true to the original. However, I think it is desirable even in photographic work that adding detail is required to get that true upscale benefit. So long as context-aware additions are used, we should be okay. For example, eyelashes on a model: we don't just want the lashes to be sharper, we want them to be better defined; same goes for pores on the skin. I imagine this should now be possible; after all, many AI models have in their latent knowledge all the close-up detail to render an accurate representation. Maybe Topaz is the one specialising in this area, but I hope others in the open source community follow suit. (Something similar was attempted many years ago with fractal generation, but that was definitely not context-aware and tended to get things wrong more often than right.) I look forward to your future videos. Thanks.
@TheCinefotografiando 7 months ago
Yours is my new favorite channel
@stephantual 7 months ago
Well, that's too kind! Thank you! 👽
@juliana.2120 8 days ago
I really needed that hires fix part 😂 finally I’m ready to move on. It can still give amazing results with a bit more trial and error but we have better options now
@Sujal-ow7cj 16 hours ago
Just what I wanted
@Filokalee999 6 months ago
This is a great upscaler comparison! I can't even imagine how much time it took. There is one more scenario I have seen give better results than the latent upscaler. First, scale the VAE-decoded image (not the latent) with an AI upscaler (OpenModelDB), re-encode the upscaled image as a latent, and then use it for a 2nd KSampler pass with low denoise (say 0.35)... very slow, but gives good (although slightly changed) results.
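One reason this decode-upscale-re-encode route is slow is the size of the latent the second KSampler pass has to chew through. SD-style VAEs downsample 8x spatially and use 4 latent channels, so the latent grows with the square of the pixel upscale. A small sketch of that bookkeeping (helper name is illustrative):

```python
def latent_shape(width: int, height: int, scale: float = 1.0,
                 vae_factor: int = 8, latent_channels: int = 4):
    """Latent tensor shape (C, H, W) after upscaling an image in pixel
    space and re-encoding it with an SD-style VAE (8x spatial
    downsampling, 4 latent channels for SD1.5/SDXL)."""
    w, h = int(width * scale), int(height * scale)
    if w % vae_factor or h % vae_factor:
        raise ValueError("upscaled size must be divisible by the VAE factor")
    return (latent_channels, h // vae_factor, w // vae_factor)

# A 1024x1024 render upscaled 2x re-encodes to a 4x256x256 latent,
# which is what the low-denoise (~0.35) second KSampler pass refines.
print(latent_shape(1024, 1024, scale=2))
```

A 2x pixel upscale therefore quadruples the latent area, which is roughly why the second sampling pass costs so much more than the first.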
@stephantual 6 months ago
Yup, that works too. Things move so fast; today we got SUPIR v2, and last week SDXL Lightning upscales (which I used in the 'horror film' video). The disadvantage of YT is that I can't update videos, I can just upload more... so the next one is already in the pipeline! Thanks for the comment and see you in the next one! 👽👍
@ManoloPiquero 7 months ago
fun to listen to your commentary, thanks
@stephantual 7 months ago
👽 thank you! 👽
@internetperson2 7 months ago
goated content as usual brother
@stephantual 7 months ago
👽🛸🐐
@Kentel_AI 7 months ago
You don't talk about Kohya Deep Shrink. In a lot of cases, it's a good first step for getting max size in the initial generation :)
@stephantual 7 months ago
I use DS in the workflow 👽 I think my French accent makes it sound like "dip shreenk" 🤣
@oliviertorres8001 6 months ago
What do you think about the "CCSR" node and models? Is it a variant of LDSR? Thank you.
@stephantual 6 months ago
I got a video on both - in fact, now the hot new thing when using SR for upscaling is "SUPIR" - check it out :) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Q9y-7Nwj2ic.html 👽. PS: the CCSR & SUPIR nodes are both wrappers developed by Kijai, you can't go wrong :)
@slashkeyAI 6 months ago
I would've liked to see this done with some poor-quality images included, rather than just already-good-but-not-big images.
@stephantual 6 months ago
Check out my SUPIR v2 video, I use utter trash as input :) 👽
@slashkeyAI 6 months ago
k thanks dude @@stephantual
@BryanHoward 7 months ago
This is a very good comprehensive roundup of the current upscaling meta!
@stephantual 7 months ago
👽👽👽👽
@JohnSmith-cw1lf 7 months ago
No idea what you're talking about, bro, but glad you're excited
@stephantual 7 months ago
Mmm - it's always tough to find the right 'approach'! As I make more videos, I hope to have something for everyone! Thank you! 👽
@EromancerGames 2 months ago
Your method of hires fix is strange (4x upscale and 1.0 denoise will yield bad results). Latent hires fix in general is bad; what's called non-latent hires fix is much better. Use an image-space upscale model at 2x, then run the upscaled image through a KSampler at 0.4-0.5 denoise. 0.4-0.5 is ideal, as beyond 0.5 it begins to change the composition. Doing hires fix at 1.0 denoise would be the same as creating a new generation at a much higher resolution than the model was trained on, which will of course result in cloned bodies everywhere.
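The non-latent recipe above generalizes to larger targets: repeat (pixel-space model upscale by 2x, then a KSampler pass at 0.5 denoise or below) until the target resolution is reached. A minimal scheduling sketch of that recipe (function name and defaults are illustrative, not from any specific tool):

```python
def non_latent_hires_plan(start_px: int, target_px: int,
                          factor: int = 2, denoise: float = 0.45):
    """Stage list for repeated non-latent hires fix: each stage is
    (resolution after the pixel-space model upscale, KSampler denoise).
    Denoise stays at or below 0.5, since beyond that the composition
    starts to change rather than just gaining detail."""
    assert denoise <= 0.5, "above 0.5 the composition starts to change"
    stages, size = [], start_px
    while size < target_px:
        size = min(size * factor, target_px)
        stages.append((size, denoise))
    return stages

print(non_latent_hires_plan(1024, 4096))  # [(2048, 0.45), (4096, 0.45)]
```

Each KSampler pass only ever sees an image one factor above what the model upscaler produced, which is how this recipe avoids the repeated-subject failure mode of generating far beyond the training resolution in one shot.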