@TheCinefotografiando The pixel-based stuff (Topaz, specific pixel models adapted to the type of frame you would feed it) could do it, since the output image will just be an upscaled version without any changes. Latent-based solutions would lose temporal consistency. There are also video-friendly algos for this type of thing. It's a good idea! Might include it in the SVD and AD videos. Cheers! 👽
An interesting round-up. I think the method depends on your need. I mostly use upscale latent with a lowish denoise, but my intent is not photographic, and I use a tiled decode, which works well. Mostly I build part of the upscale into my final refining pass; this results in a slightly soft generation, which I then upscale using a model. I didn't like Ultimate Upscale - too much trouble to get good results, and it does nasty smoothing mixed with aggressive sharpening.
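(For readers who want to try the tiled-decode part outside a node graph, here's a minimal diffusers sketch of the idea - model id and prompt are placeholders, not the commenter's exact ComfyUI setup. Tiling the VAE keeps decode VRAM flat even at larger output sizes:)

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the VAE in tiles instead of one pass; tiles are blended so the
# stitched output has no visible seams, and peak VRAM stays low.
pipe.enable_vae_tiling()

image = pipe("a misty forest at dawn", height=768, width=768).images[0]
image.save("tiled_decode.png")
```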
Thank you! Yes, you are 💯 correct, it does depend on need. Right tool for the job! The SIAX models can also help with compression artifacts (for those wanting to work with digital photos, not just AI content), and there are so many options (thousands!) to choose from depending on the use case. Cheers! 👽
Ultimate SD Upscale isn't iterative, it's only doing one pass on the image. I'm referring to some other nodes in the Impact Pack that use iterative upscaling, which work great as well.
@tegolfarchives4746 I have the Impact Pack, and it's fantastic, but I haven't reviewed it. On the plus side, that gives me something else to do. On the minus side, all iterative upscalers working on latents work the same, and all iterative upscalers in pixel space work the same - it's a trick to save VRAM. I can respect the hustle, but if you have 24GB, just upscale to the final size 👽 Edit: to be clear, Ultimate SD Upscale is in the video, but that's not iterative indeed. The second one at 3:40 is iterative; it comes with ComfyUI (and works in pixel space despite the confusing name).
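(For intuition, here's a rough sketch of what pixel-space iterative upscaling boils down to - this is an illustration, not the Impact Pack's actual code, and plain Lanczos stands in for an upscale model. Each small step only ever samples a modestly larger latent, which is where the VRAM saving comes from:)

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("base_512.png").convert("RGB")
for _ in range(3):  # three 1.25x passes ~= 2x total
    w, h = image.size
    # Round the new size down to a multiple of 8 (SD latent requirement).
    nw, nh = (int(w * 1.25) // 8) * 8, (int(h * 1.25) // 8) * 8
    # Lanczos resize stands in for an ESRGAN-style model here.
    image = image.resize((nw, nh), Image.LANCZOS)
    # Light re-denoise after each step to restore detail.
    image = pipe(prompt="detailed photo", image=image, strength=0.3).images[0]
image.save("iterative_2x.png")
```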
What about ControlNet-assisted hires fix? Some call it an enhanced version of hires fix that yields better results. I typically don't use it as my main upscaler, but only as a second-pass 1.5x+ upscale-and-fix before the main upscaler. But I suppose it wouldn't be an ultimate upscale, which I get. Great video either way!
Thanks for commenting! I have a video which is a Krea/Magnific clone in the pipeline (there's more to those tools, but that's roughly what they are), so don't worry, this one was for 1:1 upscales only 👽. Thank you!!! 🛸
The highres-fix script for the Efficient Nodes works beautifully, but it's def. not appropriate to use highres fix to 4x an image without ControlNet. It'll easily get me between 1K and 2K, though.
One note about SD Upscale and samplers in general - imo Karras and dpm* will always wash stuff out. Euler and DDIM samplers (sometimes Heun and Heun 2) are my go-tos for retaining noise, and normal, sgm_uniform and ddim_uniform for schedulers.
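(If you want to A/B this sampler difference yourself, here's a quick diffusers illustration - model id, prompt and seed are placeholders. Swapping the scheduler on the same seed makes the "washed out" effect easy to compare:)

```python
import torch
from diffusers import (StableDiffusionPipeline, EulerDiscreteScheduler,
                       DPMSolverMultistepScheduler)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The commenter's pick: Euler, which tends to preserve fine grain.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
gen = torch.Generator("cuda").manual_seed(42)
euler_img = pipe("portrait photo, film grain", generator=gen).images[0]

# The comparison case: DPM++ multistep with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True)
gen = torch.Generator("cuda").manual_seed(42)
dpm_img = pipe("portrait photo, film grain", generator=gen).images[0]
```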
I should have added: I love that washed-out look. DPM++ 3 in particular with Copax Timeless is my current favorite, but evidently it's a matter of taste ;) 👽
Thanks for this interesting video. Regarding generating portraits and faces, have you considered using a FaceDetailer in your workflow just before the upscale? It gives mind-blowing results on my side.
Yeah, FaceDetailer changes the image (obviously), so due to popular demand the next tut is basically exactly that: latent upscaler + face detailer, IPAdapter, ControlNets and so on :) Thank you! 👽
I agree that for processing real photos with real people, at the moment it is better to use Photoshop - I mean, to make sure there's no change - and Topaz. But for my work with AnimateDiff I'm using High-Res Fix right now.
Thanks. I tried CCSR, but it is a very long process. AD also works very well with masks, but there is a limitation with mask quality. Unfortunately, at the moment we cannot refine the quality of the mask in Comfy the way it can be done in, for example, DaVinci or Runway. Also, lately I started using "Apply IPAdapter from Encoder" + "Prepare Image for Clip Vision" + 4 input images (for character and background) instead of the usual "IPAdapter", and it works too. Have a great generation!
I love Kijai (he's one of my favorite devs) - but even he will tell you it wouldn't be too wise to upscale hundreds of images using CCSR for a tutorial on upscaling 😄 Great set of nodes, by the way - I'll need to make videos for experts in the future, it looks like! 👍👽
Topaz tends to make swirls from details, and makes thin features like fingers look like smudged watercolor streaks. It's not worth it at all - I wouldn't use it even if it were free.
Ufff... I like your channel and your "AI news videos", but this one is surely not your best... HiResFix doesn't work - whaaaaat the....? A 1.2x-1.5x hires fix after the 1st sampler can rescue hundreds of generations. Where is something about proper noise injection before upscale, about CCSR, or even iterative upscale in latent and pixel space? If we talk about upscaling, we have to talk about _uniform schedulers, tiled SEGS, SDXL-to-SD15 tiled or Canny ControlNet, and so on... Even some of your examples (like SD Upscale) are not set up right. I understand that many "new" Comfy users need fast videos about everything, but it helps no one if the results are heavily incomplete and partly wrong. Maybe next time ;)
Thanks for the constructive feedback! I should have stated in the intro that the goal was a 1:1 representation of the original, as opposed to a Krea/Magnific clone (which I have a second video on). I'll spend more time on the next one! Cheers! 👽
Where are the comparisons with StableSR or CCSR? Guessing you haven't tried all the latest upscale techniques. Also a very good upscale model: 4x_NMKD-Siax_200k
Ah, a fellow model connoisseur! I gave those a pass for the video because if you're an expert, well, you probably don't need my video; CCSR doesn't run well on everything, and SIAX has niche use cases (it's good though). I batch-generated for days on a 4090 and found that, if I was being completely unbiased (read: I didn't check the titles of the images), at 2x or lower I couldn't tell the difference anymore on most non-specialized models, and at 4x the specialized ones did better than their counterparts - but that's a) on average only, and b) it would be unrealistic to try to find the 'right' model for any given image due to the randomness of some outputs. I'm not accounting for compression artifacts either. You get to the point where the renders take forever with nodes coming out of everywhere, and you can't compare them well because you still have to run "the slow ones". A truly 'fair' comparison would use non-AI inputs, but that defeats the purpose, as I imagined 99% of the viewers would use AI content only 👽. Hope this explains my logic for the video! Thank you for watching!
Can't find the "Automatic CFG channels multipliers" node for some reason. I disabled all the other node packs, leaving only it, and that didn't work. I uninstalled everything (including the Manager), and that didn't work either. Any idea what's going on? Edit: Reinstalled ComfyUI entirely and now I'm getting an issue with "Primitive integer [Crystools]" as well...
Hi! Yes, Comfy with tons of nodes can have conflicts, or an update can break things without warning. Post the Crystools issue on github.com/crystian/ComfyUI-Crystools/issues and I'll check it against my own install 👽
I use hires fix on Forge all the time and I've never gotten such results. I did try ComfyUI at some point and hires fix there ended up being awful, so I think it's just a ComfyUI thing or something.
Yes, me too. I've been in love with Forge since I started using it two weeks ago, and the Hires fix has been working marvels for me, even with just the included upscalers such as DAT x4. That plus Tile ControlNet is a very respectable combo - I could get a good level of photorealism.
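(The Tile ControlNet combo mentioned here translates to diffusers roughly as below - the model ids are real, but the prompt and parameter values are illustrative, not the commenter's exact Forge settings. The tile ControlNet pins the second pass to the source image, so a higher denoise adds detail without the composition drifting:)

```python
import torch
from PIL import Image
from diffusers import (ControlNetModel,
                       StableDiffusionControlNetImg2ImgPipeline)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

src = Image.open("gen_512.png").convert("RGB")
up = src.resize((1024, 1024))  # simple 2x before the guided second pass

# The same upscaled image is both the img2img input and the tile condition.
out = pipe(prompt="photo, high detail", image=up, control_image=up,
           strength=0.6, controlnet_conditioning_scale=1.0).images[0]
out.save("tile_cn_upscale.png")
```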
Thank you for your comment. I think I should have pointed out that the goal was a 1:1 reproduction of the image, not just 'any upscale, and small changes don't matter'. Any changed attribute is an instant fail here. Evidently public opinion is super divided on the matter, so my next tut and workflow are exactly the opposite :) 👽👽
Very cool!! I think most creative users would be interested in the Magnific-style workflow, since that's what makes latent generation fun! Really looking forward to that video!
Thank you for commenting! Yes, this video clearly demonstrated the need for a Krea/Magnific-type clone in Comfy! Working on it as we speak! 👽
You kinda lost me right at the beginning when you claim "highres fix" doesn't work... well, if you keep your KSampler with denoise set to 1.0... yeah, OF COURSE it doesn't work. You are telling your sampler to completely ignore the original image and mostly just make something new from the prompt, and the model can't handle making images twice or four times its standard size (512px for SD1.5, 1024px for SDXL), so it starts repeating subjects. THAT is why it doesn't work. If using it with a latent upscaler before it, set the KSampler to 0.5 (you know, how it's shown in the ComfyUI dev example), and while you will get some changes to the image, it won't be anything close to that mess you showed in your example. For closer resemblance to the original image, use an upscaler model and a KSampler with values under 0.3.
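(A minimal sketch of the two-pass hires fix being described, in diffusers terms - diffusers has no drop-in node-style latent upscale, so this version upscales the decoded image instead, but the denoise logic is the same; img2img's `strength` plays the role of the KSampler's denoise, and the model id and prompt are placeholders:)

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# Reuse the same weights for the refinement pass.
refine = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")

prompt = "a knight in a cathedral, dramatic light"
img = base(prompt, height=512, width=512).images[0]      # pass 1: native res
img = img.resize((1024, 1024))                           # 2x upscale
# Pass 2 at ~0.5 denoise refines the upscale instead of re-inventing it.
img = refine(prompt, image=img, strength=0.5).images[0]
img.save("hiresfix_2x.png")
```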
Thank you for taking the time to comment! I understand your reaction; I simply wanted to introduce (part of) the audience to the concept. We're very lucky our little community is growing fast, and we're seeing a lot of people from VFX/CGI and photography use ComfyUI - that's great! It also brings in an audience that doesn't want the image to be changed - at all. For example, in one of my videos I pointed out that even batch generation can introduce some unwanted noise (not seed randomization, but random noise from the samplers not found in A1111, even with --deterministic enabled). This may not look like a big deal - in fact it could be a 'wanted' feature to some, but not to all. Take a quick look at this discussion on the Topaz forums, where members are up in arms about something as innocuous as introducing LDSR to the product: community.topazlabs.com/t/hope-topaz-add-ldsr-stable-diffusion-img2img-upscale-model-concept-look-alike/40435. I have nothing 'against' it if that's the direction someone wants to take, but I also wouldn't want people to end up frustrated with the tool. In addition, I'll publish a 'Magnific/Krea' clone workflow, since I know these can be fun but quite costly on cloud services. I hope this answers your concern. Thank you! 👽
Thanks for pointing that out. I thought I was being dense and missing something about latent upscale. It's certainly not worthless: a small 1.25x first-pass latent upscale at 0.5 denoise can be very helpful at "automagically" fixing small detail flaws (eyes, fingers, jewelry, etc.) before upscaling in pixel space, and can even sometimes "restore" faces without needing that extra dedicated node/step.
I don't use highres fix either. I use img2img in Auto1111, put denoising strength at 0.3-0.4, then keep dragging the image back over until enough details are added. The only problem is that I have to empty the prompt, because the image can get too washed out.
Amazing video! Thanks for all the work you put into it. Looking forward to seeing how you tackle recreating what's happening in Magnific and Krea upscale-and-enhance in a future video. I can get close with Ultimate SD, but to really add specific details across an image requires prompt control on an almost per-tile basis, particularly for complex scenes with a large variety of subjects, like a landscape for instance. I pay $99 a month for Magnific and use it professionally. I have a love-hate relationship with it. The more I try to emulate it in Comfy, the more I understand how taxing it is on my GPU (4090) and where the server cost goes for processing, especially when you get above 4K.
Absolutely, I have a Krea license too. It's important to set expectations right given these companies have multi-million-dollar budgets - you're entirely correct! Thank you!
Thanks, I am interested in your Topaz settings, because I started to use ComfyUI instead of Topaz for upscaling existing low-resolution old photos (to train a LoRA). With Topaz's face restore option, faces come out totally smooth, like everyone is twenty years old, and the consistency between inside and outside the mask wasn't good. I tried to tune the values, but I wasn't very successful. Artifacts are killing the LoRA training.
Yeah, that's a good point. If you don't have a reference you can't train a LoRA, and if you can't train a LoRA you can't rebuild the face. It's a catch-22. I've been thinking about using multiple photos, passing them through SUPIR (just the face), then creating a composite to FaceID or faceswap onto the upscaled/fixed version. But... yeah, it's a super niche use case. I feel your pain :(
Thanks for your work here, but I have to say I'm surprised at the poor quality of the results you both start with and end with. I haven't seen these kinds of results for quite some time, even using the XL Turbo models. The iterative and noise-injection upscaling I now use have excellent results, and I'm simply following other people's tutorials, both in ComfyUI and Forge.
As mentioned in another comment, I think I should have pointed out that the goal was a 1:1 reproduction of the image, not just 'any upscale, and small changes don't matter'. Any changed attribute is an instant fail here. Evidently public opinion is super divided on the matter, so my next tut and workflow are exactly the opposite :) 👽👽
Hi @stephantual, I agree that an upscaler means different things to different people. For photographers, I'm guessing that content, composition and colour have to remain true to the original. However, I think it is desirable even in photographic work that detail is added, to get that true upscale benefit. So long as context-aware additions are used, we should be okay - for example, eyelashes on a model: we don't just want the lashes to be sharper, we want them to be better defined; same goes for pores on the skin. I imagine this should now be possible - after all, many AI models have in their latent knowledge all the close-up detail to render an accurate representation. Maybe Topaz is the one specialising in this area, but I hope others in the open source community follow suit. (Something similar was attempted many years ago with fractal generation, but that was definitely not context-aware and tended to get things wrong more often than right.) I look forward to your future videos. Thanks.
I really needed that hires fix part 😂 finally I’m ready to move on. It can still give amazing results with a bit more trial and error but we have better options now
This is a great upscaler comparison! I can't even imagine how much time it took. There is one more scenario I have seen give better results than the latent upscaler: first, scale the VAE-decoded image (not the latent) with an AI upscaler (OpenModelDB), re-encode the upscaled image as a latent, and then use it for a 2nd KSampler pass with low denoise (say 0.35)... very slow, but gives good (although slightly changed) results.
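(Roughly, in diffusers terms - the Lanczos resize below is a runnable stand-in for whatever OpenModelDB upscaler you'd actually load, and the prompt is a placeholder. The img2img pipeline does the re-encode to latents internally before the low-denoise second pass:)

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

img = Image.open("decoded_512.png").convert("RGB")

# Stand-in for an AI upscaler from OpenModelDB; swap in an ESRGAN runner here.
img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

# img2img re-encodes the pixels to a latent, then runs the low-denoise
# second pass the commenter describes (strength 0.35 ~= denoise 0.35).
out = pipe(prompt="same prompt as the original generation",
           image=img, strength=0.35).images[0]
out.save("upscaled_refined.png")
```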
Yup, that works too. Things move so fast - today we got SUPIR v2, and last week SDXL Lightning upscales (which I used in the 'horror film' video). The disadvantage of YT is that I can't update videos, I can just upload more... so the next one is already in the pipeline! Thanks for the comment and see you in the next one! 👽👍
I've got a video on both - in fact, now the hot new thing when using SR for upscale is "SUPIR" - check it out :) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Q9y-7Nwj2ic.html 👽. PS: the CCSR & SUPIR nodes are both wrappers developed by Kijai, you can't go wrong :)
Your method of hi-res fix is strange (a 4x upscale at 1.0 denoise will yield bad results). Latent hi-res fix in general is bad; what's called non-latent hi-res fix is much better: use an image-space upscale model at 2x, then run the upscaled image through a KSampler at 0.4-0.5 denoise. 0.4-0.5 is ideal, as beyond 0.5 it begins to change composition. Doing hi-res fix at 1.0 denoise would be the same as creating a new generation at a much higher resolution than the model was trained on, which will of course result in cloned bodies everywhere.