ComfyUI: Scaling-UP Image Restoration, SUPIR (Workflow Tutorial) 

ControlAltAI · 11K subscribers · 13K views

This tutorial focuses on SUPIR for ComfyUI: core concepts and the upscaling techniques used with SUPIR. Image restoration, enhancement, and some mixed techniques are used with the workflow to achieve the desired results.
------------------------
JSON File (YouTube Membership): / @controlaltai
SUPIR ComfyUI: github.com/kijai/ComfyUI-SUPIR
SUPIR GitHub: github.com/Fanghua-Yu/SUPIR
SUPIR Model Downloads: huggingface.co/camenduru/SUPI...
Gemini Pro API: aistudio.google.com/app/apikey
------------------------
TimeStamps:
0:00 Intro.
00:51 Requirements.
03:40 SUPIR Nodes.
17:02 Correct Image Resolution.
24:31 Understanding Tiling.
29:57 Workflow Organization.
33:33 Workflow Summary.
36:26 Image Restoration & Upscaling.
40:12 Image Enhancement & Upscaling.
44:26 Mixing Techniques.

Category: Hobbies

Published: 17 May 2024

Comments: 117
@controlaltai · 26 days ago
@21:29 Slight oversight: connect the Width and Height of the downscale to the Image Resize at the bottom (the divisible-by-32 group), NOT to the original. Thanks to user "dr.gurn420" for catching this oversight. It only matters when downscaling a non-32-divisible image; it has no effect when using the rinse-and-repeat (upscale-downscale) method shown in the video.
@KQforever · 14 days ago
Do you mean from the "Divisible by 64" group? I'm reading what you wrote here as exactly what is already happening in the video.
@KQforever · 14 days ago
Additionally, are you supposed to connect the image output from the image crop in the "divisible by 64" group to the image resize input in the "upscale factor" group, or are you supposed to use the original image as seen in the video? I don't understand why you wouldn't always use the cropped image, especially if your upscale factor group is already changing the size to the cropped image's height and width.
@controlaltai · 14 days ago
Please check the video further. All output from the image resize is connected to the switch, which passes it on to the upscale. If you are connecting directly, connect from the bottom group to the upscale factor.
@controlaltai · 11 days ago
@KQforever Yes, the divisible-by-64 group (as per that section of the video). When making the video I was still testing: it was 64 at first, then in between I switched to 32, and I finally went with 32. If it works with 32, it will work with 64.
@yasin6904 · 18 days ago
Great tutorial. I appreciate the fact that you didn't just dictate what needs to be done but explained as many relevant parameters as possible, going into the "why" of them.
@controlaltai · 18 days ago
Thank you!
@RodrigoNishino · 1 month ago
Very cool to see your workflow and learn about new nodes
@moviecartoonworld4459 · 1 month ago
It was a very long video, but the class hooked me step by step. I think I'll be able to try many things thanks to this kind, detailed explanation. Thank you!😍
@ysy69 · 1 month ago
This is great. Thank you!
@Martin-bx1et · 1 month ago
I think this is probably the most useful ComfyUI tutorial that I have ever watched (and I have watched tons). I am gathering together a suite of workflows that let me blend 3D generated product designs with AI actors and backgrounds and this workflow allows me to easily add a convincing level of blended detail. Most of all I was able to understand what I am doing thanks to your efforts.
@user-pn4jy7lm5z · 19 days ago
Thanks for all the effort you put into explaining every step!
@anduvo · 1 month ago
Awesome tutorial, very detailed, many thanks for explaining all the parameters, that's very valuable!
@controlaltai · 1 month ago
Glad it was helpful!
@ErikWerlin · 17 days ago
This video is awesome. I'm brand new to all of this but hoping to be able to use this to upscale my landscape photography for larger print sizes. Hopefully I'll be able to figure all of this out.
@user-yb5es8qm3k · 1 month ago
This is one of the most difficult tutorials I have ever watched; I almost didn't understand it. I think I will re-watch it several times, hoping to finally understand. Thank you very much.
@user-lj3qe7oz2i · 1 month ago
www.youtube.com/@stephantual
@vivigomez5960 · 1 month ago
I agree with you. This is God level
@fonodelivery · 1 month ago
Great work! I have a problem with the Gemini module: I have the API key and put it in the JSON file as you explained, but I get an error.
@controlaltai · 1 month ago
Thanks! Gemini will give an error on any photo with a face; they have blocked photos of people. Even if you upload to Gemini Pro in the browser, it gives an error. Everything else works.
@user-rk3wy7bz8h · 13 days ago
Great tutorial, I'm thankful. Somehow the 'image resize' node I have doesn't have 'multiple_of'; it's missing, I don't know why.
@controlaltai · 13 days ago
Thank you! That Image Resize is from the Comfy Essentials custom node. There are multiple image-resize nodes; look for the one with the wrench icon.
@dr.gurn420 · 26 days ago
hey - great video!
@controlaltai · 26 days ago
Hey, thanks! Okay, you caught the oversight accurately. Let me explain what I was thinking when I did that. If the image is divisible by 32, the crop does nothing; if not, it crops the image to the nearest number divisible by 32. For the downscale, you are correct: the image resolution comes from the original, but the image source comes from the crop. The correct way would be to connect the cropped width and height to the downscale input instead of the original.

Why it doesn't matter in practice: if you have a high-res image, we are not going to use SUPIR for restoration; the image is already high-res and proper. The downscale here is a rinse-and-repeat method, meaning you take a low-res image and correct it by making it divisible by 32 (which is done during upscale), then use that to downscale and upscale again (rinse and repeat). Divisibility by 32 is fixed during the first upscale pass. Therefore the downscale calculation will always be correct in the second and subsequent passes, whether the width and height come from the crop output or the original.

This is an oversight on my side, as the downscale was never meant for directly inputting a 4K image, for example, and downscaling to upscale. It was meant for low-res images upscaled through multiple passes, where the first pass corrects divisibility by 32. You are the first person to catch this correctly. To answer your query: the 100% correct way is to take the downscale's width and height values from the cropped output. I noticed the oversight but did not correct it, since I only use the downscale for rinse and repeat, so the resolution is always divisible by 32 no matter whether it comes from the original or the crop output. (By crop output I mean the width and height from the Image Resize node in the bottom group.) I have pinned a message mentioning the correction for the oversight.
@gohan2091 · 1 month ago
As a newbie to ComfyUI, I'm overwhelmed both by the complexity and the amazing results. Do I understand it all? No, but I would love to play around with it. I have 64 GB RAM and a 4090 GPU with 24 GB VRAM. To download your workflow, do I have to be a paid subscriber?
@controlaltai · 1 month ago
Hi, yes for the JSON; however, nothing about the tutorial is hidden behind a paywall. I always showcase everything in the video, showing how to build the entire workflow. But I do appreciate any support given. 👍🙏
@gohan2091 · 1 month ago
@controlaltai I see. How does your workflow differ from other SUPIR workflows available? Do they have the divisible-by-8 and -32 nodes etc. that you have? Are they dynamic like yours?
@controlaltai · 1 month ago
@gohan2091 You can check out the SUPIR workflow by the dev himself, who implemented this; you will find it under the custom node's SUPIR examples folder. The SUPIR upscale group in the video is what the basic workflow consists of; everything else is additional and done by me. The sampler settings used are also very different. The workflow will give consistent, accurate results with these settings. The only limiting factor I see is hardware: if I had better hardware I would just use full tiling for every image and be done with it. The workflow has dynamic automation for practical use. For example, you don't want to calculate every image manually; the different tile settings are a unique approach; there is the ability to switch things on the fly, and so on. I cannot speak about other SUPIR workflows; I have only seen the default given by the dev. Edit: divisibility by 8 is taken care of by Comfy, not 32. The issue comes when converting to both 8 and 32. You can always crop manually outside Comfy. Images generated using Stable Diffusion won't have any compatibility issues.
@gohan2091 · 1 month ago
@controlaltai We appear to have the same hardware. It's the best we can get at the moment (without spending tens of thousands). I must admit I don't fully understand the three different modes, but I will re-watch your video later. I am using the dev workflow now; I cannot even figure out how to increase the size, but it's definitely improving quality. I am having to guess settings, though, like s_churn, s_noise, DPMPP eta, etc. I guess with your workflow I would still have to guess the settings?
@controlaltai · 1 month ago
@gohan2091 I do explain the settings in detail; it's complicated. If you want consistent, highly accurate results, try the settings I used. Changes to the settings mostly result in creative output, which, personally, I think should be avoided for upscaling. But yes, if you want some color change or to add more details, etc., you have to play around with it. The tiles matter in terms of VRAM usage. Typically, close to original will always give the best quality, but above 2K it's very difficult on 24 GB VRAM. Secondly, on that setting it takes more time; if doing batches, a middle ground might be a good balance of speed and quality. It's just good to have flexibility in the workflow.
@Gmlt3000 · 1 month ago
TNX! Great tutorial, but there is no way to get the JSON file, because YouTube membership is not allowed in certain countries(. Any chance to get the JSON from your tutorial?
@controlaltai · 1 month ago
I show everything in the tutorial video. You can just follow that and make the workflow yourself.
@Gmlt3000 · 1 month ago
@controlaltai Yup, TNX again)
@zGenMedia · 1 month ago
@Gmlt3000 Definitely unfollowed. People are taking this "workflow" too far, as if they are the people who MADE the software. Peace.
@thisguy9279 · 1 month ago
What AI voice do you use?
@andrewq7125 · 29 days ago
Might it be better to upscale or downscale the image first, and then resize it to a multiple of 32?
@controlaltai · 28 days ago
The multiple-of-32 step applies only if the image is not already a multiple of 32. Technically it doesn't matter; you want the image to be as close to the original as possible. There is no image upscaling before conversion, otherwise SUPIR gives an error. Downscaling is case by case. Once the image is a multiple of 32, the entire multiple-of-32 process is skipped and nothing happens to the image's resolution.
@DeadDude4 · 24 days ago
I agree with you, @andrewq7125. This workflow assumes you don't start with a downscale; as long as you start with an upscale, it works fine. I have a partially blurry image I wanted to try this workflow on, sized 3168 (w) by 4752 (h). The workflow cropped the image down to 3168 by 3168, but when downscaled 4x it becomes 792 by 792, which cannot be divided by 32. If I changed the crop factor to divisible by 128 instead of 32, it worked fine. It just doesn't make sense to have to manually calculate those things in advance, considering how automated this workflow is, so it's best to change the workflow to start with the downscale at the very least, bypass it if not downscaling, and then crop and upscale. That being said, it's a good tutorial; as a beginner, there's a lot I've learned from it.
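For reference, the arithmetic in this comment can be checked directly (numbers taken from the comment itself):

```python
# DeadDude4's image: 3168 x 4752, cropped square to 3168 x 3168.
assert 3168 % 32 == 0           # 3168 = 32 * 99, so it passes the divisible-by-32 crop

# A 4x downscale then breaks divisibility:
assert 3168 // 4 == 792
assert 792 % 32 == 24           # 792 is not a multiple of 32

# Cropping to a multiple of 128 first survives a 4x downscale, since 128 / 4 = 32:
w128 = 3168 - (3168 % 128)      # 3072
assert (w128 // 4) % 32 == 0    # 768 is a multiple of 32
```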
@controlaltai · 24 days ago
Hi, I saw the oversight some days back, as pointed out by another user, and provided a solution in a pinned comment. If you start with a downscale from the beginning, you just have to make one change: "@21:29 Slight oversight. Connect the Width and Height of the downscale to the Image Resize at the bottom (divisible-by-32 group) and NOT THE ORIGINAL. Thanks to user "dr.gurn420" for correctly catching this oversight. It would only matter when downscaling a non-32-divisible image, and has no effect when using the rinse-and-repeat (upscale-downscale) method shown in the video." With this change, the workflow is fully automated even in the downscale-first case.
@godorox · 1 month ago
SUPIT and Gemini for ComfyUI are not available in ComfyUI Manager (there's no install button). I tried to install using git clone but without success. Can someone help?
@controlaltai · 1 month ago
It's SUPIR; try searching again. I can find it in Manager. What environment are you running Comfy in?
@godorox · 1 month ago
@controlaltai Sorry, typing error. Now it works! Python is 3.11 (in the zip file ComfyUI is cu121). Congratulations on your explanation and patience.
@controlaltai · 1 month ago
Great and Thank You!
@tailongjin-yx3ki · 1 month ago
Hi, after duplicating your whole workflow, I can use the restore DPM/EDM sampler, but when I use the tiled restore DPM/EDM sampler it causes the error 'The size of tensor a (48) must match the size of tensor b (128) at non-singleton dimension 3'. How can I fix this? Many thanks.
@controlaltai · 1 month ago
Hi, when using tiled restore, ensure that the tile size is at the default; this won't cause the error. Do not use the standard or full tile size with tiled restore; for some images the number doesn't divide correctly.
@tailongjin-yx3ki · 1 month ago
@controlaltai Cool 😀, but it causes another error: 'Error occurred when executing ColorMatch: stack expects a non-empty TensorList' 🙏
@controlaltai · 1 month ago
Do one thing: close and restart the command prompt and run tiles fresh. Make sure the ColorMatch node's source is correct. I never encountered that color error. Use the default one. Let me know.
@tailongjin-yx3ki · 1 month ago
@controlaltai I tried several times; it occurs occasionally, don't know why.
@controlaltai · 1 month ago
Well, the only way to know for sure is to check the workflow and the image. Send it to me via email and I'll have a look: mail @ controlaltai . com (without spaces).
@user-nm4hz3mi6t · 1 day ago
Hello, I would like to ask whether it is possible to run this workflow with a 6 GB graphics card and 32 GB of memory.
@controlaltai · 1 day ago
Hi, no, this workflow won't work with 6 GB VRAM. I recommend 12 to 16 GB at least.
@user-nm4hz3mi6t · 1 day ago
@controlaltai Yes, I tried this workflow, but it didn't work out, haha... I would also like to ask if you have considered developing a workflow that involves sketching, coloring, and assigning material textures to objects. From a product design perspective, this kind of workflow aligns with the product design process and is highly practical, but I am unsure whether current technology supports it well enough to achieve ideal results. If it can be implemented, I believe many people would enjoy it.
@giusparsifal · 11 hours ago
Hello and thanks for the video! I'm a newbie to ComfyUI and I am looking for an img2img workflow that lets me "simply" improve realistic skin detail on faces (texture, pores, etc.). Your workflow looks great but is too hard for me :) (my fault!) Do you think I can find what I'm looking for? Thanks again!
@controlaltai · 11 hours ago
Hi, yeah, don't use this workflow; this one is complicated and it's not image-to-image. Check out this Ultimate SD Upscale tutorial, which lets you do image-to-image in Comfy easily: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NDw9AjjW1t4.html
@giusparsifal · 10 hours ago
@controlaltai Thanks for replying! I saw the tutorial and indeed it is easy, but my problem remains: the generated image is still unrealistic (OK, I know from the tutorial you used an art image). Maybe I have to look for some upscaler that gives a more realistic appearance to the skin? If it exists :) Thanks!
@controlaltai · 10 hours ago
@giusparsifal Skin is different, and realistic skin is very different. Along with the upscale, a specific checkpoint would be needed to add details only to the skin, plus some LoRAs to make the skin natural; AI has a habit of making skin look like plastic. Civitai would be a good resource for the checkpoint and LoRA. Juggernaut and RealVisXL are two very good checkpoints for skin. Check the trained settings for steps and CFG. Hope this helps.
@giusparsifal · 10 hours ago
@controlaltai Yes, I generally use epiCRealism as a checkpoint too; the problem, for me, is creating a workflow to get what I want... Thanks anyway, you're very kind!
@sudabadri7051 · 1 month ago
Supir video 😂
@FgAg-xl7yg · 22 days ago
Sorry for the off-topic question, but I was hoping to rely on your knowledge and expertise. Do you know any XL models that have a strong understanding of various backgrounds and environments, such as the interiors of public restrooms, gyms, or trains?
@controlaltai · 22 days ago
Erm, sorry, I am not aware of any such specifically trained models. However, I am sure you will find something at Civitai. The only way is to download and test with your prompts, as they don't say what dataset the model is trained on.
@FgAg-xl7yg · 22 days ago
@controlaltai 👍 So it seems the best approach is to experiment with different models and find the ones that work well. In my testing even SD3 doesn't seem to have a perfect grasp of room structures, so it might be challenging with XL, but I'll see what I can find. Thanks!
@sergetheijspartner2005 · 23 days ago
This was a lot like that math class meme: starts out simple, you get distracted for a second, and suddenly the whole board is full. Luckily this is a video: pause, rewind, and replay. Phew, that was a lot. What if your old images are discolored or faded, like an old Polaroid that turned very red over the years?
@controlaltai · 23 days ago
For old image discoloration it's a bit more complicated. The first thing is to restore it to some degree, to 1K. Then you have to use another workflow, probably the ControlNet recolor one: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-hTSt0yCY-oE.htmlsi=bF4qIFUg3K7lnMrk The simplest trick would be to make the image black and white and then add color again. Finally, use SUPIR again to upscale to 2K or higher. For cracks and other damage you can use Fooocus inpainting as shown here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-wEd1wPlCBaQ.htmlsi=yQJibUveCYe_oaFl
@MrXotab111 · 20 days ago
Good afternoon. How can I find the JSON? The link does not work, just a white screen. Please help! It is very necessary (((
@controlaltai · 20 days ago
Hello, what link doesn't work? The JSON is for channel members. And what does "white screen" mean? Please elaborate.
@MrXotab111 · 20 days ago
@controlaltai I joined the YouTube community, but nothing happens!!!
@MrXotab111 · 20 days ago
@controlaltai A blank screen, dots and a cross... empty...
@controlaltai · 20 days ago
Well, I cannot help with that. Check with YouTube support whether memberships are supported in your country. As far as I can see, you are not a member, maybe a subscriber. Contact YouTube support for further help.
@ParrotfishSand · 1 month ago
I don't know what I did wrong, but I get very poor results: a nearly imperceptible difference even after upscale, copy, paste, and upscale from 512x512 all the way up to 9Kx9K using all of your suggestions. I tried different images as well, with very little change. I noticed that some parts of the video were edited out, where things changed suddenly without explanation. And yes, I did go back and slow the video down to try to understand better!
@controlaltai · 1 month ago
This depends on your image; the sudden cuts are to streamline the video, nothing regarding the workflow is actually cut. When you are using a 512 image, is the image clear, or of poor, unrecognizable quality? That determines the upscale factor. If the image is clear, try upscaling from 512 to a custom 1500, check, then proceed. An imperceptible difference comes with very clear images using 1x upscale. Unless I see your image I cannot tell you what is going wrong. You might want to send the image to me via email and I will have a go with it and reply back with the results: (mail @ controlaltai . com) without spaces.
@ParrotfishSand · 1 month ago
@controlaltai I sent you 2 of my test images: 1 low-res photo and 1 512x512 AI-generated.
@controlaltai · 1 month ago
Yup received and replied via email.
@ParrotfishSand · 1 month ago
@@controlaltai thanks again 😊👍
@video-sketches · 1 month ago
Hello, friend. Thanks for such a detailed video; this is just a masterpiece. But there is a confusing moment: we set out to upscale, yet you have a group for reducing the image (downscale). I reviewed the video several times but still don't understand the point of this group. Why do we reduce the image? What is its role? What would happen if you removed it?
@controlaltai · 1 month ago
Hi, please give the exact name of the group as shown in the video. Is it downscale, or are you talking about 4x upscale?
@video-sketches · 1 month ago
@controlaltai Sorry, friend, that's Google auto-translate. I'm from Russia and I use a translator. I stand corrected. I'm interested specifically in the image-reduction group, timecode 34:13.
@controlaltai · 1 month ago
@video-sketches Hi, check 44:31 for why the downscale is there. Basically, you can upscale to add details, downscale, then upscale again to get the desired results.
@artemlt · 22 days ago
Hey, how can I get the JSON file? Thanks!
@controlaltai · 22 days ago
Hi, check the members post in the community tab. You should have access to it.
@artemlt · 22 days ago
@@controlaltai Got it Gaurav Seth! Thanks for your work!
@user-yb5es8qm3k · 1 month ago
How do you fix a damaged photo?
@controlaltai · 1 month ago
That's complicated to explain in a comment. I do explain it in the video. If you have any specific questions, feel free to ask.
@supernielsen1223 · 1 month ago
@controlaltai I would absolutely love it if you could make a tutorial on how to remove bends, scratches and such, if possible 🤩
@controlaltai · 1 month ago
A tutorial isn't warranted for that, as it's dependent on the image. But basically, using the Fooocus inpaint technique, masking, and CCSR or SUPIR, bends and scratches can be removed. The workflow depends on the image, hence a tutorial won't matter: what works for one image won't work for any other.
@supernielsen1223 · 1 month ago
@controlaltai OK, I saw it a while ago in a Stable Diffusion Automatic1111 tutorial, though. But they did use inpainting and it actually worked for a couple of pictures. Maybe I should see if I can find that again and try to replicate it in ComfyUI.
@controlaltai · 1 month ago
@supernielsen1223 Check out the refinement method in the YOLO World inpaint/outpaint workflow. I don't have a CCSR workflow, but SUPIR or CCSR should both work for restoring.
@thevoid6756 · 1 month ago
Why not just autocrop the image with a - a mod 64?
@controlaltai · 1 month ago
I don't know what that is, but I'm learning new things every day. If you could elaborate…
@thevoid6756 · 1 month ago
@controlaltai Sorry, I guess % is more common for modulo. So the function above should be "a - (a % 64)", where a is your input resolution; it gives the nearest resolution divisible by 64 (or 32, etc., as needed). To elaborate, assuming an image with a width of 550px, the function above would be 550 - (550 % 64) = 550 - 38 = 512. Regardless, thanks for the elaborate tutorial.
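As a concrete sketch of the a - (a % m) crop described in this comment (the function name is mine, for illustration only):

```python
def amod_crop(a, m=64):
    """Nearest resolution <= a that is divisible by m, via a - (a % m)."""
    return a - (a % m)

print(amod_crop(550, 64))  # 512, matching the example in the comment
print(amod_crop(550, 32))  # 544
print(amod_crop(512, 64))  # 512: already divisible, so unchanged
```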
@controlaltai · 1 month ago
​@@thevoid6756 Thanks for the explanation. Yup I am aware of this.
@DJVARAO · 1 month ago
Well, it didn't work for me. Some images got sharp, but most of them got noisy.
@controlaltai · 1 month ago
What's the upscale factor? For good-enough (very clear) images, 2x upscale will work; 1x spoils it, 1.5x is a little better...
@tiowillsan · 28 days ago
Error 400 - Invalid API :(
@tiowillsan · 28 days ago
I refreshed it and it worked
@ulkiora777 · 15 days ago
How do I download the JSON? I am a member.
@controlaltai · 14 days ago
The JSON is only made available to paid channel members who monetarily support the channel. You can, however, just watch the entire tutorial and create the workflow; everything is shown in the video. We make it a point with every tutorial that the entire workflow can be created for free, and not to hide the creation process behind the paywall.
@yasin6904 · 18 days ago
Resolved
@controlaltai · 18 days ago
There is no image to seed node.
@yasin6904 · 18 days ago
@@controlaltai my bad, I've edited the comment above!
@pfbeast · 1 month ago
How to use "Instruct Pix2Pix" & "SDXS" in ComfyUI?
@controlaltai · 1 month ago
The tutorial for Pix2Pix will be out next week... :)
@kishirisu1268 · 1 day ago
"Restoration" here means they just do img2img, and the final image has nothing in common with the old image. But who cares on YouTube, where viewers have a -10 IQ level.
@controlaltai · 17 hours ago
Please don't say people on YouTube have a 10 IQ level; most of the viewers are smart business professionals. On the technical level I don't want to get into an argument, because otherwise such a statement would not have been made in the first place. Technically, every pixel has data, and if you read the research paper you would know how the model is trained and what actually happens. So yes, details are added to the police officer; if you look at the hat and everything else, the whole image is indeed restored from broken pixels. It did not put a party hat or a different hat design on him. Restoration means taking even 5% of the pixels and building on that. If the image is that broken, even a human wouldn't be able to restore from that level of data. I never comment on opinions, but channel viewers should not be insulted for no reason. Thank you for your opinion, and of course you are entitled to your views regarding the AI tech. It's a free world.
@sudabadri7051 · 15 hours ago
My man just said that about watching YouTube... while watching YouTube 😂 Bro, what does that make your IQ? 5?
@meadow-maker · 1 month ago
Those initial images of the policeman were awful. If you're going to grab attention, you need to start with something good. I doubt that policeman actually looked anything like the final image.
@supernielsen1223 · 1 month ago
The thing is... people request these kinds of things. Often another family member will tell you how close it is. It could be way off; it could be pretty close. No way to tell, really.
@bxl2012 · 12 days ago
@supernielsen1223 That's exactly why it was a terrible showcase. I am interested in a process to restore some detail, not to completely reimagine a photo. After seeing that horrible example at the start, I stopped watching this video. Waste of time.
@Aurora12488 · 2 days ago
@supernielsen1223 It's very easy to tell with the policeman, lol. The original image is of a dorky, bewildered English-looking guy; the after image is of a less bewildered, somewhat Middle Eastern man. It's not just hallucinating new detail; it completely changed the obvious remaining detail from the original image.
@zintoki8211 · 1 month ago
Please don't use the word "restore" when you generate new data from nothing.
@amorgan5844 · 29 days ago
Grandpa is now an AI model.
@n0f4ke74 · 12 days ago
Yup, it's a replacement generator with a guide.
@user-ek3mt7rm3n · 1 month ago
Am I correct in understanding that this workflow does not use the Juggernaut model? I saw different tutorials and they used the Juggernaut SDXL model everywhere.
@controlaltai · 1 month ago
Yes, it uses RealVis_XL version 4 (baked VAE), the normal one.
@paultsoro3104 · 1 month ago
Specs: 4070 Ti 12 GB, 32 GB RAM, i9. I got this error when running it: "Error occurred when executing SUPIR_first_stage: Allocation on device 0 would exceed allowed memory. (out of memory) Currently allocated: 2.47 GiB. Requested: 4.39 GiB. Device limit: 11.99 GiB. Free (according to CUDA): 0 bytes. PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB"
@controlaltai · 1 month ago
Switch to the default tile, use a custom resolution, and don't upscale 2x. Check the original image size; if it's higher than 1024, downscale to 1024. It's one of the above issues: low VRAM/system RAM.
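The "downscale to 1024 first" advice above can be sketched as a small helper (my own illustration, assuming the longer side is what should be capped; the workflow's divisible-by-32 crop still applies afterwards):

```python
def downscale_to_max_side(w, h, max_side=1024):
    """Scale so the longer side is at most max_side, preserving aspect ratio."""
    longest = max(w, h)
    if longest <= max_side:
        return w, h                    # already small enough; leave untouched
    scale = max_side / longest
    return round(w * scale), round(h * scale)

print(downscale_to_max_side(2048, 1536))  # (1024, 768)
print(downscale_to_max_side(800, 600))    # (800, 600), unchanged
```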