
ComfyUI Fundamentals - Masking - Inpainting 

Ferniclestix · 4.9K subscribers · 49K views

A series of tutorials about fundamental ComfyUI skills.
This tutorial covers masking, inpainting and image manipulation.
Discord:
Join the community, friendly people, advice and even 1 on 1 tutoring is available.
/ discord
Workflow: drive.google.com/file/d/19ExS...

Published: 4 Aug 2023

Comments: 149
@human-error · 6 months ago
I love the way you explain: you start from the basics and add complexity as you go into detail. The neatness of the nodes is also noteworthy. THANK YOU!
@Puckerization · 11 months ago
Excellent tutorial, thank you! I've learned a lot from this series. "Set Latent Noise Mask" is a revelation. I would never have thought to use that instead of the default.
@arnaudcaplier7909 · 9 months ago
The best tutorial on this topic. Bravo and thank you!
@marjolein_pas · 10 months ago
Thanks again. Your ComfyUI tutorial series is very informative and valuable.
@randomscandinavian6094 · 5 months ago
This is the one! I've seen a few tutorials here and there and read a lot of Reddit posts that didn't make much sense, but this is solid! Thank you!
@ferniclestix · 5 months ago
Happy to help :D
@shiftyjesusfish · 10 months ago
This was literally the best tutorial. Thank you!
@ted328 · 2 months ago
Another life-saver! Thanks so much!
@Dingle.Donger · 8 months ago
This is exactly what I was looking for. Thank you!
@linkmanplays · 6 months ago
You are a life saver. I kept trying and trying and messing around with the mask, and it turns out what I was missing was using a second Load Image for the mask. That set me back so much for so long.
@ferniclestix · 6 months ago
Happy to help! Once you master masking you can do almost anything in ComfyUI. There's a cool technique I recently found that's not in this tutorial: if you make an image out of three colors (red, green and blue), you can get three different masks from a single image using the Image To Mask node and selecting the appropriate channel. :D
@crobinso2010 · 11 months ago
Wow, I would never have figured that out myself. Thanks!
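The three-color trick described above can be sketched outside ComfyUI with NumPy. This is only a rough illustration of what the Image To Mask node does when you pick a channel, not the node's actual code:

```python
import numpy as np

# A tiny "image" painted in pure red, green and blue regions.
img = np.array([
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 0, 0]],
], dtype=np.uint8)

# One mask per channel, scaled to 0.0-1.0 the way ComfyUI masks are.
red_mask = img[..., 0] / 255.0
green_mask = img[..., 1] / 255.0
blue_mask = img[..., 2] / 255.0
```

Painting the regions in pure primaries keeps the three masks from overlapping.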
@RyanAdorben · 2 months ago
Wow, great video, very helpful!
@travotravo6190 · 6 months ago
Despite it being a nightmare to install the nodes today for some reason (it doesn't want to do it automatically through ComfyUI Manager, and the written instructions aren't great), once I finally got it loading, this is absolutely amazing and trounces other ways of masking, positioning, and inpainting. Thanks a lot for the heads-up about these very cool nodes!
@Ranoka · 4 months ago
Thanks for the video, you really helped me understand the different approaches and the pros/cons! I'm going to watch the Masquerade video now.
@ferniclestix · 4 months ago
Glad it was helpful :D
@AnnisNaeemOfficial · 4 months ago
Thank you for this. This was a great tutorial.
@0A01amir · 10 months ago
Very well done, thank you.
@AIAngelGallery · 11 months ago
Wow, thanks for the correct method for inpainting in ComfyUI.
@AL_Xmst · 4 months ago
Thank you! Very useful! Also thanks for including the JSON file!
@carstenli · 11 months ago
Great tutorial, thank you.
@TheArghnono · 11 months ago
Very useful! Thank you.
@ns_the_one · 4 months ago
Great video. Thanks a lot.
@trobinou47 · 11 months ago
Really useful! Thanks.
@tobiasmuller4840 · 11 months ago
Finally someone who addresses these topics! Thank you! For some reason this inpainting process gives me the same issues you had with VAEEncodeForInpaint. In your example, VAEEncodeForInpaint works surprisingly well anyway. With the Latent Noise Mask I still get inpainted areas where the subject of the new prompt isn't even visible, at least when the new subject is a very different size (like mouse vs. elephant). I feel like I'd have to crop and upscale the masked area first and then put it back into position (with something like the WAS nodes). However, I haven't figured out how to do this with latents only.
@ferniclestix · 11 months ago
With Set Latent Noise Mask, if you lower the denoise amount your result should conform more to the original and becomes more likely to fit within the masked area. If you use VAEEncodeForInpaint, it gets greyer and greyer the lower the denoise. Latents are bad for image-type processes like cropping; it's better to convert to images for that.
@DarkPhantomchannel · 1 day ago
Quick thing I found out: IMAGE inputs/outputs only have R, G, B channels, while MASK only works with alpha (or, at least, has a single channel). So Load Image has two outputs, Image (RGB) and Mask (A); it splits the channels that way. That's why you get an error if you try to use the Load Image output directly as an alpha mask. Following the same reasoning, nodes like Mix Color By Mask take an image as input (not a mask), so we have more freedom. They also only offer r, g, b options and not alpha, because IMAGE data in ComfyUI doesn't carry alpha; if they took a MASK instead, they would need a single-channel mask already processed upstream.
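The split described above can be sketched in NumPy. This is only an illustration of the idea, not ComfyUI's actual Load Image code; in particular, I'm assuming the mask is derived from inverted alpha, so fully transparent pixels end up as the masked (1.0) region:

```python
import numpy as np

# A 1x2 RGBA image: one opaque white pixel, one fully transparent pixel.
rgba = np.array([[[255, 255, 255, 255], [0, 0, 0, 0]]], dtype=np.uint8)

image = rgba[..., :3] / 255.0        # IMAGE output: three channels, no alpha
mask = 1.0 - rgba[..., 3] / 255.0    # MASK output: single channel from alpha
```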
@user-ot6mg1tu3e · 9 months ago
Thank you very much for this tutorial :)
@gb183 · 9 months ago
Many thanks!! 😀
@calvinkao7612 · 6 months ago
The VAE inpaint was screwing me up. Thank you for showing us the right way!
@PRLLC · 11 months ago
At the 17:04 mark, you were about to explain how to paste characters from different renders into one picture. I wish you would have shown examples! If you ever find the time, a demo of that would be great. Thank you!
@ferniclestix · 11 months ago
I'll be making a tutorial on 'compositing' for that process, I think. However, that doesn't help you right now: grab the Masquerade node pack and use Crop By Mask and Paste By Mask; there are other tools that can be used in conjunction with these to direct where masked content gets pasted. Hopefully that's enough to work with for you to figure it out.
@PRLLC · 11 months ago
@ferniclestix Thank you; I'll experiment with that for now! I'm eagerly anticipating the new tutorial. I truly love the content you've been releasing. You're among the few teachers who can be technical without being overly complicated. Kudos for consistently releasing tutorials, especially given the fast pace of updates; it must be challenging to keep up. So, thanks once more! 🤝
17 days ago
thank you
@Chad-xd3vr · 4 months ago
Great tutorial. CLIPSeg looks like a useful node, thank you.
@ferniclestix · 4 months ago
I'll be doing an updated tutorial using something better than CLIPSeg soon :D
@Chad-xd3vr · 4 months ago
@ferniclestix I look forward to it.
@AkshayTravelFilms2 · 8 months ago
Thanks for the workflow.
@ferniclestix · 8 months ago
Enjoy
@Enricii · 11 months ago
That different node to set a latent noise mask is a gem. I wasn't happy with ComfyUI inpainting, but that should improve the results. However, I still think inpainting in A1111 is better.
@ferniclestix · 11 months ago
I'd love it if there were more tools in the ComfyUI inpainting window, like brush blur, transparency and such. I love EasyDiffusion for that reason.
@nickb9342 · 10 months ago
I agree, so I use ComfyUI for faster generation and then A1111 for inpainting.
@Satscape · 10 months ago
Well yes, I made that mistake too, then I set up a ControlNet for inpainting, which didn't work, then I found your video! Thank you; liked and subscribed.
@ferniclestix · 10 months ago
:D Happy to help.
@juanchogarzonmiranda · 9 months ago
Thanks a lot!
@donkeyplay · 11 months ago
I just realized that "pipes" like the kpipeloader I just saw on the Impact Pack custom nodes tutorial page might run your bus a little more efficiently for you. And thanks for these vids, they help a lot!
@ferniclestix · 11 months ago
Yes, Impact Pack's pipes are cool! However, I find using a split-up bus more flexible for my purposes. Plus, it's easier to make a tutorial where people can see everything I'm doing; if I fill my workflow with custom nodes it creates a barrier to learning, so I generally avoid too many custom nodes if I can :) Thanks for the advice, though! There are some amazing nodes in Impact Pack.
@donkeyplay · 11 months ago
@ferniclestix I figured you had a special reason, lol. I'm still very new, obviously, but now I can inpaint in Comfy! Thanks again! 🖌
@trongnguyenquoc2940 · 3 months ago
i love you
@dougmaisner · 10 months ago
Informative!
@EH21UTB · 11 months ago
You should be able to Ctrl+Shift+V and paste with wiring intact for your bus.
@ferniclestix · 11 months ago
That doesn't work on reroute nodes, unless there has been a recent update I'm unaware of.
@ferniclestix · 11 months ago
Ah, you're right, they patched it. Nice!
@VIKclips · 5 months ago
What if I want to edit an image I've already inpainted? How do I do that? For example, the image you have at the end: what if I want to change that one, maybe change the nest and keep the dragon? How can I do this? It doesn't work for me.
@ferniclestix · 5 months ago
You would want to use image load nodes to re-run it, maybe, or add some more samplers and masks; you can pretty much extend the workflow by building it out and connecting the different outputs. It helps if you build the workflow yourself, but basically you can bring in a finished image using Load Image, then do a VAE Encode and send that to your samplers instead of an Empty Latent.
@excido7107 · 7 months ago
Thank you, your tutorials are actually the best. I'm trying to put something into an image but it doesn't seem to be working. I'd love a tutorial on an img2img masking sort of thing, like putting a dragon in my backyard, for example 🤣
@ferniclestix · 7 months ago
I cover img2img masking in the compositing video and the artist inpainting video :D Hope they help.
@excido7107 · 7 months ago
Yes, thank you. I watched it and achieved what I wanted :)
@tuurblaffe · 10 months ago
I love how we went from "here's some text, go figure it out" to planning and customizing the processes used to optimize the process. One could almost say it feels like Factorio from the future, which I love! The idea behind it is already a good base to build on: native Linux support, modularity, and people can mod the software to their own liking. It builds community. Where we were sharing blueprints in Factorio, we're now sharing PNGs to make images. What a wonderful time to be alive!
@ferniclestix · 10 months ago
Only until the corps find a way of ruining it. Screeeeeeee, capitalism, screeee! :P
@olexryba · 8 months ago
This works very nicely! But it is also quite slow for larger images, even when the mask is very small. I suspect that it denoises the full image and then keeps only the masked part. Is there a way to constrain denoising to just the masked area, plus some padding for additional context (like in A1111)? I imagine the FaceDetailer nodes do something like that, because they run much faster with smaller masks.
@ferniclestix · 8 months ago
I think I use the Impact Pack detailer in my "inpainting for artists" video, which denoises just a selected area.
@Commodore_1979 · 5 months ago
Excellent tutorial, thanks! Sadly, the "Image Blend by Mask" node is missing in my ComfyUI. I have searched for it on the web and in the Manager with no luck. Where could I find it? Thanks again!
@ferniclestix · 5 months ago
I'm fairly sure it is a WAS Suite node.
@schrodinger5091 · 4 months ago
I'm trying to figure out how to use the 'Mix Color By Mask' node, but I'm having some trouble locating it. I've searched in the Manager tab but can't find anything with that name. Any guidance on where I can find this node or how to use it would be greatly appreciated!
@ferniclestix · 4 months ago
You would need the Masquerade node pack, which is what this node belongs to, in order to use it.
@w_chadly · 10 months ago
Could you do an in-depth tutorial on CLIPSeg? Every time I use it with Cut By Mask / Paste By Mask, it creates this fuzzy mask that pastes with non-100% opacity, making the thing I cut out with CLIPSeg look "ghostly" when I don't want that, and I'm struggling to figure out how to fix it.
@ferniclestix · 10 months ago
I mean, with CLIPSeg there's not a huge amount to it: put in a keyword, set the sensitivity settings and output a mask. My advice: pull preview nodes from all your CLIPSeg outputs and see if they look unusual, off-colored and so on. It could also be a downstream issue somewhere and not related to CLIPSeg. If you want to get in touch via Reddit, I'll take a look at your workflow.
@mikealbert728 · 7 months ago
Thanks for this. Can you explain the setup in ComfyUI for inpainting models?
@ferniclestix · 7 months ago
Inpainting-model setups really just use the inpainting model in the checkpoint loader. You could use VAE Encode (for Inpainting) or Set Latent Noise Mask; it's really up to you. Generally I find inpainting models less useful than normal ones, but it really depends what I'm doing.
@eucharistenjoyer · 6 months ago
Very instructive video! Does ComfyUI have anything similar to "Inpaint Conditional Mask Strength"?
@ferniclestix · 6 months ago
I'm unfamiliar with what you're referring to. For most things in A1111 there are similar things in ComfyUI; however, ComfyUI's inpainting works differently than A1111's, so they don't really behave the same way.
@RuliKoto · 11 months ago
Great vids you have here, they help a lot, thanks! Just wondering if you could do a video about Roop / face swap in ComfyUI.
@ferniclestix · 11 months ago
This will be the topic of my next video, although I haven't tried Roop specifically, so I'll have to do some more research. It'll probably take me a couple of days.
@ferniclestix · 11 months ago
Annnnd just finished it: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-FShlpMxbU0E.html
@RuliKoto · 11 months ago
Been watching it, thank you very much, really appreciate it. Will wait for more great videos 🤟🤟🤟
@martinniesyto1656 · 7 months ago
Very helpful, thanks! Unfortunately, I cannot find the CLIPSeg node in the search window after installing it with the Manager, which shows it as installed. Do you have any clue why?
@ferniclestix · 7 months ago
Unfortunately, I cannot help with debugging nodes. My advice: head to the GitHub page of the node in question and file a bug report. I would love to help, but there are simply too many different possible ComfyUI setups, and as a result I'm not really able to devote time to this on top of my work and tutorials. About CLIPSeg, though: like some other nodes, it relies on external modules to do its magic, and when those are broken it will often break the node. Recently I think the CLIPSeg and BLIP implementations have been a little glitchy. A second possible issue: make sure you restart the server after an install and reload your ComfyUI tab, or it won't pick up new nodes.
@martinniesyto1656 · 7 months ago
@ferniclestix Thanks for your suggestions! Today, after uninstalling and reinstalling it several times over a few days, CLIPSeg seems to show up in the search bar. Perhaps they fixed something.
@ferniclestix · 7 months ago
Yes, there are people working behind the scenes all over the place to get these kinds of things sorted out. It's a good idea to keep an eye on the GitHub pages of nodes you use, or better yet, find one of the places where all of us AI artists hang out and chat; the ComfyUI Reddit is a great place to get info.
@hleet · 11 months ago
An inpainting video for hands, please. How can I use Set Latent Noise Mask if it needs a "samples" input? I mean, the normal way would be to input an image with a drawn mask straight into a VAE Encode (for Inpainting), which accepts pixels, VAE and mask inputs. Should I have to redo the whole image process with a fixed seed and route it into a Set Latent Noise Mask node?
@ferniclestix · 11 months ago
I'd completely remove the VAE Encode (for Inpainting) node, because it's not needed and not fit for purpose. Instead, just use the Set Latent Noise Mask node.
@virtualpantherx · 8 months ago
3:25 Because it's not a mask in a pixel image format (blue). It's only the values of the alpha channel (green), I think.
@ferniclestix · 8 months ago
Alpha channels are still stored as a black-and-white pixel image; it's black and white with no RGB, which is the difference.
@BrandosLounge · 8 months ago
I'm trying to do a group photo using Roop, and I was able to draw my friend's face, but I'm having trouble drawing a long beard on one guy without it targeting the wrong person. Any advice on how I could fix that?
@ferniclestix · 8 months ago
ReActor lets you pick which face to replace; it's Roop-like. I think I cover it in the face restoration tutorial.
@TrungCaha · 11 months ago
I can't find "Mix Color By Mask". Where can I find it? Thanks for making this great video series.
@ferniclestix · 11 months ago
It belongs to the Masquerade node pack.
@TrungCaha · 11 months ago
@ferniclestix Thanks.
@gb183 · 9 months ago
I have downloaded your workflow; it's very useful to me, many thanks. Will you make a fix-hands tutorial? Fixing hands is very hard for me. Please arrange a fix-hands tutorial, thank you again.
@ferniclestix · 9 months ago
Try the face restore tutorial; one of the nodes there can do hands, although you may have to use CLIPSeg to find the hands.
@ArielTavori · 10 months ago
Thank you so much, this is incredibly helpful. I don't understand why they didn't take the open-source node code from Blender, with years of polish behind it and tons of functionality and extensions available for it. Instead there is this absurdly unconventional interaction (Ctrl to select and Shift to drag? Really? Wow, guys!). I'm amazed at the utter lack of documentation in this age of writing documentation. There's so much I admire and respect about StabilityAI, and yet I still can't see the full file name of a file on Hugging Face on mobile? Come on! There are paid professionals with budgets and timelines behind these tools, right? I don't understand why I can't even find a document explaining how to code a node for ComfyUI. Honestly, quite absurd. I get that this is early development, but these seem like very strange priorities.
@ferniclestix · 10 months ago
I don't do coding, but I believe I've seen someone reference a node template, if that helps; no idea what it is, though. Someone was using ChatGPT to make nodes.
@swextr · 10 months ago
As for the Blender nodes: ComfyUI is mainly written in Python, not C (as Blender is), so it'd be difficult to take and/or use code from Blender itself. But I do agree Blender's nodes are better, and ComfyUI could definitely have taken more inspiration from them, because everything in Comfy just feels messy. Reroute nodes are awful, IMO.
@nothingrhymeswithferg3744 · 7 months ago
Great tutorial, but I can't find the CLIPSeg node. Is this a custom install? Where can I find it?
@nothingrhymeswithferg3744 · 7 months ago
Oops, I just got to the end of the video and found my answer. Great tutorial, thank you!
@ferniclestix · 7 months ago
:D Must have missed that; I try to mention important nodes near the start. I'll do better in future :)
@brmawe · 10 months ago
Wow! All that for inpainting? Insane, but nonetheless a great tutorial. Keep up the good work.
@Resmarax · 10 months ago
Thanks for addressing this. But how can I inpaint using Set Latent Noise Mask on a PNG I created earlier?
@ferniclestix · 10 months ago
Load Image to load the PNG, VAE Encode to get a latent, Set Latent Noise Mask, then the sampler. Easy.
@Resmarax · 10 months ago
@ferniclestix Alright, thanks!
@jhPampoo · 3 months ago
Is there any tutorial on inpainting that can soften the mask so the transition between the masked area and the background is seamless? I find the transition between the mask and the outside is harsh.
@ferniclestix · 2 months ago
I've found fixes for this kind of thing, but usually it's related to the model or inpaint method being used. You can do things like blurring the mask to try to smooth the edges, but this can be unreliable with certain inpaint methods.
@jhPampoo · 2 months ago
@ferniclestix Thank you. Differential inpainting, which has just been released, may do a great job; I'm going to try it.
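The mask-blur idea suggested above can be sketched with plain NumPy. The `blur_mask` helper here is hypothetical: it softens a hard-edged mask with repeated 3x3 box blurs, which is roughly what feather/blur mask nodes do (they typically use a Gaussian instead):

```python
import numpy as np

def blur_mask(mask, passes=2):
    """Soften a binary mask's edges with repeated 3x3 box blurs."""
    out = mask.astype(np.float64)
    h, w = out.shape
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        # Replace each pixel with the mean of its 3x3 neighbourhood.
        out = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return out

hard = np.zeros((8, 8))
hard[2:6, 2:6] = 1.0       # hard-edged square mask
soft = blur_mask(hard)     # edge pixels now ramp between 0.0 and 1.0
```

Feeding the softened mask to the inpaint step lets the sampler blend the seam instead of cutting it off at a hard edge.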
@carll.2330 · 11 months ago
This is great, but why can't I find a single tutorial or workflow for outpainting? I can literally find zero discussions on ComfyUI + outpainting. Is it just as simple as adding extra whitespace to the image in Paint and painting a mask over that area? Because I tried that and it did nothing. Please help.
@ferniclestix · 11 months ago
I mean, you would crop your image into a larger image, then mask the white space and sample it using inpainting, basically. This assumes you don't have an outpainting node or special workflow. I may cover this in my compositing tutorial if I have time; currently it's going to be over 20 minutes.
@musicandhappinessbyjo795 · 11 months ago
The tutorial is just so awesome. What kind of change would you recommend if I wanted to inpaint using ControlNet?
@ferniclestix · 11 months ago
I haven't made much use of ControlNet so far in ComfyUI; I used it plenty in A1111, though. Because ControlNet acts on the conditioning of the image, you should be able to use it in conjunction with this without any problem. Just make sure it's applying to the KSampler and it will probably influence the output correctly.
@musicandhappinessbyjo795 · 11 months ago
@ferniclestix Hello, here is an issue I am facing. This workflow works when you feed the latent from a KSampler directly into the Set Latent Noise Mask node. But if I use a pre-existing image and plug it in via VAE Encode, it's not working at all; it just gives me the same image back. What may be the cause?
@ferniclestix · 11 months ago
www.reddit.com/r/comfyui/comments/15ldwds/can_anyone_help_me_with_this/ I was helping someone with similar issues; have a look in there and you may find a solution.
@musicandhappinessbyjo795 · 11 months ago
@ferniclestix 😂😂 I am actually the guy named darkmeme 9. I accidentally replied to your post. Also, I have no issue with image-to-image, but the moment I use inpainting with it is when the issue happens.
@mstew8386 · 10 months ago
I am having trouble finding where to put the masking.json. I usually use the PNG images to load workflows.
@ferniclestix · 10 months ago
Click Load in the ComfyUI interface and go find the masking.json.
@mstew8386 · 10 months ago
@ferniclestix Thank you!
@gaulllum8802 · 10 months ago
What KSampler do you use in order to have an output preview while generating?
@ferniclestix · 10 months ago
It's a command-line argument. Open the .bat file that starts ComfyUI and add --preview-method auto to the end of the command line. Restart the server and now your samplers will have a preview.
@gaulllum8802 · 10 months ago
@ferniclestix Thanks
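For a typical Windows portable install, the edited launch line described above would look something like this; the file name and the --windows-standalone-build flag come from the standard portable package, and the exact paths are illustrative:

```shell
REM run_nvidia_gpu.bat -- append --preview-method auto to the existing launch line
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --preview-method auto
pause
```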
@Mootai1 · 4 months ago
You're pretty bad at baby dragons :D... but your tutorial is very well explained and interesting. Thanks a lot!
@8561 · 5 months ago
Do you know how to avoid artifacts being created around a masked-out subject, say when inpainting a background? For me, VAE Encode for Inpaint works but adds artifacts, while Set Latent Noise Mask isn't really inpainting much of the background. Maybe too little noise?
@ferniclestix · 5 months ago
For me it's usually a matter of using Set Latent Noise Mask and setting the denoise lower. Making use of greyscale masking can also really help; unfortunately ComfyUI uses binary masking, which is what causes the artifacting issues.
@8561 · 5 months ago
@ferniclestix I'll look into greyscale masking. Right now I am trying to generate backgrounds around a masked-out subject. I think Set Latent Noise Mask is struggling to have enough noise to inpaint the larger area. I also tried injecting more noise, but still not much luck. VAE Encode for Inpaint has enough noise for larger areas but definitely too many artifacts. Would you suggest A1111 for this type of inpainting? What are the fundamental differences?
@ferniclestix · 5 months ago
ComfyUI is fine; the thing is, in ComfyUI you do have to do some extra steps after inpainting to fix the inpainted area. This usually involves a low-level denoising pass. I've seen people attempt to denoise the area around the inpainted area and all kinds of stuff; you can get very complex. Generally speaking, though, a simple 0.10-denoise pass tends to fix it.
@LeKhang98 · 9 months ago
Wow, how did you discover such a useful and important trick with the Set Latent Noise Mask node? I have a few questions: How did you make the KSampler node show preview images like that? I can only use Save/Preview Image nodes to see the final image. And how do I invert a mask? For example, I want to keep subject 1 and change everything else. Thank you very much for sharing.
@ferniclestix · 9 months ago
There is an invert node in... WAS Suite? It lets you invert an image; you can convert a mask to an image, invert it, then plug it back in, and it will do the opposite of what was masked. Alternatively, using the word "background" in CLIPSeg can be successful. How to do live previews: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-hdWQhb98M2s.html
@LeKhang98 · 9 months ago
@ferniclestix Nice, thank you again. I'll try them ASAP.
@LeKhang98 · 9 months ago
@ferniclestix I just found out that there are Invert Image and Invert Mask nodes and they work great (I think they are default nodes in ComfyUI). Thank you very much.
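For reference, inverting a mask is just a subtraction, which is all an invert-mask node needs to do under the hood (a NumPy sketch of the idea, not the node's actual code):

```python
import numpy as np

mask = np.array([[0.0, 1.0],
                 [0.25, 0.5]])

inverted = 1.0 - mask  # masked and unmasked regions swap roles
```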
@zoemorn · 4 months ago
I keep trying to incorporate this into an img2img workflow (so no starting KSampler, just an image loader and the mask loader), but results aren't coming out at all. I can tell it's trying to affect the mask, but doing so in undesirable ways, not seeming to take into account the big picture, which you talk about and provided a solution for in a non-img2img way, so I'm not sure why it doesn't work in img2img (or I've probably got something wrong in my flow).
@ferniclestix · 4 months ago
Make sure you are masking correctly and that everything is plugged in correctly. For the most part img2img is no different from standard, but you have to make sure you treat it as the first step in the workflow (the loaded image should replace the starting sampler at 1.0 denoise).
@zoemorn · 4 months ago
@ferniclestix So blessed to get a reply! So far it's been interesting to compare VAE inpainting vs. Set Latent Noise Mask. I've got a flow that runs both options in parallel with the same input image and mask to compare; sometimes they're very close and sometimes they're not, and so far I don't know of a pattern. It just depends on the seed and other variables, I guess. I need to review your vids some more around cleaning up images. Sometimes masking lines are noticeable even though the generation is good; it just needs cleaning up, and I believe you mention being able to do that by sending it back through another KSampler or similar. I just need more time to dive into the videos and play around.
@ferniclestix · 4 months ago
I probably should do an in-depth on getting good results from inpainting, which is kind of a skill you have to learn. But it's really dependent on your method of approach, and with so many different ways of doing it, it's kind of hard to tailor a good tutorial for it.
@zoemorn · 4 months ago
@ferniclestix Understandable; we don't have time to do everything. Thanks for what you can give time to.
@mstew8386 · 10 months ago
How are you able to see the rendering going on in the KSampler? Mine doesn't do that at all. I am running on super low VRAM; could this be the reason?
@ferniclestix · 10 months ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-hdWQhb98M2s.html from the basic introduction tutorial.
@mstew8386 · 10 months ago
@ferniclestix You are amazing, THANK YOU SO MUCH!!!!!! WORKS PERFECTLY
@Nevalopo · 11 months ago
What GPU are you using? Those seem like fast generations.
@ferniclestix · 11 months ago
A 2080 8GB with 42GB of system RAM. The graphics card is dedicated to SD, as there is no monitor plugged in and it isn't being used by Windows, which makes it quite fast. Additionally, these are 512x512 images and there is no upscaling.
@froztbytesyoutubealt3201 · 11 months ago
How do I inpaint at full resolution?
@ferniclestix · 11 months ago
Hopefully the reply on Reddit makes sense. I'm working on an example image at the moment to show how it might be achieved.
@ioio7408 · 9 months ago
Looks good, but in your shared workflow I don't get any image in my KSampler like in your video.
@ferniclestix · 9 months ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-hdWQhb98M2s.html I show how to set that up in this tutorial.
@SuperFunHappyTobi · 10 months ago
I've downloaded the workflow and I am following the tutorial, but I can't seem to get the workflow to actually inpaint; it just renders a new image every time. No clue why.
@ferniclestix · 10 months ago
Check the mode below the seed on your sampler. You'll want to run the one you are inpainting on as fixed, so that the mask doesn't have to be rebuilt every time.
@royimiron · 10 months ago
@ferniclestix First of all, thanks for the tutorial. I have the same problem: the KSampler is set to fixed and it still renders a new image every time.
@ferniclestix · 9 months ago
Huh, that's strange. Are you sure it's plugged into the right places? I may take a while to respond; YouTube doesn't like showing comments in the depths of other comments :P If you have more questions, post a fresh comment and I'm more likely to spot it.
@USBEN. · 11 months ago
Dude, your audio is too low even on full volume without headphones. Please up those decibels so I can hear it on speakers.
@ferniclestix · 11 months ago
I'll see what I can do; I've got a microphone on order. I'd like to add, it's as loud as it goes, lol.
@spierdlajify · 4 months ago
How do I load my own photo into this workflow?
@ferniclestix · 4 months ago
Load Image node, replacing the first sampler, basically.