
All new Attention Masking nodes 

Latent Vision
26K subscribers
26K views

Published: 22 Aug 2024

Comments: 165
@AthrunWilshire · 4 months ago
The wizard says this isn't magic but creates pure magic anyway.
@latentvision · 4 months ago
Any sufficiently advanced technology is indistinguishable from magic
@NanaSun934 · 4 months ago
I am so thankful for your channel. I have watched countless YouTube videos about ComfyUI, but yours are definitely among the clearest, with a deep understanding of the subject. I hardly ever leave comments, but I felt the need to write this one. I was watching and rewatching your videos and following along. It's so much fun. Thank you so much!
@jayd8935 · 3 months ago
I think it was a blessing that I found your channel. These workflows spark my creativity so much.
@DataMysterium · 4 months ago
Awesome as always. Thank you for sharing these amazing nodes with us.
@xpecto7951 · 2 months ago
Please continue making informative videos like you always do; everyone else just shows prepared workflows, but you actually show how to build them. Can't thank you enough.
@kf_calisthenics · 4 months ago
Would love a video of you going into depth on the development and programming side of things!
@latentvision · 4 months ago
maaaaybe 😄
@DarkGrayFantasy · 4 months ago
Amazing stuff as always Matt3o! Can't wait for the next IPAv2 stuff you got going on!
@flisbonwlove · 4 months ago
Mr. Spinelli always delivering magic!! Thanks and keep the superb work 👏👏🙌🙌
@musicandhappinessbyjo795 · 4 months ago
The result looks pretty amazing. Could you maybe do a tutorial on combining this with ControlNet (not sure if that's possible), just so we can also control the position of the characters?
@d4n87 · 4 months ago
Great job matt3o, your nodes are a delight! 😁👍 This workflow in particular looks absolutely interesting and adaptable to the problems of the various generations.
@alessandrorusso583 · 4 months ago
Great video as always, with a large number of interesting things. Thank you, as always, for the time you give to the community.
@user-je7qy5ey3y · 4 months ago
Thank you Matteo, you are doing a good job.
@volli1979 · 4 months ago
6:05 "oh shit, this is so cool!" - nothing to add.
@contrarian8870 · 4 months ago
Great stuff, as always! One thing: the two girls were supposed to be "shopping" and the cat/tiger were supposed to be "playing". The subjects transferred properly (clean separation) but there's no trace of either "shopping" or "playing" in the result.
@latentvision · 4 months ago
the first word in all prompts is "closeup", which basically overrides everything else in the prompt
@Foolsjoker · 4 months ago
This is going to be powerful. Good work Mat3o!
@mattm7319 · 4 months ago
the logic you've used in making these nodes makes it so much easier! thank you!
@yql-dn1ob · 4 months ago
Amazing work! It improved the usability of the IPAdapter!
@davidb8057 · 4 months ago
Brilliant stuff, thanks again, Matteo. Can't wait for the FaceID nodes to be brought to this workflow.
@ttul · 4 months ago
Wow, this is so insanely cool. I can’t wait to play with it, Matteo.
@context_eidolon_music · 2 months ago
Thanks for all your hard work and genius!
@latentvision · 2 months ago
just doing my part
@aliyilmaz852 · 4 months ago
Thanks again for great effort and explanation Matteo. You are amazing! Quick question: Is it possible to use controlnets with IPAdapter Regional Conditioning?
@latentvision · 4 months ago
yes! absolutely!
@jccluaviz · 4 months ago
Thank you, thank you, thank you. Great work, my friend. Another masterpiece. Really appreciated.
@latentvision · 4 months ago
glad to help
@marcos13vinicius11 · 4 months ago
it's gonna help a million times on my personal project!! thank you
@Kentel_AI · 4 months ago
Thanks again for the great work.
@11011Owl · 3 months ago
the most useful videos about ComfyUI, thank you SO MUCH, I'm excited af about how cool it is
@aivideos322 · 4 months ago
You should be proud of your work, thanks for all you do. I was working on my video workflow with masked IPAdapters for multiple people... this will SOOOOOO make things easier.
@WhySoBroke · 4 months ago
An instamazing day when Maestro Latente spills his magical brilliance!!
@allhailthealgorithm · 4 months ago
Amazing, thanks again for all your hard work!
@Showdonttell-hq1dk · 4 months ago
Once again, it's simply wonderful! During a few tests, I noticed that the "Mask From RGB/CMY/BW" node needs very bright colors to work. A slightly darker green and it no longer has any effect. Everything else produced cool results on the first try. Thanks for all the work! And I'm just about to follow your ComfyUI app tutorial video to make one myself.
@latentvision · 4 months ago
you can set thresholds for each color, you can technically grab any shade
@Showdonttell-hq1dk · 4 months ago
@@latentvision Of course I tried that. But it worked wonderfully with bright colors. It's no big deal. As I said, thanks for the great work! :)
@latentvision · 4 months ago
@@Showdonttell-hq1dk using black or white and the threshold you can technically get any color. But you are probably better off using the Mask From Segmentation node
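For illustration, the color-thresholding idea discussed in this exchange can be sketched in a few lines of plain PyTorch. This is only a rough sketch of the concept, not the extension's actual code; the function name and default values are invented for the example:

```python
import torch

def mask_from_color(image: torch.Tensor, color=(0.0, 1.0, 0.0), threshold=0.15) -> torch.Tensor:
    """image: [H, W, 3] float RGB in 0..1; returns a [H, W] mask of 0s and 1s."""
    target = torch.tensor(color, dtype=image.dtype, device=image.device)
    # normalized per-pixel distance to the target color
    # (0 = exact match, 1 = opposite corner of the RGB cube)
    distance = torch.linalg.vector_norm(image - target, dim=-1) / (3 ** 0.5)
    return (distance <= threshold).float()

# a darker green still matches if the threshold is loosened:
# mask = mask_from_color(img, color=(0.0, 0.5, 0.0), threshold=0.35)
```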
@skycladsquirrel · 4 months ago
Great video! Thank you for all your hard work!
@elifmiami · a month ago
This is an amazing workflow! I wish we could animate it.
@Ulayo · 4 months ago
Nice! More nodes to play with!
@jacekfr3252 · 3 months ago
"oh shit, this is so cool"
@Cadmeus · 4 months ago
What a cool update! This looks useful for controlling character clothing, hairstyle and that kind of thing, using reference images. Also, if you compose a 3D scene in Unreal Engine, it can output a segmented object map as colors, which could make this very powerful. You could link prompts and reference images to objects in the scene and then diffuse multiple camera angles from your scene, without any further setup.
@premium2681 · 4 months ago
Angel Mateo came down from latent space again to teach the world his magic
@mycelianotyours1980 · 4 months ago
Thank you for everything!
@autonomousreviews2521 · 4 months ago
Excellent! Thank you for your work and for sharing :)
@35wangfeng · 4 months ago
You rock!!!!! Thanks for the amazing job!!!!
@nrpacb · 4 months ago
I learned something new, happy. I want to ask: when can we get a tutorial on replacing furniture indoors, or something like that?
@latentvision · 4 months ago
yeah that would be very interesting... I'll think about it
@lilien_rig · 17 days ago
ahh nice tutorial, I like it very much, thanks
@erdmanai · 4 months ago
Thank you very much man!
@ojciecvaader9279 · 4 months ago
I really love your work
@renegat552 · 4 months ago
great work. thanks a lot!
@matteogherardi7342 · 2 months ago
You're great!
@kaiserscharrman · 4 months ago
really really cool addition. thanks
@helloRick618 · 4 months ago
really cool
@AnotherPlace · 4 months ago
Continue creating magic senpai!! ❤️
@PurzBeats · 4 months ago
"the cat got tigerized"
@AndyDeighton · 4 months ago
Genius. Fact. Again.
@Mika43344 · 4 months ago
Great work as always🎉
@digidope · 4 months ago
Just wow! Thanks a lot again!
@GggggQqqqqq1234 · 4 months ago
Thank you!
@rawkeh · 4 months ago
8:01 "This is not magic," says the wizard
@latentvision · 4 months ago
I swear it is not :P
@fulldivemedia · a month ago
thanks, and I think you should put the word "pill" in the title :)
@Freezasama · 4 months ago
what a legend
@Shingo_AI_Art · 4 months ago
Awesome stuff, as always
@DashengSun-ki9qe · 4 months ago
Great workflow. Can you add edge control and depth to the process? I tried it but failed. Can you help me? I'm not sure how the nodes are supposed to be connected; it doesn't seem to work.
@leolis78 · 19 days ago
Hi Matteo, thanks for your contributions to the community. I am trying to use Attention Masking in the process of compositing product photos. The idea is to be able to define in which zone of the image each element is located. For example, in a photo of a wine, define the location of the bottle and the location of the props, such as a wine glass, a bunch of grapes, a corkscrew, etc. But I tried the Attention Masking technique and it is not giving me good results in SDXL. Is it only for SD1.5? Do you think it is a good technique for this kind of composition for product photography, or do you think there is a better technique? Thanks in advance for your help! 😃😃😃
@latentvision · 19 days ago
this is complex to answer in a YT comment. depends on the size of the props. You probably need to upscale the image and work with either inpainting or regional prompting. Try to ask on my discord server
@heranzhou6976 · 4 months ago
Wonderful. May I ask how I can insert FaceID into this workflow? Right now I get this error: Error occurred when executing IPAdapterFromParams: InsightFace: No face detected.
@deastman2 · 4 months ago
This is so helpful! I’m using closeup selfies of three people to create composite band photos for promotion, and this simplifies the workflow immensely. Question: Do you have any tips to go from three headshots to a composite image which shows three people full length, head to toe? Adding that to the prompts hasn’t worked very well so far, and I’m not sure if adding separate openpose figures for each person would be the way to go? Any advice would be most appreciated!
@latentvision · 4 months ago
that has to be done in multiple passes. there are many ways you can approach that... it's hard to give you advice on such a complex matter in a YT comment
@deastman2 · 4 months ago
@@latentvision I understand. But "multiple passes" gives me an idea anyway. So probably I should generate bodies for each person first, and only then combine the three.
@freshlesh3019754 · 4 months ago
That was awesome
@GggggQqqqqq1234 · 4 months ago
Thank you.
@crazyrobinhood · 4 months ago
Very good... very good )
@FotoAntonioCanada · 4 months ago
Incredible
@ceegeevibes1335 · 4 months ago
love.... thank you !!!
@pfbeast · 4 months ago
❤❤❤ as always best tutorial
@nicolasmarnic399 · 4 months ago
Hello Mateo! Excellent workflow :) A question: to solve the proportion issues, so that the cat is the size of a cat and the tiger is the size of a tiger, would the best solution be to edit the size of the masks? Thanks
@latentvision · 4 months ago
no, if you need precise sizing you need a controlnet probably. To install the essentials use the Manager or download the zip and unzip it into the custom_nodes directory
@stephanmodry1301 · 4 months ago
Absolutely incredible. Like always. BUT: Cat and tiger are not "playing". Please fix this as soon as possible. (just kidding, of course.) 😅
@WiremuTeKani · 4 months ago
6:04 Yes, yes it is.
@latentvision · 4 months ago
:)
@JoeAndolina · 3 months ago
This workflow is amazing, thank you for sharing! I have been trying to get it to work with two characters generated from two LORAs. The LORAs have been trained on XL so they are expecting to make 1024x1024 images. I have made my whole image larger so that the mask areas are 1024x1024, but still everything is coming out kind of wonky. Have any of you explored a solution for generating two characters from separate LORAs in a single image?
@svenhinrichs4072 · 3 months ago
oh shit, this is so cool.... :)
@eduger · 4 months ago
amazing
@fukong · 3 months ago
Great job! I'm wondering if there's any workflow using the FaceID series of IPAdapters with regional prompting...
@latentvision · 3 months ago
it totally works, there's nothing special to do, just use the FaceID models
@fukong · 3 months ago
@@latentvision Thanks so much for the reply!! I know I can replace the IPAdapter Unified Loader with the FaceID Unified Loader in this workflow, but I don't know how to receive images and adjust the v2 weight or choose a weight type while using regional conditioning for FaceID. In other words, I don't know how to create an equivalent "IPAdapter FaceID Regional Conditioning" node with existing nodes.
@tailongjin-yx3ki · 4 months ago
awesome
@pyyhm · 4 months ago
Hey matt3o, great stuff! I'm trying to replicate this with SDXL models but getting a blank output. Any ideas?
@Ai-dl2ut · 4 months ago
Awesome sir :)
@Zetanimo · 4 months ago
how would you go about adding some overlap like the girl and dragon example from the beginning of the video where they are touching? Or does this process have enough leeway to let them interact?
@latentvision · 4 months ago
The masks can overlap, if the description is good enough the characters can interact. SD is not very good at "interactions" but standard stuff works (hugging, boxing, cheek-to-cheek, etc...). On top you can use controlnets
@Zetanimo · 4 months ago
@@latentvision Thanks a lot! Looking forward to more content!
@michail_777 · 3 months ago
Hi. Thanks for your work. I was wondering: is there any IPAdapter node that can be linked to AnimateDiff and work only within a certain frame range? That is, if I connect 2 input images, one image affects the generation from frame 0 to 100, and the second input image affects the generation from frame 101. But it would be quite nice if the images were blended from frame 90 to 110.
@latentvision · 3 months ago
yes I'm working on that
@michail_777 · 3 months ago
@@latentvision Thank you. I've added AnimateDiff and 2CN to your workflow. And it's working well.
@jcboisvert1446 · 4 months ago
Thanks
@hleet · 4 months ago
Very good stuff but hard to use. Thank you for this tutorial. I hope SD3 will understand prompts better and that IPAdapter will be supported on SD3 as well. ... But SD3 is now paid/API only, so sad for "free open source"
@latentvision · 4 months ago
SD3 should be really easy to guide with images... let's see when they release the weights.
@guilvalente · 4 months ago
Would this work with Animatediff? Perhaps for segmenting different clothing styles in a fashion film.
@latentvision · 4 months ago
attention masking absolutely works with animatediff
@user-tx2ey4bx5l · 9 days ago
Why does it show ClipVision model not found when I use it?
@n3bie · 3 months ago
Woah
@user-yb5es8qm3k · 4 months ago
This video is great, but I followed along, so why does the portrait not look like the original picture?
@francaleu7777 · 4 months ago
👏👏👏
@ai_gene · 2 months ago
Why doesn’t it work so well with the SDXL model? In my case, the result is one girl with different styles on two sides of the head.
@latentvision · 2 months ago
try to use bigger masks, try different checkpoints, use controlnets
@user-yb5es8qm3k · 3 months ago
Which file does the ipadpt in the embedded group read from, and how can I edit it?
@thomasmiller7678 · 2 months ago
Hi, great stuff. Is there any way to do this kind of attention masking with LoRAs, so I can apply separate LoRAs to separate masks? There are a few things kicking around but nothing seems to work all that well.
@latentvision · 2 months ago
not really (it would be technically feasible probably but not easy)
@thomasmiller7678 · 2 months ago
@@latentvision hmm, this is why I have been struggling. There are some nodes around for it, but from the stuff I've found I haven't had much luck yet. Might you be able to help me out or do a lil digging? Maybe you can pull off some more magic! 😄
@divye.ruhela · 2 months ago
@@thomasmiller7678 But can't you just use the concerned LoRAs in a separate workflow to generate the images you like, then bring them here, apply conditioning and combine?
@thomasmiller7678 · a month ago
Yes, that is possible, but it's still not a true influence like the LoRA would have if it could be implemented directly
@sino-ph2gc · 4 months ago
Awesome!!
@jerrycurly · 4 months ago
Is there a way to use controlnets in each region? I was having issues with that.
@latentvision · 4 months ago
yes of course! just try it
@fmfly2 · 4 months ago
My ComfyUI doesn't have 🔧 Mask From RGB/CMY/BW, it only has Mask From Color. Where do I find it?
@latentvision · 4 months ago
you just need to upgrade the extension
@user-rx8wb4dn9c · 3 months ago
Thanks. For styling someone, is there a benefit to using IPAdapter v2 combined with InstantID, or is IPAdapter v2 FaceID alone enough? If combining IPAdapter v2 with InstantID gives better results, is there any tutorial for that? Also, can a casual, normal photo of a person taken with a camera get a fantasy-style result using the above method?
@latentvision · 3 months ago
yes you can combine them to get better results, but don't expect huge improvements, just a tiny bit better :)
@user-rx8wb4dn9c · 3 months ago
@@latentvision thanks. For point 2, can a normal photo of a person from a phone camera be transformed into a stylized masterpiece using ComfyUI? I can't find a video on YouTube that talks about that.
@latentvision · 3 months ago
@@user-rx8wb4dn9c depends what you are trying to do. too vague as a question, sorry
@user-rx8wb4dn9c · 3 months ago
@@latentvision Here is my case: I want to put my child's baby photo on a t-shirt. But this photo was taken a very long time ago and the quality is bad; the face especially is a bit blurry. Anyway, it's my memory. Can I use ComfyUI to turn this blurry photo into a picture that preserves the pose and face of my child, improves the quality, and has a t-shirt art style suitable for printing, while keeping my child's face and body pose recognizable? How can I do that with ComfyUI?
@latentvision · 3 months ago
@@user-rx8wb4dn9c it is possible using a combination of techniques, but it's impossible to give you a walk-through in a YouTube comment... it highly depends on the condition of the original picture
@makristudio7358 · 3 months ago
Hi, which one is better: IPAdapter FaceID or InstantID?
@latentvision · 3 months ago
they are different 😄 depends on the application
@burdenedbyhope · 3 months ago
Is it possible to use IPAdapter and attention masks for character and item interactions? Like a man handing over an apple or carrying a bag.
@latentvision · 3 months ago
yes of course! why not?!
@burdenedbyhope · 3 months ago
@@latentvision maybe my weights/start/end are not right; I always have trouble making a known character interact with another known character or a known item. "Known" in this case means using IPAdapter. Most of the examples I saw are 2 characters/subjects standing beside each other, not interacting, so I wondered.
@burdenedbyhope · 3 months ago
@@latentvision I tested it in many cases and the interaction works pretty well: a girl holding an apple, a girl holding a teddy bear... all work well. With 2 girls holding hands, bleeding happens from time to time and negative prompts are not always applicable; can the regional conditioning accept a negative image?
@hashshashin000 · 4 months ago
Is there a way to use FaceID v2 with this?
@latentvision · 4 months ago
I will add the faceid nodes next
@hashshashin000 · 4 months ago
@@latentvision ♥
@walidflux · 4 months ago
when are you going to do videos with an ip-adapter workflow?
@latentvision · 4 months ago
not sure I understand
@walidflux · 4 months ago
@@latentvision sorry, I meant animation with ip-adapter. There are many workflows out there, the most famous being AnimateDiff with ip-adapter; I just thought yours would definitely be better.
@latentvision · 4 months ago
@@walidflux I'll try to do more animatediff tutorials, but I need to add a new node that will help with that
@a.zanardi · 4 months ago
Matteo, FlashFace got released, will you bring it too?
@latentvision · 4 months ago
I had a look at it, it's a weird cookie. A 10GB model that only works with SD1.5... I don't know...
@a.zanardi · 4 months ago
@@latentvision 🤣🤣🤣🤣 "Weird cookie" was really fun! Thank you so much for answering!
@Fernando-cj2el · 4 months ago
Mateo, I updated everything and the nodes are still red, am I the only one? 😭
@alexisnik135 · 2 months ago
Does anyone know how to execute only a portion of the workflow, so that only the preview of the crop is executed and not the whole workflow, as he is doing at ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-4jq6VQHyXjg.html
@latentvision · 2 months ago
disable the preview or the save node. The workflow won't execute all the way down but only up to the last enabled preview
@alexisnik135 · 2 months ago
@@latentvision thanks!!
@user-jx7bh1lx4q · a month ago
Beautiful, but these are only close-up portrait images; you can see from the cat and tiger example that it no longer knows how to depict them playing.
Up next
Animation with weight scheduling and IPAdapter · 20:50 · 31K views
Don't Contribute to Open Source · 9:55 · 230K views
Deep dive into the Flux · 28:03 · 26K views
IPAdapter v2: all the new features! · 16:10 · 81K views
The Most INSANE AI Image Upscaler, EVER! · 13:42 · 368K views
How to use PuLID in ComfyUI · 20:53 · 29K views
Dissecting SD3 · 20:25 · 17K views