
ComfyUI With Meta Segment Anything Model 2 For Image And AI Animation Editing 

Future Thinker @Benji
49K subscribers
12K views

In this exciting video, we delve into the cutting-edge realm of artificial intelligence and computer vision with Meta's Segment Anything Model 2, also known as SAM 2. This next-generation AI model revolutionizes object segmentation, offering real-time capabilities for both images and videos. SAM 2's advanced architecture, featuring a unique memory mechanism, enables precise segmentation even in challenging scenarios like occlusions and reappearances. With its state-of-the-art performance, SAM 2 is a game-changer in the field of AI-driven object segmentation.
Previous Segment Anything 1 Multi-Object Editing Example:
• Slick back walk test r...
All Nodes, Models, Freebie Workflow Included Here : thefuturethink...
For Patreon Supporters Additional Contents : www.patreon.co...
Discover the power of SAM 2 as we guide you through implementing this groundbreaking technology in ComfyUI. From adding stunning effects to videos to tracking objects with ease, SAM 2 proves to be a versatile tool for content creators, researchers, and engineers alike. With SAM 2's open-source nature and accessibility under the Apache 2.0 license, it paves the way for innovation and experimentation in the AI community, from hobbyists to seasoned researchers. Join us as we explore the limitless possibilities of SAM 2 in enhancing image editing, video object tracking, and AI animations.
Unleash the potential of SAM 2 in your projects by following our step-by-step tutorial on integrating this powerful AI model into ComfyUI. Witness how SAM 2 collaborates seamlessly with other custom nodes and large language models to accurately segment objects in videos and images. Whether you're a novice looking to explore the world of AI or a seasoned professional seeking advanced segmentation tools, SAM 2 offers a user-friendly and efficient solution. Join us on this journey of exploration and innovation with SAM 2, the future of object segmentation in artificial intelligence.
If you like tutorials like this, you can support our work on Patreon:
/ aifuturetech
Discord : / discord

Published: 11 Sep 2024
Comments: 36
@TheFutureThinker · a month ago
segment-anything-2: ai.meta.com/blog/segment-anything-2/
Nodes: github.com/kijai/ComfyUI-segment-anything-2
Model: huggingface.co/Kijai/sam2-safetensors/tree/main
Save to ComfyUI/models/sam2
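The download step from this comment can be scripted. A minimal sketch, assuming the optional `huggingface_hub` package is available; the repo ID and target folder come from the comment above, everything else is illustrative:

```python
from pathlib import Path

# Target folder from the comment above: ComfyUI/models/sam2
models_dir = Path("ComfyUI") / "models" / "sam2"
models_dir.mkdir(parents=True, exist_ok=True)

try:
    # Optional dependency: pip install huggingface_hub
    from huggingface_hub import snapshot_download

    # Fetch the .safetensors weights from Kijai's repo into the model folder.
    snapshot_download(
        repo_id="Kijai/sam2-safetensors",
        local_dir=str(models_dir),
        allow_patterns=["*.safetensors"],
    )
except Exception as exc:  # huggingface_hub missing, or no network access
    print(f"Download skipped: {exc}")
```

Restart ComfyUI afterwards so the SAM2 loader node can see the files in that folder.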
@goodie2shoes · a month ago
It's fun and interesting seeing the progress Kijai made implementing this model in Comfy. Great explanation @benji!
@TheFutureThinker · a month ago
Yes, he's very quick to implement; whenever a new model is released, he has a custom node done. 😊
@AgustinCaniglia1992 · a month ago
Who?
@aivideos322 · a month ago
Good video buddy, you have me opening Comfy and updating workflows... seems like a real upgrade over Impact's SAM 1. Edit: I needed to change my Manager's security level to "weak" to install this.
@santicomp · a month ago
I was thinking of this exact flow when SAM 2 was released. The combination of both is dynamite. This could also be used with PaliGemma or a finetuned version of Florence 2. Awesome job. 🎉
@TheFutureThinker · a month ago
Florence 2 FT works well with this; you should give it a try.
@crazyleafdesignweb · a month ago
Thanks! Since you mentioned Segment Anything last time, I prefer it over other SEG methods.
@TheFutureThinker · a month ago
Nice! ☺️
@thibaudherbert3144 · 18 days ago
Thanks for the tutorial. One question though: which is better, AnimateDiff or MimicMotion?
@__________________________6910 · a month ago
It's a very complex task.
@OnlySong007 · a month ago
Nice explanation!
@kalakala4803 · a month ago
Thanks, I will update my workflow to try SAM2.
@TheFutureThinker · a month ago
Have fun😉
@adrivlogsgt628 · a month ago
At 4:00 you say "then you can load up Segment Anything 2", but it doesn't show how you loaded the nodes. Could you please explain how you went from the blank screen to the full node setup? I'm stumped on this step. Thank you!
@thegtlab · 20 days ago
How can we have IPAdapter ignore the background and only change the style of the subjects?
@DP-zw8sb · 25 days ago
Can you post a video on inpainting with SAM2 and an SD 1.5 model, please?
@antoniojoaocastrocostajuni8558 · a month ago
Is it possible to use the model to segment Stable Diffusion / Midjourney deformed pictures? (multiple fingers, blurry faces, etc.)
@kallamamran · a month ago
That's a first :/ "ComfyUI SAM2 (Segment Anything 2) install failed: With the current security level configuration, only custom nodes from the "default channel" can be installed."
@MrZuhaib69 · 2 days ago
Hey, can I use MimicMotion with this?
@TheFutureThinker · 2 days ago
Yes, we are using this for MimicMotion.
@MrZuhaib69 · 2 days ago
@TheFutureThinker Can I use image-to-image with MimicMotion?
@darkmatter9583 · a month ago
Is that ComfyUI? Yes, I see now...
@lionhearto6238 · a month ago
Hi, is there a way to output/save only the orange, instead of the mask of the orange?
@alecubudulecu · a month ago
Yeah, Cut By Mask.
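The idea behind a cut-by-mask node can be illustrated outside ComfyUI: multiply the image by the mask so only the segmented object survives. A minimal NumPy sketch; the function name and array shapes are illustrative, not the node's actual API:

```python
import numpy as np

def cut_by_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the masked region of an (H, W, 3) image; zero out the rest.

    image: uint8 array of shape (H, W, 3)
    mask:  array of shape (H, W) with values in [0, 1]; 1 = keep
    """
    # Threshold the mask and add a channel axis so it broadcasts over RGB.
    keep = (mask > 0.5).astype(image.dtype)[..., None]  # shape (H, W, 1)
    return image * keep

# Tiny example: keep only the top-left pixel of a 2x2 grey image.
img = np.full((2, 2, 3), 200, dtype=np.uint8)
msk = np.array([[1.0, 0.0],
                [0.0, 0.0]])
out = cut_by_mask(img, msk)  # top-left pixel stays 200, the rest become 0
```

Saving `out` with an alpha channel built from the mask instead of zeroed pixels would give a transparent background rather than a black one.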
@weirdscix · a month ago
I tried this with several videos. Some worked great: Florence tracked the dancer fine and SAM2 masked it well. In others Florence once again tracked well, but SAM2 only masked part of the dancer, like their shorts. I'm not sure what causes this.
@TheFutureThinker · a month ago
It's better to use SAM2 Large; it has more parameters to identify objects within the bbox. With SAM2 Small or Plus, I have experienced that problem in some videos and images too. I noticed it happens when an object moves to a different angle.
@aivideos322 · a month ago
Had the same issue; Large worked better but it still wasn't perfect. Edit: yeah, something is wrong with this node set at the moment. I can put "person" in the text box and get only pants; if I put "face", it gives me a person. It doesn't seem to be working as it should.
@TheFutureThinker · a month ago
@aivideos322 I wish there were a node created for SAM 1 and 2. We could use a drop-down to select which version we want and simplify the node connections; it would need a textbox for the SEG prompt, keeping that idea from the SAM 1 custom node.
@authorkevin · a month ago
@aivideos322 Toggle the individual objects selector.
@__________________________6910 · a month ago
Your system config?
@machanmobile4216 · a month ago
It's Saito-san!
@user-rk3wy7bz8h · 24 days ago
It doesn't work well. Sometimes it doesn't segment well if there are several things.
@suzanazzz · 17 days ago
Awesome videos on Florence, thanks for your time creating these. A quick question: when I use Florence for captions in an AnimateDiff and IPAdapters workflow, I get 2 results: 1) the final animation, 2) the animation with the Florence captions. For some reason the Florence-captions version is much faster even though it is set at the same frame rate (24 fps) as the Video Combine for the plain animation (without the captions). Any idea why this is happening or how to fix it? Thanks in advance 🙏