
Style and Composition with IPAdapter and ComfyUI 

Latent Vision
20K subscribers
23K views

IPAdapter Extension: github.com/cubiq/ComfyUI_IPAd...
GitHub sponsorship: github.com/sponsors/cubiq
PayPal: www.paypal.me/matt3o
Discord server: / discord
00:00 Intro
00:26 Style Transfer
03:05 Composition Transfer
04:56 Style and Composition
07:42 Improve the composition
08:40 Outro

Science

Published: May 30, 2024

Comments: 243
@Billybuckets · a month ago
No hype. No BS gimmicks. You are the golden god of AI generation tutorials. Plus you make a damned fine node or two.
@latentvision · a month ago
Thanks! The difference is that I'm not a youtuber; YouTube for me is a hosting platform, not a job... that and the fact that I hate "this will change everything..." kind of videos.
@HiProfileAI · a month ago
@latentvision The fact is, this is GAME CHANGING. I've been struggling with art direction for images for client artwork requests. This is game changing, making it very easy to get the desired outputs with styles and compositions. Looking forward to seeing how this works out with image posing references for composition. Thanks, much appreciated.
A month ago
You added value to our lives. Thank you.
@latentvision · a month ago
I'm doing my part!
@zGenMedia · a month ago
In case anyone is wondering... the artwork used (the first crayon skull) is by Jean-Michel Basquiat. An amazing artist.
@lefourbe5596 · a month ago
Thanks for saying it 👌. It's true, it should be credited. We don't want to destroy people, despite how it may look; we just like to make stuff like everyone else.
@GabrielRosenthal · a month ago
Wow, this is really the last missing piece I needed to use Stable Diffusion in proper client work. I already tested it and am super impressed by the performance of the style transfer!! Thank you so much for making all of this possible open source! ❤ These new features will change entire industries!
@GoblinWar · a month ago
I remember when I was a research assistant my professor told me to look into this "GAN-style transfer" thing and report back to him. For the time it was super impressive, but it's cool to see what ~7 years can do; this is astounding compared to back then.
@urbanthem · a month ago
Literally grabbing popcorn and smiling at all your videos. The goat. Thank you for all!!
@razvanmatt · a month ago
Thank you once again for doing all of this!
@Paulo-ut1li · a month ago
That's wonderful! That combined composition and style node is a game changer! I added a visual style prompt to some generations; it seems to give that final touch to the style transfer. Thank you!
@AdamDesrosiers · a month ago
Elegant and powerful workflow - this is absolutely fantastic stuff.
@sorijin · a month ago
Thank you for your hard work. I have a workflow I'm constantly updating with your IPAdapter tech; these new capabilities are awesome!
@reapicus557 · a month ago
I am so grateful that you take the time to make these videos. Your nodes are so powerful and efficient, and I am able to use them with confidence right from the start given these wonderful expositions. :)
@Mr.Sinister_666 · a month ago
What you do here is incredibly helpful on all ends. Helpful is a bad word. Game changing is better, but that is two words. Honestly, whatever you release is gold. I look forward to every video and release, man. Thank you so very, very much for your hard work!
@TheGalacticIndian · a month ago
God bless you, Matteo! This makes me fall in love with generative AI again 😍 AMAZING!
@latentvision · a month ago
IKR?! me too!
@AnotherPlace · a month ago
OMG, just this small, short vid gives a lot of important information and updates my knowledge... THANK YOU MATTEO!!
@latentvision · a month ago
just doing my part
@flisbonwlove · a month ago
Great work Matteo 👏👏✨❤️
@DanielPartzsch · a month ago
Just when you think it can't get any better, you prove us wrong and surprise us with new tools and smarter, more efficient ways to get to certain results. Thank you so much.
@denisquarte7177 · a month ago
This is massive; something I was hoping to achieve when I started using SD, and you made it possible.
@latentvision · a month ago
if I understand the SD3 architecture, it should be totally possible
@Skydam33hoezee · a month ago
Absolutely brilliant! Already having so much fun with this.
@petertjie4128 · a month ago
Fantastic work Matteo, thank you!
@DarkGrayFantasy · a month ago
Matt3o this is amazing! I can tell you I'm going to have a blast experimenting with the nodes; keep up the amazing work!
@latentvision · a month ago
IKR?! Stable Diffusion is fun again 😄
@marjolein_pas · a month ago
Amazing, thank you very much for making these, and for your easy-to-understand explanation and workflow!
@jibcot8541 · 16 days ago
This is incredible, great job.
@Some1uNo · a month ago
This information is gold. Thank you.
@ttul · a month ago
Lovely work, Matteo! I can’t wait to play with this.
@ranks6670 · a month ago
You always make my day ❤
@3dpixelhouse · a month ago
I love it! For me, it works like a charm. Thanks for your engagement and time.
@piemoul · a month ago
I’ve been missing your live streams and you gave us this? Impressive.
@aamir3d · a month ago
Incredible stuff, thank you Matteo!
@latent-broadcasting · a month ago
This is exactly what I needed! Thanks so much
@remmo123 · a month ago
This is amazing. Thank you for the great work.
@kallamamran · a month ago
Amaaaaazing video! Your work is fantastic :D
@mhfx · a month ago
OK wow, amazing update. Thank you for your hard work.
@ivanyang2022 · a month ago
You did wonders for the community! Thank you so much!
@kittikajorns1811 · a month ago
Great work as always.
@mmxyt · a month ago
I haven't had so much fun with SD in a while; thanks so much for all you do!
@latentvision · a month ago
IKR?! same here!
@andykoala3010 · a month ago
You are a genius, sir!
@latentvision · a month ago
only by chance
@Showdonttell-hq1dk · a month ago
That's just fantastic, thank you!
@renegat552 · a month ago
This is absolutely amazing!
@caseyj789456 · a month ago
Mamma mia! Grazie Matteo ❤
@erikdias9604 · a month ago
This is what we imagined YouTube being used for when it appeared. More practical than the forums and newsgroups of the time. Excellent work. It's simple and effective, and it opens the way to a lot of exploration and testing (I added a LoRA to see what it gives and... at 5 a.m. I looked up from the screen: oops 😅).
@jccluaviz · a month ago
Amazing again. Really nice work.
@amineroula · a month ago
I am going to try this as soon as I get home 😮❤
@ZhuYuxiang · a month ago
Very good video, thank you; it helped me a lot.
@faxuancai · a month ago
Fantastic work! Thank you, Matteo!
@NotThatOlivia · a month ago
OMG!!! 👏👏👏
@Rammahkhalid · a month ago
Great update as usual.
@prasanthchowhan · a month ago
Thank you Matteo and the unnamed heroes (sponsors) who are responsible for this incredible thing 🎉
@DataMysterium · a month ago
Mamma mia, this is incredible! Thank you for making this an open-source project.
@PulpoPaul28 · a month ago
You are the best, number 1, the greatest channel on YouTube. I love you, Matteo.
@latentvision · a month ago
thanks! I'm not worth it :)
@autonomousreviews2521 · a month ago
Thank you for this excellent tutorial :)
@paulotarso4483 · a month ago
Insane, you're the goat!
@orion4d727 · a month ago
👍 Excellent work
@banzai316 · a month ago
Fantastic!! As always 👏
@no-handles · a month ago
Amazing work!
@optimbro · a month ago
I have been away from all the AI stuff due to my work; just wanted to thank you for this. It means a lot.
@latentvision · a month ago
you are welcome! have fun (and profit) with it
@optimbro · a month ago
@latentvision
@user-cp8vm5ef2l · a month ago
Thank you!
@AnthonyDev · a month ago
Amazing. IPAdapter and ComfyUI = 😍
@sachacarletti6533 · a month ago
Amazing!! Thank you Matteo!
@pandalayreal · a month ago
So good!
@eddiemauro.design · a month ago
Thank you for everything. I supported you on PayPal :)
@latentvision · a month ago
thanks!
@Injaznito1 · a month ago
Just finished updating my IPAdapter and it wasn't too painful. I did have to use the ComfyUI Manager to download the new models, but other than that I didn't hit any walls. Thanks for the tutorial!
@electrolab2624 · a month ago
Ooh - I want to update too! Did you use a tutorial to do that? So you just update the IPAdapter nodes (using the Manager) and download some new models? I am a bit confused 😬
@electrolab2624 · a month ago
Managed! Using the 'IPAdapter Style & Composition SDXL' node now - it's awesome! Thanks so much, @Matteo! 🤗
@hamidmohamadzade1920 · a month ago
You really unlocked the power of image generation.
@RodrigoNishino · a month ago
Very interesting, surely gonna try it. Thx!
@RodrigoNishino · a month ago
Can't seem to find the StyleAndComposition node... am I missing something?
@sanchitwadehra · a month ago
Dhanyavad (thank you).
@Injaznito1 · a month ago
I'm guessing this only works with SDXL and not 1.5?
@alanhk147 · a month ago
You are the best!
@pk.9436 · a month ago
great work
@35wangfeng · a month ago
awesome!!!
@haljordan1575 · a month ago
Since you build these, you're the perfect person to ask. Would it ever be possible to combine your previous "character stability and repeatability" workflow with something like the composition adapter or multi-area composition, where you'd feed separately stable and posed characters into one image to generate a final singular artwork where they interact?
@MrMartinBoo · a month ago
IPAdapter is just mesmerizing...
@afaridoon1104 · a month ago
🤯 amazing
@ceegeevibes1335 · a month ago
amazing!
@ashokp9260 · a month ago
this is crazy good..
@BobDoyleMedia · a month ago
GREAT!
@hayateltelbany · 9 days ago
that is crazy xD I love it
@AdwinWijaya · a month ago
Nice one... I was looking for this feature in Stable Diffusion and just found it in your video. Nice one...
@aviator4922 · a month ago
awesome man
@juanchogarzonmiranda · a month ago
Thanks Matteo!!
@yvann.mp4 · a month ago
thanks so much
@liialuuna · a month ago
great nodes! 🍒🍒🍒
@alessandrorusso583 · a month ago
Grazie.
@sb6934 · 2 days ago
Thanks!
@david_ce · a month ago
I love this guy so much. Great video, and thank you for your service. When's the next comfy-to-hero video coming out?
@latentvision · a month ago
I'm prepping it... not sure when, but it's in the pipeline.
@xellostube · a month ago
Interesting behaviour: I've just discovered that if I mask the image, leaving the center of the image out of the mask, the generation will be influenced by the image everywhere but in the center. Interesting results. (workflow at 9:21)
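A minimal sketch of how such a mask could be prepared with Pillow, assuming the saved image is then loaded with a Load Image node and connected to the IPAdapter node's optional attention-mask input (treat the exact node and input names as an assumption): white areas receive the reference image's influence, while the black center is left to the prompt alone.

    from PIL import Image

    # Build a 1024x1024 mask: white everywhere except a black square in the
    # center. White = area influenced by the IPAdapter reference image,
    # black = area excluded from it. Sizes are illustrative; match your latent.
    size = 1024
    mask = Image.new("L", (size, size), 255)
    hole = Image.new("L", (size // 2, size // 2), 0)
    mask.paste(hole, (size // 4, size // 4))
    mask.save("center_excluded_mask.png")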
@murphylanga · a month ago
Wow, wow, wow 🤩
@hmmrm · a month ago
Woow, that's cool
@user-sy9eq2vp9h · a month ago
Where would you insert a LoRA loader (character), before or after the IPAdapter in the model chain? I tried to do some testing, but the results were inconclusive, so I wanted to hear your thoughts on this. Thanks... oh, and huge thanks for this marvelous node 😍
@burdenedbyhope · a month ago
Awesome work!!! BTW, can you suggest a workflow for using the new composition transfer with subjects from another IPAdapter (like we used attention masks to achieve before)?
@latentvision · a month ago
I really just released the feature; I still need to play with it. I'll post more in the coming weeks.
@burdenedbyhope · a month ago
@latentvision you’re the best
@Shisgara77 · a month ago
FANTASTIC 😍❤
@TheCcamera · a month ago
V2 is so powerful and easy to use! I'm really having fun with it! Thank you Matteo! Style and Composition Transfer for SDXL is amazing! I'm wondering what this type of approach could mean for SD3, where the different modalities are even more closely related?
@digidope · a month ago
Any plans for an SD 1.5 version? XL is just too slow to be used with AnimateDiff.
@latentvision · a month ago
SD 1.5 has a completely different architecture; I'll give it a go, but it might not be possible without a dedicated model.
@rsunghun · a month ago
Holy magical heaven
@dashx3465 · a month ago
Thank you! I think this will be way better than ControlNet for me: give the AI a base to follow, but with way more freedom than ControlNet allows. I could never get the SDXL ControlNets to work very well.
@pfbeast · a month ago
👌👌👌❤❤❤ Very nice and high-quality video. Your new nodes really give lots more options 😀. I am sorry for my comments on your last video about the change to your node structure. I salute 🫡 your thoughts on open source in the last part of this video.
@flamingwoodz · a month ago
These updates are great! How do you recommend handling composition for SD 1.5 models currently?
@Darkwing8707 · a month ago
Since 1.5 models don't have a composition UNet, tools like Attention/Latent Couple and Regional Prompter might be your best bet.
@alessandrorusso583 · a month ago
Thank you for the time you dedicate to the community. My thanks unfortunately aren't much - I was only able to offer you a coffee - but I hope that others will do the same to support your time. 😊 I gave you something using the YouTube thank-you button.
@latentvision · a month ago
thank you, it would be great if companies that are actually making a lot of money out of this technology would chime in
@alessandrorusso583 · a month ago
@latentvision Can you mix multiple styles? And does it also work with image-to-image?
@latentvision · a month ago
@alessandrorusso583 yes, and yes-ish. img2img works, but the denoise needs to be pretty high; depends on the result you are after.
@Kamerosoul · a month ago
Amazing, great work. But in my case, when running, it asks for a CLIP Vision model in the IPAdapter Unified Loader node. Any feedback? I added a CLIP Vision model in the next node, but it didn't work.
@DanielThiele · a month ago
This is really cool. I am an artist and learning all this AI stuff now because it is taking my jerrrb. My question is: would you also use this kind of workflow for creating a character sheet from an original character illustration that I made before (without AI)? I am looking for a workflow to turn my character (no AI) into a clean reference with orthogonal views for the 3D artists. Looks like this is very close to what I need. Or do you have other suggestions?
@latentvision · a month ago
it can be used to help the generation, but to do what you are asking you need a lot more conditioning (ControlNet, likely)
@DanielThiele · a month ago
Tutorial request 😅
@flyashy8397 · 14 days ago
Hello, thank you very much for this! I'm kind of a noob at this and it took a long time to make ComfyUI work without errors. Your tutorial was very helpful and wonderful to explore. I have a noob question: is it possible to use Canny or other ControlNet nodes with the adapter to enforce adherence to the original image? If yes, are there any guides for that? Thank you!
@flyashy8397 · 13 days ago
Hey, so please ignore my question - I figured out how to add ControlNet Canny & Depth components and plug them into the conditioning. The thing is that it works, but very, very slowly: ~25 sec becomes ~800 sec. Is there something I may be doing wrong?
@ysy69 · a month ago
Thanks always and again, Matteo. Should I update via the Manager or Git?
@latentvision · a month ago
if you know how... git is always better, but the Manager works too
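For the git route, a minimal sketch under the assumption that the extension was installed by cloning it into ComfyUI's custom_nodes folder: open a terminal in that extension's directory, run "git pull", then restart ComfyUI so the updated nodes are loaded. The Manager's update option does roughly the same thing behind the scenes.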