
New IP Adapter Model for Image Composition in Stable Diffusion! 

Nerdy Rodent
51K subscribers
19K views

The new IP Composition Adapter model is a great companion to any Stable Diffusion workflow. Just provide a single image, and the power of artificial intelligence will analyse the very composition itself - ready for your use!
Check out some of the things you can do with it :)
Want to support the channel?
/ nerdyrodent
Links:
huggingface.co...
== More Stable Diffusion Stuff! ==
Faster Stable Diffusions with LCM LoRA - • LCM LoRA = Speedy Stab...
SD Generated Avatar Animation - • Create your own animat...
Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
ComfyUI Workflow Creation Essentials For Beginners - • ComfyUI Workflow Creat...
Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
One image = A Consistent Character in ANY pose - • Reposer = Consistent S...

Published: 4 Oct 2024
Comments: 44
@ClownCar666 · 6 months ago
Thanks for sharing! I've been messing with IP-Adapter all week, it's so much fun!
@NerdyRodent · 6 months ago
It really is!
@BabylonBaller · 6 months ago
Negative Prompt: "Bad Stuff Such as Evil Kittens" ROFL!
@kariannecrysler640 · 6 months ago
I saw the rodent in the sky!!!! I have the witnesses! 🤘😉
@godpunisher · 6 months ago
Nerdy's content is amazing. Are you a mind reader? 😁
@NerdyRodent · 6 months ago
Yes I am!
@farsi_vibes_edit · 6 months ago
I wish I had found your channel earlier😢🤯❤❤🔥
@NerdyRodent · 6 months ago
4 years late is better than never!
@farsi_vibes_edit · 6 months ago
@@NerdyRodent you are the best🔥
@NerdyRodent · 6 months ago
@@farsi_vibes_edit 😊
@SejalDatta-l9u · 2 months ago
Have you managed to use this composition adapter with existing images (e.g. your Nerdy Rodent), to give them more compositional depth?
@Remianr · 6 months ago
6:54 Meme material haha!
@dudufridak1145 · 5 months ago
I like the thumbnail for this video. I wonder if you can create an AI for generating similar images, compositing text (with effects) like that.
@holysabre8499 · 6 months ago
What I really want to see is a working update of Lucid Sonic Dreams, or something similar that's user friendly. Any idea of anything in the works like that, or how to achieve a similar effect using something else?
@NerdyRodent · 6 months ago
Lucid Dreams is slightly difficult on diffusion models 😞
@DemShion · 6 months ago
Has anyone managed to get this working with the Pony checkpoint? It works with other models derived from SDXL, like Animagine and Jugg/RealVis, but not Pony for some reason. Curious if it's just me.
@LIMBICNATIONARTIST · 6 months ago
Impressive!
@MarcSpctr · 6 months ago
Can you make a video on all your favorite AI tools and ComfyUI workflows? Like Google's FILM interpolation, Stable Diffusion, RVC WebUI, MusicGen, etc.
@comfyui · 6 months ago
Complete Menu
@SandyGoneByeBye · 6 months ago
No hugging cats? *giggles*
@NerdyRodent · 6 months ago
Cat should never be used in a prompt! 😱
@Jcs187-rr7yt · 6 months ago
Are there any 1.5 models that this doesn't work with? I keep getting a 'header too large' error, which usually happens with a model mismatch, but I'm using the 1.5 adapter.
@NerdyRodent · 6 months ago
Inpainting models may not work, but just your standard ones should all be fine
@Niffelheim · 6 months ago
Hey Nerdy Rodent, thanks for the tutorial. Do you know if this can be applied together with a pose ControlNet? I want to design a character from different views (front, back, profile), and maybe transfer a style or a LoRA character for consistency. Any tips?
@ramn_ · 6 months ago
I installed it in Forge and it ruined my installation. Now it generates only deformed, random images. I tried everything and couldn't fix it; I will have to reinstall.
@Hooooodad · 6 months ago
Mate, can you show how it's done in Automatic1111 / Forge, please?
@NerdyRodent · 6 months ago
Select the model and your composition image (like with ComfyUI). Win!
@Hooooodad · 6 months ago
@@NerdyRodent I tried and failed miserably; it doesn't work for me on Forge. Do you use a preprocessor?
@wakegary · 6 months ago
That tiger needs help, and I think we should act on it.
@peoplez129 · 6 months ago
Images come out all garbled in A1111.
@KDawg5000 · 6 months ago
What preprocessor do you use when using this with Automatic1111?
@NerdyRodent · 6 months ago
It's just the same as usual, like when using ip-adapter-plus or light.
@KDawg5000 · 6 months ago
@@NerdyRodent Hmm. It was giving me error messages no matter what I tried. Note that regular IP-Adapter and the Face ID versions work. I'm not at home currently, but I can share the messages later (in case anyone cares or is having the same problem).
@reallifecheatcodeaudiobooks · 6 months ago
@@KDawg5000 Please let me know if you find a solution for this. I am also struggling to make it work.
@kallamamran · 6 months ago
Just feels like img2img
@NerdyRodent · 6 months ago
Or perhaps how you'd LIKE img2img to work, but it doesn't? :)
@MyAmazingUsername · 6 months ago
This absolutely isn't like img2img whatsoever. Img2img keeps the exact pixels, colors and exact layout. This new technique is extremely flexible and can do anything and will be more "inspired by" than "exactly the same as the input".
@kallamamran · 6 months ago
@@MyAmazingUsername Img2img definitely doesn't keep the exact pixels! If it did, img2img would be useless!
@ForeverNot-wv4sz · 6 months ago
I can't seem to get it to work in Auto1111. It runs, but the image comes out very painted/pastel/distorted. The same thing happened to me in ComfyUI, until I downloaded the two encoders (CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors) and added them to the \ComfyUI\models\clip_vision folder; then it worked. So I thought maybe that's the issue with the Auto1111 version? However, I can't find where to put these two encoder files for Auto1111. I tried the extensions\sd-webui-controlnet\annotator\downloads\clip_vision folder, but that didn't work.
I've also had issues just getting the IP Composition model to show up in the dropdown in the GUI: when I click on IP-Adapter in the ControlNet dropdown it lists ip-adapter-plus etc. but no composition one, unless I click the refresh button next to the model dropdown. Then I can select ALL the models (even the ones not for IP-Adapter) and THEN I'm able to load it, but like I said, the output is all foggy/blurry. I have ControlNet v1.1.441, and my Auto1111 is v1.6.0. I'm not sure what else to do.
EDIT: I just updated my Auto to v1.8.0 and I'm still having issues.
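[Editor's note] The ComfyUI fix described in the comment above amounts to placing those two CLIP vision encoders in ComfyUI/models/clip_vision. A minimal sketch of that step, assuming the encoders are the renamed copies commonly fetched from the h94/IP-Adapter Hugging Face repo (the URLs are an assumption; verify against the model card before running):

```shell
# Fetch the two CLIP vision encoders into ComfyUI's clip_vision folder.
# NOTE: source URLs are an assumption -- check the model card first.
cd ComfyUI/models/clip_vision

# SD 1.5 image encoder (ViT-H)
wget -O CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors \
  "https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors"

# SDXL image encoder (ViT-bigG)
wget -O CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors \
  "https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors"
```

After downloading, restart ComfyUI (or refresh the node's model list) so the loader picks up the new files.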
@NerdyRodent · 6 months ago
Yes, you do unfortunately need to click refresh to get the full model list if you select the ipadapter filter. As for blurry images, I can’t find any way to replicate that in either Comfy or Forge 🫤
@ForeverNot-wv4sz · 6 months ago
@@NerdyRodent Ah, I see.. well, at least it's good to know the refresh feature is meant to work that way. Perhaps I need to upgrade Auto to Forge; maybe that's the issue here.
@KDawg5000 · 6 months ago
Are you using a preprocessor? I put the two "composition" models in my ControlNet folder and can get them to show up with a refresh, but I don't know which preprocessor to use. Any of the ip-adapter ones I try never do anything. Meaning, Automatic1111 just skips using ControlNet (like it does when your ControlNet settings don't make sense).
@reallifecheatcodeaudiobooks · 6 months ago
@@NerdyRodent Can you please let us know what preprocessor you use in Forge? I can't get this to work without the proper preprocessor, and if I choose ip-adapter_clip_sdxl or ip-adapter_clip_sdxl_plus_vith it gives errors and doesn't work :/
@NerdyRodent · 6 months ago
It's just the same SD 1.5 CLIP vision model as normal - like you'd use with IP-Adapter Plus, IP-Adapter Light, IP-Adapter Full Face, etc.