
Map Bashing - NEW Technique for PERFECT Composition - ControlNET A1111 

Olivio Sarikas
227K subscribers
102K views

Map Bashing is a NEW technique for combining ControlNet maps for full control. It allows you to create amazing art and have full artistic control over your AI works: you can define exactly where elements in your image go. At the same time you keep full prompt control, because the ControlNet maps carry no color, daylight, weather, or other information, so you can create many variations from the same composition.
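The combining step can be sketched in code. This is a hypothetical illustration, not the workflow shown in the video (Olivio does it in Affinity Photo): since ControlNet soft-edge maps are white strokes on a black background, pasting several of them together reduces to a per-pixel lighten blend. All file names below are placeholders.

```python
import numpy as np
from PIL import Image

def bash_maps(paths, size=(768, 512)):
    """Combine several edge maps by keeping the brightest pixel per position.

    On black-background maps this composites the white strokes of every
    layer without any masking or erasing.
    """
    combined = np.zeros((size[1], size[0]), dtype=np.uint8)
    for path in paths:
        layer = np.array(Image.open(path).convert("L").resize(size))
        combined = np.maximum(combined, layer)  # lighten-style blend
    return Image.fromarray(combined)

# Hypothetical usage: feed the result to ControlNet as the control image.
# bash_maps(["woman.png", "pillar.png", "castle.png"]).save("bashed_map.png")
```

In practice you would still reposition and scale each cut-out in an editor first; this only covers the final flattening step.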
#### Links from the Video ####
Make Ads in A1111: • Make AI Ads in Flair.A...
Woman Sitting unsplash.com/photos/b9Z6TOnHtXE
Goose unsplash.com/photos/eObAZAgVAcc
Pillar www.pexels.com/photo/a-brown-...
explorer: unsplash.com/photos/8tY7wHckcM8
castle: unsplash.com/photos/8tY7wHckcM8
mountains unsplash.com/photos/lSXpV8bDeMA
Ruins unsplash.com/photos/d57A7x85f3w
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord

Hobbies

Published: 11 Jun 2023

Comments: 155
@OlivioSarikas 1 year ago
#### Links from the Video ####
Make Ads in A1111: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-LBTAT5WhFko.html
Woman Sitting: unsplash.com/photos/b9Z6TOnHtXE
Goose: unsplash.com/photos/eObAZAgVAcc
Pillar: www.pexels.com/photo/a-brown-concrete-ruined-structure-near-a-city-under-blue-sky-5484812/
Explorer: unsplash.com/photos/8tY7wHckcM8
Castle: unsplash.com/photos/8tY7wHckcM8
Mountains: unsplash.com/photos/lSXpV8bDeMA
Ruins: unsplash.com/photos/d57A7x85f3w
@aeit999 1 year ago
Latent couple when?
@xiawilly8902 1 year ago
Looks like the explorer image and castle image are the same.
@ainosekai 1 year ago
Sir, there's no need to check 'Restore faces', because if you use a 2.5D/animated base model the faces will look weird. You can use an extension named 'After Detailer'; it can fix your characters' faces flawlessly (based on your model), and it works perfectly with character (face) LoRAs. There are also models for it that can fix hands/fingers and the body. Give it a try~
@hacknslashpro9056 1 year ago
How do you put your own face into a picture generated in SD? I need the same style and matching lighting though. Should we use inpaint, or what?
@ryry9780 1 year ago
As a birthday gift to my sister three months ago, I made a picture featuring her and one of her favorite characters. The way it worked was I trained models of both the character and my sister. My sister's models had to be done in two steps: first with IRL pictures, then with generated animated pictures. Once that was done, it was a matter of compositing them all together in one pic via OpenPose + Canny + Depth and hours of inpainting, with a little Photopea. Took me 20 work-hours. Idk how much of this process has changed since Auto1111 is now at v1.3.2 and ControlNet at 1.1.
@samc5933 1 year ago
What are these “other models” that fix hands? If you can point me in the right direction, I’d be grateful!
@Feelix420 1 year ago
@@samc5933 Until AI learns to draw hands and feet, I wouldn't worry so much about AI like Elon is now
@cleverestx 1 year ago
ADetailer is amazing; it comes standard in Vladmandic's fork. It can be set to detect and fix hands as well if you choose the hand model instead of the face model, though only mildly; it's not as effective on hands as it is on faces, but it can still save a picture from time to time!
@Maria_Nette 1 year ago
ControlNet gets even better with every new update.
@aeit999 1 year ago
It is. But this method is as old as ControlNet itself
@jacque1331 1 year ago
Olivio, you're a Rockstar! Been following you for a while. Extremely grateful to have found your channel.
@eddiedixon1356 1 year ago
This is exactly what I was looking for. I still have a few things to piece together but this was huge, thank you so Much for your time.
@mikerhinos 1 year ago
This is amazing, as it so often is... one of the most underrated RU-vid accounts for A1111 tutorials!
@jason-sk9oi 1 year ago
Tremendous human artistic control while maintaining the ai creativity as well. Nice!
@paulodonovanmusic 1 year ago
Exactly. I think a lot of traditional artists, particularly those with at least basic desktop publishing skills (or basic doodling skills) would love how empowering this is. 1111 is such a wonderful art tool, it's a pity that it can be so technically challenging to get set up, I hope this gets solved soon and that the solution becomes more accessible to the unwashed masses.
@chickenmadness1732 1 year ago
@@paulodonovanmusic Yeah, it's very close to how a real concept artist for movies and games works. The main difference is they use a collage of photos to get a rough composition and then paint over it.
@soothingtunes6780 10 months ago
You are a lot more amazing than Stable Diffusion XL bro, what good is a tool if we don't have people like you to show us how to use it properly!!!
@neeqstock8617 1 year ago
Tried it, and this is probably the most simple, creative, and effort-effective technique I've come across. It's so easy to edit edge maps, even with simple image editing software. Thank you Olivio! :D
@BruceMorgan1979 1 year ago
Fantastic, well-detailed video, Olivio. Looking forward to trying this.
@boyanfg 11 months ago
Hi Olivio! I am amazed at the master level at which you use these tools. Thank you for sharing this with us!
@akanekomi 1 year ago
I have been using similar techniques for a while now; the AI dance animations I make are a lot more complex. Glad you made a tutorial on this, I'll redirect anyone who asks for SD tutorials to your channel. Thanks Olivio❤❤
@frostreaper1607 1 year ago
Oh wow, this actually solves the composition and color issues. Great find, Olivio, thanks!
@ex0stasis72 1 year ago
I'm so excited to use this technique. I was getting frustrated with the limitations of openpose not being detailed enough. But this soft edge thing looks really powerful as long as I'm willing to do a little manual photo editing beforehand.
@travislrogers 1 year ago
Amazing process! Thanks for sharing this!
@trickydicky8488 1 year ago
Watched your live stream over this last night. Highly enjoyed it.
@OlivioSarikas 1 year ago
Thank you very much
@CCoburn3 1 year ago
Great video. I'm particularly happy that you used Affinity Photo to create your maps.
@ronnykhalil 1 year ago
this is brilliant! thanks for sharing. opens up so many possibilities, and also helps me grasp the infinitely vast world of controlnet a little better
@AZTECMAN 1 year ago
One very similar method I've been exploring is creating depth maps via digital painting. Additionally, I've experimented with using an inference-based map and then modifying it by hand to get more unusual results. Mixing 3D-based maps (rendered), inference-based maps (preprocessed), and digital painting methods, while utilizing img2img and multi-ControlNet, highlights the power of this tech. "Map Bashing" is a great term.
@EllaIsSlayest 1 year ago
I've been contemplating how best to bash up source images to create a final composition for SD rendering and this looks like a grand solution! Thanks for sharing.
@monteeaglevision5505 11 months ago
You are a legend!!! Thank you sooooo much for this. Game changer. I will check back and let you know how it goes!
@ctrlartdel 10 months ago
This is one of your best videos, and you have a lot of really good videos!
@mysterious_monolith_ 1 year ago
That was incredible! I love what you do. I don't have ControlNET but if I could get it I would study your methods even more.
@Braunfeltd 1 year ago
Love your stuff, learning lots. This is awesome
@bjax2085 1 year ago
Brilliant!! Thanks!
@destructiveeyeofdemi 1 year ago
Thorough brother. Peace and love from Cape Town.
@ericvictor8113 1 year ago
Incredible video, as always. Grats!
@OlivioSarikas 1 year ago
Thank you
@Aisaaax 8 months ago
This is a great video! Thank you! 😮
@morizanova 1 year ago
Thanks.. smart trick to make the machine function as our helper, not just our overlord
@yadav-r 1 year ago
wow, learned a new thing today. Thank you for sharing.
@coloryvr 1 year ago
Super helpful as always! Big FAT FANX!
@ex0stasis72 1 year ago
I recommend playing around with adding this to your positive prompt: "depth of field, bokeh, (wide angle lens:1.2)" (without the double quotes, of course). Wide angle lens is a trick that allows the subject's face to take up more of the area of the image while still fitting in enough context of the area around the subject. And the more pixels you let it generate the face with, the more detail you'll generally get. Although, if you already have ControlNet dictating the composition of the image, adding wide angle lens to your prompt will likely have no effect and therefore reduce the effectiveness of everything else in your prompt. Depth of field and bokeh are just ways to make it feel like a photo shot professionally by a photographer rather than by an average person with automatic camera settings.
@joywritr 1 year ago
This was very useful, thank you. I was considering drawing outlines over photos and 3D renders to do something similar, but using the masks generated by the AI should work as well and save a lot of time.
@Carolingio 1 year ago
👏👏👏👏👏 Nice, Thanks Olivio
@minhhaipham9527 1 year ago
Awesome, please make more videos like this. Thanks!
@ZeroIQ2 1 year ago
this was really cool, thanks for sharing!
@aicarpool 1 year ago
Who’s da man? You da man!
@ysy69 1 year ago
Beautiful
@MadazzaMusik 1 year ago
Brilliant stuff
@TheGalacticIndian 1 year ago
I love it!♥♥
@spoonikle 1 year ago
Holy smokes. This changes the flow
@jonmichaelgalindo 1 year ago
I've been using this for ages! ❤ NOTE!: RevAnimated is *terrible* at obeying controlnet! (It is my favorite model for composition, but... I wouldn't use it like this.) I inpaint after the initial render. Same map bash controlnet, +inpaint controlnet (no image), inpaint her face w/ "face" prompt, pillar w/ "pillar" prompt, etc. No final full-image upscale; SD can't handle more than 3 large-scale concepts. You can get hires details in a 4k canvas by cropping a section, inpainting more detail, then blending the section back in w/ photoediting software. (This takes some extra lighting-control steps; there are tutorials on how to control lighting in SD.)
@foxmp1585 9 months ago
Could you clarify the "extra lighting-control steps" you mentioned? Is that the map we painted in black & white and then fed into the img2img tab? Thank you in advance!
@jonmichaelgalindo 9 months ago
@@foxmp1585 I barely remember my workflow from back then... SDXL is fantastic at figuring out what sketches mean in img2img. Right now, I block out a color paint sketch with a large brush, then run it through img2img with the prompt, then paint over the output, and run it through again and repeat, eventually upscaling and inpainting region by region with the same process. I have just about perfect control over composition, facial expressions, lighting, and style. :-)
@heikohesse4666 1 year ago
Very cool video, thanks for it
@accy1337 1 year ago
You are amazing!
@PhilippSeven 1 year ago
Thank you for this technique! It's really useful. As advice from my side, I suggest using an alternative method for fixing faces (ADetailer, inpaint, etc.) instead of "Restore faces". The latter uses one model for every face, and as a result the faces turn out too generic.
@luke2642 1 year ago
You could also use a background-removal tool to preprocess each image, or, as others suggested, non-destructive masking when cutting them out.
@TorQueMoD 1 year ago
You don't even need to do any sort of masking. When both images have a black background and white strokes, just set the top layers to Linear Dodge blend and they will seamlessly blend together.
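Numerically, Linear Dodge (Add) is just a clipped addition, which is why this works: wherever one layer is black (0), the other layer passes through unchanged, so black-background stroke maps merge for free. A minimal sketch, with numpy arrays standing in for two grayscale layers:

```python
import numpy as np

def linear_dodge(bottom: np.ndarray, top: np.ndarray) -> np.ndarray:
    """Linear Dodge (Add) blend: sum the layers and clip to [0, 255]."""
    total = bottom.astype(np.int16) + top.astype(np.int16)  # widen to avoid uint8 wrap-around
    return np.clip(total, 0, 255).astype(np.uint8)
```

Where the white strokes of two layers overlap, the sum simply clips to pure white, so nothing is lost.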
@dm4life579 1 year ago
This will take my non-existent photo bashing skills to the next level. Thanks!
@adastra231 1 year ago
wonderful
@williamuria4048 1 year ago
WOW I like It!
@blood505 10 months ago
Thanks for the video 👍
@Marcus_Ramour 10 months ago
Brilliant video and thanks for sharing your workflow. I have been doing something similar but using blender & daz studio to build the composition first (although this does take a lot longer I think!).
@WolfCatalyst 1 year ago
This was a great tutorial on Affinity
@mayalarskov 1 year ago
hi Olivio, the image of the castle has the same link as the explorer image. Great video!
@starmanmia 5 months ago
Hello future me, remember to use IP-Adapter for faces and body, and have ADetailer as a backup; works well x
@kyoko703 1 year ago
Holy bananas!!!!!!!!!!!!!!!!!
@Grimmona 1 year ago
I installed Automatic1111 last week and now I'm watching one video after another from you, so I can get ready to become an AI artist😁
@OlivioSarikas 1 year ago
Awesome!!!
@AlfredLua 1 year ago
Hi Olivio, thank you for the super cool video! Curious, if you were using a depth map instead of softedge for the woman, how would you edit it in Affinity to remove the background? It seems trickier for depth map since the background might be a shade of gray instead of absolute black. Thanks.
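For what it's worth, one hedged way around the gray-background problem raised here: rather than erasing by hand, threshold the depth map. In a ControlNet (MiDaS-style) depth map, near objects are bright and the distant background is dark, so zeroing everything below a cutoff pushes the background to pure black. The cutoff value is an assumption you would tune per image:

```python
import numpy as np
from PIL import Image

def cut_depth_background(depth: Image.Image, cutoff: int = 60) -> Image.Image:
    """Force the darker (more distant) part of a depth map to pure black."""
    arr = np.array(depth.convert("L"))
    arr[arr < cutoff] = 0  # background gray -> black; the brighter subject stays intact
    return Image.fromarray(arr)
```

This only works cleanly when the subject really is nearer than everything behind it; otherwise manual masking is still needed.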
@ddiva1973 11 months ago
@14:43 mind blown 🤯😵🎉
@glssjg 1 year ago
You need to familiarize yourself with masks in your image editor so that you're using a nondestructive process instead of rasterizing and then resizing things, which loses quality. And if you erase things, you won't have any way to undo beyond the undo button.
@theSato 1 year ago
In a way, I agree with you. But honestly, the whole point of a workflow like this (and AI/SD in general, I think) is that it's as quick/efficient as possible. Going in and using more "proper" methods like masking/mask management, more layers, etc. is nice, but it takes more time and more clicks, and for the purposes of making a quick map for ControlNet like this, it's likely not even worth bothering (in my opinion).
@glssjg 1 year ago
@@theSato I mean, once you learn to use masks it is so much quicker. For example, he had to resize the girl larger because he wanted to make sure the quality was best. If he had used a mask, he could have just erased with a black paintbrush (hit X to switch to a white brush to correct a mistake), or used the free selection method and, instead of pressing Delete, filled with the foreground color by hitting Option+Delete. It's a super small thing, as you said, but it will make your workflow faster, your mistakes less damaging (resizing a rasterized image over and over decreases its quality), and lastly it will just make your images better. Sorry for writing a book; once you learn masks you will never not use them again.
@jonmichaelgalindo 1 year ago
I've found myself saving intermediate steps less and less. Something about AI just changes the way you feel about data. (Also, Infinite Painter doesn't have masks, and I can make great art just fine.)
@blakecasimir 1 year ago
@@theSato I agree with this. The bashing part of the process isn't so much about precision as giving SD a rough visual guide to what you want.
@theSato 1 year ago
@@ayaneagano6059 I know how to use masks, don't get me wrong. But it's an unnecessary extra step when you're just trying to spend 30 seconds bashing some maps or elements together for SD/ControlNet. The precision is redundant and I have no need to sit there and get it all just right. For purposes other than the one shown in the video, yes, use masks and it'll save time long term. But for the use in the video, it just costs more time when it's meant to be done quickly, and the quality loss from resizing is irrelevant.
@EmilioNorrmann 1 year ago
nice
@novabk2729 1 year ago
Super useful!!!!! thx
@SergeGolikov 1 year ago
Brilliant results! Albeit from a very convoluted workflow, beyond all but the most dedicated; but as the saying goes, no pain, no gain 🍷 Would it not be simpler to create the control maps right in Affinity Photo by using the Filter > Detect Edges command on your source images? Just a thought.
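The suggested filter route can be approximated in a couple of lines; note, though, as Olivio's reply points out, that a classical convolution filter like this is not the same as ControlNet's learned HED/PiDiNet soft-edge preprocessors, so the maps will look different. A sketch with Pillow:

```python
from PIL import Image, ImageFilter

def quick_edges(img: Image.Image) -> Image.Image:
    """Classical edge detection: white edges on black, a rough stand-in for a soft-edge map."""
    return img.convert("L").filter(ImageFilter.FIND_EDGES)

# Hypothetical usage on one of the source photos:
# quick_edges(Image.open("explorer.jpg")).save("explorer_edges.png")
```

Whether the resulting map guides ControlNet as well as a real soft-edge preprocess would need testing side by side.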
@hngjoe 1 year ago
Hi. Thanks for sharing your smart notes on every new thing, I really appreciate it. I have one question: after checking for updates in SD's extensions, the system responds that I have the latest ControlNet (caf54076 (Tue Jun 13 07:39:32 2023)). However, I can't find the SoftEdge control model in that dropdown list, though I do have the SoftEdge ControlNet type and preprocessor. What might be wrong?
@Pianist7137Gaming 1 year ago
For users on iOS 16 and above, there's an easy way to crop out the image: transfer the image to your phone (Google Photos or something), save the image, then press and hold on the area you want captured. Tap share and save the image, then transfer it back to your PC.
@Kal-el23 1 year ago
It would be interesting to see what your outcome is without the maps, and just using the prompts as a comparison.
@shipudlink 1 year ago
like always
@rodrigoundaa 1 year ago
Amazing video, as usual!!! I'm still not getting where to do it. Is it local on your PC? Do you need a very powerful GPU? Or is it online?
@yoavco99 1 year ago
To fix faces automatically you can use the adetailer extension.
@merion297 1 year ago
Cool! Now what if we make an animation using e.g. Blender, but only for the line art, then input each frame to ControlNet and generate the final animation frame by frame? I wonder when it will become consistent enough that we can consider it a real animation.
@TorQueMoD 1 year ago
This is great! What's the AI program you're using called? It's obviously not Midjourney.
@d1m18 1 year ago
This is very valuable content, but may I suggest you alter the title a bit? It is not very enticing to users who are not fully in the know about AI and prompts. Keep up the great work!
@gwcstudio 1 year ago
How do you control a scene with 2 people in it? Say, fighting. Do a map bash and then a colored version of the map with separate prompts?
@KryptLynx 1 year ago
Those fingers, though :D
@nsrakin 1 year ago
You're a legend... Are you available on LinkedIn?
@DJHUNTERELDEBASTADOR 1 year ago
That was my method for creating art 😊
@hugoruix_yt995 1 year ago
Oh I see, I misunderstood. The name makes more sense now
@cryptobullish 1 year ago
Crazy cool! How can I retain the face if I wanted to use my own face? What’s the best prompt to use to ensure the closest resemblance? Thanks!
@wykydytron 1 year ago
Make a LoRA of your face, then use ADetailer
@nspc69 1 year ago
It can be easier to fuse layers with an "additive" filter
@NERvshrd 1 year ago
Have you watched the log while running hires fix with upscale by 1? I tried doing so as you noted, but it just ignores the process: on or off, no difference in output. Might just be because I'm using Vlad's fork. Worth double-checking, though
@anim8or 1 year ago
What version of SD are you using? Have you upgraded to 2.0+? (If so do you have a video on how to upgrade?)
@ValicsLehel 1 year ago
OK, you can use A1111 to get the outline, but a Photoshop filter can do this too, and at any resolution. So I think these first steps can be done with filters to get the outline picture and bash it. You could even do the mix roughly first and then apply the filter; wouldn't that speed up the process, because you can see what you are doing more easily?
@OlivioSarikas 1 year ago
I don't think Photoshop has filters for depth maps, normal maps, or OpenPose. And for the soft edge filter there is an option, but there are 4 options in ControlNet; does the PS version look exactly the same as the ControlNet version?
@MONTY-YTNOM 1 year ago
How do you see the 'quality' from that dropdown menu?
@hakandurgut 1 year ago
It would have been much easier with Photoshop's Select Subject. I wonder if edge detection would do the same for soft edge
@honestgoat 1 year ago
Great video, Olivio. What extension or setting are you using that lets you, at 11:13, select the VAE and clip skip right there on the txt2img page?
@forifdeflais2051 1 year ago
I would like to know as well
@addermoth 1 year ago
In Auto1111 go to Settings > User Interface and look down the page for "[info] Quicksettings list". From there go to the arrow on the right, then highlight and check (a tick mark will appear) both "sd_vae" and "CLIP_stop_at_last_layers". Restart the UI and they will be where Olivio has them. Hope that helped.
@forifdeflais2051 1 year ago
@@addermoth Thank you!
@Shandypur 1 year ago
There's a close button at the bottom right of the preview image. I feel a little anxiety that you didn't click it, haha
@springheeledjackofthegurdi2117
could this be done all in automatic using mini paint?
@bjax2085 1 year ago
Still searching for this AI tool for comic book and children's book creators: 1. AI draws an actor using prompts. 2. Option to convert the selected character to a simple, clean 3D frame (no background); the character can be rotated. 3. The limbs, head, eyelids, etc. can be repositioned using many pivot points. 4. Then we can ask for the character to be completely regenerated using the face and clothing of the original. Once we are satisfied, we can save and paste the character into a background graphic.
@TheElement2k7 1 year ago
How did you get two tabs of ControlNet?
@rajendrameena150 1 year ago
Is there any way to render the render elements inside a 3D application (masking ID, Z-depth, ambient occlusion, material ID, and other channels) to add information to Stable Diffusion for making more variations out of it?
@foxmp1585 9 months ago
Currently SD can properly read Z-depth (depth map), material ID (segmentation map), and normal maps. And it depends on the app of your choice (Blender, Max, Maya, C4D, ...). Each of these apps has its own way of rendering/exporting these maps; you'll need to find that out yourself. It'll take time but it's worth it!
@lsd250 1 year ago
Hi all, can someone answer a question for me? How much GPU do I need to run A1111? I'm mostly using Midjourney because I have a really old PC
@moomoodad 1 year ago
How do you fix deformed fingers, extra fingers, and bifurcation?
@andu896 1 year ago
Remove background first with AI or right click on Mac. Then do the depth maps.
@electricdreamer 1 year ago
Can you do this with Invoke AI?
@Shoopps 1 year ago
I'm happy AI still struggles with hands.
@maxeremenko 1 year ago
The image is not generated from the mask I created, only from the prompt. I have set all the settings as in the video. What could be the problem?
@jibcot8541 1 year ago
Have you clicked the "Enable" checkbox in the ControlNet panel? I'm often missing that!
@maxeremenko 1 year ago
@@jibcot8541 Thank you. Yes, I clicked on Enable. Unfortunately, it keeps generating random results. It feels like something isn't installed.
@maxeremenko 1 year ago
@@jibcot8541 The problem was solved by removing the segment-anything extension
@serena-yu 1 year ago
Looks like rendering of hands is still the Achilles' heel.
@OlivioSarikas 1 year ago
Hands are just really hard to create and understand. Even for actual artists, this is one of the hardest things to create
@serizawa3844 1 year ago
0:01 six fingers ahushauhsuahsua
@emmanuele1986 1 year ago
Why don't I have ControlNet in my Automatic1111?
@OlivioSarikas 1 year ago
Because that is an extension you need to install
@ericvictor8113 1 year ago
Almost FIRST?
@OlivioSarikas 1 year ago
❤‍🔥
@itchykami 1 year ago
Everyone wants to give bird wings. I might try using a peacock spider instead.