
NEXT-GEN NEW IMG2IMG In Stable Diffusion! This Is TRULY INCREDIBLE! 

Aitrepreneur
153K subscribers
185K views

ControlNet is a brand-new neural network structure that, through different specialized models, creates image maps (edge, depth, pose, and more) from any image and uses that information to transfer structure from one image to another, making it a really powerful option in the image-to-image tab inside Stable Diffusion! This gives you even more power and control over the final result than before. So in this video, I will show you how to install the extension and how to use the different models to get the best results possible!
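For reference, a minimal command-line install sketch (one possible route; the webui's Extensions > Install from URL tab does the same thing). It assumes the Automatic1111 webui and that the truncated extension URL below points to Mikubill's sd-webui-controlnet repo:

    :: run from inside your stable-diffusion-webui folder
    cd extensions
    git clone https://github.com/Mikubill/sd-webui-controlnet
    :: then place the downloaded control_sd15_*.pth model files into
    :: extensions\sd-webui-controlnet\models and restart the webui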
Did you manage to install the extension? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
✨ Support my work on Patreon: / aitrepreneur
⚔️ Join the Discord server: bit.ly/aitdiscord
🧠 My Second Channel THE MAKER LAIR: bit.ly/themake...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Runpod: bit.ly/runpodAi
Miro board: miro.com/app/b...
Extension URL: github.com/Mik...
The models: huggingface.co...
Special thanks to Royal Emperor:
- BSM
Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
#stablediffusion #controlnet #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
►► bit.ly/stabled...
RECOMMENDED WATCHING - My "Tutorial" Playlist:
►► bit.ly/TuTPlay...
Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

Published: 13 Feb 2023

Comments: 346
@Rafael64_ (a year ago)
Not even a week goes by and there's a significant image-to-image enhancement. What a time to be alive!
@Aitrepreneur (a year ago)
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
@MrArrmageddon (a year ago)
Do we know if these are safe? I don't even know how to scan pickle anymore. I have avoided them for months. Amazing video by the way. Thank you.
@joachim595 (a year ago)
“Type cimdy”
@peterbelanger4094 (a year ago)
👍👍👍👍👍👍👍 Great extension! Paused the video, got it all downloaded and installed before I finished it. Runs fine on my GTX 1060 6GB, even without the lowvram option; actually 10x faster without it. Only the xformers option is needed.
@MrArrmageddon (a year ago)
@@peterbelanger4094 If you can, Peter? I have an RTX 4080 16GB. I've never used any xformers. Should I look into them? And if so, what purpose do they serve? lol If you can't explain, that is fine.
@zwojack7285 (a year ago)
What SD version are you using? I only have... 1.5, I think, and the extension only appears in txt2img, not in img2img.
@SnowSultan (a year ago)
If this works as well as it appears to, this is both game-changing and life-changing for artists like myself who work in 3D but want more illustrated or toony results. I've waited 24 years to be able to make true 2D art with 3D methods. Even if it's not perfect yet, this gives me hope.
@muerrilla (a year ago)
I'm playing around with the scribble model and I'm absolutely blown away!
@Smokeywillz (a year ago)
Topaz Studio 2 was coming close, but THIS is next level.
@PriestessOfDada (a year ago)
I had the same thought. Makes me want to train my cc4 characters
@SnowSultan (a year ago)
@@PriestessOfDada I've had decent luck using untextured DAZ figures as ControlNet pose references, but I do not know what the results would be if you train a checkpoint or LoRA on a complete 3D character. If you can still get 2D or anime results from it... well, I'll have a lot of training to do. ;)
@mrhellinga9440 (a year ago)
this is pure gold
@NC17z (a year ago)
This extension is amazing! I'm having an absolute blast with it. It is solving so many of my problems with matching the look of realistic photos to what I'm feeding it for an image and with my prompt. Thank you so much for what you do. You've been my first go-to on YouTube for weeks!
@peterbelanger4094 (a year ago)
I'm having fun with the sketch preprocessor. Runs fine on my GPU (GTX 1060 6GB), not even using the --lowvram option.
@o.b.1904 (a year ago)
The pose one looks great; you can pose a character in a 3D program and use it as a base.
@mactheo2574 (a year ago)
What if you use your own body to pose?
@vielschreiberz (a year ago)
Perhaps it will be useful with some simplified solutions like Daz 3D, or with a pose library.
@Amelia_PC (a year ago)
Yup. I've been using Daz to help me with comic book character poses for years, and it takes only seconds to put a character in different poses (if a person says it takes more than that, it's because they're newcomers or don't have much experience with 3D programs).
@pladselsker8340 (a year ago)
And yeah, the 3D software thing is actually a good idea if you implement good inverse kinematics on it. It can probably save time and be faster than a Google search for simple (or really specific and complex) poses. I was learning how to do that last night in Blender; I'm almost done with the model, and it's actually not too hard to make. You don't even have to render anything, btw: you just take a screenshot when the angle and everything seems okay, and then paste that into the webui. Works like a charm.
@sonydee33 (a year ago)
Exactly
@pladselsker8340 (a year ago)
This is THE extension everyone needed. I don't know if people realise it, but this is a MEGA GAME CHANGER. Up until now we ONLY had tools for changing the details of the image: inpainting, pix2pix, img2img, LoRAs, etc. All of this is JUST for the details, and it can even make control of the composition harder to achieve (especially for TIs and LoRAs; they affect it negatively quite a lot in general). This new extension now handles the composition of the images. This thing is literally filling a hole in the AI image-making process. One other such hole that I hope someone or a team will learn how to fill now is being able to make LoRAs of things that don't yet exist, like OCs.
@lefourbe5596 (a year ago)
Someone like me? Totally agree. I strive to turn my bad CG models into glorious anime characters in Blender, using Stable Diffusion as a render engine. I already have Dreambooth and LoRA ready to use, but I lack consistency in character features to make a nice animation. I lack time, sadly, and I'm still an amateur; definitely interested in a Discord gathering people together to find the best settings for everyone.
@pladselsker8340 (a year ago)
@@lefourbe5596 I think the Touhou AI project is one of the best servers for that. A lot of people just throw in information every day, and some of them share their whole workflow if you ask. It's so hard to keep up with them haha.
@Alex-dr6or (a year ago)
This is exactly what I need. Last night I was having fun with the blend option in Midjourney and wished SD had something similar. This video came at the perfect time.
@ristopaasivirta9770 (a year ago)
My biggest complaint about SD has been the lack of control. In order to make comic books and the like, you need to be able to precisely control the pose of the characters. Gonna see how well this holds up. Thank you for the video!
@coda514 (a year ago)
Saw info about this on Reddit, I knew you would put out a how-to video so I waited to install. Glad I did, you did not disappoint. Sincerely, your loyal subject.
@Unnaymed (a year ago)
It's epic, the power of Stable Diffusion is upgraded! ❤️
@SuperEpic-vb8nq (a year ago)
This is absolutely amazing; my only complaint is that it doesn't seem to work with batch img2img. If that gets working, then this could easily solve the issue with Stable Diffusion videos where details tend to be "sticky" due to the seed not shifting with the video. This could help stabilize it. Edit: after an update, it works with batch img2img and it does exactly what I wanted. What a time to be alive!
@Ich.kack.mir.in.dieHos (a year ago)
Yo, do you make videos in Stable? Because I do, and I'm interested in batch mode/animation: consistent characters and places. Can we connect on Insta?
@xnooknooknook (a year ago)
I really like txt2img but img2img has been where I've spent most of my time. Scribble mode looks amazing! I need it in my life.
@A_Train (a year ago)
Thanks for being on the bleeding edge of this and imparting your knowledge for artists like me. My question is: what is the best way to implement Stable Diffusion in Blender? I used a version like 3 months ago, and now that seems so outdated.
@GolpokothokRaktim (a year ago)
I recently experimented with BlueWillow and I'm really amazed by it. BlueWillow recently launched V2 with a brand-new model upgrade. Now I get better-quality images with more aesthetic outputs.
@DjDiversant (a year ago)
Installed it just a couple of hours before the vid. Thx for the tut!
@voEovove (a year ago)
Yet again, you have managed to blow my mind! Thank you for showing this amazing new functionality! Feels like these tools are getting more insane every single day.
@ysy69 (a year ago)
Downloading the models now and will try. Thank you so much; this seems very powerful. In fact, I've been spending more time on img2img lately. Even in its current state, it is fantastic... can't imagine the possibilities with this new extension.
@OsakaHarker (a year ago)
K, you forgot to copy the checkpoints into ControlNet/annotator/ckpts; if you do, the hand pose works amazingly using the openpose_hand preprocessor and the openpose model. Thank you for this amazing video; this changes a lot about how we create images.
@rmb6037 (a year ago)
Where are those? I don't see them on the GitHub page.
@OsakaHarker (a year ago)
@@rmb6037 On the models link page, go back once into ControlNet and then enter the annotator/ckpts folder.
@ThisOrThat13 (a year ago)
@@rmb6037 Look within "annotator" just above the models folder, then into ckpts for those files.
@ThisOrThat13 (a year ago)
That is where I'm lost now. Where would those files (.pth & .pt) go? Just into the models folder with everything else?
@OsakaHarker (a year ago)
@@ThisOrThat13 I noticed they were auto-downloading, but that wasn't working for me, so I put them all in here: \extensions\sd-webui-controlnet\annotator\ckpts
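For orientation, here is the folder layout this thread describes (a sketch assuming the default extension path, not an exact listing):

    stable-diffusion-webui\
        extensions\
            sd-webui-controlnet\
                models\        <- the control_sd15_*.pth model files
                annotator\
                    ckpts\     <- the auxiliary .pth/.pt checkpoints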
@purposefully.verbose (a year ago)
I saw people talking about this concept on several other channels, and all were like "I hope this comes out for auto1111", and I'm all "it is!" - and linked this video. Hopefully you get more subs.
@dreamzdziner8484 (a year ago)
Wow. So exciting. Thank you dear Overlord 💪🙏🏽
@kylehessling2679 (a year ago)
I've been wishing for this since day one of using SD! This is going to be so useful for generating versions of my graphic design work!
@kuromiLayfe (a year ago)
Cannot wait for this to be an extension for txt2img too: prompt for a specific character and then add a scribble or preprocessed image to the script to get the described character in the pose you want.
@BlackDragonBE (a year ago)
It already has this.
@BlackDragonBE (a year ago)
@@ClanBez Open the txt2img tab with the extension this video explains installed. At the bottom of the tab you can use ControlNet just like in img2img, including the scribble model. By providing a prompt and a scribble, you can generate images with lots of control. I suggest lowering the weight to 0.25-0.5 to start with, as you can get some weird results otherwise, depending on your drawing skills. Good luck.
@zwojack7285 (a year ago)
For some reason it only shows up in txt2img for me lmao
@gloxmusic74 (a year ago)
Installed it straight away and can honestly say I'm super impressed 👍👍👍
@DOntTouCHmYPaNDa (a year ago)
Awesome thanks for sharing! Btw, if you click the down arrow just to the right of the letters LFS it just downloads the models without opening another tab. Small but useful tip :)
@JohnVanderbeck (a year ago)
So, a few thoughts after playing with this. I specifically zeroed in on the pose model, because one of the things I've been trying (and failing) to do for a long time now is mix my own photography with generation, but having any sort of control over posing was nearly impossible. Until now! First off, this works in txt2img as well, so you can supply it a pose reference image and then do your normal txt2img prompts and get a completely new generation in that pose. Mind fracking blown! That said, temper expectations, at least for now, as the pose estimation is not so accurate. This is rather surprising actually, given that 2D pose estimation has been a pretty well-solved problem for longer than SD has been a popular thing, so not sure what's up there. Still, it has already allowed me to start making fusions of my studio photography with SD generations, and it is amazing!
@ysy69 (a year ago)
Are you saying this extension can also be used over at txt2img?
@JohnVanderbeck (a year ago)
@@ysy69 Yes! That's mostly where I am using it right now, actually. I'm taking some of my models that I've shot in studio, bringing them into SD in txt2img, putting the photo in for the ControlNet pose, and then using txt2img to generate a completely new person in roughly the same pose as the one I photographed.
@Seany06 (a year ago)
@@ysy69 txt2img is where it works, not img2img
@Seany06 (a year ago)
According to GitHub it should be possible with the OpenPose model to control the skeleton, but Gradio isn't easy to work with. I'm sure in a few months we'll have a lot more control. These tools are insane so far!
@ysy69 (a year ago)
@@JohnVanderbeck When you say model, you're referring to SD custom models, right? And not people as models? When you bring a photo to SD, that is img2img... in txt2img, one doesn't use an image as reference, so I guess you meant img2img and then using the prompt to make the change into a new person, correct?
@haidargzYT (a year ago)
Cool 😮 The AI community keeps surprising us every day.
@Irfarious (a year ago)
I love the way you say "down below"
@yeahbutcanyouredacted3417 (a year ago)
Amazing tool - solves a lot of RNG for us to get closer to the designs we are looking for. Ty again, Aitrepreneur, for helping get my home studio going.
@XaYaZaZa (a year ago)
My favorite youtuber 🧡
@h8f8 (a year ago)
Never knew you could type cmd in the address bar of the top directory... thank you so much.
@RyanBigNose (a year ago)
I can't install the first one, can anyone help me out?
D:\ai\stable-diffusion-webui>pip install opencv-python
'pip' is not recognized as an internal or external command, operable program or batch file.
@SolracNaujMauriiDS (a year ago)
The same thing happened to me with CMD, but it worked for me with Anaconda
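(A likely fix, assuming Python itself is installed and on PATH: invoke pip through the interpreter instead of calling it directly,

    python -m pip install opencv-python

which matches the workaround another commenter mentions further down.)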
@toddzircher6168 (a year ago)
Thank you for the wonderful walkthrough on this new extension. I have a lot of 2D pose sheets/sketches from various artists, and I can totally see using them with a ControlNet.
@AmirZaimMohdZaini (a year ago)
This feature is finally able to make a new image with the exact style of the original input picture.
@s.foudehi1419 (a year ago)
This is truly next-level stuff. I'm glad I found this video. Has anyone already tried creating a depth map with ControlNet and then using that to create a 3D model in Blender? There are some good tutorials on here as well; you might wanna check that out :)
@cinemantics231 (a year ago)
This just keeps getting better and better! Thanks for putting this together. Is there any way to merge two different images? Like take the pose from one image and implement it in the style or background of another?
@pastuh (a year ago)
It's called inpainting. Just use a Photoshop plugin for this: paint over (or place an image) on a different layer and click inpaint.
@thebonuslvl7181 (a year ago)
Magic... so much more to come. Thank you for keeping us on the leading edge.
@JohnVanderbeck (a year ago)
I'm drooling at the thought of using the pose model!
@SteveWarner (a year ago)
Top notch training! Thanks for this comprehensive overview! Looking forward to testing this out!
@MrOopsidaisy (a year ago)
Are you able to create an updated installation video? I've been out of the loop for a few months with Stable Diffusion and feel lost with all the updates... :(
@sebastianclarke2441 (a year ago)
Why have I only heard about this for the first time today!? Wow!!
@StrongzGame (a year ago)
I need this video on a flash drive for reference forever.
@EntendiEsaReferencia (a year ago)
I've been waiting for the 1.5 depth model, and now it's here, and with a few friends 🤗🤗
@TomiTom1234 (a year ago)
Something you could have covered in the video is inpainting using this method. I noticed on their site that you can inpaint part of an image. I wish you had explained that.
@Fingle (a year ago)
NO WAY THIS IS INSANE
@Eins3467 (a year ago)
Thanks for another great vid! I'm more interested in the OpenPose model, because then you won't need to prompt a pose too much. It seems like in the video it can retain some details like the clothes' color, so it also needs a prompt for that to change. Very interesting.
Edit: Some more interesting things:
- It can accept characters, provided the model knows them (Azur Lane characters, for example).
- Like in the video, it does eat up VRAM fast; my Linux box almost crashed one time lol.
@upicks (a year ago)
Simply amazing, thanks for the video! This makes img2img even better than I could have imagined.
@Rickbison (a year ago)
I finished my last short with the old img2img. Downloading all the models; let's see how this goes.
@shongchen (a year ago)
Hello, I have a question: how do I find the "control Stable Diffusion with human pose" tab? Thank you for sharing.
@harambae117 (a year ago)
This looks like a lot of fun and really good for professional use. Thanks for sharing dear AI overlord
@ModestJoke (a year ago)
I noticed you didn't activate your Python virtual environment before installing opencv-python. Do you not use a venv when running Stable Diffusion? I thought it was on by default, and I'm not even aware of a way to disable it. (There could be one, though.) When I run the webui-user.bat file, the very first line in the output is a note that it's running a virtual environment. If you install it in your system's main Python folder instead of the venv, will it even work?
If anyone out there is having trouble getting it to work and thinks this might be the cause, try this: if you're on Windows, open a command line in your webui folder, then enter the command (with quotes!): "venv/Scripts/activate". This runs activate.bat in that folder to begin using the virtual environment inside your webui folder. Then run the pip command to install opencv-python into that environment. If you're on Linux, you'll have to look up how to activate the virtual environment before installing with pip.
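(The Windows steps described above, spelled out as a minimal sketch; the install path is hypothetical, adjust it to your own:

    cd C:\path\to\stable-diffusion-webui
    venv\Scripts\activate
    pip install opencv-python

On Linux, the equivalent is "source venv/bin/activate" followed by the same pip command.)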
@alephstar (a year ago)
There is no way for me to properly thank you for this. I've had the pip issue since I installed it a few days ago and this was the only thing that worked to solve my troubles. Thank you so so so much for taking the time to write this comment.
@Pertruabo (a year ago)
Lord, you're a godsend. Hope this gets pinned, thanks a lot!!
@JonahHache (a year ago)
THANK YOU!!
@cybermad64 (a year ago)
Thanks a lot for sharing your Miro boards; those are exactly the tests I would do myself to understand how the system works. You are saving us a lot of tech-investigation time! :)
@BenPhelps (a year ago)
Brilliant. Any tips to install on M1 Mac?
@AnimatingDreams (a year ago)
That's amazing! Will it work on Colab as well?
@duplicatemate7843 (a year ago)
does it work on colab?
@Varchesis (a year ago)
This is insanely great! Thanks for sharing this info.
@desu38 (a year ago)
Goddamn, the webui just keeps getting more and more powerful. 😯
@angelicafoster670 (a year ago)
Now we need a way to combine a pose + a consistent character together.
@dthSinthoras (a year ago)
So now we have this, which seems better than previous depth models; we have LoRA, which seems better than hypernetworks; x versions of video creation (lost track of which is best here)... I would LOVE a "state of the art" video covering what is outdated, what the different useful variations are for things, etc. :)
@rmb6037 (a year ago)
could be a monthly thing
@user-bm9oy4gx2l (a year ago)
Thanks again for the good content! The pose thing looks interesting 👀
@Smiithrz (a year ago)
Thank you so much for making this video. I was just talking about how this stuff will solve a lot of posing problems we have without it. Can’t wait to try it with my model photography as references 👏🏻
@Ben-xr6gy (a year ago)
Hello, at the last generation step my cmd console is stuck at "Loading preprocessor: canny, model: control_sd15_canny(fef5e48e)". Anyone know how to fix that?
@ChrisCapel (a year ago)
Man, I've followed the instructions but I'm not even seeing a ControlNet tab. Anyone else having this problem?
@Amelia_PC (a year ago)
Please consider talking about ControlNet with Anime Line Drawing for Stable Diffusion when it's available to the public! :D It's a revolution for comics/manga and animation.
@IlRincreTeam (a year ago)
This is VERY impressive
@NinjaLex1776 (a year ago)
It is a cool extension, but 45 gigs, ouch. Also, the downloads are not safetensors and contain pickles, so be very careful what you download... Anyway, cool vid, thank you.
@jivemuffin (a year ago)
Nice, comprehensive video -- and thanks for the Miro board in particular! Makes me think there's great potential for AI workflows in there. :)
@unknowngodsimp7311 (a year ago)
This is awesome, but I'm kinda confused about how the 3 "inputs" relate to each other. Perhaps this is me just not understanding img2img. Basically, my question is: how do the prompt, the image, and the 2nd image (for the depth map) relate to each other in the resulting image? Doesn't this new extension mean that we could (also) use only an image, or a prompt with the depth-map image, to generate an image? I would love an in-depth answer 🙏
@ryanp515 (2 months ago)
This is cool. I was wondering, could this be used to make line art? That would be a time-saver with poses, etc.
@puma21puma21 (a year ago)
Downloaded the models and moved them to the models folder, but ControlNet doesn't show.
@rickardbengtsson (a year ago)
Great breakdown
@kallamamran (a year ago)
Finally! If you can say that for something that didn't take 24h 😃
@digitalkm (11 months ago)
Awesome, thank you!
@ThatOneGuyWithAReallyLongName.
Anybody else having a problem where ControlNet just... stops working? I had it working just fine this morning, and now I'm trying to use it again and it's refusing to acknowledge my depth maps. In fact, I was having a hell of a time even getting it to /make/ a depth map. Finally got it to spit one out, and now I'm trying to use said depth map in txt2img and it's ignoring it. ControlNet is Enabled, the depth map has been placed in ControlNet, no preprocessor, and control_sd15_depth is selected as my model. What am I doing wrong?!
E: Fixed it; the depth model for ControlNet just decided to self-corrupt. Deleted and reinstalled the control_sd15_depth model and I'm back up and running.
@cyberfalcon2872 (a year ago)
Your ControlNet playlist is out of order. This video should be the first on the list
@RyanBigNose (a year ago)
this is awesome
@jucabalmacabro (a year ago)
Amazing. I'm buying a new PC to be able to play with Stable Diffusion. IN LOVE.
@ThisOrThat13 (a year ago)
1TB isn't enough anymore with all the models I have installed just for text2text.
@jucabalmacabro (a year ago)
@@ThisOrThat13 OMG, my new PC only has 1TB, I'm screwed.
@ThisOrThat13 (a year ago)
@@jucabalmacabro Most mobos will hold two M.2s now. The 2nd one could be a 2TB for SD and games. Or have a portable HD to swap models in and out.
@nathancanbereached (a year ago)
I'd love to see a video/cartoon where you transition from one scene to another by slowly increasing the MiDaS weight. Would be a very trippy, dream-like way to transition to a totally different place.
@leafdriving (a year ago)
Dear AIOverlord: Thank you as always for being ahead of the pack, and showing me something useful and amazing. In every example, your two images (input above and input below) are the same ~ What happens if they are different? (Naturally, I couldn't possibly just try it lol)
@martinkaiser5263 (a year ago)
Did exactly the same steps, but ControlNet is not showing up in the webUI.
@mikishomeonyoutube2116 (a year ago)
This is TRULY INCREDIBLE!
@Cneq (a year ago)
Man, ControlNet OpenPose + VR full-body tracking with 11 points can seriously open up some possibilities.
@itsalwaysme123 (a year ago)
There exists a safetensors version of the models that takes up *significantly* less space! But other than that, golden.
@Deniska1911 (a year ago)
Hello, please make a video on how to use this method for a batch of images. When working in the batch tab, only the first image is rendered, and then an error occurs. Please help 🙏
OK, I'm fine: if you give it a destination folder it throws an error, but if you don't touch anything and leave the default, there is no error. Maybe this helps someone 😎
@thailandbestbest8172 (a year ago)
Using it with an RTX 2060 6GB without lowvram... Just extract the model and it will be around 1.444 GB instead of 5.7 :) :) :) (the extraction file is in the GitHub repo). And thank you very much, you are doing a wonderful job!!!
@boythee4193 (a year ago)
I had to restart the whole thing to get the models part to show up, but it did work. Too late to test it out now, though :)
@thays182 (11 months ago)
Is there a way to use img2img and ControlNet to move an existing character, with clothing and style, into a new pose/position, and have it still be that initial character? (Without a LoRA, only going off of one image?)
@mrrooter601 (a year ago)
This is great; the hand one seemed to work for me, at least on one base image. It refuses to work at all with Waifu Diffusion 1.4e2 though.
@gilz33 (a year ago)
Can I install this on my MacBook M1 Max?
@rickland1810 (a year ago)
Amazing videos! Thank you. Just a suggestion: maybe the part where you download models in your vids should come first, so they download while we do the other steps. I already know your videos, so I get ahead of this. But again, thank you.
@CCoburn3 (a year ago)
Wouldn't it be great if you could move the "joints" in the OpenPose model so you could make changes to the pose? But scribble comes close. Good video.
@Daeca (a year ago)
If you're using Automatic1111, there's an extension called OpenPose-Editor. Installing that will create a new tab at the top and you'll be able to edit the 'joints' directly then save as needed. You can even add a background image so that you can properly pose your scene.
@CCoburn3 (a year ago)
@Daeca Thanks. And there are some websites that allow posing models that include hands. That should help with the deformed hand problem.
@secondcave (a year ago)
Hi! I have this error:
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 88, 64] to have 4 channels, but got 8 channels instead
I don't know how to solve it.
@SentinelTheOne (a year ago)
The pip install doesn't work for me. However, it worked in the CMD window via: python -m pip install opencv-python
@XaYaZaZa (a year ago)
Wow dude, I'm having seizures here, this is so cool.
@ZeroCool22 (a year ago)
Where is MiDaS? It's not listed in the Hugging Face files.
@gohan4585 (a year ago)
Thank you sensei bro 🙏
@phatbuihong4014 (a year ago)
Thank you so much.
@dovrob (a year ago)
thanks so much mate
@76abbath (a year ago)
Excuse me, after running "pip install opencv-python", I launched my webui.bat and got this message: "json.decoder.JSONDecodeError: Extra data: line 201 column 2 (char 7282)" plus a bunch of lines... So it's impossible to launch the webui. :( Can anyone help me, please?
@girasan (a year ago)
thank you so much 🙂
@xd-vf1kx (a year ago)
so cool! I love ya!
@ayanechan-yt (a year ago)
To use the seg model: activate the venv (venv/Scripts/activate), then pip install prettytable, then restart the webui.
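(Spelled out, assuming the standard Automatic1111 layout on Windows; the folder is Scripts, not script, and the path below is hypothetical:

    cd C:\path\to\stable-diffusion-webui
    venv\Scripts\activate
    pip install prettytable

Then restart the webui so the extension picks up the new package.)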