
GET PERFECT HANDS With MULTI-CONTROLNET & 3D BLENDER! This Is INSANE! 

Aitrepreneur
Subscribe · 153K
115K views

Recently we discovered the amazing update to the ControlNet extension for Stable Diffusion that allows us to use multiple ControlNet models on top of each other. This was a true revolution, since that brand-new neural network structure lets us combine multiple specialized AI models and create better, more precise images than before! Moreover, a user by the name of toyxyz recently created a 3D skeleton model for Blender, with hands and feet included, that allows you not only to pose the character in any position you want but also to export depth maps and canny maps directly from Blender for use inside Stable Diffusion with ControlNet, making this the ultimate perfect hand and feet creator for Stable Diffusion! So in this video, I will not only show you how to install and use Blender and toyxyz's Blender model but also how to easily export depth maps and canny maps from Blender for use in ControlNet. I will also show you how to get great results without using Blender or ControlNet. So get ready!
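As a rough illustration of what the canny-map export in the description feeds to ControlNet, here is a minimal numpy-only sketch that turns a grayscale render into the white-on-black edge image ControlNet's canny model expects. This is a stand-in for the real preprocessor (workflows typically use OpenCV's Canny or Blender's compositor); the function name and threshold are illustrative assumptions, not part of the actual pipeline.

```python
import numpy as np

def edge_map(gray: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Rough edge map from a grayscale render (values in 0..1).

    A numpy-only stand-in for a canny-style preprocessor; real workflows
    would use cv2.Canny on the image exported from Blender.
    """
    # Horizontal and vertical intensity differences (a simple gradient).
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, :-1] = np.diff(gray, axis=1)
    gy[:-1, :] = np.diff(gray, axis=0)
    magnitude = np.hypot(gx, gy)
    # Binarize: white edges on a black background, the format the
    # ControlNet canny model is trained on.
    return (magnitude > threshold).astype(np.uint8) * 255

# A tiny 8x8 "render" with a bright square in the middle.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = edge_map(img)  # white outline of the square
```

The resulting array could be saved with any image library and dropped into the ControlNet canny unit with the preprocessor set to "none", since the edges are already extracted.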
Did you manage to use the 3D Blender model? Let me know in the comments!
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
SOCIAL MEDIA LINKS!
✨ Support my work on Patreon: / aitrepreneur
⚔️ Join the Discord server: bit.ly/aitdiscord
🧠 My Second Channel THE MAKER LAIR: bit.ly/themake...
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
Runpod: bit.ly/runpodAi
Toyxyz's Character bones that look like Openpose for blender : toyxyz.gumroad...
Blender: www.blender.org/
All ControlNet Videos: • ControlNet
My previous ControlNet video: • NEXT-GEN MULTI-CONTROL...
CHARACTER TURNAROUND In Stable Diffusion: • CHARACTER TURNAROUND I...
EASY POSING FOR CONTROLNET : • EASY POSING FOR CONTRO...
3D Posing With ControlNet: • 3D POSING For PERFECT ...
My first ControlNet video: • NEXT-GEN NEW IMG2IMG I...
Special thanks to Royal Emperor:
- Merlin Kauffman
Thank you so much for your support on Patreon! You are truly a glory to behold! Your generosity is immense, and it means the world to me. Thank you for helping me keep the lights on and the content flowing. Thank you very much!
#stablediffusion #controlnet #blender #stablediffusiontutorial
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
WATCH MY MOST POPULAR VIDEOS:
RECOMMENDED WATCHING - My "Stable Diffusion" Playlist:
►► bit.ly/stabled...
RECOMMENDED WATCHING - My "Tutorial" Playlist:
►► bit.ly/TuTPlay...
Disclosure: Bear in mind that some of the links in this post are affiliate links and if you go through them to make a purchase I will earn a commission. Keep in mind that I link these companies and their products because of their quality and not because of the commission I receive from your purchases. The decision is yours, and whether or not you decide to buy something is completely up to you.

Опубликовано:

 

28 фев 2023

Поделиться:

Ссылка:

Скачать:

Готовим ссылку...

Добавить в:

Мой плейлист
Посмотреть позже
Комментарии : 314   
@Aitrepreneur · 1 year ago
HELLO HUMANS! Thank you for watching & do NOT forget to LIKE and SUBSCRIBE For More Ai Updates. Thx
@generalfishcake · 1 year ago
Blender has an "Align Camera to View" option, which automatically moves the camera to what you're viewing. It's accessible through the search menu (F3, usually).
@TiagoTiagoT · 1 year ago
Also in the Align submenu inside the View menu
@alexy431 · 1 year ago
Or lock the camera to view: View -> View Lock -> Camera to View
@vendacious · 9 months ago
Or you can constrain the camera so that it always looks at the model: 1) Select the camera. 2) Go to the constraints panel/tab. 3) Add a Track To constraint. 4) Select the object as the target. Now when you press G to grab the object, the camera follows it.
@gogodr · 1 year ago
Sometimes people tend to forget that if you want a picture of a hand pose, you can take a photo of your hand with your phone. Just putting the thought here...
@AscendantStoic · 1 year ago
Yeah, sometimes the easiest solution is right in front of people but they don't see it due to habit or other distracting factors (people are just so used to looking anything and everything up on Google).
@mathal07 · 1 year ago
Simple but good advice 👍
@1lllllllll1 · 1 year ago
PSA: you can use emojis in prompts too
@theodoredennis2548 · 1 year ago
@1lllllllll1 what?!
@gogodr · 1 year ago
@kishirisu1268 No, but my phone camera has a timer function that would work just fine for that 😅
@CCoburn3 · 1 year ago
This is great. Hands and feet are so often a problem in Stable Diffusion. I'm glad there is a work-around available.
@Keavon · 1 year ago
When you're viewing from the perspective of the camera in Blender (via Numpad 0), you can fly the camera using noclip-style controls by hitting Shift+` (Shift backtick/tilde). WASD to move, Q/E to go down/up, Shift to speed up, and scroll the mouse wheel to change the base movement speed. Left click to confirm, right click to cancel and revert to where you were before the noclip.
@vendacious · 9 months ago
I never knew this! Thanks!
@Modozenmusic · 1 year ago
There are a lot of useful shortcuts for Blender. You can move an object with G, and if you press G then X you move on the X axis, so you can chain G X, G Y, G Z. This works with rotation too.
@vi6ddarkking · 1 year ago
I have been saying for a while now that Blender was going to be the ultimate AI-powered software. Once text-to-3D improves enough to be integrated into Blender along with the current Stable Diffusion addon, the indie animation revolution is closer than we think.
@FrancisGo. · 1 year ago
Yes. ❤️
@DavidMiller212 · 1 year ago
Within 5 years a completely AI-generated Hollywood blockbuster will be in theaters
@vi6ddarkking · 1 year ago
@DavidMiller212 And it will be made by a group of nerds in a basement and will be outselling Hollywood by a wide margin.
@jasonhemphill6980 · 1 year ago
@vi6ddarkking Can you imagine how disruptive that will be? Hollywood movies cost hundreds of millions. A studio that utilizes AI could cut that to a fraction.
@FrancisGo. · 1 year ago
@DavidMiller212 All the best films have their writers and artists disrespected by Hollywood. Those people will be way better with these tools than some exec with a checklist. Hollywood will get exactly the control they always wanted, but no one will care. Meanwhile, your favorite YouTuber on BookTube can now publish her first novel as an animated oil painting. You do the math.
@mrrooter601 · 1 year ago
Thank you for making these videos! Not only do you help me keep up to date with all the new changes, you actually show how to use them! This is really the one I've been waiting for; I knew as soon as I saw ControlNet had a hand model we would be there soon. And it is literally only going to get easier; I fully expect this to be streamlined within a week.
@mrcead · 1 year ago
This is very clever. 3D programs already support mocap with hand and fine finger data; all that's needed is a Stable Diffusion-type renderer and it's a wrap.
@AscendantStoic · 1 year ago
If you mean using Stable Diffusion to render a scene in Blender, that plug-in already exists. There's also a similar plug-in for instant AI texturing in Blender.
@uk3dcom · 1 year ago
One speed-up for the Blender setup would be to use three linked scenes, each with the render setup for one output. That way, once you have posed the rig, all you have to do is switch to each scene and hit render.
@NyaruSunako · 1 year ago
Funny enough, I have already been using Blender mixed in with my art, so having this at my disposal I can probably create the most insane stuff. Granted, I still change the character since I only use it as my base, but being able to have it all in one place is so nice. I used a 3D model to fix my hands before; now I can do it all at once. Man, I can't wait to create some very beautiful, intricate pieces with this as my base model.
@vi6ddarkking · 1 year ago
Hey, remember when everyone was mocking AI art for not being able to do hands and feet? I seriously don't think anybody believed we were going to advance this fast.
@JayXdbX · 1 year ago
I use Stable Diffusion and the only thing I believe is: no matter how fast I believe AI is improving, it's advancing faster.
@Rifkar122 · 1 year ago
Sometimes I feel bad for artists tbh, especially amateur artists
@vinz344 · 1 year ago
The power of open source
@vi6ddarkking · 1 year ago
@Rifkar122 AI art is a tool for the best and a replacement for the rest. The good and talented ones will incorporate AI art into their workflow and drastically cut down their production time. And those who fail to adapt... well, we all know how evolution works.
@Amelia_PC · 1 year ago
Honestly, I thought it'd come earlier.
@OriBengal · 1 year ago
To save you many steps: if you press N it will open the side panel. There you can find View, and if you are in camera view you can check "Lock Camera to View". Then, while in camera view, you can just use your middle mouse button etc. to navigate like you're already doing, but this time it moves your camera. Saves you having to go back and forth between perspective and camera mode and guesstimating what moving the camera will do.
@NeverduskX · 1 year ago
For the compositing page, I figured out that you can copy the nodes and then set up each combination (depth, canny, openpose) side by side. To choose which one you want to render, just click the corresponding Composite and Viewer nodes. That'll save you from having to remember combinations and reconnect nodes each time.
@TheRoninteam · 1 year ago
One thing that will help is the shortcut Shift+`; this creates a first-person flythrough, like an FPS game.
@louislebel2995 · 1 year ago
To stick the camera to your view: Numpad 0 (or click the camera button on the right), then go to the right panel in the 3D view > click the "View" tab > check "Camera to View" under the "View Lock" section. Useful shortcuts: G: move selected object; R: rotate selected object; Tab: toggle edit mode.
@impactframes · 1 year ago
That's a great tip, I use it all the time
@itskiggu · 1 year ago
It would be perfect if there was a general body shape overlaid on the skeleton for us to visualize the poses better, one that could be hidden on export.
@SantikusG · 1 year ago
The pose editor already does it
@sonderbain · 1 year ago
Search royalskiesllc, he has a better model
@PolarBearon · 1 year ago
I really appreciate people like you who take the time to explain and share stuff like this. Getting into some of these systems and processes can so easily feel totally overwhelming, so having it explained in a simple way by someone like you is a great help. So, thank you!
@TheAiConqueror · 1 year ago
You were right... I love your new video! 🙌
@Aitrepreneur · 1 year ago
Yay! Thank you! Thanks for the tip man ;)
@Pianist7137Gaming · 1 year ago
THANK YOU! I've tried so many other methods, but by far this one has produced the most consistent poses in every image. Hands still come out strange when they are a small part of the image or when the hand is facing the side; the peace sign is also difficult for the AI when the hand is a small part of the entire image, but with an open palm facing out it works perfectly!
@technoclasher1 · 1 year ago
Quick tip: you can roam around the workspace if you press Ctrl+`, and you can use it in camera mode as well
@wakegary · 1 year ago
Hitting Numpad 1 will change the current view to the front-facing ortho view. You can then hit Ctrl+Alt+Numpad 0 to set that view as the camera.
@Steamrick · 1 year ago
The latest version is incredibly improved... more ControlNet variants and no fiddling around to get the different export types. Now someone just has to combine this with one of those easy-to-use web UI things with a thousand preset poses.
@PortalLughnasadh · 1 year ago
Great! Before this video I'd been using painthua. I usually prepare a hand gesture the same way in Photoshop but export only the hand as a PNG file, then put it in painthua in the exact same position, select the hand with the painthua mask and apply the prompt. The results are not as good as with this method, but it was working for me since I was doing some comic art that requires less detail. Thank you K!
@CaritasGothKaraoke · 1 year ago
I guess I should do this for Poser and DAZ now.
@zetracakep · 1 year ago
I just found out that, for me, drawing the hand myself and then editing it into the picture, then using canny or depth, is faster than using Blender. But for those who can't draw hands it's very helpful; you could even say it's a lifesaver.
@orperry8064 · 1 year ago
7:00 YOU CAN USE THE VIEW TAB to attach the camera to the screen and then just position the camera by using the normal controls
@danowarkills4093 · 1 year ago
How do you get the "Rig Main Properties" menu to appear? I can't get that menu to show so I can use the finger properties.
@misternick-ir3or · 1 year ago
Hi, just want to say THANK you very much. Please do more videos like this. You can't imagine how much they help noob people like me create awesome images!
@rawyin · 1 year ago
Wow. Great rabbit-hole tour, Alice! This is nuts!
@TheGreatBizarro · 1 year ago
Click the IK box and see if it helps keep the bones in the proper reference to the body
@fluffsquirrel · 1 year ago
Stable Diffusion and Blender. Best crossover in tech history
@Palzahar · 1 year ago
I found a different way of getting the canny output without going through all those steps (I got so lost in the process that I literally had to make sure there was an easier method). Instead of going to Compositing, you can select Rendering, which is to the LEFT of it, and change the layers in that window.
@HeineArthur · 1 year ago
Could you give a more detailed explanation please? After going to the Rendering tab, what should I do exactly?
@aayush_dutt · 1 year ago
Love the pace at which the playing field is changing!!
@risewithgrace · 1 year ago
Amazing!! Where is the setting to enable multiple control models under the ControlNet tab?
@tahuangkasa · 1 year ago
I had this problem before too. To enable it, go to Settings, scroll down to the ControlNet section, and set the "Multi ControlNet: Max models amount (requires restart)" slider to 3. Click Apply and restart the UI.
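For anyone scripting this instead of using the UI: once Multi ControlNet is enabled, the AUTOMATIC1111 web UI's local API can take several ControlNet units in a single txt2img request. The sketch below only builds the JSON payload; the endpoint, field names, and model filenames follow the ControlNet extension's API as I understand it, so treat them as assumptions to verify against your own install.

```python
import json

# Base64-encoded PNGs exported from Blender would go here.
depth_png_b64 = "<base64 of the depth render>"
canny_png_b64 = "<base64 of the canny render>"

payload = {
    "prompt": "a woman waving, perfect hands, high quality",
    "negative_prompt": "deformed hands, extra fingers",
    "steps": 20,
    # Two stacked ControlNet units, made possible by the
    # "Multi ControlNet: Max models amount" setting above.
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {"module": "none",  # map already rendered in Blender
                 "model": "control_sd15_depth",
                 "weight": 1.0,
                 "input_image": depth_png_b64},
                {"module": "none",
                 "model": "control_sd15_canny",
                 "weight": 0.8,
                 "input_image": canny_png_b64},
            ]
        }
    },
}

body = json.dumps(payload)
# One would POST this to the local web UI, e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", data=body)
```

Setting "module" to "none" skips the preprocessor, since the depth and canny maps were already produced in Blender.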
@kamransayah · 1 year ago
Hey K! I saw this a week ago and didn't even consider it because it involved Blender (I'm a Cinema 4D user), but now it's you, and when you teach something it's different :D (there are a lot of details in your tutorials that I love!) So it was interesting! But even though it's simple, it is time-consuming, and in any line of work time is everything. So it's great to know how to do it for situations where you need something precise, but I think this feature is coming to AI soon! Until then, thank you so much for making our lives easier!
@macronomicus · 1 year ago
Thanks, this is a great use of Blender; it's got all the bells & whistles to do many things! I'm glad to see you jump into Blender, it's an amazing app. I should say there's a bunch of AI stuff for it already, and lots more in the works too. Dive in!
@thomasmann4536 · 1 year ago
Assuming this is rigged correctly, you can select multiple finger controllers and rotate them all at once. Saves quite a bit of time^^. Also, when rotating, make sure to change the orientation to "Local".
@tehs3raph1m · 1 year ago
I wonder if using DAZ3D or Poser and rendering out different passes would give similar results... There's a large library of stuff on there.
@thefriendlyaspie7984 · 1 year ago
7:19 Press N and you'll see a side panel appear. Go to View > Camera to View; now you can change your view while inside camera view mode. Shift + middle mouse button to pan around, middle mouse button to rotate, wheel to zoom in and out.
@Bloodysugar · 2 months ago
Oh boy... this just gave me the idea to add DAZ3D into the mix. Something like modifying this skeleton and hands so it's 100% compatible with DAZ posing tools, modifying positions in DAZ (way quicker, as I have plenty of poses there), then importing the result back into Blender to export for Stable... (or just do everything in DAZ; there's a way to produce heightmaps there...)
@sergentboucherie · 1 year ago
I doubt that someone who is running Stable Diffusion locally would have a computer that cannot run Blender, especially just for playing with that model
@ShawnFumo · 1 year ago
A lot of people use Google Colab though. I didn't realize until recently that you can have a Colab notebook where you press the play button and it gives you a URL, and the whole AUTOMATIC1111 UI is there with the OpenPose tab, multi-ControlNet, etc.
@macronomicus · 1 year ago
The new version is a whole lot better; maybe do an updated video? It has lots more features, and exports everything (10 images) at once when you click render. 💚
@chariots8x230 · 1 year ago
It's pretty cool, except the AI copies the "stiffness" of the 3D pose into the result. There needs to be a way to fix the "stiffness" problem in the final image.
@Beetlebomb3D · 1 year ago
I had no idea the Blender plugin was updated! Thanks for the info :)
@MarkArandjus · 1 year ago
Haha. Since this workflow requires both 3D models and photomanipulation to work... congrats, there is now no more argument that it's not art, as you have creative control over both the input and the output :D
@pwngo · 1 year ago
The rotate-all-phalanges option isn't coming up for me; is there a setting I'm missing? Latest version of Blender and v46 of the poser.
@Arthr0 · 1 year ago
As a 3D artist I can confirm: this will help me in creating concepts for animations
@brianj7204 · 1 year ago
Gonna try this one out, looks promising
@Fhantomlordofchaos · 1 year ago
Hello, how do I bring up Rig Main Properties? I can't find that tool in my Blender setup. Thanks for your advice.
@Luv_draft · 1 year ago
Same problem
@lokitsar5799 · 1 year ago
First off, thank you for the video. The effort is much appreciated and I enjoy your channel. So please don't take it the wrong way when I say that, with the speed AI is advancing, this video will be obsolete in the next 3 months due to better prompt understanding, AI no longer struggling with hands, feet or expressions, and/or an extension in Automatic that lets us do this.
@exomata2134 · 1 year ago
Hope so
@richardduska1558 · 1 year ago
10:06 You can just have both node setups separate but not plugged into the image output. That way you can quickly plug in the appropriate one.
@Gichanasa · 1 year ago
You don't even need Photoshop or Blender to do this... Stable Diffusion now has the 3D OpenPose extension, which gives you the skeleton, depth map, and outlines for canny, all integrated. You can even extract the 3D pose model (body + hands) from imported photographs.
@TC-1207 · 1 year ago
As usual, great stuff, thanks Aitrepreneur.
@grahamthomas9319 · 1 year ago
I'd love to see something like this with a scene as well: block out areas with simple planes and such.
@richardduska1558 · 1 year ago
7:10 Select the camera, then hit Numpad 0, then "G" to "grab". Then you can move it manually with the mouse, or hit either X, Y or Z to move it along the corresponding axis.
@lux-aeterna · 1 year ago
My renderer does not show the depth of the hands. Just a black picture appears. What could be the problem?
@godspeed2145 · 1 year ago
Same
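A likely cause of the all-black depth render, offered as a guess: Blender's raw Z pass is in scene units, so when it is saved straight to an 8-bit image nearly every pixel clips to black. The usual fix is a Normalize node in the compositor before the output. The numpy sketch below shows the same rescaling idea; the function name and the near-is-bright convention (which the ControlNet depth model expects) are my assumptions.

```python
import numpy as np

def normalize_depth(z: np.ndarray) -> np.ndarray:
    """Map a raw Z-pass (scene units, e.g. 4..12) to a 0..255 image.

    Mirrors what Blender's compositor Normalize node does; without this
    step the depth values exceed 1.0 and the saved image looks solid black.
    """
    z_min, z_max = float(z.min()), float(z.max())
    if z_max == z_min:  # flat depth: nothing to normalize
        return np.zeros_like(z, dtype=np.uint8)
    scaled = (z - z_min) / (z_max - z_min)
    # Invert so that near objects are bright and far ones dark,
    # the convention ControlNet depth maps use.
    return ((1.0 - scaled) * 255).astype(np.uint8)

raw = np.array([[4.0, 8.0], [8.0, 12.0]])  # camera-space distances
depth_img = normalize_depth(raw)  # nearest point -> 255, farthest -> 0
```

If the hands still come out black after normalizing, checking the camera clip distances is the next thing to try, since geometry outside the clip range never reaches the Z pass.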
@shongchen · 1 year ago
OK, it works! I got a good hand, thank you!
@zetracakep · 1 year ago
For me, I like to make the line art first and let the canny model do the auto-coloring and finish the image; the result is the perfect image that I want.
@Xaddre · 1 year ago
Wow, your channel has really grown the past few months. Keep up the good work :)
@Antraxus11 · 1 year ago
Would be interesting if that also works with different pupil shapes; it's also quite difficult to make those look good. Nice video!
@LauLauHip · 1 year ago
You can use shortcuts like R for rotating and G for moving
@madwurmz · 1 year ago
Great video! Updating ControlNet broke my img2img just now, but I guess I can't have it all. :D Another easy way could be taking a photo of your hand.
@3dcgphile · 1 year ago
What about using PS or a drawing app to trace an outline of some hands (either photographed or rendered from a posing app) and then adding them as a ControlNet canny input? Would that work well enough, without the depth map?
@RiotingSpectre · 1 year ago
I don't have the "Rig Main Properties" tab to "Bend all phalanges". How can I get this?
@user-kt1pu7vw4u · 1 year ago
Let me know if you find something, ty
@danowarkills4093 · 1 year ago
I am having the same issue. I hear the current version may be blocking a necessary script from running. Going to go further down that rabbit hole. I'll let you know if I figure something out.
@todgins · 1 year ago
Same boat :(
@todgins · 1 year ago
Open the model (double-click). Once in Blender: Edit > Preferences > Add-ons > Install (install the rig tools folder AS A ZIP FILE, DON'T UNZIP). In the same window as "Install", search for "Animation: Auto-Rig Pro Tools" and check the checkbox.
@todgins · 1 year ago
@user-kt1pu7vw4u see below
@sonderbain · 1 year ago
I would recommend doing this in the Animation tab and Pose mode instead of directly messing with the rig as an object. You could potentially export these as animations as well if you did it this way >.>
@madebyrasa · 1 year ago
Hahahaha, I was literally going to make this Blender rig today!! Good thing I subscribed to you ;)
@slackstation · 1 year ago
Excellent video!
@silberpetermann4003 · 1 year ago
Can you please show your extensions? Like the one in your positive prompt and clip?
@friendofphi · 1 year ago
I'm glad you shared this person's Blender model. There's another one going around, but this person seems to be really on it and is charging nothing.
@Schyferyel · 1 year ago
Is it possible to retarget mocap/animation data to this Blender model? I imagine something like: animation data -> retarget in Blender -> render animation -> generate via SD ControlNet -> easy consistent animation -> ??? -> Profit
@grahamthomas9319 · 1 year ago
That's not a bad idea; right now it's a really bad idea lol, but in the future, feeding pose data to an image generator may be a faster solution than building and rendering the scene. Right now there's too much noise, though they are getting better at fixing that, and there are models that are getting there. This opens up a lot of possibilities: building a quick depth map in Blender with no materials or anything is super easy. You just stick blocks together.
@sperzieb00n · 1 year ago
If you know how to work Blender and make it output the frames as a sequence of PNGs, I don't see why not. Corridor basically did a similar chain on captured video earlier this month and released a video about it just a few days ago.
@sperzieb00n · 1 year ago
@grahamthomas9319 There are actually a few ways to drastically reduce the noisiness: post-SD, with plugins in your video editor that are meant to reduce flicker, and/or within SD by setting the noise multiplier low and looping the output.
@linkinparkgus · 1 year ago
Excellent tutorial!
@eduacademia · 1 year ago
Wow... amazing, thanks
@flonixcorn · 1 year ago
Very Nice!
@haidargzYT · 1 year ago
No one is talking about the ControlNet reloading problem: it reloads even without changing the model or the settings, and every image takes 5 to 6 minutes to generate. It's a big deal; no animation in Stable Diffusion until they fix it.
@ShawnFumo · 1 year ago
Isn't that what the ControlNet-M2M script fixes? I haven't used it, so not sure...
@piranhaman1044 · 1 year ago
Needed this, thanks
@godspeed2145 · 1 year ago
I advise you to look into IK; you're doing mostly FK. Both are good, but IK is the best for dragging child bones from parent bones.
@hashygaming5269 · 1 year ago
Can you please make a video on how to turn a real photo into a 3D environment which can be used in making video games?
@the_stray_cat · 1 year ago
12:25 I'm pretty sure that if you can run Stable Diffusion then you can run Blender. I can run Blender just fine, but my PC does not have the power to run Stable Diffusion yet.
@ShawnFumo · 1 year ago
Yeah, but a lot of people (including me) use Google Colab to run Stable Diffusion. I just don't have a great graphics card, but in theory you can easily run SD (including the whole UI and extensions like ControlNet) from a tablet (or even your phone, actually; the auto1111 UI I found works on my phone, though cramped) using a Colab.
@oyunbirlikteligi2963 · 1 year ago
This is getting ridiculous... I love it
@nguyensang7189 · 1 year ago
Can you make a video on how to train using the kohya sd-scripts web UI? I think it seems to be a good way to train a LoRA for users with weak computers.
@purposefully.verbose · 1 year ago
Well, looks like I have to add another 3D package. It's OK, I have been meaning to dive into Blender for some time. Thanks for the info.
@rorymorrissey4970 · 1 year ago
Can the size of the hand + feet models be adjusted? Because they look kinda odd... borderline grotesque, even, on some of those anime characters.
@micbab-vg2mu · 1 year ago
great video!!!
@Actual_CT · 1 year ago
Stable Diffusion and Blender are the perfect match... Hopefully someone will make a Blender app including the whole web UI feature set mixed with all the power of Blender.
@tariksaid4536 · 1 year ago
Could you please make a video about prompting tips and tricks?
@jonathaningram8157 · 1 year ago
Sometimes I wonder if it's useful to learn these janky methods, knowing that it will probably be much easier and faster in maybe a few weeks.
@Basnih · 1 year ago
At 5:36, how do you activate the Rig Main Properties panel? I want to use the "Bend all phalanges" parameter but the panel doesn't appear.
@Conjurus77 · 1 year ago
Hello, did you manage to make the rig panel show?
@Darkbolt83 · 1 year ago
For the depth rendering I see only a dark background ^^
@Pianist7137Gaming · 1 year ago
For the Blender method, can we do it in reverse? Say I have the openpose image but it isn't in Blender; can I import the openpose image into Blender and have it drive the model? So as to get the canny and depth maps that match the openpose image.
@user0K · 1 year ago
Someone will combine it with VR trackers for sure.
@takiali8476 · 1 year ago
There is a drawing software called Clip Studio that has a 3D models feature, maybe that could help.
@Meikerart · 1 year ago
You're a god of Stable!!
@pooflinger4343 · 1 year ago
Looks like a loooooot of work to simply get a decent hand. Hopefully down the line SD will just be able to generate this natively.
@hoblon · 1 year ago
Another way to get perfect hands is good prompting skills =)
@HikaruYamamoto · 1 year ago
What would you recommend?
@duskfallmusic · 1 year ago
You're gonna make me stay up all night making things again? XD I JUST LEARNED HOW TO MAKE POSES XD ... and holy crap, two of my pose packs have only been up on Civit for 1 day and they're at 500 downloads XD
@ThankYouESM · 1 year ago
I went searching online for a manufactured, fully articulated posing 3D model at least 12 inches tall to create such perfect images super fast, but... I couldn't find any for this task.
@Smokeywillz · 1 year ago
I hope we can get this in the OpenPose editor
@UnknownUser-eb1lk · 1 year ago
1) Tried v4.6 and v9 and can't get the "Bend All Phalanges" slider to appear on the right. Tips? At 3:29 you can see the "Rig Main Properties" tool menu appear at the top right, by the XYZ red/green/blue orientation gizmo (by the hand icon). I can't get Rig Main Properties to show. 2) Can you do a new video with v9 and show how to use the extra limbs or animation?
@stellasvartur4547 · 1 year ago
How do you get the hands to show in Blender as hands (as in the video) and not as mere lines?