Enigmatic_e has experimented a lot with that (especially using the img2img alternative script + After Effects) and manages to get very consistent results in his AI videos; check him out.
I'm so glad for these tools and for your tutorials, man. Creative minds who have no drawing skills can finally start doing their own projects with no limitations. It's amazing.
Canny is so powerful in the new update. I changed an Asian pinup girl into a buff male Viking using it at 0.1, and not only did it make a great-looking Viking, it kept the pose of the pinup almost exactly. I'll be trying it on 3D models shortly.
BTW, it's called inverse kinematics when you pose 3D models by dragging from the extremities, instead of doing it joint by joint, i.e. using forward kinematics.
An extra thought: if you use Blender or a more advanced 3D package, you can export a depth map, normal map, or even an "open pose" render straight out of the 3D software, which is much more accurate than having a preprocessor estimate that data from a normal 3D render.
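For example, here's a minimal sketch of exporting a normalized depth map straight out of Blender with its Python API (run it in the Scripting tab; the output path is a placeholder). ControlNet's depth model can consume the result directly, skipping the MiDaS preprocessor:

```python
# Minimal sketch: render a normalized depth map from Blender's compositor.
import bpy

scene = bpy.context.scene
scene.view_layers[0].use_pass_z = True       # enable the raw depth (Z) pass
scene.use_nodes = True

tree = scene.node_tree
tree.nodes.clear()
rl = tree.nodes.new("CompositorNodeRLayers")      # render result + passes
norm = tree.nodes.new("CompositorNodeNormalize")  # rescale depth to 0..1
comp = tree.nodes.new("CompositorNodeComposite")  # final output

tree.links.new(rl.outputs["Depth"], norm.inputs[0])
tree.links.new(norm.outputs[0], comp.inputs["Image"])

scene.render.filepath = "//depth_map.png"    # saved next to the .blend file
bpy.ops.render.render(write_still=True)
```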
Interesting, I just updated ControlNet and there are new options: Guidance strength, Annotator resolution, and Thresholds, plus a "pidinet" model that I don't remember seeing before. Would you consider explaining these controls in a future video? Thanks.
I haven't checked yet, but there's probably an explanation on the GitHub page. Usually when there's a new update they explain the new parameters there, even if only briefly.
Thank you so much for this. I had the idea a while ago, but I wasn't able to find anything useful with my Google searches. This software and these 3D models are probably gonna be a game changer for me (and everyone else).
Thank you for making these videos. You're the only person whose videos I actually look forward to every day, because AI is something I enjoy playing around with a lot. I did want to ask, though: do you have any plans to create a LoRA training video specifically for styles?
AI Lord! Thank you for making and sharing this video with us. We wonder if you could make a video that uses these latest methods to fix characters' hands?
Wow, this will give people without art skills the chance to make good art, and give those with art skills superhuman art! It's similar to what I've been doing the last couple of years with img2img and my drawing/painting ability, but now it's automated.
Cool! Is there a benefit to using img2img instead of just txt2img with ControlNet? I mean, the extension uses its own image for analysis, so there's actually no need to load the image twice, right?
He's doing it wrong. Img2img doesn't need the canvas. On the other hand, img2img mixes the init image with the ControlNet guidance, while txt2img with the canvas uses only the ControlNet conditioning, meaning more randomness.
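For what it's worth, the distinction is easy to see outside the WebUI too. Here's a minimal sketch with the diffusers library (the video uses the A1111 extension, but the idea is the same; filenames are placeholders): txt2img + ControlNet conditions only on the pose, while the img2img variant also mixes in an init image.

```python
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    StableDiffusionControlNetImg2ImgPipeline,
)
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pose = load_image("pose.png")  # preprocessed OpenPose skeleton

# txt2img + ControlNet: the pose constrains composition only;
# everything else comes from the prompt (more randomness).
txt2img = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
image_a = txt2img("photo of a viking", image=pose).images[0]

# img2img + ControlNet: the init image is mixed in as well, so its
# colors and details bleed into the result.
img2img = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
image_b = img2img(
    "photo of a viking",
    image=load_image("source.png"),  # init image being mixed in
    control_image=pose,              # ControlNet conditioning
    strength=0.75,                   # how much of the init image survives
).images[0]
```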
Another great video, Aitrepreneur! BTW, that wasn't the peace sign, it was the rude "F off" gesture, lol. You have to turn the hand around to make it the peace sign.
Can't wait till we can make 3D models using prompts and images. It's already in the making; I can't wait till it's introduced into NovelAI. I think eventually we'll get video-to-animation, where we can take video of something and the AI will map your 3D model onto the actions in the video, then render all the frames of that 3D model in a 2D animation style.
Can we use our own custom characters here, or will this just generate random characters? 10:57. I want to apply these poses to my own original characters, instead of random characters that the AI generates.
- Can we pose two characters together in a more interactive pose, such as hugging each other?
- Can we apply specific facial expressions to each of our characters?
- Can we ask the AI to put our characters in a specific background of our choosing (based on a photo reference that we provide)?
These are things that I'd like to be able to accomplish with AI someday.
You can do all of those things right now if you're skilled enough; the tech is ready. For the first, you train an embedding/LoRA/model. For the second, you use latent couple and embeddings. For the third, you inpaint the expression you want (provided it's something the AI knows; otherwise you'd need to img2img it as well). For the fourth, you do it in Photoshop, or img2img first and then Photoshop.
@@takeuchi5760 Thank you for the tips! For the facial expressions, I have my own reference materials that the AI can use. I have reference sheets where I have illustrated different facial expressions of my character in my own style, and want the AI to maintain that style when applying the facial expressions. I was wondering if the AI can just use my artwork as reference and apply the facial expression that I choose to my character, while considering the camera angle that my character is shown from. As for taking the image to Photoshop to add the background separately, I can’t say I’m a big fan of that approach. The reason is that the background & character are not separate from each other, and they both need to have the same lighting, shadows, and perspective. With Photoshop, it would take quite a bit of photo editing to make the background and characters match up with each other properly. Plus I don’t really use Photoshop. I use a different photo editor.
11:45: pause the video, read the prompt, and check the result :) Most of the prompt didn't work, except the parts "photo of 2 people" and "modelshoot style". I've been working with SD for a few months and still find that you can control some things, but in reality everything rests on lucky randomness. I've never gotten exactly the result I wanted, but when I didn't want anything special, I got quite nice results. BTW, I couldn't find a version of Magic Poser that works on Windows.
6:45: the frame you captured has the model upside down. Every time I try to generate someone upside down, I get terrible results. I really wish you had kept THAT frame and followed through to a final image. Maybe something is missing in my prompts.
I cancelled my MidJourney membership this week and switched to supporting Aitrepreneur via Patreon. After consideration, he offers more value for my money, as the Stable Diffusion platform is almost on par with MidJourney now; last but not least because of the fun and informative videos created here. For me it's money well spent; he's the first creator I've ever decided to support via Patreon. Looking back at what I've learned here over the last 4 months, I could come to no other conclusion: Aitrepreneur definitely deserves my additional monetary support! 🤗 (Don't forget to still "like" the videos on YouTube, though.)
Creative way to make characters! Is there a way to generate an entire novel with an AI model? I would love for you to go over that and discuss AI language models as well; no pressure though.
11:59: it's still an AI limitation. There are some poses that cannot be corrected no matter how hard you try; the AI can't understand bodies in perspective very well. I could manage a worm's-eye-view camera on a body in perspective, but the results were awful when it was a body lying on the ground. I believe it will be solved in less than a year, though.
I had an image of a dog lying on its side with its head closest to the camera and almost upside down; no matter how good I got the edge detection, the AI found the pose hard to understand and couldn't generate a usable image. I guess it's due to how the model was trained.
By the way, you don't need whatever program he was pointing at, because Leonardo AI has a Pose to Image feature where you take any pose from a character or a pose preset, give it the image, and give it a prompt.
BTW, the "normal map" model is better than MiDaS in almost all cases, because it works very similarly but preserves more information about the image. You can also generate a normal map directly from 3D software and use it without a preprocessor.
What if I wanted to create a storyboard and I wanted to use the same character over multiple frames, using different poses? Is there a way to save a character and then just re-pose it?
Some ControlNet models maintain the image details more than others, which would help with character consistency; the only way to know for sure is to experiment with them.
For some reason it's not detecting any pose for me; the pose image is just blank black, so the picture is just replicated exactly. Any suggestions? I didn't download all the models, only the openpose, canny, and scribble ones.
That's another perfect tutorial! By the way, is it possible to use ControlNet with inpainting? For example, fixing the hands of an image that was already generated by a previous model? Also, I saw some fantastic 3D GIF images generated with Stable Diffusion; will you make a Short or a video tutorial on making those?
What if I want to turn the posed images into an animation? How can I batch process the images before sending them to be inpainted and have the faces corrected for each individual frame, as instructed in the video? (Edit: I later realized my question may be off, because we can't mask the face automatically while the subject is moving unless we have a script for tracking the face. Does such a script exist, or can we write one?)
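On the edit: a face-tracking mask script is doable. Here's a rough sketch using OpenCV's stock Haar cascade (folder names are placeholders; a sturdier detector would handle fast motion better). It writes a white-on-black mask per frame, ready to feed to batch inpainting:

```python
import os
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

os.makedirs("masks", exist_ok=True)
for name in sorted(os.listdir("frames")):
    frame = cv2.imread(os.path.join("frames", name))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    mask = np.zeros(gray.shape, dtype=np.uint8)
    for (x, y, w, h) in faces:
        # Pad the box a little so the inpaint region covers hair and chin.
        pad = int(0.15 * w)
        cv2.rectangle(mask, (x - pad, y - pad),
                      (x + w + pad, y + h + pad), 255, -1)
    cv2.imwrite(os.path.join("masks", name), mask)
```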
Greenshot is a fine screenshot tool, but you can do this natively on Windows: there's a setting that makes the PrtScn key open the snipping overlay, so you can capture just a region instead of the whole screen.
Kinda sad this is hard to use with close-up portraits; I keep getting black squares from ControlNet. And since the poser model is bald, if you use the depth model, SD will have a hard time generating long hair for women.
They must have changed something, as I don't see Magic Poser Lite and the web page looks completely different. It just has a bunch of download buttons that do nothing.