But what do you do with those images? How do you create a consistent character from those head shots? As I was watching, I thought you were going to use those images to train a LoRA or something, which I was hoping you would show as the next step. Am I missing something?
Hi Andrew. I do the same thing, but it seems it doesn't use Canny, so as a result I see only one picture similar to what I used in ControlNet 2. What could be the reason?
It's just a character sheet. By itself, it doesn't mean you can generate multiple different images with the same character. Even the sheet isn't consistent, just similar.
@stable-diffusion-art thanks for your answer! So, staying at an intuitive level, I guess I would get two slightly different results, but both improved by the SAG algorithm. I will do some tests. Thanks again!
Very good tutorial, thank you! Silly question: can you do the same with, for example, a male head?
1. Where are the results? 2. Where did you get the original 9 images? 3. What happens if the characters of my story are animals? Please understand why people need consistent characters. They are needed for AI videos, like Pika, Haiper, Genmo. Not everyone can make videos locally, since the minimum requirement is 12 GB to 16 GB of GPU memory, so most people depend on a web UI for videos. Others need them for children's books. I know how to make a children's book, but I need consistent characters, so I need the 9 images for every animal in the story. Where do I get the first 9 images? I just want an answer, because everybody is talking about consistent characters, and no one is showing a real solution. Apparently we will need to wait for spatial AI to be available to the public.
Unfortunately I don't have enough GPU VRAM to do this many faces on a character sheet. I am using an AMD 6900 XT on Windows with DirectML, and even after decreasing the resolution to 664x664 it stops at the face detailer node.
The face-fix processing runs in series, so it should take the same amount of VRAM no matter how many faces you fix. If you suspect ComfyUI may not be releasing VRAM from the first step, you can modify the workflow so that it only performs face fixes on an input image.
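To illustrate why serial processing keeps peak memory flat, here is a minimal sketch (not the actual ComfyUI/FaceDetailer code; `fix_face`, `crop`, and `paste` are hypothetical stand-ins): each face crop is processed and pasted back before the next one is touched, so only one crop is resident at a time.

```python
def crop(img, box):
    """Return the sub-image inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in img[y0:y1]]

def paste(img, patch, box):
    """Write a patch back into the image at box's position."""
    x0, y0, x1, y1 = box
    for dy, row in enumerate(patch):
        img[y0 + dy][x0:x1] = row

def fix_faces_sequentially(img, face_boxes, fix_face):
    # Faces are fixed one at a time: peak memory equals the largest
    # single crop, not the sum of all crops, regardless of face count.
    # In a real torch pipeline you might also free GPU cache between
    # steps if VRAM is not released automatically.
    for box in face_boxes:
        paste(img, fix_face(crop(img, box)), box)
    return img
```

The same idea applies whether the "image" is a nested list, a PIL image, or a GPU tensor: the loop body is the only place that holds a crop, so adding more faces adds time, not memory.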
What an offensive comment. What accent is the "right" English accent: UK, US, Australian, Indian? If you judge by demographics, the Indian accent would be the one to go with. From what I can tell, his accent is more East Asian and perfectly fine to me, as we have many clients worldwide who speak with various accents. If you have a problem with this, you are not fit for the modern era of working with businesses worldwide. You need to work on your arrogance.
@@kevz1532 I don't care; everything is translated by AI into my language, with any voice acting and intonation. The main thing is the semantic content and how well the topic is covered. And this was translated and corrected by AI.
@@stable-diffusion-art I'm Dutch, and sometimes I hear someone from my country speak English and it's just comical to me, because I can hear the person is trying too hard or is just not used to speaking another language. Both can be funny. It's no big deal. For the record: you speak perfectly understandable English, and I was able to follow the video without trouble.
Is there a site with a lot of different reference images like that? When I discovered the first consistent-character model (charturner-character-turnaround-helper), I immediately thought about 2D game making, but back then there weren't many resources for it. And the results weren't that good; for example, if I added something to the clothes on the left and then ran img2img again, the other poses didn't reflect the change. So I drew the same thing in the different poses myself. Then I realized it's not that easy: if I really want to make a game out of it, I need like 50 poses, right? I hope this time it's actually good.
@Stable Diffusion Art Great tutorial, bro. So, can you tell me how you would incorporate this into other venues? For example, would I be able to use the 9 face photos to train a LoRA, and dedicate that LoRA to look exactly the way I want? Would this help for video? If I make a short video with a trained LoRA character model, will the character always look the same? Would I be able to use this for a comic book, or would I have to train a LoRA for her body as well, not just her face? So, how would you use it? I'm thinking in terms of general venues, but I'm still learning to even get one photo right at a time. I've been wanting to train one specifically so it always looks the same, e.g. for a comic book: the character can't change in looks from issue to issue, unless a hair change or some transformation is noted. :) Your thoughts and feedback are greatly appreciated. God bless you. Maybe this also gives you ideas for further tutorials based on this one, which I think was pretty amazing. A+++++++ Easy to follow, and I would be able to recreate it. God bless you, sir. Thank you for taking the time to teach young entrepreneurs and artists how to progress their art. 😊🙏
Btw, it took me a while to get SD to run properly. I'm learning a lot very fast, by the grace of God. Currently using SD Forge. I think I would be able to recreate your tutorial so that I can train a LoRA. With LoRAs, do you need 9-box grids, or whole bodies in different poses as well? Hmm, so much to learn. But it's an amazing tool for those who take the time to learn.
You can use the same technique but with a full-body template in a 3x1 grid. You will need to experiment with the IP-adapter: the one in this tutorial only transfers the face. You will need to add or replace it with one that copies the whole image, e.g. IP-Adapter or IP-Adapter Plus. Alternatively, you can consider combining other techniques to generate a consistent face: stable-diffusion-art.com/consistent-face/
@@stable-diffusion-art Thank you for the link and response; I will check it out. I was able to follow your tutorial and got similar results. I downloaded the IP-adapter model and put it in the ControlNet folder. But since I'm using Forge, I couldn't match the same preprocessor as yours; I still chose one for the face, and it worked, though not exactly matching the main reference image. It was similar, like the smile and the eye shape and color, but the face was a little off, and the hair was similar yet still different. I would like to learn this so that I can bring in a sketch of mine, render it, and then do the steps. Do you have a link for the exact preprocessor, and where would I put it? Most instructions I found said "controlnet", but that was for the model; the preprocessor seems built in, and I checked forge-preprocessor but did not find a folder to drop it into. I will try AUTOMATIC1111, since I also need it for Tile upscaling. The one I found in Forge worked, but not as well as in a video tutorial I saw for A1111. Keep up the good work, sir, and congrats on your channel. I will subscribe. :)
I can easily do this, lol. I know a better trick, but I don't see that much market value in doing this kind of activity. I mean, where would you use it that could make a lot of money?
Different AI generators have their use cases. Use Stable Diffusion when you want to control the image down to the pixel level. Use Midjourney to generate something quick and pretty. Use DALL·E for prompt following; it is also great for generating reference images for ControlNet in SD.
Hey, today I saw the Google notification and opened this link. When I opened it, I loved it. I want to learn this from the beginning; I want to become a pro at it. Can you help me?
Yes, I have written a course to teach people to become pros: stable-diffusion-art.com/stable-diffusion-courses/ It's currently text-based, but I will supplement it with videos.
Thank you very much for all of your hard work and dedication in designing this generator. I like creating art, but I am often grasping at straws for the right keywords. I don't see this as a complete solution to my learning-curve problem, but it is a step in the right direction. I especially like how you showed us how to add keywords and descriptions to the generator as we find new words. Again, thank you.
Thanks. How can I add a whole new keyword list (as an additional topic) in Notion? I created a new one and tried to add it to the sum formula, but it does not show up in the list of possible targets.
It was a good idea to have a spreadsheet, but in Excel on Windows the drop-down menus are tiny and it is not possible to see what is written. It is very fiddly and not viable as a method for choosing options, IMO.