I have absolutely no idea what is happening here.... but it was amazing and I watched it all in fascination. This channel will be huge, I'm sure. Currently 31 subscribers... 32 now 👍
This might help: (text) ---> (img) is txt2img. With ControlNet it'd be (text) ---> [control: "conform to this outline/shape/depth"] ---> (img). img2img with ControlNet is: (text + img) ---> [control] ---> (img).
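To make those two pipelines concrete, here's a minimal sketch of how they differ when expressed as AUTOMATIC1111 webui API payloads. The endpoint field names (`alwayson_scripts`, `controlnet`, `args`, `module`, `model`, `init_images`, `denoising_strength`) follow the sd-webui-controlnet API conventions as I understand them; treat the exact names and the model filename as assumptions and check your installed version.

```python
# Sketch: txt2img vs img2img with one ControlNet unit, as webui API payloads.
# Field names are assumptions based on the sd-webui-controlnet API; verify locally.

def controlnet_unit(control_image_b64, module="depth", model="control_sd15_depth"):
    """One ControlNet unit: a preprocessor ('module') plus a trained ControlNet
    model, steering the output to conform to the control image's structure."""
    return {
        "input_image": control_image_b64,
        "module": module,   # the preprocessor (depth, canny, openpose, ...)
        "model": model,     # the matching ControlNet model
        "weight": 1.0,
    }

def txt2img_payload(prompt, control_image_b64):
    # (text) --[control]--> (img): no init image, only the control image
    return {
        "prompt": prompt,
        "steps": 20,
        "alwayson_scripts": {
            "controlnet": {"args": [controlnet_unit(control_image_b64)]}
        },
    }

def img2img_payload(prompt, init_image_b64, control_image_b64):
    # (text + img) --[control]--> (img): the init image supplies content/colors,
    # the control image constrains the structure
    payload = txt2img_payload(prompt, control_image_b64)
    payload["init_images"] = [init_image_b64]
    payload["denoising_strength"] = 0.6
    return payload

p1 = txt2img_payload("a cozy cabin", "<base64 control image>")
p2 = img2img_payload("a cozy cabin", "<base64 init image>", "<base64 control image>")
print("init_images" in p1, "init_images" in p2)  # → False True
```

The only structural difference between the two is that img2img adds `init_images` (and a denoising strength); the ControlNet unit itself is identical in both.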
Thanks for the explanation of the different models. How do you use the preprocessed image and the original image in Midjourney? Do you upload and use them as reference images?
Great job. Did you use both the preprocessor and model version of every technique for these examples? Can you use one without the other and/or can you mix and match? Thanks.
Great video, fam. Do you know if it's possible to export the calculations the models make? Like if I wanted to export just the depth map or normal map it generates.
If you go to the settings for the extension you can enable this. You need to save the "detected_maps". They'll then be available in extensions/sd-webui-controlnet/detected_maps
Nice video, man. Thanks. Is there a good model for adding furniture? E.g. I upload a photo of an empty living room, set the style in the prompt, and let the model keep the same room structure but add furniture and decoration?
Hi, I have tried everything and I never get anything based on what I upload into the AI. It always does what it wants. I have all the models, styles, everything, but it just doesn't do it.
Man, I really want to watch this, but 1) the music is instantly migraine-inducing. No way in hell I'm sitting through 20 minutes of that. 2) It took you three minutes to say "These are our two test images". No.