Our first run with Flux didn't yield results as good as their online demo. We might put some time into it later this year or early next, as it continues to evolve. It was pretty easy to get running under OS X, though.
Hello, I'd like to ask a question. I set openpose_full with control_v11p_sd15_scribble [d4ba51ff] and I get the skeleton from the source image, but when I click Generate, the image is created according to the prompt alone, without the skeleton. Why is that?
I usually do manual inpainting for fingers instead; it's much more accurate that way and gives us humans something left to do. (Lol.) I'll use ControlNet when I want a real photo as the pose reference or a specific architecture as the background, because you can layer these together.
I have a very specific question. I'm a photographer for a shoe company, and I shoot a lot of white-background e-commerce photos of the products. Ideally, I want to input those e-commerce photos into an AI platform and generate a new image with a fantastic background WHILE retaining 100% of how my product looks. Is this possible with img2img?
Totally! Check out this tutorial: learn.thinkdiffusion.com/bria-ai-for-background-removal-and-replacement/ You can also use IC-Light to relight the subject to match the new background: x.com/thinkdiffusion/status/1806347642550600004 Hope this helps :)
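At its core, background replacement is just masked compositing: keep the product's pixels wherever the mask says "product", and take the new background's pixels everywhere else, so the product itself is never regenerated. Here's a toy sketch of that idea in pure Python (real pipelines like the tutorial above use proper image libraries; the tiny 3-pixel "images" here are made up for illustration):

```python
# Toy alpha compositing: keep product pixels where the mask is opaque,
# take background pixels elsewhere. Images are lists of (R, G, B) tuples;
# the mask is a list of floats in [0, 1] (1.0 = product, 0.0 = background).

def composite(product, background, mask):
    out = []
    for p, b, a in zip(product, background, mask):
        out.append(tuple(round(a * pc + (1 - a) * bc)
                         for pc, bc in zip(p, b)))
    return out

product    = [(255, 0, 0), (255, 0, 0), (0, 0, 0)]       # red "shoe" pixels
background = [(10, 20, 30), (10, 20, 30), (10, 20, 30)]  # generated scene
mask       = [1.0, 1.0, 0.0]  # last pixel belongs to the background

print(composite(product, background, mask))
# → [(255, 0, 0), (255, 0, 0), (10, 20, 30)]
```

Because the masked pixels pass through untouched, the product stays exactly as shot, which is why background removal plus compositing preserves the product better than plain img2img does.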
The only black man you have must be an ape? What a suppression of your love for black men. Is it reversed so nobody knows your obsession? What is this hatred of black muscles, masculinity, and power that you have? Good luck with it. It's like Marvel's Hulk becoming a green, muscular alien figure instead of a black man in the 20th century. Insecurity of some male races?
Hey! A Stable Diffusion embedding is a technique that reduces the complexity of large datasets by representing them in fewer dimensions while keeping the important patterns intact. It's particularly good at handling noisy data, ensuring that similar pieces of data stay close together in the simpler representation. That makes it useful for visualizing and identifying clusters or groups in your data. Hope this helps!
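The key property described above, that similar items stay close together in the embedding space, is usually measured with cosine similarity. A small illustration with made-up 3-dimensional vectors standing in for real learned embeddings (which typically have hundreds of dimensions):

```python
import math

# Cosine similarity: 1.0 = same direction (very similar),
# ~0.0 = unrelated. Embeddings place similar concepts close together,
# so related items score higher than unrelated ones.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors, invented for this example.
cat    = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
truck  = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # high: related concepts
print(cosine_similarity(cat, truck))   # low: unrelated concepts
```

In a real embedding space, "cat" and "kitten" would land near each other for the same reason these toy vectors do: their components point in similar directions.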
Sorry if this is a stupid question; I'm a complete newbie. But how does ThinkDiffusion differ from using Stable Diffusion? I'm getting familiar with Stable Diffusion, and I'm struggling to see what I can use ThinkDiffusion for other than generating images...
You can upload any model or LoRA from CivitAI or from your computer. That means you can use any animation style model from www.civitai.com Just follow the step-by-step guide here: learn.thinkdiffusion.com/thinkdiffusion-faqs/#how-to-upload-a-model-by-url-eg-civitai
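Under the hood, uploading by URL just means fetching the model file from a direct-download link and saving it where your tool expects models. A rough sketch of that step (the function names and the example URL here are hypothetical, not part of any ThinkDiffusion or CivitAI API; follow the guide above for the actual platform workflow):

```python
import os
import urllib.request
from urllib.parse import urlparse

def model_filename(url):
    """Derive a local filename from a download URL (hypothetical helper)."""
    name = os.path.basename(urlparse(url).path)
    return name or "model.safetensors"  # fallback when the URL has no path

def download_model(url, dest_dir="models"):
    """Fetch a model/LoRA file into dest_dir, assuming a direct-download URL."""
    os.makedirs(dest_dir, exist_ok=True)
    path = os.path.join(dest_dir, model_filename(url))
    urllib.request.urlretrieve(url, path)  # network call
    return path

# Hypothetical usage:
# download_model("https://example.com/files/my-anime-style.safetensors")
```

CivitAI download links sometimes need an API token or redirect handling, so treat this as the general shape of the operation rather than a drop-in script.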