Hi, as soon as I can use it I will return here and write my impressions. As you already know, I'm studying Prompthunt, but I think what I've already learned is enough, so I also need to watch your videos on Playground; I can see the platform is very good now, with a lot of new features I need to use and study. Thanks for the help with Playground too.
Props and a very sincere THANK YOU for creating this video, kind sir! I'm not the most versed person when it comes to SD (I mainly create static backgrounds and other assets that aren't the main focus)... some recent job requests have had me researching tools that render nice subject-based compositions for it, and my end results have been exactly what I needed (I installed all of the models you recommended). Great info fam... sorry for being wordy, but I'm really, really stoked as I write this. Cheers from Louisiana!
And thank you for taking the time to leave a comment and encouragement! It's appreciated. Just be sure to download the latest versions of the models; updates sometimes come fast, like Realistic Vision is already at V5 and it's a great update! Hit me up anytime if you have any questions, I'm here to help! Take care bro 👍
I’ve been using it a lot lately to test it out. While it does produce great quality images, it still struggles a bit with hands, but then again most models struggle with hands somewhat. I’ll be adding it to my next head to head comparison soon 👍🏼
It's still fairly new and to be honest, I haven't tested it enough to say if it's better than V2; in some cases yes, in other cases I don't see a difference. Let me know if you find anything groundbreaking, and if I do, I'll make a video comparing the two. 👍
@@MonzonMedia Btw, I have a video idea about something that has always bugged me about AI model repositories like Civitai and PromptHero. Places like these are supposed to help you perfectly recreate other people's results by giving you everything needed to replicate a picture they posted: the prompt they used, the steps, the model, guidance scale, LoRAs, etc. But when you go to replicate someone's work, you often find they either left out key steps, listed the wrong model, didn't list the seed, etc., making it frustrating to replicate their work at all. I think a video showing good and bad examples of this, which are rife in all these repositories, would be a really good idea. Hope you think so too!
I've definitely come across comments in the past of people asking about this, so it's something I've been considering. Part of the reason I wanted to do a video on it was twofold: 1. It's a great way to learn. 2. Don't just copy and paste without making it your own. I see it all the time on Playground; people just remix and call it a day, but they don't realize how or why it produced that kind of image. But yeah, I think it would be an interesting topic to cover, especially for those who are just starting out. Appreciate the suggestion! I'm always looking for topics to cover. 👍🙌
Hey there, some context would help: what's your prompt, and what model are you using? You can get the same results in Easy Diffusion by installing these models. It comes down to how you are prompting your image.
Hi @MonzonMedia, first, thank you for this video, amazing images. I will work with it now that I've installed SD on my potato PC, and it's working even with my old Nvidia 4GB card. I'm waiting for the video to help us install Clip Skip and other utilities.
@@MonzonMedia Yes, I installed Auto1111 using your video, very easy, but it's very basic and I need to add some extensions like Clip Skip and some others. I couldn't find how to install Clip Skip. Thank you for all your help.
Great list, thank you. When you mention the inpainting models with these... does it mean you should only use the inpainting models they provide, and what impact does it have if you do/don't? Thank you. 👍😎
Great question, and I realized afterwards I should've mentioned something about that. Basically you would use the inpaint models to do inpainting... obviously, however they're also needed for some of the other features like the Infinite Zoom extension, or ControlNet's inpainting, which works like outpainting, similar to Adobe's generative fill, although it's much slower. For standard generations, the main model is good enough. Hope that helps answer your question.
Can Easy Diffusion work with inpainting models? After installing one I get an error: size mismatch, (320, 9, 3, 3), while the shape of the current model is (320, 4, 3, 3).
Great question. For one, my prompt had the Asian influence; one of the things I do to get consistent faces is use 1-2 fictitious names to get a certain look. With that being said, many of these models do tend to have an Asian influence. Not all of them, but some. To counteract it I will use nationalities, and sometimes combine them too.
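To illustrate, here's a made-up example (the name and the nationality mix are purely hypothetical, just to show the idea): a prompt like "photo of Mara Lindqvist, 30 year old woman, mixed Swedish and Brazilian descent, natural light, looking at camera" will usually hold the same face across generations much better than just "photo of a woman".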
Yeah, it's a separate file; however, most of the time for minor inpainting like touching up eyes, adding details, etc., the main model is fine. But if you're not getting the results you want, sometimes the inpaint model is better to use. I find it's mostly needed for other functions; for example, if you were to use the Infinite Zoom extension, the dedicated inpaint model tends to work better.
Thank you for your recommendations. Can Easy Diffusion work with inpainting models? After installing an inpainting model I get an error: size mismatch, copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, while the shape in the current model is torch.Size([320, 4, 3, 3]).
Hey John, unfortunately not at the moment, but they are working on it. The custom models should still work fine though. The dedicated inpaint models do help increase the success rate when inpainting and are best used for other functionality like the animation plugin, but for standard inpainting the main model should be fine.
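For anyone wondering why that size mismatch happens: inpainting checkpoints use a UNet whose first conv layer takes 9 input channels (the latent image plus the mask and the masked image) instead of the usual 4, so a UI that doesn't support them can't load the weights into a standard model. As a rough sketch (the file name is made up, and it assumes an SD 1.x checkpoint saved as .safetensors), you could check which kind of model a file is like this:

```python
# Quick check: is this checkpoint a standard or an inpainting SD 1.x model?
# The key below is the UNet's first conv layer in original (non-diffusers) checkpoints.
from safetensors.torch import load_file

state_dict = load_file("some_model_inpainting.safetensors")  # example path, use your own file
w = state_dict["model.diffusion_model.input_blocks.0.0.weight"]
print(w.shape)
# torch.Size([320, 9, 3, 3]) -> inpainting model (latent + mask + masked image channels)
# torch.Size([320, 4, 3, 3]) -> standard model (latent channels only)
```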