I'm here from the subreddit, and I just want to say I appreciate you so much for taking a scientific approach, especially on a small channel. I can see how much effort you must have gone through, handpicking every model generation and giving each model a chance. AND then making your findings free to download? Ridiculous how much effort you put in here. Lots of love, mate. Keep up the good content; I can see you making it big in the AI scene. Thanks for the wonderful analysis!
Bro... I just found the channel, and I can't do more than thank you and subscribe to keep learning from this. The amount of effort, work, and hours you invested in this research is amazing, and above all, what I admire most is that you share it with everyone who watches you. It's an incredible job, thank you.
Thank you very much for covering my model. Wow, I'm chuffed, to say the least. Yours isn't the first model comparison list I've graced the top tiers of, and I hope it won't be the last. But I still love the work you put into this!
I wouldn't have used the "Euler A" sampler, as it is not a convergent sampler: you could end up with a very different image with a few more or fewer steps either way. Other than that, it looks good.
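A toy sketch of the point this comment makes: an ancestral sampler injects fresh noise at every step, so its endpoint depends on the step count and the noise seed, whereas a deterministic update converges to the same result regardless of how many extra steps you run. The functions below are illustrative stand-ins, not the actual Euler A implementation:

```python
import random

def deterministic_steps(x, n_steps):
    # Deterministic update: repeatedly halves x, so it converges
    # toward 0 and extra steps barely change the result.
    for _ in range(n_steps):
        x = x * 0.5
    return x

def ancestral_steps(x, n_steps, seed=42):
    # Ancestral-style update: fresh Gaussian noise is injected at
    # every step, so the endpoint depends on both the seed and the
    # total number of steps taken.
    rng = random.Random(seed)
    for _ in range(n_steps):
        x = x * 0.5 + rng.gauss(0, 1)
    return x
```

Running the deterministic version at 20 vs. 30 steps gives nearly identical outputs, while the ancestral version gives a different value for every step count and seed, which is the comparison-fairness concern the comment raises.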
Congrats! What a nice job! I have my own set of tests, but I think I will integrate your prompts. I've noticed that many models change drastically from one version to another (Juggernaut XL 9 to 10, for instance, or classic vs. Turbo/Lightning versions).
This is a very fair evaluation. I've also tested a lot of SDXL models myself, and I largely agree with you. It seems you missed an excellent model, though: Crystal Clear XL. I think it could be rated B. Isn't it the champion of the Civitai competition?
Thanks very much for the comparison. I find it very balanced and useful. Do you have any plans for a follow-up comparison with some of the newer models?
My only criticism of this ranking method is that it should have used multiple generations with specific seeds for each prompt (e.g., how many were successful out of 10), and perhaps the recommended settings for each checkpoint (like CLIP skip 2, etc.). You'll notice that this makes a huge impact on this kind of ranking, and that a lot of models generate almost the same results (a specific seed will expose which base model each one is using).
Fantastic job! Well done and thanks. But like you said, it is getting a little bit stale... I would really like a small update on your A and B models, and maybe some other up-and-comers. I mostly use Juggernaut, but you have given me nice pointers toward other, maybe better models. Thanks again.
You put RealVisXL V2.0 into C? Really? You underestimate this model, I think. I now use RealVis V5, and it's fantastic; nothing compares to it.