Thanks for another great video, bro! Pro tip - on Civitai, if you see an image you want to "borrow", just drag it to the "Process Image" tab in SD (I am using Vlad). If the prompts are in the metadata, they will come with it. Cheers!
There is one more I haven't seen anyone cover yet - epiCRealism. In my tests it was always better at realistic images than Reliberate. I would love to see you cover this one as well. Always great videos!
In your opinion, if you were to rank these models based on their ability to produce realistic images, how would the ranking go? CyberRealistic, Deliberate, epiCRealism, ChilloutMix
I came to the comment section to say this as well. I've tried many others, most of them being the ones mentioned in the comments here, but epiCRealism is just better.
@@Phraxas52 I agree! I would put them in the same order. However I rarely stick with just one checkpoint. Usually I test my prompts with different combinations of checkpoints and sampling methods to find which result I like the most. X/Y/Z plot is my favourite tool by far :D
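If anyone wants to automate that kind of checkpoint/sampler comparison outside the X/Y/Z plot script, here is a rough sketch against a local AUTOMATIC1111 instance started with --api. The URL, prompt and checkpoint filenames are just placeholder assumptions, not anything from the video:

```python
# Minimal sketch: render one fixed-seed prompt across several checkpoints
# and samplers via a local AUTOMATIC1111 API (instance started with --api).
# Checkpoint filenames are placeholders - use the names your install lists.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"
checkpoints = ["reliberate.safetensors", "deliberate_v2.safetensors"]  # placeholders
samplers = ["Euler a", "DDIM", "DPM++ 2M Karras"]  # names depend on your version

for ckpt in checkpoints:
    for sampler in samplers:
        payload = {
            "prompt": "photo portrait of a woman, natural light",
            "steps": 25,
            "seed": 12345,  # fixed seed so only checkpoint/sampler change
            "sampler_name": sampler,
            "override_settings": {"sd_model_checkpoint": ckpt},
        }
        data = requests.post(URL, json=payload).json()
        out_name = f"{ckpt.split('.')[0]}_{sampler.replace(' ', '_')}.png"
        with open(out_name, "wb") as f:
            f.write(base64.b64decode(data["images"][0]))
```

The fixed seed is the whole trick: with prompt and seed held constant, any difference between the saved images comes from the checkpoint and sampler alone.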
Trying out this model and am very impressed. In many cases, it does a better job than my old favorite realisticVision 2.0. Thanks for bringing this model to our attention.
Me: Ok, time to clean house and prune out all the models I no longer use to free up space and make it simpler to find what I do want. Actual Me: Ok, time to add Reliberate and LowRA.
@@felipeitsui Then prompt better: use nationality prompts, prompts that specify a particular look, use face LoRAs. Generally, models generate the same face if you are too vague with prompts; the model will go for whatever has the highest weight internally, since you didn't say precisely what you want.
@@felipeitsui Not if you know how to prompt. The only times I've gotten the same person are when I've deliberately written the prompt to get the same person. If you keep your prompt simple, vague and worded the same, you probably will get the same person. That's how it works. 🙂
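To illustrate the point about vague prompts: rotating a few specific descriptors is usually enough to stop the model from falling back on its internal "default" face. A toy sketch, with made-up word lists purely for illustration:

```python
# Toy illustration: add specific descriptors so a vague prompt doesn't
# collapse to the model's most heavily weighted "default" face.
import random

nationalities = ["Japanese", "Nigerian", "Brazilian", "Polish", "Mexican"]
features = ["freckles", "sharp jawline", "round face", "curly hair", "grey eyes"]
ages = ["early 20s", "mid 30s", "late 40s"]

def varied_prompt() -> str:
    return (f"photo portrait of a {random.choice(nationalities)} woman, "
            f"{random.choice(ages)}, {random.choice(features)}, natural light")

for _ in range(5):
    print(varied_prompt())
```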
As a long-time ChaiNNer user, I'm surprised you've never made a video about it... It's a must for graphic designers like me, especially for batch processes, and free of out-of-VRAM errors. Also, I'm having much better results using SD Upscale than Ultimate Upscale.
@@The_Daily_Meow Both SD Upscale and Ultimate Upscale are bad. You can test your images in PS: add a Curves layer and push it to extremes and you will see that your images are in tiles (if you can't see it without the Curves layer). Only the Tiled Diffusion method works; at larger scales it loses detail, but at least it does so without tiles.
@edu_machado Correct me if I'm wrong: ChaiNNer is the same type of upscaler as Topaz Gigapixel (except that in ChaiNNer you can load multiple upscaling models)? If so, I don't see the point in comparing, as it acts totally differently from Auto1111. In Auto1111 you can add extra small details with denoise and sampling steps. Is that possible in ChaiNNer? There is also another program called Upscayl where you can also load your own upscaling models. But again, it's not the same as Auto1111: you just upscale and can't add extra small details.
@@relaxation_ambience No. It doesn't lose details, it adds them when you go in smaller steps. I said 'if you know how to use it, of course'. Otherwise, it's bad.
@@relaxation_ambience Topaz also uses AI to upscale, but I believe they use a proprietary model. With ChaiNNer you have the freedom to use different models... So yes, you're right. I believe it's an obsolete app now (Topaz), but they have another app for video that will stay relevant for some time, IMO.
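For what it's worth, the "add extra small details with denoise and sampling steps" part of that comparison can also be scripted against Auto1111's img2img endpoint (local install started with --api). This is only a sketch with placeholder values: a low denoising strength keeps the composition while the sampler re-adds fine detail at the larger size.

```python
# Sketch: upscale via img2img with a low denoising strength so extra
# detail is added without changing the composition (values are placeholders).
import base64
import requests

with open("input.png", "rb") as f:
    init_b64 = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_b64],
    "prompt": "photo, highly detailed skin, sharp focus",
    "denoising_strength": 0.25,  # low = keep composition, add fine detail
    "steps": 20,
    "width": 1024,   # target size, e.g. 2x a 512px source
    "height": 1536,  # 2x a 768px source
}
data = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload).json()
with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(data["images"][0]))
```

Pure upscalers like ChaiNNer or Upscayl skip this diffusion pass entirely, which is exactly why they can't invent new small detail the way denoise + steps can.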
I'm with you but I've seen more than one content creator talk about the times they tried this. The views plummet. Sometimes you just gotta play the game.
I have 900 GB of checkpoints and my favorite so far is Awportrait. No one really talks about it, but it's the most realistic model I've ever tried across many different styles.
Not heard of that one. I'll check it out and I know that feeling of having a lot of checkpoints. I don't have anywhere near as much as you do but they still need pruning back.
New to this! Mind blowing if I am looking at what I think I am looking at. Can you please provide a link to a basic intro to this software/process? MUCH appreciated!
This model is almost exactly the same as Deliberate. Nothing new to see here. Don't believe me? Do your own tests, but this model contributes nothing to the space. It is no more realistic or photographic than Deliberate is.
All very nice, but we are always stuck at the same point: hands and feet, all awful!! How much longer will that take to fix??? I'm getting really bored with Stable Diffusion.
I find this model has slightly too much anime weight to be truly photorealistic (it can look nice and arty if that is what you are going for). I use epiCRealism_pureEvolution v3 for better photorealism.
I use DDIM for everything lately; I don't know what changed, but it seems to work best since the last major A1111 update for some reason. Also, I would strongly recommend using ADetailer instead of face restore, which gives terrible results most of the time. As for the upscaler, I suggest Tiled Diffusion; it gives the best results out of all the methods I've tested and is very fast.
I've been trying to do something and I can't seem to manage it, so I'm here asking for help. Is it possible to create a character turnaround by using 2 different ControlNet models, where one, for example, takes the reference character and the second the poses? I created a character I really like, but I made it using img2img, so of course I can't simply recreate it in txt2img, and whenever I try, with any settings, any combinations (and even without poses, with only the reference image), I can't do it :/ I found people trying to do the same on Reddit 6 months ago, but with no follow-up. Do you know any way with the new Reference Only control that can make it happen?
You can probably recreate it in txt2img by copying the settings from the PNG Info tab after loading your character image there. Maybe? I have a female character I love, but she was an accident, so I had only three pictures. I trained her face as a LoRA and it came out perfectly. Now I insert her LoRA anytime I want. Didn't think it would work, but it did. If you only have one face, I would try making a LoRA from it. Who knows.
@@SantoValentino If it was a character completely made with txt2img, that would work, but since it came from img2img of another image I had created, the parameters in the PNG Info tab aren't accurate.
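On the PNG Info point: if the image was saved by A1111, the generation settings usually sit in a "parameters" text chunk and can be read back without opening the UI. A small sketch, assuming the file still carries that chunk (as noted above, img2img results won't reproduce the picture from these settings alone, and the filename is hypothetical):

```python
# Read the generation parameters that A1111 embeds in its PNG output,
# assuming the file still has the "parameters" text chunk.
from PIL import Image

img = Image.open("my_character.png")
params = img.info.get("parameters")
print(params if params else "No generation parameters found in this file.")
```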
Uuugh... "Artists." It's like when DJs call themselves "Musicians." I guess that's the world now. (Sorry, as someone who's spent his life studying anatomy, modeling techniques, light theory and subsurface scattering... it just kind of grinds my gears a bit.)
There are so many nice models, but sadly CivitAI has these strange licenses that mostly make no sense, like not being allowed to sell pictures, or being forced to attribute the trainer of a model (or model mix). Since I use Stable Diffusion as a tool to create my own art, this makes no sense for me. I sometimes end up with dozens of LoRAs and checkpoint models to create a work and just can't keep track of what is in the picture and what is not, in order to give attribution to people who are not artists but train an open-source AI supplement. So I stay away from those and go with the truly open models. And of course I stay away from other artists' works or styles, unless I mix many different attributes into my final result; otherwise I would not feel good about calling it my own art.

And although this was not your actual question, it kind of highlights my workflow for details: I make some concepts, make parts of my image with SD and some parts by drawing, then I throw my picture at different AIs to see what sticks, getting closer and closer to my preferred final image and narrowing in with image-to-image to get a coherent picture. Sometimes with fewer steps, sometimes with more.
Seems to me that, since the models almost certainly were trained on images without permission, any art you create from them would be similarly without permission. If you're using SD at all, you're kind of taking your chances that a license or copyright was breached somewhere along the way no matter what. Take it as you will…
IMHO, all ai model creators are just collectors of artwork, they have no right whatsoever for those pictures, everyone is free to use all generated artwork, the only legal copyright holders are the CPUs or GPUs in our computer.
You are damaging the perfect illusion (fake background, fake faces) by including your real? face and real? voice in this video. This technology will make your work and channel obsolete in a very short period of time, because nobody wants to see imperfect faces and voices anymore... (This comment was generated by an AI model summarising this video)
Hey, Mr. Olivio... thanks for another great video. I had not heard of Reliberate, so thank you for the info! Just FYI, though... Euler [Leonhard Euler, the Swiss mathematician/ engineer/ astronomer/ and much, much more] is actually pronounced "oil-er" rather than "yule-er". So... now we know!
Do you also pronounce French words with perfect pronunciation? What about words of Asian heritage? Euler can be pronounced several ways. It all depends on nationality, geography etc. Let's not bog down content creators with silly stuff like this.
@@Mocorn It's a man's name and it has an actual pronunciation. Yes, I do speak a little French and a little Cantonese - none of it perfectly, but I do make an effort to be respectful of other languages. Mainly, I brought up the pronunciation in this case because the term Euler appears throughout a number of CG packages, such as the Euler Filter in Maya's graph editor. If I mention it to Olivio - who is kind enough to put all these wonderful videos together and share them with the rest of us - it is done thoughtfully and considerately, to help inform him in the same way he generously informs us. My guess is that if someone mispronounced your name, you'd correct them - as well you should. Where is the harm in that?
@@atlanteum people mispronounce my name all the time actually and I do not correct them because that is how the name is pronounced in this country. Where I was born the name is pronounced differently. Interestingly, my name is pronounced a third way if you look at the country of origin which is only an hour away by flight. So, three different ways to pronounce my name, which one is the correct one!?
Hi, I would like to know if it's possible to create bulk images for more than 1 person. For example, I want to generate 10 different people with 5 images of each. If someone knows how to do this, feel free to answer.
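One possible way to do this with a local A1111 install started with --api: give each "person" its own base seed and render a handful of variations per seed. Everything below (URL, prompt, seed scheme) is an assumed sketch rather than a recommended recipe; for truly consistent identities you would still want a LoRA or a face-swap tool like Roop on top.

```python
# Rough sketch: 10 "people" x 5 images each via a local A1111 API.
# Seed/subseed values are placeholders; this gives per-person variation,
# not a guaranteed identical face.
import base64
import os
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

for person in range(10):
    folder = f"person_{person:02d}"
    os.makedirs(folder, exist_ok=True)
    for shot in range(5):
        payload = {
            "prompt": "photo portrait, natural light, detailed face",
            "steps": 25,
            "seed": 1000 + person,    # one base seed per person
            "subseed": shot,          # small variation between the 5 shots
            "subseed_strength": 0.15,
        }
        data = requests.post(URL, json=payload).json()
        with open(os.path.join(folder, f"{shot}.png"), "wb") as f:
            f.write(base64.b64decode(data["images"][0]))
```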
Hey, great video. Before I research the whole internet, I'll ask here. I got a good model on Tensor Art, but every picture is a bit different. Now I'm searching for a method to create a model in Tensor Art so that it will have the same face every time, or good free software where I can create a model whose face doesn't change.
I feel like people overtrain female faces in such models and that creates an imbalance. Also, the details on this one are weird; there's a noticeable effect where the face shadows look like lines.
I'm new to this stuff. Is there particular software needed? I keep seeing things about models, checkpoints, etc., but I never come across what program everyone's using to run these models and generate images.
I've been defaulting my portrait renders to a 640 x 800 resolution instead of 512 x 768. Just a slight bump; it very rarely doubles up a head, and I feel the bump is worth it.
If the face in the picture wears glasses (or sunglasses), Roop will erase the glasses when it replaces the face in the video, but it does not erase them cleanly. Roop can't choose to keep the glasses (or sunglasses) in the video. Can this problem be solved?
Hi @Olivio Sarikas, or anyone who can help me. Thanks for your videos, excellent stuff! When I go to my Scripts section, there are many scripts missing compared to yours, and the SD Upscale script is also missing. Can you please let me know why?
@OlivioSarikas Do you have a video where you teach how to create an RPG or NPC character with the face of a friend of yours or a photo that already exists? How could I do this?
Hi... I've been using Stable Diffusion since the beginning, but I've been studying hard for some exams, and after some months it's like I don't know anything anymore... LoRA? Checkpoints? CLIP?? I'll need a summary of what is happening...
Stumbled into this and have very little idea what it is about. It reworks your pics and makes them better? So why didn't he show a before and after? What's so great about it?
It took you some time 😁 Usually I use Euler, but I will try DDIM. I'm also using add_detail:1; everything higher than 1.2 is already too much. But I apply it to WarpFusion mostly. Thanks for the tutorial 🖐
These images are almost too perfect - they look like photos from the best digital camera and heavily edited. Can this model be used to generate images that appear realistic but are of lower quality? So that they resemble photos taken with a medium-quality mobile phone?
Hi Olivio! I never understood what the role of the sampler is and what the differences are. Of course I can render an X/Y/Z plot to check the differences, but I would like to understand more about why and when to use which sampler... Can you make a video on that, or maybe a live? Thx a lot.
In the description of the model there is information that, among other things, it is not allowed to sell images created with it. Does this also mean that you can't use this model to create assets for your own commercial projects (e.g. in your own comic or game)?
I know the author of the model, and I will say that this is not just a mix of models. He also worked with the weights, trained the model on a lot of additional photos, etc. But in any case, every model is built on top of the standard SD 1.5 model, so that does not negate the importance and quality of this author's models.