#### Links from my Video ####
civitai.com/models/25694/epicrealism
huggingface.co/gemasai/4x_NMKD-Superscale-SP_178000_G/tree/main
civitai.com/models/110334/epicrealismhelper
#### Video Links ####
AI on Fire Live Stream: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-80hl3gJ0V8Y.html
AdDetailer Video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-7-428FWQHMs.htmlsi=XfEQN-NVPVFvlw9T&t=403
ControlNet: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zrGLEgGFJY4.htmlsi=YRfTv3vOva53s5ya
UltimateUpscaler: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-3z4MKUqFEUk.html
I like epic realism. It's a good model, but there are some things it doesn't do well. I keep a couple dozen checkpoints on hand for different styles I want to do. I put my models into 3 main categories: 1) Realistic, 2) Fantasy/Surreal, 3) Cartoon/Anime.
I don't like this version of the model (Natural Sin RC1); it produces the same type of face structure. The older one (pureEvolution V5) is much better.
The only problem all these models share is that, as fantastic as they are at generating, the subjects' faces always look very similar, as if they were a mix of just three people.
Be more specific and varied with the part of the prompt that refers to the person. Typing '1girl' or '1man' followed by some type of clothing, etc., will generate similar results. Try using names; a first name will do, but first and last is even better.
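To make that tip concrete, here's a minimal sketch in plain Python (the names and the template are invented for illustration) of rotating a first + last name through a prompt template, so each generation gets a different identity instead of converging on one averaged face:

```python
import random

# Hypothetical name pools -- swap in whatever names you like.
FIRST = ["Marta", "Kenji", "Adaeze", "Lars", "Priya"]
LAST = ["Novak", "Okafor", "Lindqvist", "Iyer", "Moreau"]

def varied_prompt(template: str, rng: random.Random) -> str:
    """Fill the {name} slot with a random first + last name."""
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    return template.format(name=name)

rng = random.Random(42)  # seed for reproducible name picks
prompts = [varied_prompt("photo of {name}, 35 years old, wool coat", rng)
           for _ in range(3)]
```

You'd then feed each entry of `prompts` to your generator as usual; the rest of the prompt stays fixed while only the identity varies.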
@@jibcot8541 I tried using celebrity names, and they always still come out looking like a mix of the same face and that celebrity. I tried a prompt of an older celebrity male, and it turned out to be a feminized younger version of him. I tried an old woman, and it still ends up looking youthful. The model looks fine if all you want to do is generate young pretty women with a similar face, but it seems poor for anything else.
This model has a lot of Asian influence; it even mentions using '-asian' on its page. Also, I was getting slightly blurry images. I switched upscalers, cut all the extras, and still got them... so I switched to Euler a, and it's nice and sharp now.
Yep, I love this model! AI is getting better and better! Along with the "Inpaint Anything" extension I newly discovered, it is so easy to just change hair color, background, and other details on the fly...
The inpaint model is absolutely top. It is super easy to fine-tune and change original pictures. If you only want to smooth things out, no text is required, but you can make changes with very simple prompts. I have completely overhauled some pictures, and the only thing I changed between iterations was the denoising.
Love your stuff Olivio! It's been a big help in learning to use SD. I would love at some point.. especially with some of the more recent versions of Auto.. a more step by step guide to things like img to img and inpainting. How to use an already existing image as a base with img2img and how to enhance, add, or subtract things with inpainting for example. A run down of the best or at least the new samplers would be amazing as well. Your explanations are always really thorough for the most part and really easy to follow.. which for dunces like myself is incredibly helpful. Either way keep up the awesome content!
@@Neolisk Likewise Neolisk, use your imagination, I think that comment was far more nuanced than not being bothered to explore and make stuff. Before you get sarcastic with people, try and understand them first, play nice, aye.
@@maggyai There was no sarcasm in my comment. Frustration with lack of research from the OP, yes. But I can't imagine putting it nicer than I did. ER excels with stunning landscapes, abstract objects, concepts and patterns, previously only achievable using paid Midjourney.
Hello. I'm a new subscriber, and what I see is truly impressive. However, I'm missing some knowledge. Could you please point me to the key videos to start with these generations of images?
Let's say you really like that face of the blonde girl and want it to generate wildly different images with her as the character. Is there a way to use After Detailer to reference a specific face, or is Roop (or making your own Lora) pretty much the only option? I'd love a video that goes into other ways to get consistent faces without relying on Roop (which often comes out looking very fake and doesn't blend well with most of my images)
ControlNet with OpenPose (full mode selected and adjusted) + the Depth plug-in if it's just for short-term replication. LoRA for diverse and promptable long-term replication. RegionalPrompter to complement both methods for composition, outfit, and/or multiple-character control in the same render. AfterDetailer could and should still be used with all of those and is compatible. Personally I don't use it much, as I find creating my own inpaint models and masking faces with a face-specific prompt and ramping up the steps gives a far better and more detailed result, especially when applying a LoRA.
Hey dude, you should make a video on epicphotogasm; it is more amazing than epicrealism. The developer is the same, and he has focused more on realism than with the former one. It is a badass model and a highly underrated one!
@@thebrokenglasskids5196 But the good thing about that model is that it doesn't require lengthy prompts like the others, and not even negative prompts at all. I hope the developer keeps working on that model and releases a newer version with better compatibility with other models too.
Any advantage to using hi-res fix FIRST on generation (as this video highlights) vs just using adetailer without hi-res fix on generation to get good low res images with faces that are good (much faster), then taking only images you want into IMG2IMG and upscaling there? Ideas on this?
I disagree about that 'unreal' aspect with RV, but obviously it's very subjective, down to exactly how it's prompted and what sampler is used. RV 5.1 is my go-to SD 1.5 checkpoint, although EpicRealism is where I go next.
I wish you wouldn't only generate women. Men/women/boys/girls. I want to know if a model is flexible: can it show someone hanging upside down? Wrestling? Playing? Or can it only make portraits of women?
This model totally ruined SDXL for me. SDXL is good in its own ways, but After using this one, anything I create on SDXL looks like a child's scribble. Maybe if this creator makes something similar for SDXL I'll be able to use it one day XD
I'm confused here: why are you running hires fix with 0 steps? Does it not need steps to work? Does it actually do something at 0 steps? I usually run hires fix with half as many steps as the regular steps; say I run 30 steps, I will run 15 hires fix steps.
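For what it's worth, as I understand the A1111 UI (worth double-checking in your own install), a hires-fix step count of 0 doesn't mean zero passes; it means "reuse the main sampling step count". A tiny sketch of that convention:

```python
def effective_hires_steps(sampling_steps: int, hires_steps: int) -> int:
    """Sketch of the A1111 convention (as I understand it): a hires-fix
    step value of 0 falls back to the main sampling step count, so the
    second pass still runs a full set of steps."""
    return hires_steps if hires_steps > 0 else sampling_steps

# 30 sampling steps with hires steps left at 0 -> second pass runs 30 steps
# 30 sampling steps with hires steps set to 15 -> second pass runs 15 steps
```

So leaving it at 0 is the lazy default, and setting it to half (like you do) just trades a bit of detail for speed on the second pass.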
I love the progress people make with these AI models; however, I am kind of tired of most of them focusing on young women. I get it, women are beautiful and sex sells. But I would love to see some versatility in use cases. If I want to create mostly anything besides a beautiful girl, I might have a problem.
Hello, I urgently need help with my influencer. I have to make her skin look more realistic. Which video do you suggest? I think you are the only person I've found here with real skills in this area. Thanks!
Hey guys, I have one question: Juggernaut XL or epicRealism, what's your favorite model for realistic pictures? I tried both and think they are very similar. Happy New Year to everyone!
Do you have a model, or a way, to change the eye shape? Stable Diffusion, contrary to Midjourney, tends to use the exact same eye shape for every human.
After too many tries, I finally got some great results for a model I trained. In my case, I trained two Loras for the same subject. One had wrong eyes, but everything else was great. The second model was the opposite. After using both at the same time with weights 0.6 and 0.4, it finally gave me the expected results. Turned out that the model I thought was useless, combined with the larger one, solved the issue I was having.
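Numerically, stacking two LoRAs at 0.6 / 0.4 just means each one's weight delta is scaled before being added to the base model, which is why one LoRA's strengths can paper over the other's flaws. A toy illustration with plain floats (the parameter names and values are made up, not real tensors):

```python
def apply_loras(base, deltas_a, deltas_b, w_a=0.6, w_b=0.4):
    """Return base + w_a * delta_a + w_b * delta_b, per parameter.
    Missing keys in a LoRA contribute nothing (delta of 0)."""
    return {k: base[k] + w_a * deltas_a.get(k, 0.0) + w_b * deltas_b.get(k, 0.0)
            for k in base}

base = {"attn.q": 1.0, "attn.k": -0.5}   # invented base weights
lora_eyes = {"attn.q": 0.2}              # the LoRA with the good eyes
lora_rest = {"attn.k": 0.4}              # the LoRA with everything else
merged = apply_loras(base, lora_eyes, lora_rest)
```

Tuning `w_a`/`w_b` is the same knob you were turning when you settled on 0.6 and 0.4.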
I tried it. The quality is superb, no question, but it changes ALL faces, like morphing into one specific face type with Asian-like eyes. So I can't use this one for faces, just for inpainting.
Thanks as always. Are the consistent models around 8:30 made with a LoRA? Also, I'm Asian, and epicrealism doesn't seem to be suitable for Asians. Is there anything suitable for Asians?
Hi, nice video! A quick question, since I'm new to all of this stuff. The model you presented here, does it work with a program called Automatic1111? Does it work with Stable Diffusion? Or is Automatic1111 part of Stable Diffusion?
The skin quality looks good, but this model seems to be the most overfit model I've ever tried. It tends to output very similar-looking faces and body shapes, and I've never seen a model that is so biased towards making everyone look young.
These images are almost too perfect - they look like photos from the best digital camera and heavily edited. Can this model be used to generate images that appear realistic but are of lower quality? So that they resemble photos taken with a medium-quality mobile phone?
Hi, I wanted to have AI-generated photos of myself adjusted to look more realistic; is it possible? I've uploaded photos of me to Remini and the outcome is great, but they sometimes look too plastic. Is there a way I can take one of these photos and make it look more real? Or is there a tool to generate portraits of yourself with this realistic touch?
Olivio I love you man. Always fast and to the point. Kind of unrelated question: I've noticed you are very good with time, do you have any tips on that? 😀
This model is crazy good (Photon was my go-to realism model until now). It works with 1.6.0, LoRAs, and embeddings for me. A pure charm. I can revisit prompts I did a year ago and they are vastly improved (yet results may vary). Thank you!
Not only is it the best standalone model for photorealism, but its versatility makes it my go-to for mixing with other models to create the custom styles I want. In fact, it's so versatile that I've merged it with non-anime cartoon checkpoints and created some great comic-style art models. It works flawlessly with any LoRA I've ever created as well, both for rendering and for training the LoRA into it directly as an add-on to the checkpoint itself. You can use it as your base to train LoRA too, although I still find I get better end results in that department by using the base SD 1.5 checkpoint and then merging the LoRA into epic or a mix based off it. This model is the whole reason I am holding off on SDXL until it and the tools for it mature to a more stable and diverse state. If you know what you're doing, you can get near-SDXL results with this model and still have the ease of use of SD and all the developmentally mature add-ons for it such as ControlNet, RegionalPrompt, etc. There are many good SD models, but epic is simply the best of the best imo. An all-arounder for the ages. 🏆
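For anyone curious what merging checkpoints actually does: A1111's "weighted sum" merge mode interpolates every shared tensor as (1 - alpha) * A + alpha * B. A toy sketch with plain floats (the keys are invented for illustration; real checkpoints hold tensors, not scalars):

```python
def weighted_sum_merge(ckpt_a, ckpt_b, alpha=0.3):
    """Sketch of a 'weighted sum' checkpoint merge:
    (1 - alpha) * A + alpha * B for every key both checkpoints share."""
    return {k: (1 - alpha) * ckpt_a[k] + alpha * ckpt_b[k]
            for k in ckpt_a.keys() & ckpt_b.keys()}

epic = {"unet.block0": 0.8, "unet.block1": -0.2}  # invented values
toon = {"unet.block0": 0.2, "unet.block1": 0.6}
mix = weighted_sum_merge(epic, toon, alpha=0.5)   # 50/50 blend
```

Sliding `alpha` toward 0 keeps more of the realism base, toward 1 more of the cartoon style, which is essentially how those comic-style mixes are dialed in.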
Hey Olivio, thanks again for a great video. It would be interesting, though, to see comparison models not only on human subjects but also on objects and scenes in general, to see how well it interprets the prompt and so on. Sure, people are (I guess) the most common and popular subject, but I think it would be interesting to also test other subject matter.
Olivio on the text 2 image my preview image keeps being overly large, much larger than before 1.6, can you tell me what settings need to be redone to get a normal image?
After updating to 1.6 and reinstalling extensions, I can't get my roop to show under controlnet. It installs fine and all my other extensions work :( I've tried reinstalling it but no luck.
I’ve used it for months on A1111 for rendering, mixing, and even training LoRA directly into epic as a custom mix. Never had an issue. GPU is a 3060 12GB.
No model, past, present, or future, will beat Realistic Vision V5 for photorealism. With just 20 steps and 7 CFG, clip skip 2, no hires fix. With the help of a skin LoRA and OpenPose.
Your prompting is subpar if you're having that issue. Epic is literally one of the most diverse models ever created. Cut out all that "hyper realistic skin/face" type junk from your prompts that 9/10 people throw in there for some reason. Prompts like that are not necessary with well-trained models and really only serve to clutter a prompt and confuse SD about what to render.
Hi, if you're talking about me (I was the one that sent the image with the girl that has flowers in her hair), then no... the LoRA and model are both made by epinikion. In the prompts I wrote "+other loras trained by me" because for that photo I also used LoRAs trained by me... those LoRAs are a work in progress and are not uploaded anywhere.
The DPM++ 2M SDE Karras sampler doesn't show up in my A1111 even though I have the latest version. Have you heard of this and is there any way to get it?
Let's try again.. 2nd time.. I really appreciate the information presented here. Thanks. Interesting times.. However, it has been nagging at me for a while now that the content used for examples is a bit fixated on, how should I say it? Er.. mainly, a particular subject.. I could have put that bluntly, but decided not to. Surely it would be more beneficial to show a mix of subject matter, showing off the tools and their versatility. I have screenshotted this comment, as the last one, written a few hours ago, which was very, very similar to this in wording, has disappeared. I genuinely want to know what you all think? I am a fellow artist and I am not trolling.
@maggyai Yeah, this model bugged me. It seemed to have some heavy biases towards a specific face of a young woman. Try to prompt someone older, and they still look young. Prompt a man, and they look a bit feminine. Prompt celebrities, and it will look like a mix between them and that same repeated face. Prompt for an older celebrity, and they will look young. Prompt for someone old and with wrinkles, and at best they looked like they might be in their 30s or 40s. It seems good at generating similar-looking young women, but not much other than that, even with more prompt weighting. Just for the hell of it, I tried a prompt that included something like 'Queen Elizabeth with wrinkles' or similar, and it was still a young woman with a crown.
T.I.’s, not Loras. They’re only good if you’re using the base model of epic though. If you mix your own off it they can do more harm than good if added.
Sorry to disagree, but MJ 5 does a better job in my opinion... more artistic in composition and color palette. That is, and has been, the true USP of MJ: its artistry. From v1 to v5.
True, for controllability SD is the way to go. But I heard some rumours that MJ 6 will also have some ControlNet-like feature. Don't get me wrong, I've used SD a lot with ControlNet when I want small variations of an idea or 'photorealistic' renders of a colored sketch.
Watching this video is like starting to watch a movie with a complex storyline without having seen the first half of the movie. Important parts to tie this together into something useful are missing.
No, unfortunately you can't use 1.5 LoRAs on SDXL, unless you use ComfyUI with a workflow that uses SDXL for image generation and 1.5 (together with 1.5 LoRAs) for the refiner.