Unfortunately all the skin detail is just gone and we're straight back to 1999, when photographers just smoothed skin with the Gaussian blur filter 😂
The newer tools and models in mid-2024 tend to produce supermodel-level beauty with almost every face. I'm actually looking for a base model (for SDXL) that has more amateur, realistic-looking faces, not the highly unrealistic supermodel type.
I can't seem to get the 9-face grid to work correctly. My character isn't doing the face angles, and the white background isn't staying either, no matter how heavily I weight it. Any suggestions?
This is such trash, I had to set the playback speed to 0.5 just to make anything out. And because of this shitty, jumpy editing, I still didn't understand how to upload a black-and-white sketch. You tried, sure, but you need to edit the video more clearly.
🤣 don’t worry Fede, you’re not the only one whose brain melted trying to understand this technique. I’m one of those people too, hence why the research phase takes so long.
@@sebastiantorresvfx oh yeah, I'm a massive fan. I subbed to that. There's an issue with the price of Gen-3, though. With all my other subs I can only afford the $15 one. Had 4 goes and they were horrific. Have you seen Leonardo? It's amazing for slow panning shots of still objects. I use it a lot to save on credits. It's weird how it can do quite a good zoom with no morphing, unlike Gen-2 or Pika Labs. And it's really cheap.
I didn’t, no, but I have a few more credits so I’ll give it a try. That said, the only issue with ‘raw’ and ‘log’ is that the prompts are taken quite literally, so I may end up with some wooden logs and raw meat in the shot 😂. I’ll let you know how it goes.
I hope we finally make real cinema, but way better… we finally get to see immersive cinema and things never seen before, and at the same time make them in real time. Imagine the crowd that’s watching your movie being a part of the experience ⭐️⭐️⭐️⭐️⭐️
@@JorgeIvanovich bro, get with the program. EVERY SINGLE major company is interested in this: Hollywood, Netflix, adverts, music execs. It's just another tool that people with imagination can use.
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean all the AI art, music, and other works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I've been searching for more info about this but I get confused. Thanks in advance!
Actually, I already did the depth-map method in a previous video. It’s fun but very limited in what you can achieve movement-wise. You definitely wouldn’t be able to integrate the sort of 3D assets I wanted to in this video.
Not a huge fan of tutorials that just give you instructions without explaining the "why". For example, saying "you need Roop" but not explaining what Roop does, what it's for, or why you're performing that step.
For the first time in 6 decades, we see exactly what we want to achieve in 3D cartoon animation. We are watching closely and learning. Thank you for sharing.
Set extensions for indies! Woohoo! Need more resolution but can try upscale in Topaz! Eventually it will come. Used to projection map in AE & Lightwave... concept art mock-ups too.
I've been struggling with backgrounds for projects; it takes some time to make them look good, even knowing people aren't going to care much about them. I'll try using AI next time.
How come, when I start batch processing after getting a single image right, it looks completely different? I'm using all the same settings and the same seed, just adding the input and output directories, and I'm getting a completely different-looking result. (It's consistently different, too: the single image is always in a blue room and the batch ones are always in a forest, for some reason.)
Yo... what the fuck is STABLE DIFFUSION? I look it up and it's like it's NOT a program? But you said "in" Stable Diffusion. HOLY FUCK. Every time I've looked it up, it's like this.
More advanced: there's the Lucid Dreamer GitHub project that creates 3D Gaussian splat backgrounds from depth maps instead of just beveling them. But I spent a few weeks on it and couldn't figure out how to add shadows, and it requires a very powerful CPU and a lot of VRAM.
Do you know why that is? I've also been testing, and they're slightly different. I'm not getting the same blurred or sharp images as you, but there's definitely some difference. I don't know if I should also just stay in the latent space as long as possible; maybe it doesn't matter?
Not sure how much this will help, but...
Positive: white man, blonde, red face paint, blue metal, red cape, flat colors, simple colors, <lora:lcm_lora_sdv1-5:1> <lora:ThickerLines_RM-128_v1:1>
Negative prompt: blurred, photograph, deformed, glitch, noisy, realistic, stock photo
Steps: 6, Sampler: Euler a, CFG scale: 1, Seed: 27846563
Have fun :)
Nope, nope, nope! We cannot do this in the USA! You will be sued for millions of dollars! There's reverse-AI software that can detect, in the seed data itself, whether you used a famous person as the model. Good information, but bad advice on using it.
Thank you so much! As for the workflows, you have two options: I post them on my Patreon, or you can just pause the videos, since I go through all the nodes I use, so you'd be able to rebuild the workflows quite easily.
Great tutorial, subscribed and liked; looking forward to watching your other stuff and building out a workflow similar to yours. Gotta get more into Blender, I've never really worked with it.