For anyone coming back to this tutorial a couple of months late to the party like me: you need to lower the denoise value to between 0.6 and 0.7 to get the desired effects now. ControlNet and A1111 have had updates, so the values have changed slightly. This still works really well, though!
Never imagined being able to control light that well in Stable Diffusion. Honestly, I would have only expected that level of light control in 3D applications like Blender, not when manipulating 2D images. It's just nuts what AI is doing.
It'd be fairly easy to create some animated light patterns using After Effects or Blender, then save them out as frames. You should be able to use batch processing to automate making the images.
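Not the video's method, just an idea: if you'd rather script the light frames than render them in After Effects or Blender, a minimal Pillow sketch like this could produce a sweeping soft-light sequence. The resolution, frame count, sweep path, and blur radius are all arbitrary choices here:

```python
# Minimal sketch: write a sequence of soft moving-light frames with Pillow.
# All values (resolution, frame count, blur radius) are arbitrary assumptions.
import os
from PIL import Image, ImageDraw, ImageFilter

WIDTH, HEIGHT, FRAMES = 512, 512, 24
os.makedirs("light_frames", exist_ok=True)

for i in range(FRAMES):
    frame = Image.new("RGB", (WIDTH, HEIGHT), "black")
    draw = ImageDraw.Draw(frame)
    # Sweep a bright disc from the left edge to the right edge
    cx = int(WIDTH * i / (FRAMES - 1))
    draw.ellipse((cx - 120, HEIGHT // 2 - 120, cx + 120, HEIGHT // 2 + 120),
                 fill="white")
    # Heavy blur gives the soft falloff of a studio light
    frame = frame.filter(ImageFilter.GaussianBlur(60))
    frame.save(f"light_frames/frame_{i:03d}.png")
```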
The same reply, as rewritten by an AI: "Let's get creative and whip up some dazzling light patterns using the powerful tools of After Effects or Blender! With just a few clicks, we can generate animated sequences that will leave your audience mesmerized. And don't worry about tedious or repetitive tasks - we can streamline the process with the Batch tool to save time and effort. So let's get to work and create some stunning visuals that will leave a lasting impression!"
The video example shows that this isn't like Photoshop, where you put a bright element on a top layer in Overlay or Linear Dodge blending mode; it actually generates appropriate shadows.
There are a lot of green screen video effects that are normally used in videos, but you can extract the frames and use them here. Things like moving lights, electric crackles, explosions, and so on.
Now, I just need a version for anime :') Honestly, this is really cool either way, because you can now use it on any image to change its lighting and get exactly the kind of lighting reference you need for your scene while drawing. I think that is super cool.
You can use high contrast to make hard shadows, just like in photography. Or even cast a shadow in the shape you want in front of or behind something else.
Hi Sebastian, and thanks for all your videos and explanations. This is a very intriguing trick. A similar process I saw at a post-production facility used the normal map channel to relight the scene. It should offer a more mathematical approach to calculating lighting than the depth channel (which is also fine for our purposes), and maybe open the way to a more PBR-like workflow. I'm doing some tests, but I don't see great improvements, even when switching pre/post processing or using colored lights. To animate lights, you can make a simple animation moving a gradient over a black solid in Photoshop (turn the animation panel on) or AfterFX, export a frame sequence, and use it as input for batch processing in Automatic1111. Let me know if you want to do some tests and if you find this advice interesting. Cheers!
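If you want to skip the UI for that batch step, here's a rough sketch of driving Automatic1111's img2img API (webui launched with --api) once per exported frame. The ControlNet payload follows the extension's "alwayson_scripts" convention, but the exact arg keys and the depth model name vary by version, so treat those as placeholders:

```python
# Rough sketch: push each exported light frame through A1111's img2img API.
# Assumes the webui is running locally with --api and the ControlNet extension
# installed; the args keys and model name below are version-dependent guesses.
import base64, glob, os, requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # default local webui address

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

subject = b64("portrait.png")  # placeholder: the image the depth map comes from
os.makedirs("out", exist_ok=True)

for i, frame in enumerate(sorted(glob.glob("light_frames/*.png"))):
    payload = {
        "init_images": [b64(frame)],   # the light pattern is the img2img source
        "prompt": "portrait of a woman, studio lighting",  # placeholder prompt
        "denoising_strength": 0.65,    # ~0.6-0.7 per the updated values above
        "alwayson_scripts": {"controlnet": {"args": [{
            "input_image": subject,                # subject locks the geometry
            "module": "depth",                     # depth preprocessor
            "model": "control_v11f1p_sd15_depth",  # assumed model name
        }]}},
    }
    result = requests.post(URL, json=payload).json()
    with open(f"out/frame_{i:03d}.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))
```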
Thanks! Ah, very cool. If you decide to test it, please share your results in our Discord. I know Maui would be interested in testing it in more detail for sure, and so would I.
Amazing content; once again you nailed it with a very good lighting guide utilizing the amazing ControlNet. I'll be using this for all my photos now, as lighting is a game changer when you can control it.
Very neat. Now if we could just have subjects on layers for easy comping, you know, foreground and background with these types of controls. Imagine the possibilities.
Hey! Some ideas for videos that are of interest to me, if you think they're worthwhile: 1. How do you pick your models? I constantly see new models being used; how do you track them, pick them, etc.? I have a lot, but it's always hard to know what the latest cutting-edge one is. 2. The different types of models that can be used; in the beginning it was overwhelming with ControlNet, LoRAs, scripts, etc., each requiring different setups and serving different purposes. A summary or a detailed intro would be really beneficial. Anyway, love your videos, keep them coming! Sending all the love!
There are so many emergent techniques coming out for AI generation; it's so exciting. A bit complicated to understand everything, but it's like the future in the making.
Question: I'm trying to change the size of the selection as shown around 2:06, but it keeps cropping the source image instead of just resizing the selection grid. How do I fix that?
Here's Video #2 (actually my first test). Note I used Blender to animate the lights. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-iM5KoIJ5HvE.html
@sebastiankamph Right, it's mind-blowing. But when I pipe different light images into the process, I would expect it to alter not only the lighting of the result but also the shapes, even a tiny little bit. But in your video, every hair strand seems to stay rock solid exactly how and where it was before. Simply astonishing.
Definitely check the other videos, but my understanding of how this works is that we're using the subject image in ControlNet and extracting a depth map from it. That image is always the same, so the depth locks down many of the details. The light image is being used as the img2img source, and the high denoise value allows ControlNet to apply strong changes to the light image, using both the depth map and the pixels from the image of the woman. With both the depth and the pixels, plus just the additional light cues, SD is pretty much forced into consistency. Apologies if any of that is incorrect; I've been experimenting with all the settings, but there's still a lot to learn!
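For anyone curious what "extracting a depth map" actually looks like: here's an illustrative sketch using the public MiDaS model from torch.hub (the depth preprocessor is built on MiDaS). ControlNet does this for you internally, so this is purely for understanding; the file names are placeholders:

```python
# Illustrative only: roughly what ControlNet's depth preprocessor does,
# using the small MiDaS model from torch.hub.
import cv2
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("portrait.png"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()

# Normalize to 0-255 so it can be saved as a grayscale depth map
depth = (255 * (depth - depth.min()) / (np.ptp(depth) + 1e-8)).astype(np.uint8)
cv2.imwrite("depth_map.png", depth)
```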
Sebastian, thanks for this! Can I ask, how do you crop that way? For me, when I crop in img2img, it crops immediately after I let go. Yours seems to let you crop in and out again.
Thanks for this video, but I'm having quite a few issues with it. Even though I re-checked against your guide multiple times, the composition of my image changes way too much from the original. I don't think this has much to do with my prompt not perfectly fitting the image I used, since it was created using inpainting in the process, does it?
Wow! This is really crazy. An artist by the name of Jeremy Cowart created a method of doing this in camera. I need to work this into some edits and lose my mind.
The problem I have with this is that it basically completely redraws a new image rather than just changing the lighting in an existing image. It's very frustrating to get a really solid image where you just don't like the lighting, and I can't find any good way to change that, at least not without falling back on traditional tools.
My holy grail is relighting real-life faces to match changes in lighting and environment. I think maybe I have to do a full-face detailed close-up HED or something. Have you achieved this yet?
@Sebastian Nice tutorial video as usual; thanks for posting so much. Hoping you might be able to share the initial prompt you had for the character you worked on.
Thank you, Sebastian, fantastic video as always. My img2img resize/edit tool does not act the same as yours. When I select an area, that selection becomes the new image that fills the img2img space. In your case, the selected area remains in the same place, and you can move or tweak the selection.
It looks cool and promising. But how do we make this work on complex scenes? I tried it with a woman in a black dress, and this method just brightened the outfit (or part of it).
Maybe ControlNet has changed since this process was outlined. If I follow this process, the outcome is not an image with altered lighting but a depth map of a combination of the original image and the lighting image. This is what we should expect anyway when using ControlNet to create depth maps. I don't see how this process should have worked in the first place, and it doesn't seem to work presently.
I've tried a couple of times, but the people and backgrounds keep changing a bit, even when I fix the seeds and prompts. Is there something I need to do differently, like installing more extensions?
Amazing work, thanks! However, I get a grey monotone image whenever I do this. It turns grey and monotone at the end of the steps. Is there any way to fix this?
Sebastian, can I ask whether you speed up the video when you render, or whether you're using a graphics card in your computer? If so, what card is it? I use Google Colab, and it's not nearly as fast as your renders. Thanks!
This is amazing. Too bad that for some reason ControlNet OOMs on my system, even with purged models and lowvram enabled, so I'm gonna have to wait till I've upgraded my GPU to test this.
Thank you! Honestly, I haven't been using it for quite some time. If I get time, I might revisit it and see if the changes warrant an update. I've got a few other ones first on the list.
@sebastiankamph The reason I ask is that the workflow above doesn't work with 1.1, and I'm desperate to get it running. Happy to pay a consulting fee.
@sebastiankamph I got it to work with regular portrait photos of people I know, but I used a second ControlNet (HED). You lose some of the resemblance of the person, but if you overlay the original image at, say, 25% opacity, it looks good again.
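For that overlay step, nothing fancy is needed; a one-line blend in Pillow does the same thing as a 25%-opacity layer in an image editor. The file names here are placeholders:

```python
# Minimal sketch of the "overlay the original at ~25% opacity" trick above,
# using Pillow. File names are placeholders.
from PIL import Image

generated = Image.open("relit_output.png").convert("RGB")
original = Image.open("original_photo.png").convert("RGB").resize(generated.size)

# Image.blend(a, b, alpha) returns a*(1-alpha) + b*alpha, so alpha=0.25
# keeps 75% of the relit image and mixes in 25% of the original
composite = Image.blend(generated, original, alpha=0.25)
composite.save("composite.png")
```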
I love your videos! So informative and detailed, thank you! PROBLEM: when I try to generate after selecting the lighting PNG, it starts generating, I can see my image and the lighting effects starting to form, and then it just stops without actually generating an image? 🤔