Rule of thumb in any 3D package: everything except Diffuse, Albedo, or Color is RAW, Non-Color, or the equivalent. The only things that should ever be affected by any kind of color processing are things providing actual color information. Anything that is providing variables to a calculation, like a normal or roughness map, should remain unaltered by any color space conversion. So depth maps = Non-Color, RAW, or equivalent. This also applies to most game engines.
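If you're in Blender and want to enforce that rule in bulk, a rough bpy sketch could look like the below (the name tags are just a guess at a naming convention, not anything from the video):

```python
import bpy

# Flag anything that is data rather than colour as Non-Color so Blender
# skips the sRGB view transform on it. The tags are only an assumed
# naming convention - adjust them to however your textures are named.
DATA_TAGS = ("normal", "rough", "depth", "disp", "metal", "ao")

for img in bpy.data.images:
    if any(tag in img.name.lower() for tag in DATA_TAGS):
        img.colorspace_settings.name = "Non-Color"
```

Anything tagged Color/Albedo is left alone, so running it on an existing scene shouldn't touch your actual colour textures.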
Kind of funny how many programs have followed this exact route of launching for free and empowering smaller creators blablabla, then becoming a subscription-based tool acquired by industry giants and ditching the target fanbase that gave them all the data necessary to expand. So many off the top of my head!
As he's previously mentioned, the key to making greenscreen people shots look good is to put all the effort into the actual recording itself, rather than figuring it out later when the lighting and camera angles are basically baked into the footage. This could help in cases where you didn't exactly shoot something correctly and don't have the ability to reshoot. Resolve already has a built-in relight feature that does something similar, but for something directly integrated into Blender, having the actual data there could help dial in the look a bit better if it comes to having to do this.
@@jaym2267 Yeah, this is right. A big part of "bad greenscreen" is actually just bad on-set lighting, or lighting that is completely mismatched from the background. The other simple factor people forget is matching the black levels (the darkest thing in the footage against your CG). You should never see anything 100% pure black, yet render engines often make dark things overly black. Other things also lift the black level: for example, if a light is near a person and it causes diffusion or glow "over" the character, that means you need to add that back over the top of your CG as well.
I want to see people like Corridor Crew making something fun out of this. Thank you for sharing this with us, it looks amazing! Didn't think that AI could extract PBR maps for us.
Seen this a few times; I need to play around with it and give it a try. The main thing that always seems off with the result of these relights is that the skin always gets very flat. One technique I like is to do a frequency separation (blur & divide the footage against itself) and multiply the texture detail you get from that divided result against this lighting pass; you could probably get a more realistic result. The shininess/roughness texture pass this thing gives isn't 100% there yet, but with that trick I think it could get to a good result.
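For anyone who wants to try that blur-and-divide trick outside a node graph, here's a minimal sketch of the idea in Python with OpenCV/NumPy (file names and blur radius are placeholders, not anything from the video, and it assumes the relit pass matches the plate's resolution):

```python
import cv2
import numpy as np

# Original plate and the flat-looking relit pass (placeholder file names).
plate = cv2.imread("plate.png").astype(np.float32) / 255.0
relit = cv2.imread("relit_pass.png").astype(np.float32) / 255.0

# Frequency separation: the blur holds the low-frequency lighting/colour,
# and dividing the plate by it leaves the high-frequency skin texture around 1.0.
low = cv2.GaussianBlur(plate, (0, 0), sigmaX=8)
detail = plate / np.maximum(low, 1e-6)

# Multiply the recovered detail back over the relit pass to fight the flatness.
result = np.clip(relit * detail, 0.0, 1.0)
cv2.imwrite("relit_with_detail.png", (result * 255.0).astype(np.uint8))
```

The same blur/divide/multiply chain maps directly onto compositor nodes if you'd rather stay inside Blender or Resolve.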
I've been using it since it dropped. I had a VFX scene where I had to fake lightning flashing around the actor, and it turned out amazingly realistic after compositing.
So glad you covered this! I found out about this software from Joshua M Kerr’s video and joined the beta when it first launched. I even have videos on my channel that use this relighting method (street lights passing by characters on a highway, for instance). It’s been incredibly useful, and the development team has been really interested in improving this technology, so the more eyes on it the better!
Ian has his own style, and he's probably found the easiest way to add good lighting and detail to greenscreen characters. Last I saw, he's now using real 3D-scanned characters.
@@Ismailoff_eth Yeah, but here’s the thing: 3D-scanned people are pretty limited. They can handle basic stuff like idling or walking, but when it comes to main characters or anyone who’s front and center, 3D scans just don’t hold up. They end up looking fake, so you really need green screen footage in those cases. If you plan the scene right, you can match the CG lighting to what you used on set (or vice versa). But let’s be real, things don’t always go as planned. Whether it’s a tight deadline, missing gear, or just not knowing exactly how the scene will look, you might not get the lighting exactly as needed. That’s where this AI relight comes in: it’s a great backup for those moments when the lighting isn’t quite right. It’s not a replacement for the real workflow, but it’s nice to know you’ve got an alternative if things don’t go as expected.
There's no way that k for knife tool works in video editing view... it's so clever and I totally missed it up till now! I love watching blender videos and discovering more and more shortcuts! Can't wait for them to come back to shader nodes, shift+a, n, r is still ingrained in my muscle memory
I've been really fiddling with normal, specular, and roughness mapping for game development recently, and with great timing, this really cool bit of knowledge showed up!
The superimposed images remind me of the style of those old point and click adventure games after MYST that added (live action) characters that would walk out in frame and talk to you or do something (I used to play an old Goosebumps game where you were trapped in a haunted theme park that did this). This would be a neat way to bring back that kind of retro style, but with more flexibility with the environments you could put the characters in, and better visuals with the lighting of the scene affecting the actors, etc.
Hey, if you guys are looking for another way to generate depth & normal maps, you can actually run a local Stable Diffusion setup with a dedicated depth/normal model and process a whole image sequence that way.
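If anyone wants a concrete local starting point, a small sketch like this runs a plain monocular depth estimator over a folder of frames (the model name and paths are just examples, and it uses a dedicated depth model rather than Stable Diffusion itself):

```python
from pathlib import Path
from PIL import Image
from transformers import pipeline

# Any monocular depth model on the Hub works here; Intel/dpt-large is just one example.
depth = pipeline("depth-estimation", model="Intel/dpt-large")

out_dir = Path("depth_out")                           # placeholder output folder
out_dir.mkdir(exist_ok=True)

for frame in sorted(Path("frames").glob("*.png")):    # placeholder folder of extracted frames
    result = depth(Image.open(frame))
    result["depth"].save(out_dir / frame.name)         # "depth" is a PIL image of the estimated map
```

Export your footage as a PNG sequence first, then reassemble the depth frames in your compositor of choice.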
Non-Color just means no color transformation is applied. Pretty much everything is Non-Color unless it's a color texture. That doesn't mean you can't apply a transform for a certain effect, but with normal maps it would likely ruin the directions.
Just FYI, not sure how far you wanna go down the rabbit hole, but I believe these tools are using the same underlying tech I use in ComfyUI with ControlNets, which let you pull depth maps (and many other styles like lineart). It's a long rabbit hole to go down, but there's no need to pay for software, it all runs locally, and it can do all of those maps in one go, which makes it faster imo.
Lucky the free open beta is still around. A few months ago it sounded like the free access would be shut down, but for now there's still time to use it for free, as long as your GPU is strong enough...
Color and Non-Color (in engines like Unreal Engine it's referred to as Linear Color) is about gamma correction. Any texture that stores data (e.g. values outside 0-255 in a 32-bit HDR) you'll want as Non-Color. In the case of your Roughness texture, you want the value 'as is' without any adjustments, so Non-Color is correct here, same for Depth. Essentially it's treating the data from the texture 'as is' without doing any gamma correction.
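A quick worked example of why that matters: if a roughness texture gets mistakenly tagged as sRGB/Color, the engine linearises it before the shader sees it, and mid-grey values shift a lot.

```python
# Roughness authored as 0.5 in the texture.
raw = 0.5

# Standard sRGB-to-linear decode that gets applied to "Color" textures:
linearised = ((raw + 0.055) / 1.055) ** 2.4 if raw > 0.04045 else raw / 12.92

print(raw, round(linearised, 3))  # 0.5 vs ~0.214 - the material renders far glossier than intended
```

With Non-Color the shader just receives 0.5, which is what the texture author meant.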
FYI: Basically all of this is a repackaging of largely pre-existing open source tech, often even somewhat out of date. Like, the background removal appears to probably just be a variant of the older Segment Anything Model; depth estimation has a ton of research being done all the time with stuff like Marigold and Depth Anything; lighting/color extraction has a bunch of research-produced open-source models too, although this one _might_ be unique with their additional channels.
I'm pretty sure the tech behind SwitchLight is IC-Light by lllyasviel. That's completely open source, though I guess harder to get working than SwitchLight.
Most interesting thing about this whole video was duh-vinkee's Magic Mask working way faster than anything I've seen in AE, tbh. Never used it (I know it's the free software for tubers, but I'm unfortunately 15 years deep into AE brainrot), so hmm, this has piqued my interest. The AI lighting thing was OK; the end result looked very analog horror tbh.
There are free ML models for each of the tasks done in the vid (background isolation, PBR acquisition, depth/normal-map generation, ...). This SwitchLight thing has just gathered all those models into a single package. If anyone's concerned about privacy or anything, you can simply search out high-quality models (probably on Hugging Face) and do the steps yourself, all on your own machine, and achieve similar or even better results.
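As a rough illustration of how little glue code the background isolation step takes (package and folder names here are just common examples, not necessarily what SwitchLight uses internally):

```python
from pathlib import Path
from PIL import Image
from rembg import remove   # open-source background isolation, runs entirely on your machine

frames = Path("frames")    # placeholder folder of extracted frames
out = Path("cutouts")
out.mkdir(exist_ok=True)

for frame in sorted(frames.glob("*.png")):
    # remove() returns an RGBA cutout with the background masked out.
    remove(Image.open(frame)).save(out / frame.name)
```

Depth, normals, and albedo each have their own open models that drop into the same kind of loop, so nothing has to leave your machine.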
Amazing. So the issue you're having is: don't use normals and displacement together (hence the glitch), you just need one or the other. This is coming from Blender Guru, I'm just the messenger.
The only reason color spaces exist is to match real-world colors to the colors of a display, so any texture that wasn't captured in the real world should be treated as non-color data.
I'm still new to this. Great video, by the way. At 6:47, when you're in the settings and you click on Displacement and Bump, the Displacement dropdown doesn't show up for me. What am I doing wrong? Thanks.
From their Terms of Use: "3. Solely for the purposes of operating or improving Beeble, you grant us a non-exclusive, worldwide, royalty-free sublicensable, license, to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content. For example, we may sublicense our right to the Content to our service providers or to other users to allow Beeble to operate as intended, such as enabling you to share photos with others."
*COMPOSITING* *IT'S COMPOSITING* When you're compositing someone into another shot, whether it's green-screened or roto work, you're going to do your typical color correction passes. One big issue is light in general. You can make something have the same values as another shot, but you'll have to do extra work to make sure the light matches. In a normal production, where everyone has at least a _slight_ understanding of what they're doing, the shot that's going to be comped either has flat light for grading later, or an attempt to match the light of the shot it's being comped into. _This is not always a thing that happens._ So the alternative is having to do some very painful tweaks to the video. And even if it's lit flat, you're still going to do a ton of manual editing. Or do something like this, starting at around 20 seconds into the video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ZU_NOoug-hQ.htmlsi=qBatuYj3n3iCGVcl&t=20 That's a digital double being animated and comped onto Black Widow for lighting.
The app looks cool, but it needs a login, like Autodesk or Creative Cloud. I thought it was worth mentioning. Maybe someone could build an AI workflow in ComfyUI instead.