I'm Alex, a compositing supervisor who's been in the trenches of film and commercial projects for well over a decade now. This channel is all about sharing the tricks I've picked up along the way, plus some ideas and experiments I've been tinkering with. Hopefully what I share here helps you become a better artist.
Feel free to comment on any of my videos; I'm always eager to learn new ways of achieving the same effect. Thanks for watching!
Nice one! What about animated skies? You could use a Panoramic render to obtain a sequence of HDRIs (also not limited to 2K). Any good workflow to upscale a sequence while keeping the images consistent from frame to frame?
@wix001HD In theory you should be able to render an animated latlong; I don't see why not. The render itself would be temporally stable, but the Comfy output wouldn't be. Depending on the scene you could cheat some interpolation to make it more or less work, but it won't be perfect.
Excellent content! I've been exploring ComfyUI lately and can really see its potential, especially with the detail enhancement capabilities. I'm envisioning its use in CG animation - rendering basic scenes, adding details with ComfyUI, then using Copycat to finalize the shot. While the setup is complex and there's a lot to learn, the possibilities are exciting. I hope you continue making videos on this topic.
I was having a conversation with someone on Reddit about this approach and they shared something I think is a very good idea, so I figured I'd share it here: if you want to use this for lighting or light contribution, instead of match grading (which only takes you so far), you can use the original HDR from Unreal to light the scene and the ComfyUI output as your specular map to get those higher-detail reflections.
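Roughly, the split looks like this - just a toy Python/numpy sketch of the idea, not anything from the video; the function names are mine, and the single-sample diffuse lookup stands in for a proper irradiance integral:

```python
import numpy as np

def sample_latlong(env, direction):
    """Nearest-neighbour lookup into a lat-long (equirectangular) HDR array."""
    d = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(d[0], -d[2]) / (2.0 * np.pi)      # longitude -> U
    v = 0.5 - np.arcsin(np.clip(d[1], -1.0, 1.0)) / np.pi  # latitude  -> V
    h, w = env.shape[:2]
    return env[int(v * (h - 1)), int(u * (w - 1))]

def shade(normal, view_dir, albedo, diffuse_env, specular_env):
    """Diffuse lit by the original Unreal HDR, reflections taken from the ComfyUI upscale."""
    n = normal / np.linalg.norm(normal)
    diffuse = albedo * sample_latlong(diffuse_env, n)    # lighting from the original low-res HDR
    refl = view_dir - 2.0 * np.dot(view_dir, n) * n      # mirror reflection direction
    specular = sample_latlong(specular_env, refl)        # detail comes from the upscaled HDR
    return diffuse + specular
```

The point is just that the upscaled image only ever feeds the reflection lookup, so any AI-invented detail shows up in the speculars without shifting the overall lighting.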
Love these videos. Subscribed. In Comfy, use the shortcut Ctrl+Enter to start processing. In UE you can bring in your image plane and attach it to the camera using the Image Plate plugin. Cheers, b
Hey, thanks! I knew about the ComfyUI shortcut but don't use it, so people can see when I start processing a new image. I didn't know about the Image Plate plugin, thanks for the tip!
Neat video. Lol, pardon the pun. I just saw your other video on normal maps as well. I'm curious, what is Copycat doing differently that ComfyUI can't produce? Cheers, b
I must have installed something wrong: I am getting faces in my clouds and the edges are also turning into tree roots lolol. Or perhaps I just need to adjust my parameters?
Cool approach you demonstrated, thanks, it gives me ideas. In Fusion I tried to use their Relight AI normal map generator, which is very low resolution but stable over time, and Fusion's Create Normal Map from luminance, which gives details and is also stable. Then there is a macro to combine the two normals, and then you use the Shader node or Relight node with different settings for relighting. It's relatively fast, faster than this method, but not as good. Someone used SwitchLight AI from Beeble as a script in Fusion for relighting; again, I wasn't too impressed, because often the problem is the lack of materials for skin etc., so you end up with either a plastic-looking skin effect or not much highlight where it needs to be. I've also tried creating something that resembles a depth mask with frequency separation methods and then displacing that onto a 3D plane and relighting it in 3D. It works well on some objects, but people are not that great. In the end, as you commented, I've found I get decent results by simply using various light wrap combinations. It is the fastest and produces, relatively speaking, the best relighting results, rather than all the hoopla with AI. But what I am missing in Fusion is something like the Copycat node, to train it over time to do a custom relight job. This is about where I am at the moment with my methods in Fusion: Flying Robot Relight Test In Resolve Fusion ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-iiE2f5rSy1I.html With the Copycat that Nuke has, I think it could be a lot better. Cheers!
In theory, yes. However, I've yet to meet anyone who has made it work. I read somewhere there is a big Cattery update coming soon with some new models for us to use; I hope that update touches on other custom pretrained models. Keep in mind that Copycat has some pretrained models, which I'm going to assume are Cattery models. You can select from the ones available inside the node and keep training from there, so you could have your own "improved" one as an Inference node.
I'd refer you to the ComfyUI subreddit - there are some really helpful people there. I'm sure if you search there you will find the answer. Alternatively, watch one of the countless ComfyUI installation videos on YouTube.
Hi Alex, thanks for sharing. I use a different workaround; maybe you will find it interesting. If you use a Constant with a Transform as the BG input, the bbox from scaling the Constant via the Transform node survives the RayRender node and provides the overscan. Best, Tobi
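In Nuke Python the wiring would look something like this - a quick sketch of the trick as I understand it, and the RayRender bg input index is an assumption, so double-check it in your DAG:

```python
import nuke

const = nuke.nodes.Constant()                    # only here for its bounding box
overscan = nuke.nodes.Transform(inputs=[const])
overscan['scale'].setValue(1.2)                  # scaling the Constant grows the bbox, giving ~20% overscan

ray = nuke.toNode('RayRender1')                  # your existing RayRender node
ray.setInput(0, overscan)                        # feed it as the bg input (index assumed)
```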
The only downside with SUPIR is that it is really slow to get to the final output and takes a lot of VRAM, so unless you have a 24 GB GPU as a minimum you won't be able to use it. That makes it pretty much unusable for video/image sequences. Other than that, what it gives you is pretty awesome.
This is very interesting! I feel like this could also be used to fix flicker much better than DaVinci Resolve's deflicker. Say you have source footage but there is some jitter in your stylized footage between frames - could you train it in a similar fashion?
Potentially. I'm looking into relighting moving shots with Comfy (without normals) at the moment, but as always, temporal consistency is the killer. There are other ways of mitigating the chatter/noise along with Copycat. If I get a decent result, I'll share my findings.