Is it possible to change the lighting without changing the main subject in the video? It creates too many deformities and isn't really usable for professional work.
Unfortunately, it re-renders the video from scratch with AnimateDiff, which introduces the usual AI artifacts like morphing, bugged faces, deformities, etc. This workflow might not be a good fit for professional projects yet.
"Rebatch" doesn't work when loading long videos. "Load Video VHS" still loads all frames into RAM and then runs out of memory. I have tried "Meta Batch Manager" with "Load Video VHS" and "Video Combine VHS", but it only generated discontinuous scenes. By the way, I have 32 GB of RAM, which can only load 20-24 frames to process. I'm still figuring out how to generate long videos.
Hey, you have to follow the video from 7:43 to extract frames. If you are still facing RAM issues while extracting the passes, you can use the passes exporter workflow from here: drive.google.com/drive/folders/1hLU5MhikUe6SnEnEPQc3tKTaNGmFT6p2 and how it works is explained here: www.patreon.com/posts/v4-0-controlnet-98846295 Extract the passes you need for the IC-Light batch workflow, which are depth, mask, and frames. Then follow the video as normal from 11:00.
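Conceptually, what the batch manager approach does is stream frames in fixed-size batches so only one batch sits in RAM at a time, instead of loading the whole clip. Each batch is then denoised independently, which is also why batches can look discontinuous. This is just a minimal Python sketch of the idea (the function name is made up, not the node's actual code):

```python
from typing import Iterable, Iterator, List


def batched(frames: Iterable, batch_size: int) -> Iterator[List]:
    """Yield fixed-size batches of frames so only one batch is in memory
    at a time. The final batch may be shorter than batch_size."""
    batch: List = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # leftover frames at the end of the clip
        yield batch


# e.g. a 25-frame clip with a load cap of 10 becomes batches of 10, 10, 5
```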
@@Fucatstory Hey, please check whether all the linked models are there, and that only SD 1.5 models are used. If the problem isn't solved, please contact me on Discord (ID: jerrydavos) and I'll help you from there.
FaceFix doesn't change the scene much... you can try changing the "Depth" ControlNet model and its preprocessing node to the LineArt ControlNet model and LineArt preprocessor, and play with the strength and end percent; maybe that can help in your situation.
It's so cool. However, the IC Raw KSampler is throwing an error: "KSamplerAdvanced: The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". How can I solve it?
The light map must have the same number of frames as the source video, or more.
Example 1: Source video = 5 seconds, light map video = 1 second. Result: error - "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0".
Example 2: Source video = 5 seconds, light map video = 5 seconds. Result: successful render.
Hope this makes it clear.
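In code terms, the rule above is just a frame-count comparison, so you can check it before queueing instead of waiting for the mid-render tensor error. A hypothetical sketch (function names are mine, not from the workflow; the OpenCV call is only used to count frames):

```python
def frame_count(path: str) -> int:
    """Return the number of frames in a video file (requires opencv-python)."""
    import cv2
    cap = cv2.VideoCapture(path)
    n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    return n


def check_lightmap(source_frames: int, lightmap_frames: int) -> None:
    """Fail early with a readable message instead of the tensor-size error."""
    if lightmap_frames < source_frames:
        raise ValueError(
            f"Light map has {lightmap_frames} frames but the source has "
            f"{source_frames}; make the light map at least as long."
        )


# e.g. check_lightmap(frame_count("source.mp4"), frame_count("lightmap.mp4"))
```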
@@jerrydavos I am using the source files you provided, helenpeng.mp4 and LightMap.mp4. Both are 20 seconds. Do I need to set frame_load_cap to zero?
Hi, when I started rendering, ComfyUI showed me an error message saying "The size of tensor a (20) must match the size of tensor b (10) at non-singleton dimension 0". I used ChatGPT to fix it, and GPT kept trying to patch the execution.py code, which didn't work at all. Have you had this kind of issue before? If you know how to fix it, I would really appreciate it. Thanks for sharing.
Make the same video without the lighting. It's nothing but massive problems. Also, why is ComfyUI Impact Pack so buggy? The newest version just refuses to work.
You will need to change the scheduler and sampler steps... it's a bit experimental. I have not tried it myself, but others in the community have also successfully used LCM in this workflow.
@@jerrydavos Okay. Because I have low VRAM, I decided to render 10 frames at a time, but the background isn't the same across each 10-frame batch. How do I get a consistent background?
The load cap is set to 10 frames in the Load Video node... increase it to as much as you need, or set it to 0 to render all frames. The light map video should also be the same length or longer, otherwise it will give an error.
By the way, I was trying to figure out how to decrease the level of stylization so my character would look closer to the original, but I really couldn't; forgive my newbieness 😅 Could you please share a hint?
It can be made using simple shapes animated in After Effects. Alternatively, you can search for "contrasting" geometric pattern animation videos on stock websites like Shutterstock, Getty Images, Pexels, Pixabay, etc. I've also included some sample light maps in the workflow link folder here: drive.google.com/drive/folders/1bFfBs8mkN1HLtT1Xy6wsuOV4jl2WqiO4
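If you'd rather generate a light map procedurally instead of using After Effects or stock footage, here is a hypothetical NumPy sketch (not part of the workflow) that renders greyscale frames with a bright band sweeping left to right; encoding the frames to a video with VHS Video Combine or ffmpeg is left out:

```python
import numpy as np


def sweep_lightmap(width: int = 512, height: int = 512, num_frames: int = 20):
    """Return greyscale uint8 frames where a soft bright band sweeps
    from the left edge to the right edge over the course of the clip."""
    frames = []
    xs = np.arange(width)
    for i in range(num_frames):
        # Band center moves linearly from x=0 to x=width across the frames
        center = (i / max(num_frames - 1, 1)) * width
        # Gaussian falloff around the center gives a soft light edge
        band = np.exp(-((xs - center) ** 2) / (2 * (width / 8) ** 2))
        row = (band * 255).astype(np.uint8)
        frames.append(np.tile(row, (height, 1)))  # repeat the row vertically
    return frames
```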
Where can I find the original video of her? Thank you! Also, this might help everyone - I succeeded after this fix. Issue news: [SAMLoader#2] The issue where the SAMLoader of the ComfyUI-YOLO node conflicted with ComfyUI-Impact-Pack has been patched. Please update ComfyUI-YOLO to the latest version.
Hey, I've also mentioned the sources in the description... thanks. Here are the links: 1) www.tiktok.com/@monominjii 2) instagram.com/reel/C3FyWgYIc_x/ 3) www.youtube.com/@HelenPeng 4) instagram.com/p/C4Lih8DIhBq/ 5) instagram.com/reel/C19CswgrLD3/ Some are unknown...
Help! First off, thank you so much for the tutorial. I can tell you put a lot of effort into not only the project itself, but also the resources for sharing this with us. I got everything set up and working correctly and ran a few quick generations to make sure all the models were installed. I then updated my ControlNet custom nodes and now, even when I revert to your original workflow, I get the error: "Error occurred when executing ACN_AdvancedControlNetApply: ControlBase.set_cond_hint() takes from 2 to 4 positional arguments but 5 were given". Any ideas? Thanks!
Hey, I updated all my nodes to check whether any errors come up, but it's working fine on mine. Check:
1) Only SD 1.5 models are used in the CN model loaders... SDXL ControlNets can cause this.
2) The CLIP Text Encode nodes and ControlNet nodes are linked properly, with no floating nodes... due to some bug they can get corrupted. Download the original workflow again and test.
3) Disconnect the optional mask input from BOTH ControlNets and test. If this fixes it, the masks are not being created properly.
4) Replace the SMZ CLIP Text Encode++ nodes with the normal default CLIP Text Encode... then check.
Hopefully the above helps!
Hi, I would love to make this workflow work for me, but I have a couple of problems. The output is heavily altered and looks really trippy when I simply input a video with your settings, disable all LoRAs at the start, and press queue. There are no errors, but the output is nothing at all like the source footage. Also, with the load cap set to 10, it outputs only 5 frames?
1) Make sure the light map also has the same number of frames as the source video, or more. 2) Check that skip frames is 0 if you want to start the render from the beginning. As for the trippy part... this workflow re-renders the frames from scratch using the AI models, so the usual AI artifacts like bugged hands and faces will surely appear in the output.
@@jerrydavos Hey, thanks for the reply. It seems the weirdness came from the Upscale Image node for the stationary light map not being set to crop. The only thing I'd add for the future: I'd recommend implementing keyboard shortcuts for the groups so you don't have to scroll through them every time. But big thank you, man!
@@jerrydavos You are a hero for doing this with 8 GB of VRAM. I'm on a similar setup and appreciate that this workflow can run on a low-VRAM GPU, and that you also point out which settings help on low VRAM. Keep it up!