Are you ready for an extraordinary journey into the world of AI and graphics software? Whether you are new to AI and graphics software or want to improve your skills as a professional, you are in the right place. "CgTopTips" shares a wide range of tutorials, from beginner level to advanced, on our up-to-date YouTube channel. Our team provides a great number of highly detailed videos covering the fundamentals. New videos are coming every day, so don't forget to check our channel :) openart.ai/workflows/@cgtips facebook.com/cgtoptips twitter.com/cgtoptips cg.top.tips@gmail.com #CgTopTips
Great video. BRIA RMBG is fantastic for removing backgrounds from pictures. When I applied it to my video, though, I got a lot of flickering: the background flickered on and off, and so did parts of my main subject. Do you have any tips? Thank you
Great, and now for the main question: how do you add an appropriate background (not a static one)? Say there is a car driving on a track; you remove the background and add a track with the same angles and viewpoints, but a different landscape, road surface, etc. Thanks!
To achieve this, you need to combine ComfyUI with a program like After Effects. ComfyUI has not yet been optimized for video generation and, unfortunately, can only handle simpler tasks for now.
Good video. Your workflow was a little broken on OpenArt: the Pos and Neg nodes from the ControlNet group must connect to the KSampler in the Output group. Your video itself was correct. Also, "Faceswap" is the wrong title for this video; this is a body-pose video. A face swap means you would take the source picture's face (the Indian girl) and put it onto the target picture (the standing lady with the jumper), while everything else in the target picture remains exactly the same.
Thank you for your comment. In this video, the target face is the face we generated using a prompt (instead of loading another photo), and the source is the Indian girl's face, which we take and place onto the prompt-generated face.
@CgTopTips 100% agree with you. That IS what you are doing :) I am only saying that this is not really called a "face swap", as per my earlier comment. I'd be really keen to see how you do an actual face swap: take the Indian girl's face and use it to replace the face in another picture. You've done some really good work so far. Well done, and thank you
Cool video, everything very clear. But my batch prompt node looks very different; it doesn't have a pre_text input. Also, I couldn't find the tutorial about the AnimateDiff models.
Your tutorials are very good because you build everything from the ground up with a lot of patience, and along the way we come to understand the connections between the nodes and their concepts. Thanks for the video; please continue.
Connect a Fast Face Swap (ReActor) node to the VAE Decode and set the codeformer_weight value to 1. This way, the face will be approximately 99% similar to your original photo.
Of course, it doesn't matter whose photo it is; the body parts just need to be clearly visible. For example, it's difficult for the program to recognize the posture of a woman wearing a skirt, but easier if she is wearing jeans.
This is freaking awesome. I love that you can control the whole 3D pose in real time inside ComfyUI. Previously I would go to a website to get the pose and bring it back into ComfyUI via a Load Image node. I love that it's all in the workflow.
Always consider the following points:
1. Ensure all custom nodes are correctly installed as per the video instructions.
2. Make sure ComfyUI and the custom nodes are updated.
3. Have all the necessary models for the custom nodes on your computer.
4. Always start exactly as shown in the instructional video before adjusting the settings to your preference.
If you want, you can send me your workflow so I can check it on my computer.
Error occurred when executing Hina.PoseEditor3D: [Errno 2] No such file or directory: '/data/app/aigc-worker-v3/temp/3dposeeditor/OpenPoseEditor_0_pose.png'
The type of CPU doesn't matter; the type of GPU is what's important. You can, of course, run the program using only the CPU, but generating an image will take a long time. If you are a beginner, I suggest practicing with Salt AI, which provides free online GPU access. The tutorial is available on our YouTube channel.
I just sent you an email regarding an error I encountered with your workflow, along with a question about controlling light direction. Could you please check your inbox? Thank you!
Please check your email. I have sent an image that shows you how to adjust the light direction using gradients. Also, please send your workflow along with the materials so that I can check it on my computer.
Try using an image negative. I think it helps avoid generating the fire in the image while keeping the fire's composition. I can't use ComfyUI at the moment to confirm it, but I am sure that's one way image negatives work.
Hi! I've done this a couple of times, but on this specific object I'm working on, the "Bake Sound to F-Curve" option is greyed out. Any idea how to fix this? I have another object in the same scene reacting to audio and working fine!
Error occurred when executing ADE_AnimateDiffLoaderGen1: Motion module 'temporaldiff-v1-animatediff.ckpt' is intended for SD1.5 models, but the provided model is type SDXL.