I use Runway and Hedra for lip sync. The Runway team MUST take a look at Hedra because it's awesome. Runway's lip sync works better with an animated video of a person; it's too static with a single pic (you need a paid account). Hedra uses a single pic of a person and gives you back full body and facial animation with lip sync and AI voices (512x512 resolution, free account).
Thank you! It's genuinely exciting to see how much has changed in less than two years. I feel like Flux to Luma/Kling/Gen-3 to Lip Sync is such a good workflow right now.
Since people are facing forward, you could probably split-screen and intercut the different jury members so that more than four talk, by recording it twice (for now, cropping out the first four so the next four become the "new" first four). Then splice the shots together in the order you want, blending portions of the shots in a video editor or any program with compositing (perhaps something free like the basic DaVinci Resolve). When blending, maybe you can take reaction shots from different takes at different moments so everyone stays lively, feathering the edges between characters to blend the whole scene together and sell it as real. Very good tutorial! :)
@@aivideoschool Actually, I have an idea for you… Let me know if I should quickly post a zoom link for sometime today or tomorrow. I would delete it from the comments after that, but let me know if there would be a good time for a quick zoom with screen sharing:)…
This was a really well-developed video, and using Luma to tie it all into that one scene was next level. Luma uses Midjourney, which is the best image generator in my opinion.
@@aivideoschool Thanks! I took a meeting today with an app working on over the shoulder lip syncs, etc, so hopefully things continue to improve at a rapid pace 🤞🏼😊
I just uploaded a short test of me morphing into Tony Soprano and then using LivePortrait to animate the face exactly how I wanted. It works 10x better and looks more realistic. The only issue is that you have to track it to your face in After Effects or other software.
Looks like AI film tools are finally catching up... but they're still in the early stages. I assume in 3-6 months we will have more control over camera, facial animations, etc.
I didn't know about this feature, thanks for making this video. A question: can we upload our own voice, like a voice-cloning feature? Can we do that?
You can upload your own voice like I did at 00:08, but I believe you need to do the voice cloning in ElevenLabs. You are able to do voice-to-voice in Runway, so you could speak with your own inflections etc. and transfer it to sound like one of their voices.
Hi, I really appreciate your videos. I had a question about lip sync, if you can help me. Until a year ago I used an app (Reface) that had lip sync: you could talk over a video for a few seconds, making the people in the video say what you wanted. Some time ago they removed lip sync and I haven't found a viable alternative. Can you tell me if Runway Gen-3 can take my own video and give the characters in it their own lines to say? If Runway doesn't do this, can you tell me if there are any alternatives? Thanks for everything.
If it can't handle a slightly angled face like at 14:15, then it's not ready for filmmakers just yet, since the over-the-shoulder shot is the core of storytelling and this may not get that right. However, we're headed in a positive direction for sure. Another year or two and, damn.
Totally agree. A few months ago I tried an establishing shot with two people, then cut to each of them in an over-the-shoulder-style shot. Looking at it now, it's amazing how much better the lip sync is. But I think the over-the-shoulder shot might be one "cheat" for filmmaking with the current limitations. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-nL9UowfT0PE.htmlsi=p7T4JqO36_vFuYRh&t=557
@@aivideoschool I started following you some time ago with the same interest in making stories by leveraging AI, but it didn't take long for me to realize that the technology just wasn't there yet. So I turned in a completely different direction and decided to dive deep into learning 3D animation in the likes of iClone and Unreal Engine. My gamble was that by the time I master these tools, AI would have made enough advancements that I'll be able to combine the two to get perfectly controllable characters, environments, and camera angles while extracting the realism from generative AI. I have faith those two will meet soon, and by then I will have everything ready on my side 🚀
As good as it is, it still can't do animal faces talking. If it does that, then it will be something! Otherwise, Pika and Hedra are also really good at human face lip sync. I can't think of any scenario where more than two people will be talking in a single camera shot without a change in camera angle. It's definitely a good display of capability, but it doesn't have much practical application in filmmaking. Even if you are able to show multiple people talking in a single frame without switching cameras, it will look very unreal and mechanical. Only one scenario looks legit: two people walking and talking, facing the camera, in the same frame. Other than that, there's not much practical application. What's really needed is talking while turning the head, and animals talking.
100% to all of this. If you could control camera movement and have characters talking in profile, I could see how that might work for a scene with multiple speakers, but we're probably a year or two away from that. (Watch it come out in three weeks now that I said that)
Hi, I love your video, but listening to your narration is a bit distracting since your voice seems unbalanced, coming out more on the right side of the headphones. Other than that, please keep up the great work 👍
Thanks for pointing this out, I didn't notice when I was editing on my laptop speakers but I just looked in CapCut and the left channel is a little lower than right. This was the first audio I recorded with a Vocaster I recently got so it's probably a setting or dial I need to adjust there. Thanks for the heads up, I'll keep an eye out on the next one!
Heyy brother, love from India ❤❤ Can you please make a video on how to make an alien like the one in this video's thumbnail using any free AI tool? I want to make an alien exactly like yours, please make a tutorial on it.
Hello, I'm glad this video has a fan in India! I used Flux for the image generation. Here is the exact prompt I used for that image: "cinematic medium close up of a bearded mechanic and a female extraterrestrial humanoid alien wearing a demin jacket walking down a road side by side, in the background is the bar they just left, their faces are lit by a street lamp they're passing under" This is probably the part you're looking for to get similar characters: "a female extraterrestrial humanoid alien" Here is more information about Flux, which is free if you have the right hardware: github.com/black-forest-labs/flux
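For anyone who wants to run that exact prompt locally, here's a minimal sketch using Hugging Face's diffusers library with the FLUX.1-schnell checkpoint (the free, Apache-licensed variant). This assumes you have a GPU with enough VRAM and `diffusers` plus `torch` installed; the step count and offload setting are just reasonable defaults for schnell, not anything from the video:

```python
# Hedged sketch: generating the alien image locally with Flux via diffusers.
# The prompt below is reproduced verbatim from the comment above.
PROMPT = (
    "cinematic medium close up of a bearded mechanic and a female "
    "extraterrestrial humanoid alien wearing a demin jacket walking down a "
    "road side by side, in the background is the bar they just left, their "
    "faces are lit by a street lamp they're passing under"
)

def generate(prompt: str = PROMPT, out_path: str = "alien.png") -> str:
    # Heavy imports live inside the function so the prompt can be reused
    # or inspected without the GPU dependencies installed.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use
    # schnell is distilled for few-step generation with no guidance.
    image = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
    image.save(out_path)
    return out_path
```

To get a similar character of your own, keep the phrase "a female extraterrestrial humanoid alien" and swap out the rest of the scene description.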
Runway's lip sync is absolute garbage; they really need to invest more in improving it, because Gen-3 is so damn good. It's a shame the lip-syncing aspect is very lacking and definitely not ready for anything that looks passable. Still images look extremely robotic; let's not pretend that it looks good. Hedra is miles ahead of them. If Hedra can release lip sync on video, it would be awesome.