This is really cool to see. I've been working on a very similar pipeline, but running everything locally on my machine with Whisper, Mistral 7B, and Edge-TTS with RVC. I came across this while looking to see whether anyone else had already done Audio2Face in a similar fashion.
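For anyone curious what that local setup looks like, here is a minimal sketch of one conversational turn: speech-to-text, then an LLM reply, then speech synthesis. The three stage functions are hypothetical stand-ins I wrote for illustration; in a real setup they would call Whisper, a local Mistral 7B, and Edge-TTS (plus RVC voice conversion) respectively.

```python
# Hypothetical sketch of a local voice-assistant loop:
# STT (e.g. Whisper) -> LLM reply (e.g. Mistral 7B) -> TTS (e.g. Edge-TTS + RVC).
# Each stage below is a stub; swap in real model calls for an actual pipeline.

def transcribe(audio_path: str) -> str:
    # Stand-in for Whisper, e.g.:
    # whisper.load_model("base").transcribe(audio_path)["text"]
    return "hello there"

def generate_reply(prompt: str) -> str:
    # Stand-in for a local Mistral 7B call (e.g. via llama.cpp or Ollama).
    return f"You said: {prompt}"

def synthesize(text: str) -> bytes:
    # Stand-in for Edge-TTS producing audio bytes,
    # optionally post-processed with RVC for a custom voice.
    return text.encode("utf-8")

def run_pipeline(audio_path: str) -> bytes:
    """One turn of the assistant: listen, think, speak."""
    text = transcribe(audio_path)
    reply = generate_reply(text)
    return synthesize(reply)
```

The resulting audio bytes would then be what gets streamed into Audio2Face to drive the character's blendshapes.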
Amazing! Congrats on the pipeline! I was trying to do something similar but couldn't manage to get the emotions onto the audio stream. How did you do it? Do you pass a fixed audio clip and then save the blendshapes to the MetaHuman character at each iteration?
Hi, Oleg. Thanks for the great video. I'm sure you spent dozens of hours implementing this project. I currently have a roughly similar task, and of course a bunch of questions at the start of this journey. Is there any chance we could talk so I can learn from your experience?
Hi, friend. I couldn't message you directly. I really liked your code on GitHub. I work in a library and wanted to create a virtual companion for visitors. I tried your code, but it looks like it is outdated. Could you please update it, or help me with two errors? Thank you, friend.