
MetaHuman Animator side-by-side with ARKit/LiveLinkFace, including new voice changer

Greg Corson
6K subscribers
4K views

Published: 27 Oct 2024

Comments: 44
@thejetshowlive 9 months ago
Greg, you are the unsung Patron Saint of VP.
@99SBX 6 months ago
Thanks for the great comparison! Very informative.
@digitalpanther 7 months ago
This is hella cool. The only issue I see is that the neck collides and clips through the collar of her shirt in MH Animator. But aside from that it's really, really cool. Thank you for sharing this demo! I can't wait to try them out.
@GregCorson 7 months ago
Actually, that clipping only happens because only the head is being animated here. If the whole character were being animated, the shirt wouldn't clip; it would follow the head.
@adorablemarwan 9 months ago
Great video, Greg; we are crossing the valley in solid steps.
@williamreid-u7z 9 months ago
Nice work! Thanks man!
@jubbadub 1 month ago
Thanks for the detailed comparison. I'm gonna guess Rachel Weisz?
@GregCorson 2 days ago
Nope, wrong guess. Nobody has got it yet. Basically it shows that voice-to-voice AI gets the tone of the voice correct, but unless the input speaker can imitate the accent and speech mannerisms of the output speaker, it's not really a copy.
@Tenchinu 9 months ago
Goddamn, it looks so good. Please, if you can, do the full tutorial. It looks and sounds amazing!
@elartistadelosfamosos 9 months ago
Yes, please, a full tutorial. That would be awesome. Thanks in advance.
@michaeleissenberg637 9 months ago
That's really cool.
@DdawgRealFX 3 months ago
Hi Greg… is a depth camera on one of the newer iPads or iPhones required to use MHA?
@GregCorson 3 months ago
@@DdawgRealFX I think it is. I have been using an iPhone XS Max, which works until it overheats. Getting a phone cooler will fix that.
@DdawgRealFX 3 months ago
Understood. Was facial calibration required in MetaHuman Identity, or was that pretty much out of the box from capture? Ty.
@GregCorson 3 months ago
@@DdawgRealFX If you use MHA you need to create a MetaHuman Identity for the face, but you can keep using that identity unless the actor being captured changes.
@DdawgRealFX 3 months ago
Awesome. Thanks very much.
@DdawgRealFX 3 months ago
Amazing demo. Even in freeze frame it's hard to decipher the difference. Nice work.
@auto_Silencer 8 months ago
We (or maybe just I) would love to see the pipeline of how you did the voice changer and then implemented it in Unreal Engine. It would make a complete tutorial.
@GregCorson 8 months ago
There is not much of a pipeline, really. When I recorded the animation, the iPhone also recorded video and audio. You can place the audio track into Sequencer and render out the whole thing with sound to get a video with your own voice.

Once you have installed RVC from the link in the video, you choose a voice model and an input audio file, then just hit the big convert button and listen to the voice it produces. There is a field in the browser interface where you can shift the pitch of the voice up or down until it sounds right; for example, if you are a man with a deeper voice converting to a woman, you may have to up-shift 6 or 12 semitones to make it sound correct. Now you can run the audio track from your video through RVC to convert it.

Finally, you load your original movie (with sound) into your favorite video editor (I used DaVinci Resolve) and load the RVC soundtrack as an extra track. You can then sync the RVC track to the original audio, turn off the original audio, turn on the RVC track, and output your video. This may be simpler than you thought, because RVC preserves the timing of the original audio, so the two will match exactly; only the tone and overall sound change.

For a voice to "sound right" you need to be able to imitate the way the original person talks and their accent. For example, if I use an Arnold Schwarzenegger voice I won't sound anything like him unless I try to imitate his accent and timing. I think this is why nobody recognizes the actress Amanda's voice is based on: that actress has a noticeable European accent that I didn't try to imitate. I will try to do a tutorial later on; there are also a lot of other people on YouTube with RVC tutorials.
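[Editor's note] The audio half of the workflow Greg describes above can also be scripted. Below is a minimal sketch in Python that drives ffmpeg to pull the audio out of a rendered video and later remux the RVC-converted track back in. It assumes ffmpeg is installed and on the PATH; all file names are hypothetical, and the RVC conversion itself still happens in RVC's own interface.

```python
# A minimal sketch of the extract/remux steps, assuming ffmpeg is
# installed and on the PATH. File names are hypothetical; the RVC
# conversion itself still happens in RVC's own UI.
import subprocess

# 1. Extract the original audio from the rendered video so RVC can convert it.
subprocess.run([
    "ffmpeg", "-y", "-i", "render_with_my_voice.mp4",
    "-vn", "-acodec", "pcm_s16le", "original_audio.wav",
], check=True)

# ... convert original_audio.wav -> converted_audio.wav in RVC ...

# 2. Remux: keep the video stream, swap in the RVC-converted audio.
#    RVC preserves the original timing, so no manual sync is needed.
subprocess.run([
    "ffmpeg", "-y",
    "-i", "render_with_my_voice.mp4", "-i", "converted_audio.wav",
    "-map", "0:v", "-map", "1:a", "-c:v", "copy", "-shortest",
    "final_with_ai_voice.mp4",
], check=True)
```

Because RVC preserves timing, the remux step needs no manual alignment; the editor-based sync Greg describes is only needed if you prefer to work in something like DaVinci Resolve.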
@arkemal 9 months ago
Great job! How did you add the voice on top of the MetaHuman animation? (I don't need an AI voice; my own voice is sufficient.)
@GregCorson 8 months ago
In this video, the voice was recorded by LiveLinkFace (Animator mode) and exported through MetaHuman Animator. After all the animations were done, I just added the audio from LiveLinkFace into the sequence and rendered out the movie. Audio usually gets exported as a separate file when you render, so I used DaVinci Resolve to add the original audio to the video or to substitute the AI audio version. The only problem is that LiveLinkFace isn't all that good at recording audio; I ended up plugging in a microphone with a gain adjustment, which worked much better.
@mariorodriguez8627 9 months ago
Great work!! Please do a tutorial.
@brettcameratraveler 9 months ago
I've been trying to find a way to take the files used for LiveLink Face and reuse them later in MetaHuman Animator. The goal is mocap that is real-time at first and then higher quality later. The best of both worlds.
@GregCorson 9 months ago
I don't think this is possible; I believe when you are using Live Link Face in ARKit mode it's not recording all the depth data that is required for MetaHuman Animator to work. To do this I literally had two iPhones stacked up, one running Animator and the other running ARKit. I'm not sure if you can even run the iPhone's depth camera and the ARKit face tracker simultaneously; it might be possible, but since both use the same physical camera... not sure. Actually, unless you really need a real-time preview, I think you can just use MetaHuman Animator. On a fast PC it doesn't take long to process a take and produce an animation. The main thing that takes time is creating the "MetaHuman DNA" for the actor, but you only have to do that once per actor.
@brettcameratraveler 9 months ago
@GregCorson It was my understanding that LiveLink was the only option that used the lidar, and that MetaHuman Animator instead only required the known camera/lens combo found in iPhone models 10 and up. If that's true, then it means we might be able to use the recorded video from LiveLink to later drive MetaHuman Animator. I would like it to be real-time for live demo purposes.
@GregCorson 9 months ago
Honestly I'm not sure; I believe there is additional data besides just the video that is sent over for MetaHuman Animator, but I'd have to check. I'm pretty sure some of the data is based on depth, because if you want to use MetaHuman Animator with a normal video camera it needs to be a stereoscopic one.
@trinpictures 9 months ago
@@GregCorson I think Animator uses sound somehow to drive the animation, whereas LiveLink doesn't, if I'm not mistaken. So that may play into it also.
@GregCorson 6 months ago
Yes, I have heard MetaHuman Animator uses sound to help with lip sync, though I haven't found that documented anywhere.
@coderaven1107 9 months ago
I like the left one more. Now I just wonder which one it is :D
@GregCorson 9 months ago
Left is MetaHuman Animator, right is the older ARKit. There is a caption at the start of the video; maybe I made it go away too quickly.
@richardchipstick235 9 months ago
Would so appreciate a tutorial on the voice changer, please. I just downloaded it (I think I just downloaded it) and extracted the zip file; that was a scary moment. Thank you.
@GregCorson 9 months ago
Did you try to use it yet? Any luck? It should come with a few sample voices; you can run it on any .wav file to test it out.
@TheXarxus 9 months ago
Does the LiveLink one have better rendering on the hair than the Animator one?
@GregCorson 9 months ago
Both heads were rendered at the same time from the same model; the only difference should be the animation. Because they are side by side, there is a slight camera-angle difference and a slight difference in the lighting angles, since they are both lit by the same lights. I can see a bit of a difference down near her neck; I'm not sure where that's coming from unless I somehow didn't set the level-of-detail override correctly, and I will have to check. The camera is so close that they really should both be running max detail no matter what, though.
@original9vp 9 months ago
V nice @greg
@RairAffair 1 month ago
Thanks, I was searching for a comparison in the direction of Blender. Now I see that it will not work the way I wanted... the ARKit version is just worse. Too symmetrical everywhere.
@GregCorson 1 day ago
MetaHuman Animator can be very expressive. The ARKit face tracker is not bad, but it has a limited number of variables to work with (around 40, I think). It can still be very useful as a starting point for animations: get the basic animation with ARKit and then use something like Blender to tweak it.
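[Editor's note] Greg's suggestion of using the ARKit output as a starting point and then tweaking it can be sketched in a few lines. LiveLink Face saves a CSV of blendshape values with each take; the exact column layout assumed here (one row per frame, one numeric column per blendshape, plus a timecode column), the file names, and the gain factor are all illustrative, so treat this as a rough starting point rather than a drop-in tool.

```python
# A rough sketch: read ARKit blendshape curves from a LiveLink Face take
# CSV (assumed layout: one row per frame, one numeric column per
# blendshape, plus a timecode column) and exaggerate them slightly
# before hand tweaking. GAIN and the file names are illustrative only.
import csv

GAIN = 1.25  # exaggerate expressions by 25%; tune to taste

with open("take.csv", newline="") as src, \
     open("take_tweaked.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        for name, value in row.items():
            try:
                v = float(value)
            except ValueError:
                continue  # skip non-numeric columns such as timecode
            # ARKit coefficients are in [0, 1]; clamp after scaling.
            row[name] = f"{min(max(v * GAIN, 0.0), 1.0):.4f}"
        writer.writerow(row)
```

The same loop is a natural place to break up the symmetry mentioned above, for example by applying slightly different gains to the Left and Right members of each blendshape pair.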
@chelo111 7 months ago
Please bro, we need that tutorial ASAP.
@GregCorson 6 months ago
Working on some new stuff now; a lot has been going on here in the last month. I'll post an update soon.
@scrubspike 9 months ago
It sounds like Jennifer Hale.
@GregCorson 9 months ago
Nope, not Jennifer.
@robertomachado7581 3 months ago
Great tutorial! ... "MetaHuman Animator" should be called "MetaHuman face animator." MetaHumans' bodies suck.
@3DcgGuru 7 months ago
Angelina Jolie?
@GregCorson 6 months ago
Nope, it's not Angelina.