Hey! Very good and clear tutorials, thanks a lot for them! Can you please share your approach to recording a video with a MetaHuman like the one above, and also how you record the audio and synchronize it with the video? Many thanks in advance!
For most recordings I use Take Recorder in Unreal, which captures the mocap, audio, and other elements. Then you can go back into Sequencer and adjust each element individually: camera motion, delaying the audio to sync it up, etc. If you record the audio in Unreal it usually comes out in sync, or pretty close. I also include a sync marker/slate in the performance to help line things up. Saying something like "pop" works well as a mark because the character's mouth opens in sync with the pop sound on the audio track. An alternative is to just perform it live and use something like OBS to record the Unreal window as you go. OBS lets you delay the audio track if it's out of sync. As a side bonus, you can live stream the performance or send it out through OBS's virtual camera, so your character can join a Teams/Zoom conference.
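If you want to find the exact offset instead of nudging the audio by eye, the "pop" marker can be located programmatically. Here's a minimal sketch (not part of the workflow above, just an illustration) that uses NumPy cross-correlation on synthetic click tracks; in practice you'd load the recorded audio and a reference track instead:

```python
import numpy as np

RATE = 48000                       # samples per second (48 kHz is common for video)

# Synthetic stand-ins: one second of silence with a short "pop" burst.
reference = np.zeros(RATE)
reference[10000:10050] = 1.0       # pop in the reference track

recorded = np.zeros(RATE)
recorded[12400:12450] = 1.0        # same pop, arriving 2400 samples later

# Cross-correlate the two tracks; the peak index gives the lag
# between them, i.e. how far the recorded audio trails the reference.
corr = np.correlate(recorded, reference, mode="full")
lag = int(corr.argmax()) - (len(reference) - 1)

print(f"offset: {lag} samples = {lag / RATE * 1000:.1f} ms")
# → offset: 2400 samples = 50.0 ms
```

You'd then type that millisecond value straight into the OBS audio sync offset, or slide the audio track by that amount in Sequencer.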
The info on updating is good, but honestly the enunciation and lip movement on the MetaHuman here look about 4/10. It looks like a ton of cleanup would still be needed coming from an iPhone Live Link setup.
First off, mocap is never perfect; no matter how fancy your setup, some manual cleanup is almost always needed. The point here is that it's MUCH better than it was before the calibration feature was added! Also, I recorded this in the UE5 preview because it makes the character and lighting look better than the older versions, though looking at it I'm not sure the facial mocap comes through as well. I'm running a test today in 4.26 to see if the face motion looks any different.