Inspiring as always! I'm on a long path to get there, and life has a way of adding detours, but I'm not giving up on the dream. Still following all the great work you do, Matt!
Hey, great demos of mocap. When I worked at Microsoft back in 1995 we had some of the very first magnetic mocap systems, attached to Silicon Graphics workstations running Softimage 3D. I had all these 5 ft wires attached to mocap boxes; it was messy, and you could only capture 4 or 8 points per box. It used Velcro straps to attach the wire points to your wrists, arms, and legs, and you had to stay far away from desks or metal for it to track well. Mocap sure has come a long way. It would be interesting if you could try to mocap crawling on your belly through mud, dirt, and grass with barbed wire above you, like a soldier in training. It would also be interesting to tape cardboard sides onto the cube (a large fridge box cut up with doors that could open and close), put the marker on the cardboard door, and use it to mimic a car side for opening and closing a car door, then sit in the car for dialogue. You could add more boxes for 4 avatars sitting in the car with you. Love your videos, thanks.
@@CinematographyDatabase I don't think the LiDAR on the phone would give you high enough resolution for a good result, but photogrammetry would work well. I used Meshroom for these kinds of things; it's free and pretty easy to use.
It's amazing! Something I'd love to see is weight added to your weapons, because right now it seems like the MetaHuman is handling toys. I'd love to see it feel their weight! Thanks for sharing your tests with us :)
Yeah, I was considering getting a gun that has simulated recoil, but they are a little too realistic to be around my kids; they're for police training, etc. I think an actually heavy omega hammer would be hilarious.
@@CinematographyDatabase Yeah, it would be! For the weapons you could strap on some sandbags if you only want weight. Recoil could be great, but if the facial expression or the rest of the body doesn't react like it would for a real weapon, it wouldn't work. Thanks for the tip regarding finger tracking!
I let this channel kind of die a long time ago, moved mainly to Instagram, and prioritized short-form content. But for very technical subjects like this, I want to do longer-form videos again.
To get a better chair model... probably a free iPhone 12 photogrammetry app would give you an excellent reference 3D model to start with, maybe even something worth cleaning up and using straight away.
This is just amazing! My company will probably get a Vicon system, and I will have to do all this stuff to make it work with Unreal for live performances. I might need a little help retargeting everything correctly. Where did you get the information necessary for this? Amazing video!
Without trying to add too much to your workload: would you be able to document or talk a little more about the issues/complications you ran into? I know it's a bit odd, but I feel like it's almost more beneficial to know what to avoid lol
Very perceptive. The shoulders are a trade-off when I want positional accuracy on the hands; sometimes the hands will pull down the shoulders, which is something I can start to address with the retargeting. The traps and neck are also a bit distorted from the retarget/calibration. The third major problem is the elbows when I face my palms up. ALSO, my heels aren't always touching the ground correctly. All of these little things take time to sort out between marker placement, calibration, and retargeting. And many of the prop alignments were set up very hastily.
Damn, this is a step up from inertial suits. I'm actively trying to convince myself I don't need this system lol. Once the system has been set up, how much time does it take to get going at the beginning of the day? Do you need to recalibrate throughout the day? Just curious about how much time of the day is dedicated to the actual performance vs how much time is used for technical stuff. Thanks for sharing!
I have a video on operating it on the channel. You calibrate it at the beginning of the session, takes like 3 minutes of waving a wand. Then you suit up and calibrate, like 10 minutes. Then you are good to go. Setting up props takes time but once you do it, the system will remember them.
@@CinematographyDatabase I think the biggest advantage is that, compared to other mocap systems, you can add multiple objects without much effort (apart from modelling them later in Maya), AND the high-framerate capture is just amazing.
@@colibristudio3936 Yeah, I haven't shown recording the MOCAP, processing, cleaning, editing, etc. yet, but it records natively at 120 FPS, so I end up baking SUPER clean 60 FPS animation assets for Unreal Engine. It's a lot of data though.
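As a rough illustration of that 120 FPS to 60 FPS bake (a toy sketch only, not the actual Shogun/Unreal pipeline; real tools filter and solve the data first, and the pairwise averaging here is just an assumption standing in for proper low-pass filtering):

```python
# Toy sketch: decimating a 120 fps capture channel down to 60 fps.
# Averaging each consecutive pair of samples acts as a crude low-pass
# filter so high-frequency jitter doesn't alias into the 60 fps track.

def downsample_120_to_60(frames):
    """frames: per-frame values sampled at 120 fps (one animation channel).
    Returns a 60 fps track by averaging each consecutive pair of frames."""
    out = []
    for i in range(0, len(frames) - 1, 2):
        out.append((frames[i] + frames[i + 1]) / 2)
    return out

track_120 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # toy 1-D channel
track_60 = downsample_120_to_60(track_120)
print(track_60)  # [0.5, 2.5, 4.5]
```

In a real pipeline you would run this per bone channel after cleanup, but the 2:1 frame relationship is the same idea.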
This technology is cool and groundbreaking, but the CGI being used is still not good enough to replicate humans and all the intricacies of our movement and micro-expressions. As far as I'm concerned, this is really only useful for pre-production/planning things out for a project. In the next 5-10 years it'll probably get properly used beyond set extension like in The Mandalorian, but for now I would only use this for a more VFX/CGI-heavy project. Otherwise the expense of building a whole studio like this isn't worth the time and money, even though it isn't hugely expensive. I'd rather just explain it to people and draw on paper/digitally, or even use stock photos or videos to get my ideas across. That being said, groundbreaking work is being done here and I'm all for it!
Hello Matt, fellow Epic VP Fellowship alumnus here! The Deaf community has been trying to set up some VTuber MetaHumans with full hand tracking (for Sign Language). Can you do a demo showing the ASL fingerspelling alphabet? This is a needed "holy grail" for any mocap hand tracking. We are currently struggling most with retargeting the hand finger bones as well as the inverse kinematics on the shoulders. Any insights into this area would be so helpful.
I believe StretchSense is being used with a Vicon project with MetaHumans who do sign language. When I get back into MOCAP I’ll do some more high precision hand captures.
@@CinematographyDatabase That'd be fantastic. You mentioned it in another video, which I found after this one, but it looks like you didn't get around to the hand tracking demo yet. StretchSense seem to be the leaders in the hand-tracking mocap space, but the Vicon tracker balls are pretty capable if you have one for each finger bone, right? That's a little beyond our budget at this time, however, so I'm playing around with the Quest 2 for the hand-tracking part of the process in Unreal Engine.
I would like to know exactly how much this whole system costs, to be able to do the same thing you are doing: in one pass, capturing the body, surrounding elements, and facial. How much does this whole setup cost?
Vicon is very high end; it's literally medical/science-grade motion tracking, so I don't see that price changing too much. BUT if there is a non-trivial amount of interest from smaller/indie companies and users, perhaps some sort of hardware/software package could be put together. I don't speak for them of course, just speculating. "Indies" needing a Vicon didn't exist IMO until recently, with Unreal Engine (real-time rendering being free) and MetaHumans and other high-quality digital characters becoming available and affordable. So it's a bit of a new emerging market, accelerated by VTubers and general "Metaverse" popularity.
@@CinematographyDatabase I appreciate the reply. My use case is narrative filmmaking, which requires a high degree of fidelity with fingers, facial, and bodily expression. As far as I can tell, IMUs aren't up to the challenge yet. I have an award-winning script and an actor ready to go but, just like other small-scale creators, none of the budget required to be an early adopter.
Hello, I'm having some difficulties with retargeting in Shogun Live. Did you find a tutorial for it, or any other help? I've been stuck here for a few weeks now and have no idea what is going on.
Wouldn't it be more practical to place markers on the end tips of items, so the base form is a straight line? And can you assign certain marker groups to a specific virtual prop, so that if you "threw" in an item from off screen, the system would know which item it is supposed to project onto it? I have been conceptualizing various virtual production methods for years, and this is the one that comes closest to what I came up with.
You can do pretty much anything for prop markers in my experience so far. One of the markers becomes the origin/root and the others define the other axes, but beyond that, Shogun can recognize different props really well.
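To illustrate the origin/axes idea, here is a hypothetical sketch of how a prop's pose could be derived from three markers: one treated as the origin/root, the other two defining the axes. The function names and layout are assumptions for illustration, not Vicon's actual algorithm.

```python
import math

# Toy vector helpers (3-tuples) so the sketch is self-contained.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def norm(v): return math.sqrt(sum(x * x for x in v))
def unit(v):
    n = norm(v)
    return tuple(x / n for x in v)
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def prop_pose(origin, x_marker, aux_marker):
    """Return (position, (x_axis, y_axis, z_axis)) from three marker positions.
    `origin` is the root marker; `x_marker` sets the x axis; `aux_marker`
    fixes the marker plane so the remaining axes are well defined."""
    x = unit(sub(x_marker, origin))               # unit x axis toward second marker
    z = unit(cross(x, sub(aux_marker, origin)))   # normal to the marker plane
    y = cross(z, x)                               # completes a right-handed frame
    return origin, (x, y, z)

pos, axes = prop_pose((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 3.0, 0.0))
print(axes)  # ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
```

With a pose like this per frame, any rigid prop (hammer, gun, cardboard door) can be re-identified and tracked as long as its marker pattern is unique.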
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YgT6XY6ldj8.html this video talks about the cost of the MOCAP system. This doesn't include the multiple computers and other support hardware I have for this setup
10:32 SteamVR can do that for a fraction of the cost, lol. And since newer VR games/tests use dynamic hand poses for grabbing objects, gripping the gun would actually look better. (Though that's a software feature.)
VR MOCAP is great, and I've programmed multiple full-body solvers with Vive Trackers and Index Controllers. One of their strengths is positional accuracy. However, as you mentioned, the interaction is canned/baked and/or a virtual collision, which gives a perfect final result but will never vary; if you want to interact with something the developer hasn't scripted/posed, you can't. With this optical solve you can interact with any real-world physical object and get a dynamic new hand pose every time.
@@CinematographyDatabase With accurate finger tracking like the StretchSense gloves, you can have the same level of accuracy with fingers. And with virtual collision you can interact with digital objects and get a dynamic new hand pose every time too.
The best quality MOCAP will almost always be optical, like Vicon. But the "best" for your budget/space will, for many, be a simple inertial suit like Rokoko or Xsens.
@@CinematographyDatabase I understand. I can't buy an optical MOCAP system because it's a lot of work, needs a large room with tracking cameras, etc., and it's expensive too. But Rokoko is easy: just the body suit and gloves for sign language at $2,745, a cheap price.