Hi Johnny, I have some models from the UE marketplace with facial blendshapes that I want to animate. I'm confused between LiveFace and AccuFace. I don't own an iPhone but do have an RTX 3080 Ti. What are your experiences with the two, and which would be better for me? Thanks.
Hello man! Thanks so much! I tried to connect my avatar in Dollars. The plugin recognizes it, but nothing moves... neither in record nor in preview mode. Everything connects properly otherwise. I am using a character I created in CC4.
This looks extremely janky, and that was my experience as well: it just jitters all over the place, even though I have a high-quality camera and a strong PC setup.
Hey, I'm glad to hear this! I haven't used a source code build in a long time, but had always wondered if that would indeed work. Thank you for the update! :D
That's a good question -- I didn't try that but I assume it would get the torso, legs, and head. Depending on where your arms are, it of course wouldn't "see" them from the camera's point of view -- so those wouldn't get tracked. If they were out at your side, though, I'm imagining they would.
Thanks for the video. I'm running into a problem: I'm almost done with my first animation, and I'm trying to use this to go back and add some hand movements. I'm working on a section at the end, and when I hit "record" it takes the avatar back to the center of the stage instead of leaving it stationary where the last motion clip ended. I don't know if this has to do with the GI Anchor, as I'm using a template for the background, but any advice for a noob would be helpful!
Eek, sorry...I definitely would run into issues like this at times as well, which I'm assuming were mostly me just not being sure how to 'add' instead of 'replace'. I think inside of iClone you can 'add' animation with the puppet tools or something of that sort -- I'd maybe search Reallusion's documentation a bit to see if you run into that section. Sorry I couldn't be of more help. :(
I think I made sure to use a video resolution of 1280x720 -- I think running the webcam higher than that had my system bogged down a bit as well. Similar things happened with AccuFace, so I don't think it's a knock on Dollars Mocap or your system (nice system btw!). Let me know if that helps!!
Hi! I apologize for taking a bit to respond here -- I'm going to be doing so to quite a few comments! Solely for face motion capture, I didn't notice a 'giant' difference, as in both cases you'd want to run the audio through AccuLips, which would definitely improve the quality. Otherwise (with either one) the mocap gets the overall motions, but might miss some of the detailed shapes (and for sure the tongue movements).
Hi! I apologize for taking a bit to respond here -- I'm going to be doing so to quite a few comments! It is indeed the $99 one...it's a pretty unbelievable value TBH. I appreciate Reallusion's own plugins as well, but bang-for-buck...it's pretty amazing!
Hi! I apologize for taking a bit to respond here -- I'm going to be doing so to quite a few comments! If you are using this, it should have export options (from what I recall) for Blender & most 3D packages. I've only used it with iClone (since I have it), but I'm "pretty sure" I saw that in there somewhere. ;)
If you're doing things live, you might have to look into how you can route the audio to your streaming method. I know that's VERY vague, but I'm sure there's a thousand different ways people get audio going for streaming. In the case of iClone, perhaps turn off audio capture altogether, since you're not going to be recording the audio and refining the animation later. That should leave the audio 'free' for your streaming software (like OBS) to latch onto. :) Just some thoughts off the top of my head -- hope you get / got it working!
Thank you so much. I have a couple of questions. I'm using the same gear as this: iClone 8 & Mono. Is there a way for me to edit the motion file recordings in iClone? I only saw the recording in the timeline, but couldn't find a way to edit the motion. Felt like I maybe just missed it. Do you know if it would allow me to capture the hands and body with one camera using Mono, and use a second camera to record the face capture? I would be trying to use Motion LIVE/AccuFace to record the face. I watched the video the Dollars Mocap people suggested, but I didn't see any mention of a second camera. I only have one webcam, but was considering adding an action camera to record the hands/body if it worked. Do you know? I'll write the company if not. I tend to learn when I get my hands on it, and there aren't many manuals for this new, innovative 3-D gear. 😊 Thank you so much for your time and for putting this video together. It was very helpful.
Hmm, well admittedly I haven't used the 'live' functionality very much (I'm almost always recording and then refining the animation). That being said, I'm pretty positive that's possible. If you wanted to create a custom avatar and then animate it, you'd need Character Creator 4 and iClone 8. As far as the actual face capture, you have a couple options. If you have an iPhone with a TrueDepth sensor, you can get the LiveFace plugin for iClone 8. If you don't, but have an Nvidia RTX GPU, then you can use AccuFace. Either one would work nicely, I think.
If you find the process of lining up frames to each other too tedious, consider that you could just stabilize the footage first and then proceed as described, patching your frames together. And since they are all stabilized, you can pick as many frames as you want and won't have to line up each of them. In another situation with many frames that each have useful background pieces, reveal-painting each layer may also become impractical, in which case you could try simply punching out the action (Owen) first with a loose roto stencil, and then just stack stabilized frames over each other, which will reveal through to whatever layers have that region. You can then noodle the mask feathering etc. while looking at the stacked-up result to make sure you don't get any hard edges.
I've never tried that myself...are you meaning using it as a live-stream source (live animation)? I'm pretty sure you can, though it of course wouldn't be as 'refined' as something recorded & then tweaked. :)
*Outstanding stuff for sure!* I use accuface and liveface all the time. Sometimes prefer one over the other but one thing for sure Acculips is critical with both. Really *great* examples you have shown :) ...Cheers! *IcloneFun🤗*
Thanks so much for leaving a comment that you enjoyed the video! Yes, it's nice to have options and I kind of keep going back and forth. Thankfully I already had the needed hardware, so getting up and running was a little cheaper than it would have been otherwise.
@@JhowT You are welcome. Great channel. I use AccuFace and LiveFace so much on my old gaming laptop (RTX 3600) and my new gaming one (RTX 4800), and for me there is no speed difference in iClone 8 or CC4, or AccuFace or LiveFace. Maybe I did not need to buy the more expensive laptop haha :)
@@3DGraphicsFun Haha, yeah sometimes it's hard to tell what / where the payoff will be. I'd imagine that if you're rendering out of iClone at 4K, the extra VRAM of the newer GPU would allow it to work more efficiently.
Ever since I discovered Topaz Video AI I never need to render anything in iClone or CC4 at 4K. After everything is done in iClone, I bring it into Topaz and in 10 minutes I get beautiful, super-enhanced 4K. Because of that, I guess I really did not need the new gaming laptop after all. haha ... *Cheers😊IcloneFun🤗*
I would assume that somewhere along the line their code would change enough to where they wouldn't be compatible to run alongside each other. I use my version regularly though, and it's definitely on the older side of things.
I opened the Terminator T-799, but I was unable to find the FBX model. I opened Arnold.obj but was unable to see the texture. In the Modify tab I located the texture (Shader Type: PBR) and set Strength to 100%, but it did not work. 😞
Hi! While I can't 100% say with confidence (at the moment), I "think" I might have made an intermediate stop to export it as an FBX from another application. That being said, you "should" be able to import the OBJ and then assign the texture(s) after the fact?
Hi, and sorry that it took me a short while to respond. How detailed are the horns? The most straightforward way I can think of doing this would be to not include the horns in the Headshot 2 topology, and then add the horns, basically, as an 'accessory' after the fact. It'd be the same process as, for instance, putting a pair of glasses on the character; they'll be parented to the head (so they will move along correctly). Hope this gives you some ideas & will be of some help!
Is it possible to do this workflow without attaching to a CC4 body? I actually just want to get my custom character head into CC4 and use this workflow to add all the CC4 blendshapes to a custom character.
Hmm, I'm not "fully" sure. I think it's likely going to want you to attach it to a body...since I can't really think of a time where I've seen an isolated head as I've used the CC4 pipeline. This may just be me not having tried it though -- and that it is actually do-able. Sorry I can't give you a definite answer! If in doubt, I'd download the trial and see if it can do what you're wanting it to. I had done that in the past with a couple Reallusion plugins. :)
Hi there. You'd need to tell me what part of the video you are referring to for me to answer. It can definitely make a nice difference though! Having that bounce light is wonderful. Lumen (in UE5) does similar things built-in, but I don't think it's as performant as RTXGI is. Hope this somewhat answers things for you!
This is definitely an avenue where you could create a digital-double of someone. Perhaps better suited to stylized purposes, but very, very effective! :)
Hi there! Sorry for being a bit late in responding to your question. So in the process you eventually get to a step where the application "shows" you the detected faces for the source and destination (to train off of). What you would do is delete all the faces for the other character you DON'T want swapped out. Does that make sense? :) Then the training will run and only work on what you specified.
I would love to see someone do this with a really badly lit BS/GS with seams all over the place. Most of these tutorials are just unrealistic, and it takes a lot more finesse to achieve a good key.
This is footage shot at a college where I used to teach, for my compositing class. The school definitely had some nice facilities, but I wouldn't call it "unrealistic". I do agree that shots can be more challenging, though. There are some other past shots I'd use in class for examples that were of much lesser quality, but I don't have tutorials with them up right now. Sometimes [in general] to show a concept, it's better to use something that's not the 'most challenging thing in the world' so it doesn't become overwhelming. In the case of me using really challenging footage, I'd likely call the video something more along the lines of "keying not-ideal footage" or something silly like that. ;)
Thank you -- I sincerely appreciate you taking the time to tell me that. It was a huge road to try and learn the process on my own, but thankfully at that point I was able to verbalize it decently because of the 'struggle' to get there. ;)
Hi there. This is applicable for photos and digital cinema, so I'll somewhat talk about both. For video, the RAW data is only going to show up for, well..."raw" formats like RED footage, Blackmagic RAW footage (used to be DNGs, now BRAW files), etc.

For photos, if you've used a DSLR camera or a camera that can shoot in RAW format (and have it set to do so), it captures more data than you can actually see in the photo...and because of that it gives you more data to work with. For instance, if you take a photo as a JPG on your phone (or off the internet), and the sky is 'blown out' to white, there's no data there; if you try to darken down the photo, it's just going to be a 'light gray' color. If you shot in a RAW format though, there's a good likelihood that if you take it into something like Photoshop, you can lower the exposure and...lo and behold...there will be cloud or sky data there to bring back! Same thing goes for deep shadows...you can lighten those areas and there will be 'imagery' there.

To lower file sizes, the 'easiest' way would be to set your camera to capture in something like a JPG, etc. That extra data will never be captured in the file, so it will be smaller. The 'safer' method, though, would be to take your photos in RAW, and then save them out to a different format after you make those adjustments. Hope the above helps a bit and makes sense; happy shooting! :)
Hi Johnny. Can you advise why I might not be getting the popup allowing me to select images for preview (see 18:35 in your video)? I've also tried doing this process without a preview image and when it says "press any key to continue" I don't get a response. Currently using latest stable release of deepfacelab. Thanks in advance for your tips!
Hi there. Hmm, that one is a slight mystery to me. At that point in the process I don't think it's trying to load things onto the GPU yet, so it wouldn't be an out of VRAM error. I'd maybe just double check and ensure that the images are extracted to the folder(s) correctly, as that would be (logically speaking) a reason why they wouldn't be popping up.
Cool! I created a really cool polaroid effect back in 2021 and never saved it for future use. I was afraid of not being able to get my configuration back, but I found a way by using this video to learn about metadata. It's truly amazing, and quite scary, how much information that picture of mine was holding. Thank you so much for this video, sir!
That's super fun to hear how this video (in a round-about way) helped you find that information! But yes...it IS borderline scary the amount of information stored in photos -- that people don't necessarily even realize. For instance, when people send around phone photos, very often the GPS data is logged directly in there, so basically people know where the photo was taken! :O
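For anyone curious how that GPS data actually looks: EXIF stores positions as degrees/minutes/seconds rationals plus a hemisphere letter (N/S/E/W). This little helper (my own sketch, not any specific library's API; the coordinates are made up for the demo) converts that representation into the decimal degrees a map site would take.

```python
# EXIF GPS tags hold latitude/longitude as (degrees, minutes, seconds)
# plus a hemisphere reference such as "N" or "W".

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style DMS triple + hemisphere letter to decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # South and West are negative in decimal-degree notation.
    return -decimal if ref in ("S", "W") else decimal

# Example values (made up): 40 deg 26' 46.0" N, 79 deg 58' 56.0" W
lat = dms_to_decimal(40, 26, 46.0, "N")
lon = dms_to_decimal(79, 58, 56.0, "W")
print(round(lat, 4), round(lon, 4))   # 40.4461 -79.9822
```

A library like Pillow can read the raw EXIF tags out of a photo for you; once you have the DMS values, a converter like this turns them into a pin on a map, which is exactly why stripping metadata before sharing matters.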
I'm glad to hear from you that they are helpful...part of the reason I made these is that there wasn't much material that really explained things ultra well, and I was hoping to save others the 'learning pains'. ;)
Thank you! All going well overall here. I definitely need to hop back in with some new content -- I've been trying to decide what to cover for a while now. Thank you for saying hi!
Programmers and Game Artists: We just want to do coloured stained glass lighting for scenes in churches. The Industry: Here's a thousand ways to do bounce lighting and emission.
Haha yup...I definitely love that type of stained-glass lighting (in real life, and games!). It's definitely one of those things where baked lighting is a lot more efficient, but obviously not interactive at that point. :-/
@@JhowT I mean that when I mix the target video with the frames that I train, the trained frames look sooo pixelated, obviously the original video is in HD.
Thank you for making this video, it was very insightful! I was wondering if you were open to private tutoring and/ or available to hire. Thank you for your time