wow, you're a legend for doing this. I wonder how much work it would be to sync GAS via the NPP; that should allow both to play nicely together, right?
Thank you. Honestly, I am not a believer in GAS with NPP. An Ability System 2.0 built on NPP is a lot more likely than the current one changing; that would practically mean rewriting it all.
Really cool project! Do you think Epic will take this implementation as a pull request, or have you been in contact with their team? Saw a presentation on the Mover plugin raising some of these issues; to my knowledge they still haven't been implemented.
@@hullabulla I submitted a PR a while ago, around the time the video was released. The replay fix was implemented because the bug caused a crash; smoothing not yet. I assume they want to get the Mover component into a good state before moving on to smoothing.
@@SirKaizoku Awesome! Thinking about using this as the base of a project I'm working on, especially since the smoothing part is amazing (as you said in the video).
@@SirKaizoku Having issues compiling the engine (clean build as well):
11>NetworkPredictionDriver.h(518): Error C2039: 'GetDelta': is not a member of 'FMockAbilitySyncState'
11>MockAbilitySimulation.h(76): Reference C2039: see declaration of 'FMockAbilitySyncState'
11>NetworkPredictionDriver.h(511): Reference C2039: see reference to function template instantiation 'void FNetworkPredictionDriverBase::GetDeltaState(const StateType *,const StateType *,StateType *)' being compiled, with [ ModelDef=FMockAbilityModelDef, StateType=FMockAbilitySyncState ]
11>NetworkPredictionDriver.h(509): Reference C2039: while compiling class template member function 'void FNetworkPredictionDriverBase::GetDelta(const TSyncAuxPair &,const TSyncAuxPair &,FMockAbilitySyncState *,FMockAbilityAuxState *)', with [ ModelDef=FMockAbilityModelDef ]
Lots of these issues with templates in NetworkPrediction.h. Seems some path might be missing or similar?
This is really good stuff here! I haven't had much time to deep-dive into Mover, so thanks for taking the time. Would you say its workflow is ready to start putting into games? What types of games do you think would benefit most from it in its current state?
If these missing features that I added were in the engine by default, I would say Mover is ready to be used for real games. All multiplayer games would benefit greatly, but the genres that benefit most would be shooters, MOBAs, and fighting games. Fast-paced networked games with solid netcode would be possible for every Unreal dev.
@@SirKaizoku No doubt. I'm making a basketball game, which is mostly simple movement, but saving bandwidth on the movement component is a plus for any game. And there would be benefits to using NPP to keep the server's and clients' state in sync.
That looks awesome (and absolutely required), but I'm not quite sure yet about some things:
- Why not offer it as a standalone plugin? Would be way easier to update.
- This is a stupid one, but does this work for all movement or only with the default movement component?
- What is exposed to BPs?
- Any chance for continuous replays (deaths are sudden and unexpected) and on the same map (large-world problem)?
Hello,
- It's not my plugin, it's in the engine; I just did some updates to it.
- You can create your own movement.
- Almost everything for movement can be done in BP, except the structs that hold the state (e.g. location, rotation, etc.) and the input struct (e.g. W/A/S/D, jump button, etc.). Creating other simulations needs C++.
- The replay system in Unreal does support different types of replays, including one suited for kill cams. They need C++ to set up, and they are not directly connected to the Network Prediction plugin; I just fixed a bug that prevented it from working with NPP. dev.epicgames.com/documentation/en-us/unreal-engine/using-the-replay-system-in-unreal-engine?application_version=5.3
So, regarding the input also being sent to the sim proxy: wouldn't that be necessary for lag compensation? I.e., say someone pressed jump on a certain frame; the server can play back the press and adjust for it, while clients get the smooth interpolated result. BUT, for cosmetic effects, like the jump-puff particle effect, you would still need the input timing to arrive on the sim proxy so you can play the particle effect with the start time set to whatever time has already passed (or, for an ability-casting montage, start playing with an offset and blend the current animation into it, if you know when the input started). If that's among the remaining features to target: would those filters need to be hard-coded in? Is there a better way to decide which inputs to replicate and which to ignore (so ignore the stick/WASD movement, but keep jump/ultimate/dash, for example)?
For a player-controlled simulated proxy with a kinematic moving pawn, it is impossible to predict without bad visual pops and giving up on a 100% reliable world state.
- Lag compensation refers to the server rewinding actors back in time to where you, as the local player, saw them when you were targeting (for example, shooting a gun). It requires a 100% reliable world state to get 100% reliable lag compensation. It does not refer to moving other players forward in time in the local player's world.
- You can control what gets sent to sim proxies and what doesn't, but this has to be manually coded in the NetSerialize function of the input/state struct.
- It is not recommended at all to use 1-frame bools for now. For example, a JustJumped bool that gets set to true on the jump frame and false on the next is not safe to use with simulated proxies, since they can miss an update, and the missed update can be the one frame where JustJumped is true. It is best to use states like IsJumping and check for the change in state between the sim output last frame and this frame: if IsJumping was false and is now true, then it's the first jump frame.
- This is C++ only, but there are things called NetCues in NPP that allow you to trigger events that get replicated, such as playing a sound, spawning VFX, etc. Creating them is only possible in C++ for now, but their purpose is the same as Cues in the Ability System.
In the Mover examples, one of the maps has a trajectory debug that activates when you walk into a volume. That trajectory would be used with motion matching, but I'm not sure if it's the same structure as the one used in the Character Trajectory component. They should have the same data; if not, a conversion function should be simple.
Hello, I'm a bit confused ^^' the link to the repository that you posted in the description doesn't work, and I can't seem to find the repo on github anywhere... Did you choose to keep it private, or am I missing something?
You definitely can. You can grab it directly as-is and try building it for UE4; the plugin itself didn't change, apart from maybe small updates due to engine source changes and deprecations. Or you can manually make the same changes. It's the same plugin, no changes since 2019.
@@SirKaizoku Yes, I know. My question is how this replication and client smoothing system compares to GAS, since GAS also has network optimization and client prediction built in along with its other features. I am asking only about the network optimization part.
@@amigoface I don't think an ability system made with NPP would use less network bandwidth, if that's what you want to know; that is not its purpose. It would allow for many things GAS doesn't provide: fully predictive simulation (like over-time effects, etc.) that works with Mover out of the box. That said, bandwidth usage is easily controlled in NPP, and it might be possible to use less bandwidth than GAS with the right optimizations. And when Iris is ready and NPP uses its delta replication, it might be a different story.
Can you please make a tutorial on how to use it in Blueprint with an animation or something? Like, how to connect this with animations using safe and unsafe variables.
@@SirKaizoku I get it, thanks. Can you show us how to replay with a key like you do in your video? Which Blueprint do we have to call for the replay? Actually I'm not a C++ guy 😁😁😁
@@crazyguy7585 Unfortunately, replay needs some C++ to set up; it's unrelated to Network Prediction, though. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-8fEGDY7mg3s.html I pretty much did it like this video.
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-_rdt-v1nFlY.html (time-stamped). You can see he says he is running with the command -trace=np; then you need to run the project through Visual Studio or Rider. I still don't know how to start it through the editor yet, since I never needed to, but I will look into it.
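A rough sketch of what that launch could look like from a command line, based only on the -trace=np flag mentioned above (the editor binary name and project path are placeholders; adjust for your engine version and install location):

```shell
# Launch the editor with the Network Prediction trace channel enabled.
# "C:\Projects\MyProject\MyProject.uproject" is a placeholder path.
UnrealEditor.exe "C:\Projects\MyProject\MyProject.uproject" -trace=np
```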
That feature is totally undocumented, so it's impossible to implement without spending hundreds of hours in the source code. No one seems to know how to use it properly, even though it would be really interesting.
Hey, this is really cool, thanks for sharing your work on this! I was actually looking at this exact thing a while ago for a game I'm working on, and decided it would be more work than it's worth to finish. I went with a model instead where the server just streams state to clients and they basically play it back like a video. This obviously adds input lag, which isn't a huge deal in my particular game, but the plan was to switch to rollback in the future, so thanks so much for this!
Happy you're finding this useful. Your videos on the custom editor graph and nodes were a great watch; there's barely any info on that stuff at all, thanks. Hope you've got more coming.