The reason you're getting these results is that you need to shoot with a faster shutter. It won't look perfect, but it will definitely be better. I recommend shooting at or above a 1/1000 shutter for 120 fps recording that you then slow down to 10%, giving the look of 1200 fps. I did it and the results were great. You also have a point about this software not having enough data to train on at the time you posted this video. I just learned about this last week and I've had some nice results so far.
@WarriorsPhoto LOL! I didn't mean this as snarky, hope it didn't sound that way. I was really appreciative that Dave helps us learn more. Maybe "There's nothing like learning there's more to learn". :)
Basic Filmmaker Oh, I always see you as snarky. But I also know you mean well. My comment was that we know nothing, and we realize this more as we learn more. (:
This can be very effective. The golf swing is an extreme case that doesn't give the software enough information to work with; optical flow uses motion analysis to drive the morphing between frames, and the technology is over a decade old. What can help is using multiple passes: 1/2 speed, then 1/2 speed again, etc.
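The multi-pass idea above can be sketched in a few lines. This is a minimal illustration, not how Resolve or Flame actually implement it: each pass inserts one interpolated frame between every pair of frames, halving the effective playback speed each time. The `midpoint` function stands in for whatever interpolator (optical flow, blending) generates the in-between frame.

```python
def multipass_slowmo(frames, passes, midpoint):
    """Sketch of multi-pass slow motion: each pass inserts one
    in-between frame per frame pair, so two passes = 1/4 speed.
    `midpoint` is any function (a, b) -> interpolated frame."""
    for _ in range(passes):
        out = []
        for a, b in zip(frames, frames[1:]):
            out += [a, midpoint(a, b)]
        out.append(frames[-1])
        frames = out
    return frames

# Toy usage with scalar "frames" and simple averaging as the interpolator:
clip = multipass_slowmo([0.0, 4.0], 2, lambda a, b: (a + b) / 2)
# Two passes turn 2 frames into 5, i.e. 4x slower playback.
```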
"Traditional" optical flow methods analyze the actual pixels of the frame. Using some frames before and after, the software tries to guess the direction the pixels are moving. With a combination of blending color values and moving pixels around according to these motion vectors, it's able to generate new frames to insert between what already exists, thus creating a slow-motion shot. This new neural engine is likely trained on a data set of images so it can actually "recognize" the things in the scene. So instead of just a bunch of pixels moving around, it can recognize a golfer, trees, a person on a bicycle, etc. In theory, using this understanding, it will be able to produce smarter, better-looking in-between frames, and thus more convincing slow motion. You can tell it already looks better than optical flow, even at this early stage. It's not there yet, but it will be soon.
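The "move pixels along motion vectors and blend" step described above can be shown in a tiny NumPy sketch. This is a deliberately naive illustration with nearest-neighbor sampling and a precomputed flow field, not Resolve's actual algorithm (real implementations estimate the flow per pixel and handle occlusions):

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, flow, t=0.5):
    """Naive optical-flow interpolation sketch: warp frame_a forward
    along the flow by t, warp frame_b backward by (1 - t), then blend.
    `flow` has shape (h, w, 2) with per-pixel (dx, dy) motion vectors."""
    h, w = frame_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, find where it came from in each source frame.
    ax = np.clip((xs - t * flow[..., 0]).round().astype(int), 0, w - 1)
    ay = np.clip((ys - t * flow[..., 1]).round().astype(int), 0, h - 1)
    bx = np.clip((xs + (1 - t) * flow[..., 0]).round().astype(int), 0, w - 1)
    by = np.clip((ys + (1 - t) * flow[..., 1]).round().astype(int), 0, h - 1)
    warped_a = frame_a[ay, ax]
    warped_b = frame_b[by, bx]
    # Blend the two warps, weighted by how close t is to each frame.
    return (1 - t) * warped_a + t * warped_b
```

For example, a bright pixel that moves 2 pixels to the right between two frames lands halfway (1 pixel over) in the interpolated middle frame.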
Optical Flow + Speed Warp is a game changer in DR, though it's quite heavy on a weak machine. It gives the best results when you shoot actual slow motion. You can slow 60 fps footage down to 20% without ghosting using Optical Flow and Speed Warp. Great tutorial.
We can always count on you, Dave, for such interesting content! I, for one, think self-driving cars are a bad idea in an already litigious society. Thank you for educating us on neural nets.
So... what happens when you slow down footage with audio, like a person playing a guitar solo, for example? Does it pitch-correct it? Does that pitch correction work well? Does the sound get stuttery and artificial? What audio settings should I use for best results? Maybe 96 kHz, so you have enough audio headroom to slow it down (just like shooting at 120 fps gives you room for video)? There are A LOT of music teachers who would LOVE to know how to do that effectively!
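On the pitch question: if audio is simply resampled (stretched) without correction, the pitch drops with the speed factor. The relationship is easy to compute; here's a small sketch (a hypothetical helper, not a Resolve feature) showing why heavy slow-downs need real pitch correction rather than plain resampling:

```python
import math

def pitch_shift_semitones(speed_factor):
    """Semitone change when audio is naively resampled to play back
    at `speed_factor` times the original speed (no pitch correction).
    Half speed (0.5) drops the pitch exactly one octave (-12 semitones)."""
    return 12 * math.log2(speed_factor)

# Slowing to 10% speed drops pitch by ~39.9 semitones, i.e. over 3 octaves,
# which is why pitch-corrected time-stretching is needed for musical audio.
```

Note that a higher sample rate like 96 kHz helps preserve high frequencies when slowing down, but it doesn't by itself fix the pitch drop; that requires a time-stretch algorithm.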
Hi Dave, flowing water was always a challenge with early CGI, so I guess that's what we are seeing with BM's neural network. Early days, as you said. BobUK.
Twixtor will still give you the best results for those kinds of scenarios, but it's great to see Resolve getting better and better, even in those aspects. I think it's finally time for me to switch from Premiere to Resolve.
Dave, any thoughts on creating a video series on switching from Premiere to Resolve? I am on the fence, sick of Premiere and the crashes, but I have too many projects in the works to leave Premiere. Any suggestions or good channels you would recommend for tutorials?
Hello Dave, I'm also using the Studio version, beta 6, now. Are you having trouble playing back clips with NR and white balance applied? With beta 5 it ran smoothly for me. I might be missing something.
It's funny to see how practically everyone in the professional world is moving away from Premiere to Resolve. Even the production company I work at quit the Adobe subscription and bought some DaVinci Resolve licenses.
Isn't it better to record at a higher FPS (120, 240) and then slow it down? I did it with my phone at 240 fps, put it in the free version of DR, and it's creamy-smooth slow mo.
There is no neural processing here, just morphing between frames. I've been using similar technology for years with Autodesk Flame and the results are the same: sometimes it works great, sometimes it doesn't.
I thought it was the same, as you said, yes. It's still not really full-on machine learning. I wish companies would do what iZotope is doing with the RX audio restoration tool. That is really useful, things like separating dialogue from background noise. I wish we had something similar for slow motion. As Dave said, it needs more toilet training. Hehehe