Maybe I skipped the section that clearly described prospective learning, but by the 57-minute mark I paused to go read part of the paper. My thoughts: First, this just sounds like branch prediction. Second, it's still bound to the curve-fitting world of xNNs, which means it has no future path in AI. Third, brains might do something like this, but only in the sense that we have a limit on the number of concepts we can keep active at one time (and not having the "correct" idea at the "right" time means taking the "wrong" action). It doesn't work for more complex problems, so it likely isn't what our brains are doing.
**Cognitive science should inform AI research, not the other way around** (at least, ML/DL should not be treated as an insight into how minds/brains work).