This was a great talk. I love hearing Vitalik speak...he's awesome. I was wondering, though: when he was going over the different scenarios on the 'problem with fishermen' slide, wouldn't it also be possible to reward the one who sounded the alarm if it turns out there was a valid reason for sounding it, but penalize them if there wasn't? I'm sure there's a good reason why this wouldn't work, but I'd be interested to know why.
My guess would be that it comes down to how this reward would get implemented. To reward whoever sounded the alarm for valid reasons, we need to prove trustlessly that they had a valid reason. In some sense this requires the same steps of proof as the original problem: if the alarm sounder provides proof that the original transaction is wrong, then someone can dispute this proof by finding a problem with it, themselves providing proof that the alarm sounder is wrong. This process would, I think, have to continue indefinitely if we wanted to do it in a way that requires no trusted party. But any finite number of iterations can leave you in the same position Vitalik describes at the end of that section: we see an alarm and can't tell whether the last person in the proof chain to raise one did so for just reasons or not. The two cases are still indistinguishable. I may have missed another reason, but this seems like the natural way to extend his toy problem to try to encourage "valid" alarm sounding. Hope this helps.
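To make the indistinguishability concrete, here's a minimal sketch (all names hypothetical, not from any real protocol) under the assumption that an attacker can publish the withheld data right after an alarm is raised. A node that comes online after the fact sees the same evidence in both scenarios, so no reward/penalty rule based on that evidence can treat them differently:

```python
# Toy model of the fisherman's dilemma: a late-arriving observer
# sees only (1) that an alarm exists and (2) whether the data is
# available *now* -- not whether it was available at alarm time.

def observer_view(withheld_at_alarm_time: bool, published_later: bool):
    """Evidence visible to a node that checks after the alarm."""
    alarm_exists = True  # an alarm is on record in both scenarios
    data_available_now = published_later or not withheld_at_alarm_time
    return (alarm_exists, data_available_now)

# Scenario A: attacker withheld data, so the alarm was justified;
# the attacker then releases the data to make the alarm look false.
view_a = observer_view(withheld_at_alarm_time=True, published_later=True)

# Scenario B: data was available all along; the alarm was malicious spam.
view_b = observer_view(withheld_at_alarm_time=False, published_later=False)

# Identical evidence, so a reward rule cannot distinguish them.
assert view_a == view_b
```

The point of the sketch is just that "data is available when I check" is consistent with both an honest alarm (attacker released the data afterward) and a spam alarm, which is why iterating the proof chain any finite number of times doesn't escape the dilemma.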