It's huge! Stay updated with the channel and some stuff I make! 👉 verynormal.substack.com 👉 very-normal.sellfy.store Try Shortform for free and get 20% off an annual subscription! 👉 shortform.com/verynormal
Well, I'm pretty sure you've heard some of them before... But the question is: "are the statisticians I know still alive, or have they passed away?" 😂😂 I knew Cox from Box-Cox transformations and Rao from Rao-Blackwell and Cramér-Rao, but I didn't have a clue about when they lived, so it was such a surprise that they lived until just a couple of years ago
@@very-normal I just thought that these people deserve recognition. That's the least we can do, given the free software we've been using. 🙂 Nice videos, by the way. I love your content and always look forward to your uploads.
The bootstrap and the Cramér-Rao lower bound are among the most important inventions in stats in the last century; they deserve the recognition without a doubt. My prediction: the "Nobel Prize of stats" for 2025 goes to the James-Stein estimator and its proof. That was a huge surprise for many statisticians: it showed that the MLE is not always admissible, and it only seems to contradict the Cramér-Rao lower bound (it doesn't, since the James-Stein estimator is biased).
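A minimal simulation sketch of the point above (not from the video; dimension, true mean, and seed are arbitrary choices): for a normal mean in dimension at least 3, the James-Stein shrinkage estimator has strictly lower mean squared error than the MLE, which is just the raw observation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                      # dimension (James-Stein needs d >= 3)
theta = np.ones(d)          # true mean vector (arbitrary choice)
n_sims = 10_000

# One observation per simulation: X ~ N(theta, I)
X = rng.normal(loc=theta, scale=1.0, size=(n_sims, d))

# The MLE of theta is X itself; its risk is close to d
mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))

# James-Stein shrinks X toward the origin
norms_sq = np.sum(X ** 2, axis=1, keepdims=True)
js = (1 - (d - 2) / norms_sq) * X
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))

print(f"MLE risk:         {mse_mle:.3f}")   # close to d = 10
print(f"James-Stein risk: {mse_js:.3f}")    # strictly smaller
```

The shrinkage factor `(d - 2) / ||X||^2` is what makes the estimator biased, which is why it can beat the MLE without violating the Cramér-Rao bound.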
I'll definitely be interested to see who this year's prize goes to. In my opinion Andrew Gelman is definitely in the running. But given how new this prize is, there are others who ought to be considered first.
Man, do I wish you had made these videos when I was doing my bachelor's in statistics; it would have cleared up a lot of confusion. Still, I really enjoy watching your channel, and I hope your goal of making statistics fun for everyone succeeds!
Very nicely presented, I learned a lot and really enjoyed the reasonable pace at which you walked the viewer through the contributions as well as their significance.
Well, Nobel died in 1896 and the prize started in 1901, before Von Neumann and Turing were even born, so I'm pretty confident nobody told Nobel that Computer Science existed lol
Hey, this is an amazing video! Cheers to these great statisticians. Rao taught one of my lecturers in undergrad, and he could never stop speaking so highly of him!
WAIT! I found out on Wikipedia that there has been a "Wilks Memorial Award" since the sixties! Famous names I know who won the prize are C. R. Rao, Neyman, Cochran, Snedecor, and many others... No idea if it is reserved only for US residents, though
Thank you for the videos. The story I heard as a student was that Nobel's wife was having an affair with a mathematician, which is why there is no Nobel Prize in math.
Yeah! I’ve been cooking up an MCMC type of video for some time now. Jackknife would be cool too, tho it’s been overshadowed by the bootstrap I feel. Could be a part of a bigger video!
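A minimal sketch comparing the two resampling methods mentioned above (data and seed are made up for illustration): both the bootstrap and the jackknife estimate the standard error of a sample mean, and both land close to the analytic answer s/√n.

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=100)
n = len(data)

# Bootstrap: resample with replacement, recompute the statistic
boot_means = np.array([
    rng.choice(data, size=n, replace=True).mean()
    for _ in range(5_000)
])
se_boot = boot_means.std(ddof=1)

# Jackknife: leave one observation out at a time
jack_means = np.array([np.delete(data, i).mean() for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2))

# Both should be close to the analytic SE of the mean, s / sqrt(n)
print(f"analytic:  {data.std(ddof=1) / np.sqrt(n):.4f}")
print(f"bootstrap: {se_boot:.4f}")
print(f"jackknife: {se_jack:.4f}")
```

For the sample mean the jackknife reproduces s/√n exactly; the bootstrap shines for statistics (like medians or ratios) where no clean formula exists.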
A topic that's fascinated me for a long time is the statistics of persuasion. How strong does the evidence need to be to persuade people one way or another? Of course, rhetoric is the main way we persuade other people, but it's a nice thought experiment and a very Bayesian challenge
A question: isn't 1/i(θ) just an approximation of the variance of the MLE based on asymptotic results? Moreover, MLEs are very often biased because of Jensen's inequality or other reasons, so there could be unbiased estimators as efficient as or more efficient than the MLE, or more efficient biased estimators. Am I wrong? I also saw a video about the James-Stein estimator, for example, which is more efficient and doesn't use the MLE *Edit: my broken screen and my poor sight prevented me from seeing the bottom note
I agree with your prediction about Vladimir Vapnik. He would be a worthy recipient. It would also recognise the long term efforts of the Russian probability school.
Has any statistician come up with a statistical function that predicts, with any certainty, their chances of winning the International Prize in Statistics?
@@very-normal Nah, but seriously though: at least make a short about how other prizes are distributed, and with some data crunching make statistical predictions, especially since you haven't done many of those
@@very-normal Also, in my textbook, some questions use root(n) for the t-test and in other places it's root(n-1). The standard error is the root of (variance / n). There wasn't an explanation as to why root of (n-1) is used in some places. Lmk asap pls, I have a test on the 5th in inferential statistics.
In general, the one using root(n-1) is more correct than root(n) because dividing by (n-1) makes the variance estimator unbiased. I put root(n) here because that's what you get with the MLE for the variance of normally distributed data.
@@very-normal How does a root of (n-1) make a significant difference? The sample sizes in a hypothesis test are gonna be large; the diff between root of n and root of (n-1) is gonna be in the 0.000x range, probably. Also, how does it make it unbiased?! From an undergrad of Aswath Damodaran, my understanding was that bias is an error from human judgment. How can it be reduced, if not eliminated, by subtracting 1? I'm highlighting my ignorance rn, but the days of mean, median, and mode were far more comprehensible... I am stuck on the simplest of t-tests 😭😭
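A quick simulation makes the n vs (n-1) point concrete (a sketch, with made-up numbers): "bias" here is a statistical property of the estimator, not human judgment. Dividing the sum of squared deviations by n systematically underestimates the variance, and dividing by (n-1) (Bessel's correction) fixes that on average. The gap is largest when n is small.

```python
import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0
n = 5                       # small n makes the gap visible
n_sims = 200_000

samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(n_sims, n))

var_n = samples.var(axis=1, ddof=0)    # divide by n (the MLE)
var_n1 = samples.var(axis=1, ddof=1)   # divide by n - 1 (unbiased)

print(f"true variance:        {true_var}")
print(f"mean of /n estimate:  {var_n.mean():.3f}")    # about 3.2: too small
print(f"mean of /(n-1) est.:  {var_n1.mean():.3f}")   # about 4.0: on target
```

The /n version averages to true_var × (n-1)/n = 3.2, which is why the correction matters; at n = 1000 the two are nearly identical, just as the comment suspects.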
RA Fisher gets credit for popularizing it, but there were a bunch of people before him who made references to it. There’s a paper called “The Epic Story of Maximum Likelihood” by Stephen Stigler that answers your question more thoroughly
@@TheThreatenedSwan Mostly yes, but the contributions of Paul Cohen, Terence Tao, and Martin Hairer improved software verification and algorithms, medical imaging, and climate and financial modelling, respectively