The Cramer-Rao Lower Bound ... MADE EASY!!! 

Brian Greco - Learn Statistics!
2.8K subscribers
1.1K views

What is a Cramer-Rao Lower Bound? How can we prove an estimator is the best possible estimator? What is the efficiency of an estimator?
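For reference, a standard statement of the bound the video covers (a minimal sketch, assuming an i.i.d. sample X_1, ..., X_n with density f(x; theta) and the usual regularity conditions):

\[
\operatorname{Var}(\hat{\theta}) \;\geq\; \frac{1}{n\,I(\theta)},
\qquad
I(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)\right],
\]

for any unbiased estimator \(\hat{\theta}\). The efficiency of an unbiased estimator is the ratio of this bound to its actual variance, \(e(\hat{\theta}) = \bigl[n\,I(\theta)\bigr]^{-1} / \operatorname{Var}(\hat{\theta}) \leq 1\); an estimator attaining the bound is called efficient.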

Published: 9 Jun 2024

Comments: 8
@user-cg2vy2pn9j · 2 days ago
Many thanks 🙏
@RoyalYoutube_PRO · 27 days ago
Fantastic video... preparing for IIT JAM MS
@ligandroyumnam5546 · 1 month ago
Thanks for uploading all this content. I am about to begin my master's in data science soon, and I was trying to grasp some of the math theory, which is hard for me coming from a CS background. Your videos make it so simple to digest all these topics.
@santiagodm3483 · 1 month ago
Nice videos. I'm now preparing for my master's and they will be quite useful; the connection between the CRLB and the standard error of the estimates from MLE makes this very nice.
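(A sketch of that connection, assuming the usual regularity conditions: the MLE is asymptotically normal with variance equal to the CRLB,

\[
\hat{\theta}_{\mathrm{MLE}} \;\overset{a}{\sim}\; \mathcal{N}\!\left(\theta,\; \frac{1}{n\,I(\theta)}\right),
\qquad
\operatorname{SE}(\hat{\theta}_{\mathrm{MLE}}) \;\approx\; \frac{1}{\sqrt{n\,I(\theta)}},
\]

so the standard errors reported from a maximum likelihood fit are estimates of the square root of the CRLB.)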
@ridwanwase7444 · 1 month ago
Fisher information is the negative of the expected value of the second derivative of log L, so why do we multiply by 'n' to get it?
@briangreco2718 · 1 month ago
I was assuming the L here is the likelihood of a single data point. In that case, you just multiply by n at the end to get the information of all n observations. If L is the likelihood of all n data points, then the answer will already contain the n and you don't have to multiply at the end. The two methods are equivalent when the data is independent and identically distributed.
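(In symbols, a quick sketch of why the two methods agree for i.i.d. data: the log-likelihood of the full sample is the sum of the per-observation log-likelihoods, so the information adds:

\[
\log L_n(\theta) \;=\; \sum_{i=1}^{n} \log f(X_i;\theta)
\;\;\Longrightarrow\;\;
I_n(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial\theta^2}\log L_n(\theta)\right]
\;=\; n\,I_1(\theta).
\])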
@ridwanwase7444 · 1 month ago
@briangreco2718 Thanks for replying so quickly! I have another question: does the MLE of the population mean always guarantee that it attains the CRLB variance?
@briangreco2718 · 1 month ago
Hmm, I don't think this is true in general. At some level, it's certainly not true if we're talking about the CRLB for unbiased estimators, because the MLE is sometimes biased. For example, for a uniform distribution on [0, theta], the MLE is biased, and the Fisher information is not even defined. My guess is that this holds for some "location families", which the normal, binomial, and Poisson would all be. For a "scale family" like the exponential distribution, in the parameterization where the mean is 1/lambda, I do not believe the MLE meets the CRLB.
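(The uniform example can be made concrete, a quick check using the standard results for U(0, theta): the MLE is the sample maximum, which is biased,

\[
\hat{\theta}_{\mathrm{MLE}} \;=\; X_{(n)}, \qquad
\mathbb{E}\bigl[X_{(n)}\bigr] \;=\; \frac{n}{n+1}\,\theta \;\neq\; \theta,
\]

and because the support [0, theta] of f(x; theta) = 1/theta depends on theta, the interchange of differentiation and integration used to derive the CRLB is not valid, so the bound does not apply there.)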