
The Unreasonable Effectiveness of Bayesian Prediction 

ritvikmath
163K subscribers
21K views

Published: 5 Sep 2024

Comments: 36
@kaanbcakc8168 2 years ago
Your explanations about Bayesian concepts are very clear, keep it up! :)
@CodeEmporium 2 years ago
We need a "Ritvik marker catching intro compilation"
@ritvikmath 2 years ago
🤣 good idea!
@abhishek50393 2 years ago
Great vid, you should do a full series on Bayesian methods
@robertbarta2793 2 months ago
Super explanation.
@chenqu773 1 year ago
This video resolved a confusion I had around MLE and Bayesian inference, via that magic p(β) = 1. Thank you, man!
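The p(β) = 1 point can be sketched numerically: with a flat prior, the log-posterior is the log-likelihood plus a constant, so the MAP estimate coincides with the MLE. A minimal illustration with hypothetical coin-flip data (not code from the video):

```python
import numpy as np

# With a flat prior p(beta) = 1, the posterior is proportional to the
# likelihood, so maximising either gives the same estimate of beta.
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=100)       # hypothetical Bernoulli data

beta_grid = np.linspace(0.001, 0.999, 999)  # candidate values of beta
heads = data.sum()
log_lik = heads * np.log(beta_grid) + (len(data) - heads) * np.log(1 - beta_grid)

flat_log_prior = np.zeros_like(beta_grid)   # log p(beta) = log 1 = 0
log_post = log_lik + flat_log_prior         # posterior ∝ likelihood

mle = beta_grid[np.argmax(log_lik)]
map_est = beta_grid[np.argmax(log_post)]
assert mle == map_est                       # identical under a flat prior
```

The argmax of both curves lands on (approximately) the sample mean, which is the familiar MLE for a Bernoulli parameter.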
@chiawen. 9 months ago
Your explanations are fantastic! I think Bayesian Optimization would be a nice topic as well! Could you make a video about that? :D
@kisholoymukherjee 8 months ago
Would be great if you can make some videos on the use of Bayesian approach for Marketing Mix Modelling
@sharmilakarumuri6050 2 years ago
Clearly explained, awesome... need more videos on Bayesian stats
@johnpalmer8538 2 years ago
Absolutely incredible video. Your explanation of concepts is crystal clear and easy to follow. Amazing job man :)
@ChocolateMilkCultLeader 2 years ago
Fantastic
@joelrubinson9973 2 years ago
Very interesting, and relevant to my work on creating uber models of advertising effectiveness, where the signals come from different walled gardens and where A/B test results can vary for the same publisher (e.g. Facebook) and the same brand across campaigns. True finding, or statistical variation that could be 'shrunk' by prior distributions on the parameters in the uber model?
@klam77 1 year ago
Excellent!
@mathematicalninja2756 9 months ago
They asked me this today in a Flipkart interview.
@sabinewien2665 10 months ago
Hi, great video 😊 I'm working on my thesis in Electrical Engineering and your video helped me a lot. Could you please share your sources so I can use them too? Thank you very much!
@TheBestTuber396 2 years ago
Can you do a video on how to understand power laws
@hameddadgour 2 years ago
Great explanation!
@victorviana4012 2 years ago
Great Video!!!! Do you know a study reference for code implementation of this concept?
@ericostring8182 2 years ago
This is awesome stuff
@rsilveira79 2 years ago
Very clear explanations
@chadgregory9037 2 years ago
I feel like TensorFlow Probability is such a huge deal, like omg a huge deal
@taotaotan5671 2 years ago
I think the posterior expectation may be in the sweet spot.
@OwenMcKinley 2 years ago
If I had the ability, I'd award you a Nobel prize
@nmtsmea 1 year ago
Isn't the posterior the probability of beta given the data? That contradicts the other video you made.
@Darkev77 2 years ago
What exactly does he mean at 9:14 by "we will sample a new beta vector from that posterior distribution"? Could someone clarify, please?
@hristovassilev7812 2 years ago
It means you "learn" the distribution of beta from the data. You can then sample from that distribution (e.g. with MCMC). If the variance of the samples is high, the model is uncertain about its prediction. That's my understanding, at least.
@Darkev77 2 years ago
@@hristovassilev7812 I thought that after maximum likelihood estimation you get a single set of weights and a bias (B0 and B1) that maximize the probability of observing your data. So how does that turn into a "distribution"? When did you generate the distribution you can sample a "new beta vector" from? Thanks!
@hristovassilev7812 2 years ago
@@Darkev77 What you describe is correct for maximum likelihood estimation. But since the video refers to a Bayesian method of estimating the parameters, the idea is a bit different: you treat B0 and B1 themselves as random variables. You generate the distribution of B0 and B1 using Bayes' theorem: p(B | data) = p(data | B) * p(B) / p(data)
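The Bayes-theorem step in this thread can be sketched with a tiny random-walk Metropolis sampler for (B0, B1) in a linear model. This is illustrative only: the video does not specify the sampler, and the priors, noise variance, and step size below are made-up assumptions.

```python
import numpy as np

# Treat (B0, B1) as random variables and draw samples from the posterior
# p(B | data) ∝ p(data | B) * p(B) with a minimal Metropolis sampler.
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)   # true B0=1, B1=2

def log_post(b):
    b0, b1 = b
    log_prior = -0.5 * (b0**2 + b1**2) / 10.0        # N(0, 10) priors (assumed)
    resid = y - (b0 + b1 * x)
    log_lik = -0.5 * np.sum(resid**2) / 0.25         # noise variance 0.25 (assumed known)
    return log_prior + log_lik

samples, b = [], np.zeros(2)
for _ in range(5000):
    prop = b + rng.normal(scale=0.1, size=2)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(b):
        b = prop                                     # accept the move
    samples.append(b)
samples = np.array(samples)[1000:]                   # drop burn-in

# Each row is one "new beta vector" sampled from the posterior; the spread
# of the samples is the model's uncertainty about B0 and B1.
print(samples.mean(axis=0), samples.std(axis=0))
```

The posterior mean should land near the true (1, 2), and the per-parameter standard deviation is exactly the "variance of the samples" the reply above uses as an uncertainty signal.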
@Stem667 1 year ago
Wouldn't the diversity only matter when the likelihood function has multiple local minima? Otherwise, why "hedge our bets" if the likelihood-maximising values of beta are all very similar?
@geoffreyanderson4719 2 years ago
Ritvikmath: building on your nice ideas... Q1: Is there a Jupyter notebook of this Bayes sampling model yet? It's a nice concept. Q2: The University of Virginia's Marketing Analytics course found that recency of purchases (which you explicitly incorporated) and frequency of purchases are both predictive of churn. Let's make an ML model using Bayes sampling, but now incorporating both these factors. Yet another AI research direction staring us in the face...
@djlinux64 2 years ago
How do you measure model performance if you are changing models every day?
@juneyang6534 1 year ago
Perhaps we can think of the evolving models as one dynamic model and check the performance on a daily basis?
@ElbertMaata-cc2uq 11 months ago
Kindly make the theme dark so that the illustrations are clearer. Thank you.