
Bayesian Linear Regression : Data Science Concepts 

ritvikmath
159K subscribers
76K views

The crazy link between Bayes' Theorem, Linear Regression, LASSO, and Ridge!
LASSO Video : • Lasso Regression
Ridge Video : • Ridge Regression
Intro to Bayesian Stats Video : • What the Heck is Bayes...
My Patreon : www.patreon.com/user?u=49277905
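The punchline, as a quick sketch: with a Gaussian prior on the coefficients, the Bayesian MAP estimate is exactly the ridge solution. A minimal numpy illustration (toy data and names are my own, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
sigma, tau = 1.0, 0.5  # noise std dev, prior std dev on each coefficient
y = X @ beta_true + sigma * rng.normal(size=n)

# Ridge: argmin ||y - Xb||^2 + lam * ||b||^2 has closed form (X'X + lam*I)^{-1} X'y
lam = sigma**2 / tau**2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian MAP under b ~ N(0, tau^2 I) and y | b ~ N(Xb, sigma^2 I):
# posterior mode = (X'X + (sigma^2/tau^2) I)^{-1} X'y -- the same matrix equation
beta_map = np.linalg.solve(X.T @ X + (sigma**2 / tau**2) * np.eye(p), X.T @ y)

print(np.allclose(beta_ridge, beta_map))  # True: ridge = MAP with a Gaussian prior
```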

Published: Mar 30, 2021

Comments: 171
@brycedavis5674 · 3 years ago
As soon as you explained the results from the Bayesian approach, my jaw was wide open for like 3 minutes. This is so interesting.
@kunalchakraborty3037 · 3 years ago
Read it in a book. Didn't understand jack shit back then. Your videos are awesome: rich, small, concise. Please make a video on Linear Discriminant Analysis and how it's related to Bayes' theorem. This video will be saved in my data science playlist.
@tobias2688 · 3 years ago
This video is a true gem, informative and simple at once. Thank you so much!
@ritvikmath · 3 years ago
Glad it was helpful!
@sudipanpaul805 · 11 months ago
Love you, bro, I got my joining letter from NASA as a Scientific Officer-1. Believe me, your videos always helped me in my research work.
@icybrain8943 · 3 years ago
Regardless of how they were really devised initially, seeing the regularization formulas pop out of the Bayesian linear regression model was eye-opening - thanks for sharing this insight.
@dennisleet9394 · 2 years ago
Yes. This really blew my mind. Boom.
@chenqu773 · 1 year ago
For me, the coolest thing about statistics is that every time I do a refresh on these topics, I get some new ideas or understanding. It's lucky that I came across this video again after a year; it also explains why we need to "normalize" the X (0-centered, with stdev = 1) before we feed it into an MLP model, if we use regularization terms in the layers.
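On that normalization point: a regularized fit puts the same prior scale on every coefficient, so the features should be on comparable scales first. A small sketch with sklearn (made-up data; Ridge here is a stand-in for any model with an L2 penalty):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two useful features on wildly different scales
X = np.column_stack([rng.normal(size=200), 1000 * rng.normal(size=200)])
y = X[:, 0] + X[:, 1] / 1000 + rng.normal(scale=0.1, size=200)

# Same alpha, but without scaling the penalty hits the two coefficients unevenly
ridge_raw = Ridge(alpha=10.0).fit(X, y)
ridge_std = make_pipeline(StandardScaler(), Ridge(alpha=10.0)).fit(X, y)

print(ridge_raw.coef_)      # distorted by the scale mismatch
print(ridge_std[-1].coef_)  # comparable coefficients after standardizing
```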
@fluidice1656 · 1 year ago
This is my favorite video out of a large set of fantastic videos that you have made. It just brings everything together in such a brilliant way. I keep getting back to it over and over again. Thank you so much!
@MoumitaHanra · 2 years ago
Best of all videos on Bayesian regression; other videos are so boring and long, but this one has quality as well as ease of understanding. Thank you so much!
@rajanalexander4949 · 1 year ago
This is incredible. Clear, well paced and explained. Thank you!
@rishabhbhatt7373 · 1 year ago
Really good explanation. I really like how you gave context and connected all the topics together, and it makes perfect sense, while maintaining the perfect balance between math and intuition. Great work. Thank you!
@davidelicalsi5915 · 1 year ago
Brilliant and clear explanation. I was struggling to grasp the main idea for a machine learning exam, but your video was a blessing. Thank you so much for the amazing work!
@mohammadkhalkhali9635 · 3 years ago
Man I'm going to copy-paste your video whenever I want to explain regularization to anyone! I knew the concept but I would never explain it the way you did. You nailed it!
@sambacon2141 · 3 years ago
Man! What a great explanation of Bayesian Stats. It's all starting to make sense now. Thank you!!!
@tj9796 · 3 years ago
Your videos are great. Love the connections you make so that stats is intuitive as opposed to plug and play formulas.
@mohammadmousavi1 · 1 year ago
Unbelievable: you explained linear regression, explained Bayesian stats in simple terms, and showed the connection, all in under 20 min ... Perfect
@jlpicard7 · 6 months ago
I've seen everything in this video many, many times, but no one had done as good a job as this in pulling these ideas together in such an intuitive and understandable way. Well done and thank you!
@fktx3507 · 2 years ago
Thanks, man. A really good and concise explanation of the approach (together with the video on Bayesian statistics).
@feelmiranda · 2 years ago
Your videos are a true gem, and an inspiration even. I hope to be as instructive as you are if I ever become a teacher!
@qiguosun129 · 2 years ago
Excellent tutorial! I have applied ridge as the loss function in different models, but this is the first time I've understood the mathematical meaning of lambda. It is really cool!
@sebastianstrumbel4335 · 3 years ago
Awesome explanation! Especially the details on the prior were so helpful!
@ritvikmath · 3 years ago
Glad it was helpful!
@swapnajoysaha6982 · 4 months ago
I used to be afraid of Bayesian Linear Regression until I saw this vid. Thank you sooo much
@ritvikmath · 4 months ago
Awesome! You're welcome.
@dylanwatts4463 · 3 years ago
Amazing video! Really clearly explained! Keep em coming!
@ritvikmath · 3 years ago
Glad you liked it!
@SaiVivek15 · 2 years ago
This video is super informative! It gave me the actual perspective on regularization.
@FRequena · 3 years ago
Super informative and clear lesson! Thank you very much!
@Structuralmechanic · 5 months ago
Amazing: you kept it simple and showed how the regularization terms in linear regression originated from the Bayesian approach!! Thank you!
@mateoruizalvarez1733 · 5 months ago
Crystal clear! Thank you so much; the explanation is very structured and detailed.
@antaresd1 · 9 months ago
Thank you for this amazing video. It clarified many things for me!
@user-or7ji5hv8y · 2 years ago
This is truly cool. I had the same thing with the lambda. It’s good to know that it was not some engineering trick.
@umutaltun9049 · 2 years ago
It just blew my mind too. I can feel you, brother. Thank you!
@TejasEkawade · 8 months ago
This was an excellent introduction to Bayesian Regression. Thanks a lot!
@chiawen. · 9 months ago
This is sooo clear. Thank you so much!
@AntonioMac3301 · 2 years ago
This video is amazing!!! Such a helpful and clear explanation.
@marcogelsomini7655 · 1 year ago
Very cool, the link you explained between regularization and the prior.
@ezragarcia6910 · 1 year ago
My mind exploded with this video. Thank you.
@joachimrosenberger2109 · 1 year ago
Thanks a lot! Great! I am reading Elements of Statistical Learning and did not understand what they were talking about. Now I get it.
@dodg3r123 · 3 years ago
Love this content! More examples like this are appreciated
@ritvikmath · 3 years ago
More to come!
@julissaybarra4031 · 7 months ago
This was incredible, thank you so much.
@JohnJones-rp2wz · 3 years ago
Awesome explanation!
@narinpratap8790 · 3 years ago
Awesome video. I didn't realize that the L1, L2 regularization had a connection with the Bayesian framework. Thanks for shedding some much needed light on the topic. Could you please also explain the role of MCMC Sampling within Bayesian Regression models? I recently implemented a Bayesian Linear Regression model using PyMC3, and there's definitely a lot of theory involved with regards to MCMC NUTS (No U-Turn) Samplers and the associated hyperparameters (Chains, Draws, Tune, etc.). I think it would be a valuable video for many of us. And of course, keep up the amazing work! :D
@ritvikmath · 3 years ago
Good suggestion!
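A minimal PyMC3 sketch of that kind of model, for anyone curious (toy data and variable names are my own, not from the video; draws, tune, and chains are the sampler knobs mentioned above, and NUTS is PyMC3's default sampler for continuous parameters):

```python
import numpy as np
import pymc3 as pm

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 0.0, -2.0]) + rng.normal(scale=0.5, size=200)

with pm.Model():
    beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=3)  # Gaussian prior -> ridge-style shrinkage
    sigma = pm.HalfNormal("sigma", sigma=1.0)             # noise scale
    pm.Normal("obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)
    trace = pm.sample(draws=2000, tune=1000, chains=4)    # NUTS under the hood
```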
@benjtheo414 · 11 months ago
This was awesome, thanks a lot for your time :)
@alim5791 · 2 years ago
Thanks, that was a good one. Keep up the good work!
@javiergonzalezarmas8250 · 1 year ago
Incredible explanation!
@shantanuneema · 3 years ago
You got a subscriber. Awesome explanation. I spent hours learning it from other sources, but with no success. You are just great.
@nirmalpatil5370 · 2 years ago
This is brilliant, man! Brilliant! Literally solved where the lambda comes from!
@Maciek17PL · 1 year ago
You are a great teacher, thank you for your videos!!
@Life_on_wheeel · 3 years ago
Thanks for the video. It's really helpful. I was trying to understand where the regularization terms come from. Now I get it. Thanks!
@dirknowitzki9468 · 2 years ago
Your videos are a Godsend!
@juliocerono5193 · 3 months ago
At last!! I could find an explanation for the lasso and ridge regression lambdas!!! Thank you!!!
@ritvikmath · 3 months ago
Happy to help!
@juliocerono_stone5365 · 3 months ago
At last!!! Now I can see what lambda was doing in the lasso and ridge regression!! Great video!!
@ritvikmath · 3 months ago
Glad you liked it!
@FB0102 · 1 year ago
Truly excellent explanation; well done.
@curiousobserver2006 · 1 year ago
This blew my mind. Thanks!
@brandonjones8928 · 3 months ago
This is an awesome explanation
@caiocfp · 3 years ago
Thank you for sharing this fantastic content.
@ritvikmath · 3 years ago
Glad you enjoy it!
@mahdijavadi2747 · 2 years ago
Thanks a lottttt! I had so much difficulty understanding this.
@chuckleezy · 1 year ago
You are so good at this; this video is amazing!
@ritvikmath · 1 year ago
Thank you so much!!
@rmiliming · 1 year ago
Thanks a lot for this clear explanation!
@chenjus · 2 years ago
This is the best explanation of L1 and L2 I've ever heard
@undertaker7523 · 1 year ago
You are the go-to for me when I need to understand topics better. I understand Bayesian parameter estimation thanks to this video! Any chance you can do something on the difference between Maximum Likelihood and Bayesian parameter estimation? I think anyone that watches both of your videos will be able to pick up the details but seeing it explicitly might go a long way for some.
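The one-line contrast, for anyone skimming (in the usual notation; the prior is what separates the two):

```latex
\hat{\beta}_{\text{MLE}} = \arg\max_{\beta} \; P(y \mid X, \beta)
\qquad\text{vs.}\qquad
\hat{\beta}_{\text{MAP}} = \arg\max_{\beta} \; P(y \mid X, \beta)\, P(\beta)
```

Taking the negative log of the MAP objective is exactly where the penalty term in the video comes from.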
@amirkhoutir2649 · 1 year ago
Thank you so much for the great explanation.
@dmc-au · 1 year ago
Wow, killer video. This was a topic where it was especially nice to see everything written on the board in one go. Was cool to see how a larger lambda implies a more pronounced prior belief that the parameters lie close to 0.
@ritvikmath · 1 year ago
I also think it’s pretty cool 😎
@vipinamar8323 · 2 years ago
Great video with a very clear explanation. Could you also do a video on Bayesian logistic regression?
@SamuelMMuli-sy6wk · 2 years ago
Wonderful stuff! Thank you!
@kennethnavarro3496 · 2 years ago
Thank you very much. Pretty helpful video!
@souravdey1227 · 1 year ago
Can you please, please do a series on the categorical distribution, multinomial distribution, Dirichlet distribution, Dirichlet process, and finally non-parametric Bayesian tensor factorisation, including clustering of streaming data. I will personally pay you for this. I mean it!! There are a few videos on these things on YouTube; some are good, some are way too high-level. But no one can explain the way you do. This simple video has such profound importance!!
@matthewkumar7756 · 2 years ago
Mind blown on the connection between regularization and priors in linear regression
@samirelamrany5323 · 1 year ago
Perfect explanation. Thank you!
@ThePiotrekpecet · 1 year ago
There is an error at the beginning of the video: in frequentist approaches, X is treated as non-random covariate data and y as the random part, so the high variance of OLS should be expressed as small changes to y => big changes to the OLS estimator. Big changes to the OLS estimator caused by changes to the covariate matrix are more a matter of the non-robustness of OLS w.r.t. outlier contamination. Also, the lambda should be 1/(2τ²), not σ²/τ², since ln(P(β)) = -p ln(τ√(2π)) - ||β||₂²/(2τ²). Overall this was very helpful, cheers!
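For reference, the two scalings are consistent: a sketch of the MAP objective under β ~ N(0, τ²I) and y ~ N(Xβ, σ²I) gives λ = 1/(2τ²) on the penalty when the squared error keeps its 1/(2σ²) factor, and multiplying the whole objective by 2σ² (which does not move the argmin) yields the video's λ = σ²/τ²:

```latex
\hat{\beta}_{\text{MAP}}
= \arg\min_{\beta}\; \frac{1}{2\sigma^{2}} \lVert y - X\beta \rVert_{2}^{2}
  + \frac{1}{2\tau^{2}} \lVert \beta \rVert_{2}^{2}
= \arg\min_{\beta}\; \lVert y - X\beta \rVert_{2}^{2}
  + \frac{\sigma^{2}}{\tau^{2}} \lVert \beta \rVert_{2}^{2}
```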
@houyao2147 · 3 years ago
What a wonderful explanation!!
@ritvikmath · 3 years ago
Glad you think so!
@petmackay · 3 years ago
Most insightful! L1 as Laplacian toward the end was a bit skimpy, though. Maybe I should watch your LASSO clip. Could you do a video on elastic net? Insight on balancing the L1 and L2 norms would be appreciated.
@danielwiczew · 2 years ago
Yeah, elastic net and a comparison to ridge/lasso would be very helpful.
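In the meantime, a minimal sklearn sketch of elastic net (made-up data; l1_ratio is the knob that balances the L1 and L2 penalties, which in the Bayesian picture corresponds to mixing a Laplace prior with a Gaussian one):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=100)

# alpha is the overall penalty strength; l1_ratio=0 is pure ridge, 1 is pure lasso
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(model.coef_)  # the L1 part can zero out irrelevant coefficients entirely
```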
@karannchew2534 · 1 year ago
Notes for my future revision. *Prior β* (10:30): the value of the prior β is normally distributed. A by-product of using a normal distribution is regularisation, because the prior values of β won't stray too far (large or small) from the mean. Regularisation keeps the values of β small.
@axadify · 2 years ago
Such a nice explanation. I mean, that's the first time I actually understood it.
@Aviationlads · 8 months ago
Great video, do you have some sources I can use for my university presentation? You helped me a lot 🙏 thank you!
@jairjuliocc · 3 years ago
Thank you! I saw this before but I didn't understand it. Please, where can I find the complete derivation? And maybe you can do a complete series on this topic.
@kaartiki1451 · 3 months ago
Legendary video
@j29Productions · 5 months ago
You are THE LEGEND
@alexanderbrandmayr7408 · 3 years ago
Great video!!
@millch2k8 · 1 year ago
I'd never considered a Bayesian approach to linear regression let alone its relation to lasso/ridge regression. Really enlightening to see!
@ritvikmath · 1 year ago
Thanks!
@imrul66 · 1 year ago
Great video. The relation between the prior and the LASSO penalty was a "wow" moment for me. It would be helpful to see an actual computation example in Python or R. A common problem I see in Bayesian lectures is too much focus on the math rather than on showing how much the resulting parameters actually differ, especially on when to consider the Bayesian approach over OLS.
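In that spirit, a quick sketch of the kind of comparison asked for above (toy data of my own; sklearn's BayesianRidge is a ridge-style Bayesian model with the prior scales estimated from the data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, BayesianRidge

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 10))                       # few samples, many features: OLS gets noisy
y = 3.0 * X[:, 0] + rng.normal(scale=2.0, size=50)  # only the first feature matters

ols = LinearRegression().fit(X, y)
bayes = BayesianRidge().fit(X, y)

print(np.round(ols.coef_, 2))    # noise features pick up spurious weight
print(np.round(bayes.coef_, 2))  # shrinkage pulls them toward 0
```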
@yodarocco · 1 year ago
In the end I finally understood it too. A hint for people who also struggle with Bayesian regression like me: do a Bayesian linear regression in Python from any tutorial you find online; you are going to understand, trust me. I think one of the initial problems for a person facing the Bayesian approach is the fact that you are actually obtaining a posterior *of the weights*! Now it looks kind of obvious, but at the beginning I was really stuck; I could not understand what the posterior was actually doing.
@abdelkaderbousabaa7020 · 2 years ago
Excellent, thank you!
@louisc2016 · 2 years ago
Fantastic! You are my savior!
@rachelbarnes7469 · 3 years ago
Thank you so much for this.
@AnotherBrickinWall · 1 year ago
Many thanks! I was feeling the same discomfort about the origin of these...
@godse54 · 3 years ago
Nice, I never thought of that 👍🏼👍🏼
@datle1339 · 1 year ago
Really great, thank you!
@haeunroh8945 · 2 years ago
Your videos are awesome, so much better than my prof's.
@manishbhanu2568 · 1 year ago
You are a great teacher!!!🏆🏆🏆
@ritvikmath · 1 year ago
Thank you! 😃
@chenqu773 · 3 years ago
Thank you very much
@jaivratsingh9966 · 2 years ago
Excellent
@vinceb8041 · 3 years ago
Amazing! But where did Ridge and Lasso start from? Were they invented with Bayesian statistics as a starting point, or is that a duality that came later?
@hameddadgour · 1 year ago
Holy shit! This is amazing. Mind blown :)
@shipan5940 · 2 years ago
Max( P(this is the best vid explaining these regressions | YouTube) )
@yulinliu850 · 3 years ago
Beautiful!
@ritvikmath · 3 years ago
Thank you! Cheers!
@TK-mv6sq · 2 years ago
thank you!
@alish2950 · 18 days ago
I wonder if this is related to BIC, Bayesian Information Criterion. It's about choosing the simpler model with fewer variables, similar to regularization.
@AYANShred123 · 2 years ago
Wonderfully explained! Mathematicians like you deserve more subscribers!
@julianneuer8131 · 3 years ago
Excellent!
@ritvikmath · 3 years ago
Thank you! Cheers!
@debaratnanath955 · 3 years ago
Great video. A quick question though: if we consider β ~ N(0, τ²), how do we explain why β can't be exactly zero in the case of ridge?
@convex9345 · 3 years ago
mind boggling
@JorgeGomez-kt3oq · 10 months ago
Great video! Just a question: where can I find a worked example of the algebra?