
Linear Regression Algorithm In Python From Scratch [Machine Learning Tutorial] 

Dataquest
59K subscribers
28K views

We'll build a linear regression model from scratch, covering both the theory and the math. Linear regression is one of the most popular machine learning algorithms, and implementing it in Python will help you understand how it works.
First, we'll cover the theory and the equation used to calculate the coefficients. Then we'll implement the equation in Python. We'll finish by calculating the r-squared value to see how well our regression fits the data.
We'll be using data from the Olympics to implement our algorithm. We'll try to predict how many medals a country will earn based on how many athletes it enters into the Olympics.
You can find the code and data at github.com/dataquestio/projec... .
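The approach described above (coefficients via the normal equation, then r-squared) can be sketched in a few lines of NumPy. The numbers below are toy values standing in for the Olympics dataset, not the video's actual data:

```python
import numpy as np

# Toy values standing in for the Olympics data (athletes entered, medals won)
athletes = np.array([10.0, 40.0, 80.0, 120.0])
medals = np.array([1.0, 5.0, 11.0, 17.0])

# Design matrix: a column of ones (for the intercept) plus the predictor
X = np.column_stack([np.ones_like(athletes), athletes])
y = medals

# Normal equation: B = (X^T X)^(-1) X^T y
B = np.linalg.inv(X.T @ X) @ X.T @ y

# r-squared: 1 - SSE/SST, where SSE is the residual sum of squares
predictions = X @ B
sse = np.sum((y - predictions) ** 2)
sst = np.sum((y - y.mean()) ** 2)
r_squared = 1 - sse / sst
```

In production code, `np.linalg.solve` or `np.linalg.lstsq` would be preferable to an explicit inverse, but the inverse mirrors the equation as written.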
Chapters
00:00 Intro
00:20 Theory and equation
14:25 Python implementation
20:02 r-squared calculation
---------------------------------
Join 1M+ Dataquest learners today!
Master data skills and change your life.
Sign up for free: bit.ly/3O8MDef

Published: Jul 10, 2024

Comments: 47
@ninobach7456
@ninobach7456 4 months ago
I recommend this video for those who understand the general concept of linear regression, but want to know what happens 'under the hood'
@namrata_roy
@namrata_roy 1 year ago
Amazing tutorial. Difficult concepts were explained with such ease. Kudos team Dataquest!
@Mara51029
@Mara51029 4 months ago
This is an absolutely amazing video. I can’t wait to see more great work
@sulaimansalisu5833
@sulaimansalisu5833 1 year ago
Very explicit. You are a wonderful teacher. Thanks so much
@dataprofessor_
@dataprofessor_ 1 year ago
This is a great tutorial. Beautifully explained.
@BTStechnicalchannel
@BTStechnicalchannel 2 years ago
Very well explained!!!
@learn-with-lee
@learn-with-lee 2 years ago
Thank you. It was well explained.
@zheshipeng
@zheshipeng 1 year ago
Thanks so much. Better than any E-books 🙂
@anfedoro
@anfedoro 1 year ago
Great and very clear explanation. The only point missed in the end is the regression visualisation 😉. Nice to have both initial data and the regression plotted
@abidson690
@abidson690 1 year ago
Thanks so much for the Video
@JohnJustus
@JohnJustus 3 months ago
Perfect, thanks a lot
@HIEUHUYNHUC
@HIEUHUYNHUC 7 months ago
Today you will be my teacher. I'm from Vietnam. Thank you so much
@dembobademboba6924
@dembobademboba6924 5 months ago
great job
@user-ql7de7ud6q
@user-ql7de7ud6q 5 months ago
THANKS A LOT🤯
@guilhermesaraiva3846
@guilhermesaraiva3846 10 months ago
Thanks for the lesson, but just a question: the x/y train and test split wasn't made while building the model. Why isn't it necessary, and if it is necessary, how would it be done? Thanks
@AndresIniestaLujain
@AndresIniestaLujain 1 year ago
Would the solution for B be considered a least squares solution? Also, if we wanted to construct, say, a 95% confidence interval for each coefficient, would we take B for intercept, athletes, and prev_medals (-1.96, 0.07, 0.73) and multiply them by their respective standard errors and t-scores? Would the formula be as follows: B(k) * t(n-k-1, alpha = 0.05/2) * SE(B(k)), or does this require more linear algebra? Great tutorial btw, thanks for the help.
@television80
@television80 7 months ago
Hi Vikas, which is better for GLM models in Python: the sklearn or statsmodels package?
@joshwallenberg337
@joshwallenberg337 9 months ago
Do you have an example like this with multiple x-values or features?
@cclementson1986
@cclementson1986 5 months ago
Is there a reason you chose to implement the normal equation over gradient descent? I'm quite curious as I am more familiar with gradient descent.
@fassstar
@fassstar 1 month ago
One correction, not relevant to the actual regression, but it should be said nonetheless. The number of medals one athlete can win is not limited to one; rather, it is limited to the number of events the athlete competes in (maximum of one per event). In fact, numerous athletes have won multiple medals in one Olympics. Just wanted to clarify that. Of course, beyond a certain number of athletes, it will be impossible for a smaller team to compete in as many events as the larger team, making it more likely that the larger team wins more medals.
@jeanb2682
@jeanb2682 9 months ago
Hey, that is a beautiful demonstration of linear regression. Thank you. But I didn't understand where prev_medals comes from when building the X matrix at the beginning. Can someone explain how those values appear inside the X matrix?
@oluwamuyiwaakerele4287
@oluwamuyiwaakerele4287 1 year ago
Hi, this is a wonderful explanation. Great job putting this together. The only thing that really confuses me is how you factor in previous medals in the predictive model. What would that look like in the linear equation at 1:54?
@Dataquestio
@Dataquestio 1 year ago
You would add a second term b2x2, so the full equation would be b0 + b1x1 + b2x2. x1 would be athletes, x2 is previous medals. Then you'd have separate coefficients (b1 and b2) for each.
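A quick sketch of what the reply describes: previous medals just become another column in the design matrix, and the same normal equation returns one coefficient per column (toy numbers, not the video's data):

```python
import numpy as np

# Toy values: two features (x1 = athletes, x2 = prev_medals) plus the target
athletes = np.array([10.0, 40.0, 80.0, 120.0])
prev_medals = np.array([0.0, 4.0, 10.0, 15.0])
medals = np.array([1.0, 5.0, 11.0, 17.0])

# Intercept column, then one column per feature
X = np.column_stack([np.ones(len(athletes)), athletes, prev_medals])

# Same normal equation as for one feature; B now holds [b0, b1, b2]
B = np.linalg.inv(X.T @ X) @ X.T @ medals
```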
@iamgarriTech
@iamgarriTech 8 months ago
Why do we need to add those "1"s when solving the matrix?
@borutamena8207
@borutamena8207 1 year ago
tnx sir
@bomidilakshmimadhavan9501
@bomidilakshmimadhavan9501 1 year ago
Can you please make a video demonstrating the multivariate regression analysis with the following information taken into consideration? Performs multiple linear regression trend analysis of an arbitrary time series. OPTIONAL: error analysis for regression coefficients (uses standard multivariate noise model). Form of general regression trend model used in this procedure (t = time index = 0,1,2,3,...,N-1): T(t)=ALPHA(t) + BETA(t)*t + GAMMA(t)*QBO(t) + DELTA(t)*SOLAR(t) + EPS1(t)*EXTRA1(t) + EPS2(t)*EXTRA2(t) + RESIDUAL_FIT(t), where ALPHA represents the 12-month seasonal fit, BETA is the 12-month seasonal trend coefficient, RESIDUAL_FIT(t) represents the error time series, and GAMMA, DELTA, EPS1, and EPS2 are 12-month coefficients corresponding to the ozone driving quantities QBO (quasi-biennial oscillation), SOLAR (solar-UV proxy), and proxies EXTRA1 and EXTRA2 (for example, these latter two might be ENSO, vorticity, geopotential heights, or temperature), respectively. The general model above assumes simple linear relationships between T(t) and surrogates which is hopefully valid as a first approximation. Note that for total ozone trends based on chemical species such as involving Chlorine, the trend term BETA(t)*t could be replaced (ignored by setting m2=0 in the procedure call), with EPS1(t)*EXTRA1(t) where EXTRA1(t) is the chemical proxy time series. This procedure assumes the following form for the coefficients ALPHA, BETA, GAMMA,...) in effort to approximate realistic seasonal dependence of sensitivity between T(t) and surrogate. The expansion shown below is for ALPHA(t) - similar expansions for BETA(t), GAMMA(t), DELTA(t), EPS1(t), and EPS2(t): ALPHA(t) = A0
@josuecurtonavarro8979
@josuecurtonavarro8979 1 year ago
Hi guys! Very interesting indeed! There is one thing I don't understand though. The identity matrix, as you mentioned, behaves like a one in matrix multiplication when you multiply it with a matrix of the same size. But in this precise case (around 13:08) the matrix B doesn't have the same size. So how come you can eliminate the identity matrix from the equation here? Thanks!
@Dataquestio
@Dataquestio 1 year ago
Hi Josué - I shouldn't have said "of the same size". Multiplying the identity matrix by another matrix behaves like normal matrix multiplication. So if the identity matrix (I) is 2x2, and you multiply by a 2x1 matrix B, you end up with a 2x1 matrix (equal to B). The number of columns in the first matrix you multiply has to match the number of rows in the second matrix. And the final matrix has the same row count as the first matrix, and the same column count as the second matrix.
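The shape rules in the reply can be checked directly with NumPy:

```python
import numpy as np

I = np.eye(2)                  # 2x2 identity matrix
B = np.array([[3.0], [5.0]])   # 2x1 matrix

# (2x2) @ (2x1) is a valid product: columns of I match rows of B,
# and the result is a 2x1 matrix equal to B
result = I @ B
```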
@hameedhhameed1996
@hameedhhameed1996 1 year ago
It is such a fantastic explanation of Linear Regression. My question is, is there any possibility that we can't obtain the inverse of matrix X?
@Dataquestio
@Dataquestio 1 year ago
Hi Hameed - yes, some matrices are singular, and cannot be inverted. This happens when columns or rows are linear combinations of each other. In those cases, ridge regression is a good alternative. Here is a ridge regression explanation - ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mpuKSovz9xM.html .
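A minimal sketch of the ridge idea the reply mentions: adding a small penalty lam * I makes X^T X invertible even when one column is an exact multiple of another (toy data; note that real implementations usually leave the intercept unpenalized):

```python
import numpy as np

# Toy data with an exactly repeated (scaled) column, so X^T X is singular
x1 = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones(4), x1, 2 * x1])  # third column = 2 * second
y = np.array([2.0, 4.0, 6.0, 8.0])

# Ridge normal equation: B = (X^T X + lam * I)^(-1) X^T y
lam = 0.1  # penalty strength; a hyperparameter you would tune
B_ridge = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1])) @ X.T @ y
```

With lam = 0, the inversion above would raise a singular-matrix error on this data.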
@sunilnavadia8203
@sunilnavadia8203 1 year ago
In the predictions we got values like 0.24, -1.6, -1.39. Can you explain whether -1.6 medals is valid? Or do I need to use some other dataset for regression, like house price prediction? Can you suggest a dataset where I can apply ridge regression?
@Dataquestio
@Dataquestio 1 year ago
Hi Sunil - with the way linear regression works, you can get numbers that don't make sense with the dataset. The best thing to do is to truncate the range (anything below 0 gets set to 0). Other algorithms that don't make assumptions about linearity can avoid this problem (like decision trees, k-nn, etc).
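Truncating the range, as the reply suggests, is a one-liner with NumPy (the first three values below come from the commenter's example; the last is made up):

```python
import numpy as np

predictions = np.array([0.24, -1.6, -1.39, 12.5])

# Negative medal counts are impossible, so clamp everything below 0 to 0
clipped = np.clip(predictions, 0, None)
```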
@sunilnavadia8203
@sunilnavadia8203 1 year ago
@@Dataquestio Thank you for your message. Since the predictions for this data (medals) come out in decimals, do you have any suggestion for another dataset where predictions would make sense using ridge regression?
@yousif533
@yousif533 2 years ago
Thank you for this video. Could you please share the ppt slides of this lesson?
@Dataquestio
@Dataquestio 1 year ago
Hi Yousif - this was done using video animations, so there aren't any powerpoint slides, unfortunately. -Vik
@sunilnavadia6347
@sunilnavadia6347 1 year ago
Hi Team... Very well explained Linear Regression from scratch... Do you have any video for Ridge Regression from Scratch using Python?
@Dataquestio
@Dataquestio 1 year ago
Hi Sunil - we don't. I'll look into doing ridge regression in a future video! -Vik
@sunilnavadia8203
@sunilnavadia8203 1 year ago
@@Dataquestio Thank you
@manyes7577
@manyes7577 1 year ago
@@Dataquestio thanks you are awesome
@abidson690
@abidson690 1 year ago
@@Dataquestio thanks
@im4485
@im4485 9 months ago
This guy is old, young, sleepy and awake all at the same time.
@gabijakielaite3179
@gabijakielaite3179 1 year ago
I am wondering, is it okay to have a model which predicts that a country will receive a negative number of medals? Isn't that just impossible?
@Dataquestio
@Dataquestio 1 year ago
This is one of the weaknesses of linear regression. Due to the y-intercept term, you can get predictions that don't make sense in the real world. An easy solution is to replace negative predictions with 0.
@adityakakade9172
@adityakakade9172 6 months ago
I'd rather use statsmodels than this method, which makes things complex
@oluwamuyiwaakerele4287
@oluwamuyiwaakerele4287 1 year ago
I guess another question I have is how to invert a matrix
@Dataquestio
@Dataquestio 1 year ago
Hi Oluwamuyiwa - there are a few ways to invert a matrix. The easiest to do by hand is Gaussian elimination - en.wikipedia.org/wiki/Gaussian_elimination . That said, there isn't a lot of benefit to knowing how to invert a matrix by hand, so I wouldn't worry too much about it.
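In code you rarely invert a matrix explicitly anyway; np.linalg.solve, which factors the matrix with LU decomposition (essentially Gaussian elimination with pivoting), is the usual route. A small sketch with made-up numbers:

```python
import numpy as np

# Made-up 2x2 system A x = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Explicit inverse, as the textbook formula reads
x_via_inv = np.linalg.inv(A) @ b

# np.linalg.solve factors A instead of inverting it; faster and
# more numerically stable, same answer
x_via_solve = np.linalg.solve(A, b)
```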
@HIEUHUYNHUC
@HIEUHUYNHUC 7 months ago
Sorry teacher, I guess you confused SSR with SSE: R2 = 1 - (SSE/SST) = SSR/SST
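For readers comparing the two formulas in this comment: with SSE as the residual sum of squares and SSR as the explained (regression) sum of squares, SST = SSR + SSE holds for a least-squares fit with an intercept, so 1 - SSE/SST and SSR/SST agree. A quick numeric check with toy data:

```python
import numpy as np

# Toy data, close to (but not exactly on) a line
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])

# Ordinary least-squares fit with an intercept
X = np.column_stack([np.ones_like(x), x])
B = np.linalg.inv(X.T @ X) @ X.T @ y
pred = X @ B

sse = np.sum((y - pred) ** 2)         # residual (error) sum of squares
ssr = np.sum((pred - y.mean()) ** 2)  # explained (regression) sum of squares
sst = np.sum((y - y.mean()) ** 2)     # total sum of squares

# For OLS with an intercept, SST = SSR + SSE, so both forms of R^2 agree
r2_from_sse = 1 - sse / sst
r2_from_ssr = ssr / sst
```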