
Linear Regression Cost Function | Machine Learning | Explained Simply 

Coding Lane
26K subscribers
88K views

Learn what the Linear Regression cost function is in machine learning and how it is used. The cost function represents the "error" between the actual values and the model's predictions.
To reduce that error, we minimize the cost function: the lower the cost, the better the learning and the more accurate the predictions.
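The idea above can be sketched in code. This is an illustrative implementation of the mean-squared-error cost for a line y_hat = w*x + b; the toy data and the 1/(2m) scaling (a common convention, also discussed in the comments below) are assumptions for the example, not code from the video:

```python
# Mean squared error cost for a simple linear model y_hat = w*x + b.
def compute_cost(x, y, w, b):
    m = len(x)
    total = 0.0
    for xi, yi in zip(x, y):
        y_hat = w * xi + b          # model prediction
        total += (y_hat - yi) ** 2  # squared error for one point
    return total / (2 * m)         # average (with the conventional 1/2)

# Toy data lying exactly on y = 2x: the true parameters give zero cost.
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
print(compute_cost(x, y, w=2.0, b=0.0))  # 0.0
print(compute_cost(x, y, w=1.0, b=0.0))  # worse fit, higher cost
```

A lower return value means the line fits the points better, which is exactly what "minimizing the cost function" refers to.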
------------------------------------------------------------------------------------------------------------------
Learn what Linear Regression is here : • What is Linear Regress...
Linear Regression Playlist : • Artificial Intelligenc...
------------------------------------------------------------------------------------------------------------------
You will get a new video on machine learning every Sunday if you subscribe to my channel, here : / @codinglane
I am dedicated to helping you learn machine learning in a cool way!

Science

Published: 18 Jul 2020

Comments: 158
@CodingLane · 4 years ago
If you found any value in the video, hit the red subscribe and like buttons 👍. I would really love your support! 🤗🤗 👉 You will get a new video on machine learning every Sunday if you subscribe to my channel, here : ru-vid.com/show-UCJFAF6IsaMkzHBDdfriY-yQ
@HailayKidu · 2 years ago
This is my first time accessing such a video and it is a clear way of description, thank you!
@CodingLane · 1 year ago
Thank you for the appreciation. Glad it helped! 🙂
@rasengan9743 · 2 years ago
You got a new sub, man. I didn't know this could be this simple; I really appreciate it.
@CodingLane · 2 years ago
Thank you 😇
@madhoounni · 3 months ago
You are a GEM, explaining in simple words with drawings.
@muaazi13 · 3 years ago
Explained in such easy-to-understand language. This was my first video tutorial and I understood it pretty well. Thank you.
@CodingLane · 3 years ago
I am very glad to be able to help you! Your comment really means a lot to me. Thanks a lot!!
@morenomt27 · 2 years ago
WOW! Thank you for this. I had been looking everywhere for something that condenses the explanation of mean squared error and its purpose into a few minutes, and you did a superb job! Just in time for my research.
@CodingLane · 2 years ago
Glad I could help!
@raselkarim2731 · 2 years ago
Thank you! Very clear explanation; I never imagined this would be so easy to understand.
@CodingLane · 2 years ago
You're welcome!
@rockysingh2200 · 3 years ago
Thanks for explaining it in the best and easiest possible way. I was finding this very hard, but after watching this video I understood it.
@CodingLane · 3 years ago
Thanks! I am happy that you found value in the video!
@devanshishah532 · 23 days ago
This was such a great explanation! Thank you!!
@codingstyle9480 · 1 year ago
Dividing the cost function by 2 is not for finding the average. It is just for convenience: when the squared term is differentiated, the numerator picks up a factor of 2, so to cancel it we put a 2 in the denominator, I suppose.
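Spelling the commenter's point out in the video's notation (with ŷ = wx + b): differentiating the squared term brings down a factor of 2, which the 1/2 cancels, leaving a clean gradient.

```latex
J(w,b) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)^{2},
\qquad \hat{y}^{(i)} = w\,x^{(i)} + b

\frac{\partial J}{\partial w}
= \frac{1}{2m}\sum_{i=1}^{m} 2\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)\,x^{(i)}
= \frac{1}{m}\sum_{i=1}^{m}\bigl(\hat{y}^{(i)} - y^{(i)}\bigr)\,x^{(i)}
```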
@hjlruk · 1 year ago
Actually, I was looking for this in the comments section. Thanks :)
@abhishekdhaka4833 · 11 months ago
It could be that the gradients for the weight (dw) and bias (db) pick up a factor of 2 when differentiating. But won't that noticeably change the error and the weight values in the update weight = weight - LearningRate*dw (and the same for bias)? @CodingLane can you please answer?
@cleisonarmandomanriqueagui9176 · 2 months ago
@@hjlruk x2
@jeevapriya4853 · 3 years ago
Just love the way you gave the intro... it's quite easy to learn from your videos.
@CodingLane · 3 years ago
Thank you so much, Jeevapriya! This really means a lot to me!
@HengChamroeunboysmall · 1 year ago
The lessons from my professor (formulas without demonstration) make me sleepy all the time, but you make it look like a very simple, understandable problem. Thank you!
@CodingLane · 1 year ago
Glad I could help 😄
@KSanofficial · 2 years ago
Very clear explanation. Thank you for the video!
@CodingLane · 2 years ago
You're welcome!
@matahariramadhan · 11 months ago
great video, thanks
@987dan987 · 3 years ago
Thank you for this excellent class!
@CodingLane · 3 years ago
You're welcome, Daniel!
@stephenmurphy5055 · 3 years ago
Really well explained! Thanks!
@CodingLane · 3 years ago
Thank you, Stephen! My pleasure 😇!!
@nikhilsastry6631 · 2 years ago
That's great; I took a premium course from Google... I came here for revision, wondering if you could explain it in simpler terms... Great explanation 👍
@CodingLane · 2 years ago
Thank you so much!
@Murmur1131 · 3 years ago
I love it. Greetings from Germany!
@CodingLane · 3 years ago
Thank you so much! I am really glad you liked it!
@serene3202 · 2 years ago
Are you a student of Prof. Andrew Ng? Your teaching style is very similar to his. Anyway, thank you for a great video.
@CodingLane · 2 years ago
I first learned ML from him and really liked his teaching, so my teaching style has adapted to his. Good to know that you found my video helpful.
@mussabafridi · 1 year ago
Thank you so much, dude!
@cristianistrying7866 · 3 years ago
Excellently explained, thanks!
@CodingLane · 3 years ago
Thank You... I appreciate your comment!
@habibullahsarker4339 · 2 years ago
This video is easy to understand. Thanks, Coding Lane.
@CodingLane · 2 years ago
You’re welcome!
@johans7585 · 2 years ago
Thank you man! Keep it up 💪
@CodingLane · 2 years ago
Sure 🤗
@shajanreynold8545 · 3 years ago
Nice explanation of the cost function, bro. Understood the concept.
@CodingLane · 3 years ago
Thanks a lot, Shajan! This means a lot to me. Glad to help!
@theforester_ · 2 years ago
Really good explanation! Thanks very much. Greetings from Brazil.
@CodingLane · 2 years ago
Hi Mauricio Jean... Good to see you. Thanks for the compliment.
@hemantsah8567 · 3 years ago
Brief... Explanatory... Great....
@CodingLane · 3 years ago
Thanks a lot, Hemant! Really appreciate it!
@Icandleii · 1 year ago
Thank you so much
@mohammadawwad7832 · 2 years ago
Thanks man, very good job!
@CodingLane · 2 years ago
Welcome!
@theoronno7709 · 1 year ago
That is a very good explanation. Thanks for your help!
@CodingLane · 1 year ago
You’re welcome!
@pujaroy7182 · 6 months ago
He is just great
@EmoblazeXD · 3 years ago
The exponent of 2 is there to eliminate negative numbers, not because of the magnitude of the data points. If anything, squaring makes them even larger.
@CodingLane · 3 years ago
Yes, right. Making them larger won't be a problem, because the graph of cost vs. weight keeps the same shape, so we can still reach its minimum. You can try once taking the absolute value without squaring, and again taking the square; you will get the same accuracy and results in both cases.
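That claim can be checked empirically. Below is a minimal sketch, not code from the video: made-up noise-free data and plain gradient descent, where the absolute loss uses a subgradient in place of the gradient. Both losses recover the same slope.

```python
# Fit y = w*x by gradient descent under squared loss and absolute loss.
# On noise-free data y = 2x, both should recover w close to 2.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
m = len(x)

def fit(loss_grad, lr=0.01, steps=2000):
    w = 0.0
    for _ in range(steps):
        # average gradient of the loss over all points, chain rule through w*xi
        grad = sum(loss_grad(w * xi - yi) * xi for xi, yi in zip(x, y)) / m
        w -= lr * grad
    return w

squared_grad = lambda e: e               # derivative of e**2 / 2
abs_grad = lambda e: (e > 0) - (e < 0)   # subgradient of |e| (sign function)

w_sq = fit(squared_grad)
w_abs = fit(abs_grad)
print(round(w_sq, 2), round(w_abs, 2))   # both close to 2.0
```

The absolute-loss estimate oscillates slightly around the optimum (the subgradient has constant magnitude), which is one practical reason the smooth squared loss is preferred for gradient descent.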
@ivelinkarageorgiev3111 · 1 year ago
Thanks man, that was an excellent explanation ^^, keep rocking \m/
@CodingLane · 1 year ago
Glad it helped!
@abdelrahmane657 · 1 year ago
Excellent
@kevinalexis9886 · 2 years ago
This really helped thanks!
@CodingLane · 2 years ago
You're welcome!
@cleisonarmandomanriqueagui9176 · 2 months ago
For those asking why we divide by 2: the cost function can be any function; of course, the ones that make sense and that we can minimize are squared functions, and that's why.
@rahar6009 · 2 years ago
This was awesome thank you so much
@CodingLane · 2 years ago
Glad to help!
@BigAl6405 · 3 years ago
Good explanation. Thanks!
@CodingLane · 3 years ago
Thank you !
@geezgus · 2 months ago
Thank you!!
@jasonkhongwir1302 · 3 years ago
Great tutorial!
@CodingLane · 3 years ago
Thanks a lot, Jason!! Glad to help!!
@bruh-hp3hu · 2 years ago
thank you this was perfect
@CodingLane · 2 years ago
You're welcome!
@johnnynguyen7657 · 3 years ago
You sir should be a professor
@CodingLane · 3 years ago
I am just a student for now. Thank you so much! It really means a lot to me! I am glad my videos helped you.
@vinayakpatil5214 · 2 years ago
Thanks bro, the explanation is superb.
@CodingLane · 2 years ago
You're welcome!
@soryegetun529 · 3 years ago
Thanks so much, mate.
@CodingLane · 3 years ago
You're welcome!
@salahokba6837 · 3 years ago
Cool video, thank you. When you tune, for example, the PI controller gains (Kp, Ki) of a closed-loop system with a genetic algorithm, the cost is the same as the fitness function. If you want to suggest anything that could help me code this in MATLAB, please do. Thanks again for the effort.
@kmanju6854 · 2 years ago
Wow, nice explanation 👏👌
@CodingLane · 2 years ago
Thank you
@ekleanthony7735 · 1 year ago
Nice explanation, dude.
@CodingLane · 1 year ago
Thank you! 🙂
@-alfeim2919 · 2 years ago
Great math explanation!
@CodingLane · 1 year ago
Thanks a lot!
@narendrapratapsinghparmar91 · 6 months ago
Thanks
@bmuralikrishna8054 · 3 years ago
Hello, JP. You explained it really well. Thanks a lot. Just one query: why do we square the errors in the cost function? Is it really necessary?
@CodingLane · 3 years ago
Hi Murali, you either need to square the errors or take their absolute value. What we want is a positive number for the difference between the actual and predicted values; otherwise we will not be able to know how large our error is. For example, if our error differences are 3, -4, 2, then the cost should be (3 + 4 + 2)/3 = 3. If we don't square or take the absolute value of these errors, it will mislead us, giving a cost of (3 - 4 + 2)/3 = 0.33.
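The arithmetic in this reply is easy to reproduce; the errors 3, -4, 2 are the made-up values from the comment, and the snippet shows how signed errors cancel while absolute or squared errors do not:

```python
errors = [3, -4, 2]
m = len(errors)

plain_avg = sum(errors) / m                  # signs cancel: misleading
abs_avg = sum(abs(e) for e in errors) / m    # mean absolute error
sq_avg = sum(e * e for e in errors) / m      # mean squared error

print(plain_avg)  # ≈ 0.33, looks like almost no error
print(abs_avg)    # 3.0, the value from the comment
print(sq_avg)     # ≈ 9.67, squaring weights large errors more
```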
@bmuralikrishna8054 · 3 years ago
@@CodingLane Thanks a lot for your quick reply. I really understand it well now. I really appreciate your effort in teaching others.
@nandiniyadav5337 · 2 years ago
Very well explained. You are amazing 😊
@CodingLane · 2 years ago
Thank You so much! It's my pleasure to create content
@khaledsrrr · 1 year ago
nice
@akshayjadhav2213 · 3 years ago
Cool video!!
@CodingLane · 3 years ago
Thanks Akshay !
@farhansomali1091 · 3 years ago
Thank you.
@CodingLane · 3 years ago
You're welcome, Farhan!
@blu_silver · 3 years ago
Thanks bro 😜👍
@CodingLane · 3 years ago
Thank you so much! Your comments mean a lot to me.
@thepythonprogrammer4338 · 2 years ago
Thanks mate
@CodingLane · 2 years ago
You're welcome!
@sumerujain3057 · 2 years ago
Thanks man
@CodingLane · 2 years ago
You're welcome!
@HettyPatel · 3 years ago
ty
@CodingLane · 3 years ago
Welcome 😇
@jahnavibehara5823 · 1 year ago
thank you :)
@CodingLane · 1 year ago
You're welcome!
@AJ-vq9ym · 2 years ago
good explanation
@CodingLane · 2 years ago
Thank you!
@bhargavsai2449 · 3 years ago
excellent
@CodingLane · 3 years ago
Thank you !
@user-wb7qr1wz4c · 10 months ago
Hi, great lecture. Besides your lectures, which book or online course would you recommend to learn ML topics (KNN/SVM etc.) in depth?
@anuragpanda7058 · 6 months ago
did you find a book like that?
@BTae9293 · 1 year ago
Thank you
@CodingLane · 1 year ago
Welcome!
@siamami9177 · 2 years ago
Great explanation, but I didn't get the point of multiplying by 1/2m in the squared case or 1/m in the absolute case. Thank you.
@CodingLane · 2 years ago
I have explained that in the comments section below… you can have a look at it… it might help.
@tejaspatil3978 · 3 years ago
Are you from India? Your English accent is good. And btw, the video is nice.
@CodingLane · 3 years ago
Thank You so much! Yes, I am from India
@dens3254 · 1 year ago
In this example you initialized a curve randomly. But in practice, how do we initialize the first equation/curve/line?
@supriyamanna715 · 2 years ago
My question is: why is the square taken? The absolute value was okay, but why the square? Also, I don't get why 1/2m is needed; 1/m is fine for the average, but why the extra 1/2?
@CodingLane · 2 years ago
Hi Supriya... good question... the answer to this is already given in the comments below.
@drelijahmikail3916 · 2 months ago
Why 2m specifically? The differences come from m points, but there are only m values.
@malihanadeem4854 · 3 years ago
Can someone please explain why we multiply by 1/2m?
@CodingLane · 3 years ago
Hello Maliha, thanks for your question. Since we take the sum of squared errors over all m data points, dividing by m gives us the average, so we multiply by (1/m) to get the average squared error. You can choose to multiply by (1/2m) or (1/m); it won't affect our model. But multiplying by (1/2m) makes the expression mathematically simpler when we are dealing with the squared error. Hope it helps! Let me know.
@malihanadeem4854 · 3 years ago
@@CodingLane Thank you for the response! I finally get it.
@ahmetozel3011 · 1 year ago
@@CodingLane Worst answer I have ever seen. Bro, she asked about the 2 in 2m. I already know why we multiply by 1/m, but why 2? WHY TWO?
@ijajahmed3618 · 3 years ago
go ahead
@CodingLane · 3 years ago
Thanks a lot, Ijaj. This means a lot to me!!
@fullbridgerecrifier · 2 years ago
💯
@CodingLane · 2 years ago
Thank you!
@vsmemegenix3512 · 1 year ago
It should be y_predicted - y_actual.
@Karthik-yy6up · 3 years ago
I honestly don't get why you squared it and divided by 2m. What's the problem with just taking the sum of all the distances?
@CodingLane · 3 years ago
(Y - y_hat) can be negative (when our straight line is above a point), so just summing would give the wrong answer. If you don't want to square it, take its absolute value, i.e. |Y - y_hat|. As for the 2m, I have already answered that in a comment below; do check it out.
@Karthik-yy6up · 3 years ago
@@CodingLane Thanks for the response 😀
@AnkitKumar-ss7sx · 3 years ago
Great explanation brother
@CodingLane · 3 years ago
Thank you, Ankit!
@neturonix · 3 years ago
👍👍👍👍👍👍👍
@CodingLane · 3 years ago
Thanks 😇 !!
@rahmatulakbar6748 · 2 years ago
Why do you divide by 2?
@CodingLane · 2 years ago
I have answered this in the comments below
@imtiaziqbal3041 · 8 months ago
Why divide by 2m?
@amaankhilji9810 · 3 years ago
You explained it better than Andrew Ng, honestly.
@CodingLane · 3 years ago
Thank you so much ! This really means a lot to me.
@franciscogalvez4163 · 1 year ago
I think there's an error in your cost formula: isn't it (y^ - y) rather than (y - y^)?
@franciscogalvez4163 · 1 year ago
Actually, in the implementation with Python you use (y^ - y).
@CodingLane · 1 year ago
Hi Francisco, it’s appropriate.
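One likely reason both forms appear interchangeably: the difference is squared, so the order of subtraction does not change the cost. A one-line check with toy numbers (made up for illustration):

```python
# Squaring makes the order of subtraction irrelevant.
y_actual, y_pred = 5.0, 3.5
a = (y_actual - y_pred) ** 2
b = (y_pred - y_actual) ** 2
print(a, b)  # 2.25 2.25
```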
@franciscogalvez4163 · 1 year ago
@@CodingLane Please upload more videos with other models... They are very helpful.
@nitika9769 · 11 months ago
Your hands are pretty 😂
@CodingLane · 11 months ago
Haha… thanks! It's the most unique compliment I have received 😁
@user-zf6lr4rg3c · 10 months ago
Zesty
@ajaykushwaha-je6mw · 3 years ago
What's the catch behind the fake accent?
@CodingLane · 3 years ago
It's now kind of natural to me 😅. But I will improve to have more of an Indian accent.
@AaftabShaikh567 · 1 year ago
I think the formula for the cost function in this video isn't right.
@CodingLane · 1 year ago
Hi Aaftab, it is correct. There can be different cost function formulas, depending on the type of model you are creating.
@bob77097 · 3 years ago
Good job. Just please stop trying to force an American accent. It doesn't matter what accent you have; you are intelligent enough to explain tough concepts to everyone. Just be you.
@CodingLane · 3 years ago
Thank you so much! And I am not trying to force my accent; it's natural to me now 😅. I am glad that you found my videos helpful.
@mohamedelsabbagh5681 · 6 months ago
He is not trying; I actually enjoyed his videos, though I don't like hearing an Indian accent. @the channel owner, you did great.