
Implementing Linear Regression using MATLAB

Mohammad Altaleb
404 subscribers
56K views

This is an implementation of the linear regression algorithm with one variable using MATLAB.
The algorithm predicts the profit that could be gained from a city depending on its population.
The examples were taken from the week two assignment of the machine learning course provided by Stanford University:
www.coursera.o...
Code on github:
github.com/moh...
music created using:
www.jukedeck.com/
email: mohammadaltaleb@gmail.com
facebook: / mohammad.n.altaleb
linkedin: / mohammad-altaleb-52204185
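For reference, a minimal sketch of the one-variable implementation described above. The data file name and the alpha/iteration values here are assumptions for illustration, not necessarily the ones used in the video:

```matlab
% One-variable linear regression via batch gradient descent (sketch).
% Assumes data.txt holds two comma-separated columns: population, profit.
data = load('data.txt');
X = data(:, 1);            % population of a city
y = data(:, 2);            % profit in that city
m = length(y);

X = [ones(m, 1), X];       % prepend a column of ones for the intercept term
theta = zeros(2, 1);       % [theta0; theta1], initialized to zero
alpha = 0.01;              % learning rate (assumed value)
iterations = 1500;         % number of gradient descent steps (assumed value)

for iter = 1:iterations
    h = X * theta;                                 % predictions for all examples
    theta = theta - (alpha / m) * (X' * (h - y));  % simultaneous update of both thetas
end

J = (1 / (2 * m)) * sum((X * theta - y) .^ 2);     % final cost
```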

Published: 1 Oct 2024

Comments: 60
@SebinMatthew · 6 years ago
This is from Andrew Ng's course on machine learning.
@SomebodyOutTh3re · 6 years ago
Yeah! Am I the only one who couldn't complete the first programming assignment easily?
@shanakaj007 · 5 years ago
+1 haha
@jorgeortiz7926 · 5 years ago
@@SomebodyOutTh3re No, you are not the only one, I had to come here for help haha
@subrattrivedi605 · 5 years ago
@@SomebodyOutTh3re Me too!!!
@vikrantlovely · 5 years ago
Me too
@aboss2814 · 7 years ago
Great explanation! I was looking for this video so that I could understand how to implement it in many different languages. TYSM!
@gauravjha8237 · 5 years ago
Why is theta (2,1)? Why 2,1??
@peepypoopy8917 · 4 years ago
I have enrolled in Andrew Ng's course on machine learning. I have literally been sitting for a *week* trying to work out the theta values. The theta matrix is of [theta0 ; theta1] format. The correct theta value is [-3.6303 ; 1.1664] but the value I am getting is [-3.8958 ; 1.1930]. Also, while submitting it, I am getting an "out of memory" error. I'm desperate for any help.
@oliversymon4297 · 5 years ago
I keep getting compute cost errors on every line, even after following this video exactly. Has anyone got any solutions?
@subhodhks1879 · 6 years ago
I have followed your code, but I'm getting an error while calling the gradient function; it says "nonconformant arguments (op1 is 1x1, op2 is 97x1)". Is anyone else getting the same error? I'm using Octave.
@himanchalchandra6202 · 6 years ago
You can't multiply a 1x1 and a 97x1 matrix; try reversing the order.
@shanakaj007 · 5 years ago
You missed the () for the sum function.
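As the last reply points out, errors like this usually come from a misplaced or missing pair of parentheses around the argument of `sum`. A working cost function, using the variable names from the course assignment, looks roughly like this:

```matlab
function J = computeCost(X, y, theta)
% Mean squared error cost for linear regression.
% X is m x 2 (with the leading column of ones), y is m x 1, theta is 2 x 1.
    m = length(y);
    h = X * theta;                            % m x 1 vector of predictions
    J = (1 / (2 * m)) * sum((h - y) .^ 2);    % sum(...) needs its own parentheses
end
```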
@amandarash135 · 5 years ago
You deserve more likes and subscribers.
@kasrasadatsharifi7962 · 5 years ago
Helpful video... cool music as well... where can I find the music?
@vallurirajesh · 7 years ago
How do we deal with a situation where we have an extended number of features needing a number of thetas? I was trying to vectorize this formula without much luck in Octave.
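The vectorized form the question above asks for works for any number of features, because the update for all thetas can be written as one matrix product. A sketch (to be saved as its own function file):

```matlab
% Vectorized gradient descent for any number of features (sketch).
% X is m x (n+1) with a leading column of ones; theta is (n+1) x 1.
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
    m = length(y);
    J_history = zeros(num_iters, 1);
    for iter = 1:num_iters
        % X' * (X*theta - y) computes all n+1 partial derivatives at once
        theta = theta - (alpha / m) * (X' * (X * theta - y));
        J_history(iter) = (1 / (2 * m)) * sum((X * theta - y) .^ 2);
    end
end
```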
@suriyaalangaramohan6044 · 6 years ago
I am getting the error "Error in gradientDescent (line 4) m = length(y)", and the same in computeCost. How can I solve my problem?
@ContentCocktailOfficials · 6 years ago
Is there any video on regression using multiple variables?
@allaboutece8752 · 3 years ago
Why do we need to add the ones to the matrix X in the cost function?
@sumailaadams3787 · 3 years ago
Thank you very much, Sir. Exactly what I have been looking for to complete the machine learning course I am taking on Coursera.
@S.Nafis.S · 2 years ago
Same as me...
@letifmyriam3440 · 5 years ago
I saw that you used an iteration number. Is it a stepwise regression? Thanks for sharing.
@rashidrumman871 · 6 years ago
t2 = theta(2) - (alpha * (1 / m) * sum((h - y) .* X(:, 2))); here, can you explain the .* X(:, 2) part? Why did we multiply the 2nd column of X with each element of (h - y)?
@shrutosom · 6 years ago
Rashid Rumman Check out the video tutorials of the course. It was derived earlier that when the cost function is differentiated with respect to theta1 while implementing gradient descent, the difference between the hypothesis function and the target value gets multiplied by the ith feature.
@khushwindersinghuniversali8317
@@shrutosom X(:, 2) is selecting the second column of matrix X, which is multiplied into the theta1 update. Look up the formula for the cost function; this term is part of its gradient.
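The replies above describe the partial derivative dJ/d(theta_j) = (1/m) * sum((h - y) .* X(:, j)): each residual is weighted by the value of feature j. A sketch showing the per-parameter updates next to the equivalent vectorized one-liner (variable names as in the comment being discussed):

```matlab
% Per-parameter updates: each theta's gradient weights the residuals
% (h - y) by its own feature column.
h = X * theta;                                           % predictions
t1 = theta(1) - (alpha / m) * sum((h - y) .* X(:, 1));   % X(:,1) is all ones
t2 = theta(2) - (alpha / m) * sum((h - y) .* X(:, 2));   % residuals weighted by feature
theta = [t1; t2];                                        % simultaneous update

% The same two updates written as one matrix product:
% theta = theta - (alpha / m) * (X' * (h - y));
```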
@MuhammadShahzad-dx5je · 5 years ago
Very well explained. Good work, brother. JazakAllah
@vajihetavakoli9740 · 6 years ago
Is this code for online gradient or batch gradient?
@csvakil · 6 years ago
Why did we initialize the theta value as zeros(2,1)? And moreover, how can we decide the value of theta?
@tsheringtamang1008 · 6 years ago
It's better to initialize with a small number, and zero works fine.
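To expand on the question and reply above: theta needs one entry per column of X (the column of ones for the intercept plus one per feature), which for one variable gives a 2 x 1 vector. A short sketch:

```matlab
% One theta per column of X: theta0 for the column of ones (intercept),
% theta1 for the population feature -- hence zeros(2,1).
theta = zeros(2, 1);          % [theta0; theta1] = [0; 0]

% The linear regression cost function is convex, so any starting point
% (zeros included) converges to the same minimum; small random values
% also work:
% theta = 0.01 * randn(2, 1);
```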
@oras4940 · 7 years ago
Great video, Mohammed, it does clarify the implementation in an easy-to-follow way.
@jundou7858 · 6 years ago
Thank you very much, this is what I was looking for.
@jamescai4137 · 6 years ago
Do you know how to add an interaction term and implement it?
@ridwa · 5 years ago
Straight outta Coursera
@Falconoo7383 · 4 years ago
Thank you..
@lexiliu16 · 6 years ago
Thank you so much, it helps a lot!
@marinafuster7005 · 5 years ago
Awesome! Thank you so much
@mayankpatel5437 · 5 years ago
Thank you so much 😊😊
@arungade2 · 6 years ago
Can someone explain data(:,1)? What is the ':' in the code?
@gankster9007 · 6 years ago
It's data slicing, like you do in Python. data(:,1) means you want to select all rows and the 1st column.
@dipikad9501 · 6 years ago
X(:,1) means all rows in X and the first column of X.
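A tiny example of the colon indexing described in the replies above, with a made-up matrix:

```matlab
data = [1 10; 2 20; 3 30];   % a small 3 x 2 matrix for illustration
col1 = data(:, 1);           % ':' = all rows of column 1  -> [1; 2; 3]
row2 = data(2, :);           % all columns of row 2        -> [2 20]
```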
@smartgroupacademy · 6 years ago
Perfect!!! Thanks, bro, God bless you. Please continue these good videos. Thanks again!
@theinstigatorr · 6 years ago
This was great, thanks a lot.
@juhyun925lee · 6 years ago
Why did you add additional ones in the 1st column of the X matrix (3:48)? Thank you for the video.
@doupanpan7271 · 6 years ago
h = theta0 + theta1 * X1; adding the 1s on the left of X is to say X0 = 1, so the formula becomes h = theta0 * X0 + theta1 * X1. With h = theta0 * X0 + theta1 * X1 (X0 = 1), you can do the vectorization.
@juhyun925lee · 6 years ago
Thank you! If I want to implement multivariable linear regression (more than 2 X values), how can I implement it?
@jundou7858 · 6 years ago
For this part it is the same approach: the formula now becomes h = theta0 * X0 + theta1 * X1 + theta2 * X2 (X0 = 1), and then you use the vectorization method to handle it.
@juhyun925lee · 6 years ago
Thank you! I have tried it. When I use simple training data such as X = [1 1; 2 2; 3 3; 4 4] and Y = [2 3 4 5]', it works well. However, when I use more complicated numbers such as X = [1 20; 2 18; 5 30; 8 40] and Y = [10 16 20 15]', the cost values diverge. Can you please help me?
@knectt4643 · 3 years ago
@@juhyun925lee You needed to apply feature scaling methods to your data set. Hope you found a solution.
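The feature scaling suggested in the last reply can be sketched as mean normalization. Without it, features on very different scales (like the population vs. index columns in the example above) can make gradient descent diverge unless alpha is made very small:

```matlab
% Mean normalization: scale each feature to zero mean and unit variance.
% X here is m x n, features only (no ones column yet).
mu = mean(X);                    % 1 x n row of feature means
sigma = std(X);                  % 1 x n row of feature standard deviations
X_norm = (X - mu) ./ sigma;      % implicit broadcasting (Octave / MATLAB R2016b+;
                                 % older MATLAB needs bsxfun)
X_norm = [ones(size(X_norm, 1), 1), X_norm];   % then prepend the ones column
```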
@nicoleleung3177 · 6 years ago
Does anyone know what 'iteration' stands for?
@singhmohit7 · 6 years ago
It means each execution of a formula; one pass of gradient descent = one iteration.
@anirbanray8602 · 5 years ago
Iteration means repetitions, like in exercising.
@vikassonwani773 · 7 years ago
What is the theta value??
@MohammadAltaleb · 7 years ago
Theta is the coefficient vector of the hypothesis, also called the weights.
@vikassonwani773 · 7 years ago
Please tell me why we use gradient descent if we can get the result via simple linear regression. And what is the use of the theta update?
@MohammadAltaleb · 7 years ago
Gradient descent is used to find the weights (the theta vector) that minimize the cost function. You can learn more about this topic from the MOOC linked in the description.
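The "simple linear regression" the question above refers to has a closed-form solution, the normal equation, which for small feature counts gives theta directly with no alpha and no iterations:

```matlab
% Normal equation: solve for theta directly (X includes the ones column).
theta = pinv(X' * X) * X' * y;

% Gradient descent is still worth learning: inverting X'X costs roughly
% O(n^3) in the number of features, so the iterative update scales much
% better when n is very large.
```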
@vikassonwani773 · 7 years ago
Thanks... but I am having a problem: the cost function value is decreasing but never settles at a constant value. Is this a sign that the function is wrong, or not?
@MohammadAltaleb · 7 years ago
I'm sorry, I didn't understand your question, but in general you may not find the global minimum value of the cost function, because sometimes gradient descent gets stuck at a local minimum. That's why there are better algorithms, but I didn't need to use them in this simple example.
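One way to diagnose the behavior described above is to record the cost at every iteration; for linear regression the cost is convex, so the curve should fall monotonically and flatten at the global minimum. A sketch (it assumes X, y, theta, alpha, iterations, and a computeCost function are already defined as earlier in the thread):

```matlab
% Record the cost at every iteration to make convergence visible.
% If J keeps falling slowly, use more iterations or a larger alpha;
% if J grows, alpha is too large.
J_history = zeros(iterations, 1);
for iter = 1:iterations
    theta = theta - (alpha / m) * (X' * (X * theta - y));
    J_history(iter) = computeCost(X, y, theta);
end
plot(1:iterations, J_history);
xlabel('iteration'); ylabel('cost J');
```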