
Lecture 6 - Support Vector Machines | Stanford CS229: Machine Learning Andrew Ng (Autumn 2018) 

Stanford Online
616K subscribers
229K views

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: stanford.io/ai
Andrew Ng
Adjunct Professor of Computer Science
www.andrewng.org/
To follow along with the course schedule and syllabus, visit:
cs229.stanford.edu/syllabus-au...

Published: 5 Aug 2024

Comments: 38
@a7744hsc · 2 years ago
SVM starts from 46:20
@manueljenkinjerome1107 · 1 year ago
Thank you very much!
@yuchenliu8449 · 1 year ago
Thanks bro
@WillzMaster85 · 1 year ago
Thanks a lot for the timestamp.
@c0r5e · 1 year ago
51:00 to be precise
@ZeeshanAbbasiplanz · 6 days ago
saved my life
@coragon42 · 2 years ago
32:07 It helps me to think of Laplace smoothing as Pr(observation gets label) = (count of observations with label)/(number of observations) --> Pr(observation gets label) = (count of observations with label + 1)/(number of observations + number of possible labels)
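The rewrite in the comment above can be expressed in a few lines. The labels and counts here are hypothetical, just to show how Laplace smoothing keeps an unseen label from getting probability zero:

```python
# Hypothetical label counts; 'promo' was never observed in training.
counts = {"spam": 3, "ham": 1, "promo": 0}
n = sum(counts.values())   # number of observations (4)
k = len(counts)            # number of possible labels (3)

# Plain MLE: Pr(promo) = 0, which is the problem.
mle = {c: v / n for c, v in counts.items()}

# Laplace smoothing: add 1 to each count, k to the denominator.
smoothed = {c: (v + 1) / (n + k) for c, v in counts.items()}
# Pr(promo) is now 1/7 instead of 0, and the probabilities still sum to 1.
```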
@thatsharma1066 · 1 year ago
1:11:20 I don't understand how dividing by 17 (w/17, b/17) prevents inflating the functional margin just by scaling up the weights and bias?
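The scaling question above can be checked numerically: multiplying (w, b) by any positive constant inflates the functional margin by that constant, but the geometric margin (functional margin divided by ||w||) is unchanged, which is why normalizing away the scale closes the loophole. A minimal sketch with made-up numbers:

```python
import numpy as np

w = np.array([3.0, 4.0])          # ||w|| = 5; values are illustrative
b = 2.0
x, y = np.array([1.0, 1.0]), 1    # a hypothetical training point

def functional_margin(w, b):
    return y * (w @ x + b)

def geometric_margin(w, b):
    return functional_margin(w, b) / np.linalg.norm(w)

# Scaling (w, b) by 17 multiplies the functional margin by 17 ...
fm1, fm2 = functional_margin(w, b), functional_margin(17 * w, 17 * b)
# ... but the geometric margin is invariant to the scaling:
gm1, gm2 = geometric_margin(w, b), geometric_margin(17 * w, 17 * b)
```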
@samurai_coach · 1 year ago
26:30, memo: he explains the difference between the multinomial event model and the multivariate Bernoulli event model.
@creativeuser9086 · 1 year ago
too many side quests in this level
@gnuhnhula · 1 year ago
Done!
@abhigyanganguly1988 · 1 year ago
Just had a doubt: at 54:56, what does g(z) denote? Is it the sigmoid function?
@vigneshreddy6121 · 1 year ago
Yes, it's the sigmoid function: when theta transpose x > 0, the sigmoid output is > 0.5.
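The reply above can be verified in a couple of lines: the logistic sigmoid g(z) = 1/(1 + e^(-z)) crosses 0.5 exactly at z = 0, so g(theta^T x) > 0.5 if and only if theta^T x > 0. A quick sketch:

```python
import math

def sigmoid(z):
    """Logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

# g crosses 0.5 exactly at z = 0:
print(sigmoid(0.0))                              # 0.5
print(sigmoid(2.0) > 0.5, sigmoid(-2.0) < 0.5)   # True True
```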
@MrSteveban · 1 year ago
At 19:15, wouldn't it be more accurate to say multinoulli instead of multinomial, since the number-of-trials parameter of the multinomial distribution doesn't really apply here?
@kevinshao9148 · 7 months ago
Thanks for the great video! One question about 8:00: if you have NIPS in your features, can you even train your model when no email contains NIPS? The MLE formula will yield zero probability. (Or is there no real "training" here — you get the analytic solution directly, and prediction just uses the counting solution?) Thanks in advance for any advice!
@littleKingSolomon · 5 months ago
We estimate the parameters in the analytic solution using MLE. If NIPS never occurred, we can resolve the zero-probability problem with Laplace smoothing. Or perhaps you mean NIPS is not in your training dictionary, in which case a sentinel value is used to represent all tokens not present in the training data.
@vemulasuman6995 · 1 month ago
Where can I find the class notes for this lecture? Does anyone know?
@fahyen6557 · 1 year ago
1/4 done!😵
@jaskaransingh3200 · 1 year ago
A doubt: when talking about the NIPS conference producing zero probability in Naive Bayes — in the first place, the probability of the word NIPS shouldn't come up in the calculation of P(x|y=0), since the binary column vector of 10,000 elements won't include this word: it's not in the top 10,000 words because it only started appearing recently.
@traveldiaryinc · 1 year ago
I think he said a dictionary with 10k words where NIPS is the 6,017th word; the dictionary doesn't necessarily contain the top 10k words.
@jayanjans · 1 year ago
NIPS is the 6,017th word in the 10,000-word dictionary, but since it doesn't appear in the emails received early on, and the MLE is a product, the estimate goes to zero. When the word later starts appearing in emails, the model's detection is still zero because the product in the MLE is already zero.
@haoranlee8649 · 7 months ago
Laplace smoothing
@bakashisenseiAnimeIsLove · 1 year ago
At 35:21, shouldn't it be n_i in general instead of the 10,000 that is being added?
@timgoppelsroeder121 · 1 year ago
No, n_i is the number of words in the i-th email, but the term we add to the denominator in Laplace smoothing is the number of possible values, which in Andrew's example is the dictionary size, 10,000.
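The distinction in this thread can be made concrete. In the multinomial event model, the Laplace-smoothed estimate for a word adds 1 to its count and the dictionary size (10,000 here, the number of values a word position can take) to the denominator. A sketch with illustrative counts, not taken from the lecture:

```python
V = 10_000   # dictionary size = number of possible values per word position
n_j = 0      # times word j ("nips") appeared in the class's training emails
N = 5_000    # total word tokens in that class's training emails (made up)

p_mle = n_j / N                   # 0 -> zeroes out the whole product
p_laplace = (n_j + 1) / (N + V)   # small but strictly positive
```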
@timgoppelsroeder121 · 1 year ago
I was wondering the same thing for a second
@deniskim3456 · 1 year ago
Don't buy drugs, guys.
@realize2424 · 1 year ago
I did drugs so I could become a machine learner!
@The_Quaalude · 6 months ago
A lot of software engineers take Adderall and micro doses of molly, shrooms, and acid 😂
@floribertjackalope2606 · 3 months ago
too late
@microwavecoffee · 1 year ago
They lost 😭
@karanbania2785 · 1 year ago
The exact thing I was wondering
@HyeonGon90 · 5 months ago
So it doesn't need Laplace smoothing.
@jaivratsingh9966 · 4 months ago
Camera person: please don't move the camera so often next time. It should stay focused on what's written on the board; you're tracking the professor and losing the content. We can relate the voice to what's on the board, so the shot should stay on whatever he's talking about. Some of your and his hard work got wasted.
@maar2001 · 7 months ago
He needs to speak louder and more clearly... otherwise it's a good lecture 👍🏾
@LoneWolfDion · 1 month ago
Turn up your headphones. I listen at 2x speed and can understand him; when I went back to normal speed I understood less.