
Lecture 2 "Supervised Learning Setup Continued" -Cornell CS4780 SP17 

Kilian Weinberger
22K subscribers · 60K views

Published: 30 Oct 2024

Comments: 47
@dvs6121 · 5 years ago
These videos make me want to apply to Cornell just so I can take every one of his classes. His lectures are exceptional. They're clear, well-paced, interactive, funny, and still rich with interesting technical content. I am in awe...
@jeffreyanderson5333 · 3 years ago
Everybody wants to go to Cornell
@dantemlima · 2 years ago
Raise your hand if you are watching this in 2021! I just can't stop watching Prof. Weinberger's lectures. There's a lot to take in, and they are delightful. Thank you from a Brazilian admirer, Professor!
@michaelmellinger2324 · 2 years ago
5:15 Lecture begins
7:10 Nokia phones with face recognition
9:30 Describes image problem representation
12:00 Dense vs sparse representation
14:05 More formal description of …ML process, train/test
17:00 What is my label space and what is my data? Feature representation. Regression vs classification vs … Also, choose function h from H, the hypothesis class
18:40 Data scientists choose H, ML programs choose h. That's what we call learning
19:15 Decision tree description
20:40 Linear classifier, ANN, SVM
21:20 There's no best algorithm
22:05 Terrible algorithm #1
24:10 Terrible algorithm #2
25:40 Loss functions - always non-negative. Lower is better. Zero means no mistakes
26:45 Delta function
32:35 Square loss vs absolute loss question
35:35 Learning: given H, choose h in H. Choose h with small loss on the data
36:45 Terrible algorithm #3
39:30 US Army mistake. Bright images vs dark images. Different distributions
41:45 Generalization
46:20 We can't compute it, but we can estimate it. How to approximate: train/test split
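A minimal sketch of the three losses referenced above (25:40, 26:45, 32:35), assuming NumPy arrays of labels and predictions; the function names are illustrative, not from the lecture:

```python
import numpy as np

def zero_one_loss(y_pred, y_true):
    # Delta / zero-one loss (26:45): fraction of outright mistakes.
    return np.mean(y_pred != y_true)

def squared_loss(y_pred, y_true):
    # Squared loss (32:35): penalizes large residuals heavily,
    # so it is more sensitive to outliers and label noise.
    return np.mean((y_pred - y_true) ** 2)

def absolute_loss(y_pred, y_true):
    # Absolute loss (32:35): linear penalty, more robust to outliers.
    return np.mean(np.abs(y_pred - y_true))

# All three are non-negative, lower is better, and zero means no mistakes (25:40).
```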
@alexenax1109 · 5 years ago
The lecture starts at 5:17!
@avimohan6594 · 5 years ago
Thanks!
@SundaraRamanR · 5 years ago
The ad for the other course actually sounds interesting (what resources they have, what problems they solve, etc) but unfortunately the audio quality is too bad.
@dragonslayer31415900 · 5 years ago
I would go to a lot more classes if all of my classes were taught as well. Thanks for posting this!
@davidbarrar5968 · 4 years ago
Someone get this poor guy a glass of water!
@dvs6121 · 5 years ago
I like his policy of making students take an exam to get INTO the class. This helps students know if they're sufficiently prepared for the course. This works out better for students and professor.
@inserthere6387 · 5 years ago
This guy is amazing, and the lecture notes are amazing too. He really makes machine learning accessible with examples, not drowned in theory.
@yusuffarah351 · 4 years ago
I am currently reading a book titled An Introduction to Statistical Learning with Applications in R, written by four professors, which I really like. But this lecture series has gone beyond my expectations so far. I hope I finish it all the way through. Thanks, Professor, for putting these lectures online. You are making a difference in the world by teaching machine learning and making it accessible to anybody with an interest in it.
@vivekmittal2290 · 5 years ago
Very well explained; maybe the best machine learning course I have come across. Thank you, Kilian, for such amazing material.
@vishchugh · 4 years ago
By far the best high-level explanation of machine learning I've seen. Great work, Kilian!
@pkatmusic5800 · 4 years ago
With all due respect to Professor Andrew Ng, this video has THE best explanation of loss functions. Like THE BEST.
@Yousafkhan-gv7cs · 4 years ago
Thanks, Kilian, for sharing your knowledge; it was very helpful for understanding these difficult topics!
@zinebriad3218 · 4 years ago
This guy is amazing, and the lecture notes are amazing too.
@juliocardenas-rodriguez1986
Here I am again, revisiting the basics and "stealing" Dr. Weinberger's approach to teaching ML to others at work and beyond =) Thank you!
@ON-ld6se · 2 years ago
Hey Professor, thank you so much for uploading these videos. The lecture was really cool; I especially enjoyed the example from the military. Cheers from Spain!
@ashraf736 · 2 years ago
An easy-to-understand explanation of the concepts.
@thefourbytes · 2 years ago
Wow, the lectures are really good. This is a good refresher on ML. :)
@studentgaming3107 · 5 months ago
Wow, I'm only on my second video. Having scrolled through the rest, it seems it will be much better to learn this in parallel with Bishop's pattern recognition book.
@hello-pd7tc · 4 years ago
Second day! Hope I can finish all 37 lectures.
@kilianweinberger698 · 4 years ago
Good job, keep going!! :)
@TrentTube · 5 years ago
You brought bigger chalk :)
@zinebriad3218 · 4 years ago
Thanks for the great lecture, Prof!
@shrishtrivedi2652 · 3 years ago
5:15 start
@meenakshisarkar7529 · 4 years ago
Dear Professor Weinberger, at 25:13, when you were discussing the problems with TA #2, we can also argue that picking the function h that works best on the training data might lead to overfitting and thus bad performance on the test data. Here I am assuming that by some miracle we are able to exhaust the \mathcal{H} space and find that best function h.
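A minimal sketch of that concern on assumed synthetic data (everything here is illustrative, not from the lecture): a "memorizer" h achieves zero training loss, yet performs at chance on fresh points when the labels are pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: labels are pure noise, so there is nothing generalizable to learn.
X_train, y_train = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
X_test, y_test = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)

# The "best" h on the training set: memorize every training point exactly.
memory = {tuple(x): y for x, y in zip(X_train, y_train)}

def h(x):
    return memory.get(tuple(x), 0)  # constant fallback for unseen points

train_err = np.mean([h(x) != y for x, y in zip(X_train, y_train)])
test_err = np.mean([h(x) != y for x, y in zip(X_test, y_test)])
print(train_err)  # 0.0  -- perfect on the points it memorized
print(test_err)   # ~0.5 -- chance level on new draws from the same distribution
```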
@orkhanbayramli2407 · 4 years ago
22:15 said that, and then put the video on YouTube :D
@anirbanghosh6328 · 5 years ago
Thank you so much, sir!
@adiflorense1477 · 4 years ago
21:12 Sir, what is the difference between a linear classifier and a non-linear classifier? Is a decision tree a non-linear classifier?
@isaiahduck656 · 5 years ago
The guy at 00:26 looks like a chiller.
@BDONGLI · 4 years ago
There's one person who disliked this video. Just one.
@prwi87 · 2 years ago
I have a question about the generalized loss. As I understand expected values, they take random variables as input, so shouldn't it be L(h, (X, Y)), with X being a random vector and Y a random variable with joint distribution P? In the notation presented, we have an arbitrary data point (x, y) drawn from distribution P. I just want to know if I understand the subject correctly. Also, which probability distribution are we taking the expected value with respect to? Is it P? And is P a joint distribution over (X, Y)?
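For what it's worth, one standard way to write this, assuming (as in the lecture setup at 41:45 and 46:20) that P is the joint distribution over input-label pairs; the symbols here are a common convention, not necessarily the exact notation on the board:

```latex
% True (generalization) loss: expectation over a pair (x, y) drawn
% from the joint distribution P of inputs and labels.
\varepsilon(h) \;=\; \mathbb{E}_{(x,y)\sim P}\bigl[\,\ell\bigl(h(x),\,y\bigr)\,\bigr]

% Since P is unknown, we estimate this by the average loss on a held-out
% test set D_{te} drawn i.i.d. from P (the train/test split at 46:20):
\hat{\varepsilon}(h) \;=\; \frac{1}{|D_{te}|} \sum_{(x,y)\in D_{te}} \ell\bigl(h(x),\,y\bigr)
```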
@usurfnow · 2 years ago
Is there a way to access the teaching assistant (TA) recitations and relevant material? Thanks for uploading the course!
@XoOnannoOoX · 4 years ago
Thanks for the great lecture. Prof. Weinberger says "in this course we won't get into collecting the data". Is there a course that focuses specifically on gathering datasets and evaluating them (like whether the distribution is good or not for the specified use case)?
@kilianweinberger698 · 4 years ago
Actually, what I meant here was more that we are not getting into how to collect the data and how to pre-process it / extract features. Typically that's pretty domain-specific, so it is hard to include any details in a general ML course. Sorry, not sure if there are other courses that go into that.
@adiflorense1477 · 4 years ago
40:59 Why can machine learning algorithms only be used on data from the same distribution?
@amorphous8826 · 11 days ago
👍
@elmirach4706 · 10 months ago
Is there a PDF or PPT version of the lectures?
@dimitrihendriks2342 · 4 years ago
Can anyone tell me why it is crucial that the data point you test your function h on comes from the same distribution as the data points h was learned from? And what does it mean to come from the same distribution?
@biesman5 · 2 years ago
Somebody correct me if this isn't accurate, but I believe it's because that way we can make more accurate predictions. For example, at the beginning he talked about Nokia's face-recognition software only detecting Caucasian faces, because that was the data it was trained on. So trying to detect the face of a non-Caucasian didn't work, because the distribution of the training data consisted only of Caucasian faces.
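A minimal sketch of exactly that failure mode on synthetic data (all numbers and names illustrative): a linear classifier fit on "bright" inputs scores near chance on "dark" inputs drawn from a shifted distribution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(brightness, n=500):
    # Inputs cluster around the given mean intensity; the label is encoded
    # as a small offset in feature 0, so it is learnable within one regime.
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=brightness, scale=0.05, size=(n, 10))
    X[:, 0] = brightness + 0.05 * (2 * y - 1)
    return X, y

X_train, y_train = make_data(brightness=0.8)  # "bright" images
X_test, y_test = make_data(brightness=0.2)    # "dark" images, shifted distribution

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_train, y_train))  # ~1.0: same distribution as training
print(clf.score(X_test, y_test))    # ~0.5: the learned threshold is useless here
```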
@subhanali4535 · 5 years ago
Is TA #3 the memorizing algorithm? It's not in the notes.
@adiflorense1477 · 4 years ago
31:46 What noise?
@mathslectures1437 · 3 years ago
The students are discussing among themselves. It's a class of more than 350 students.