
13. Classification 

MIT OpenCourseWare
132K views

MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016
View the complete course: ocw.mit.edu/6-0002F16
Instructor: John Guttag
Prof. Guttag introduces supervised learning with nearest neighbor classification using feature scaling and decision trees.
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu

Published: May 18, 2017

Comments: 48
@MrCatandMe 6 years ago
Watching MIT OpenCourseWare videos shows how completely lacking in substance my college education really was.
@leixun 3 years ago
*My takeaways:*
1. Nearest neighbours 4:18
2. K-nearest neighbours 8:11
3. Performance metrics 16:50
4. Logistic regression 30:20
@shivayshakti6575 1 year ago
thanks buddy!
@avelmira 4 years ago
An unintended consequence of learning the difference between linear and logistic regression from Prof. Guttag in this video: the scene from The Princess Bride intrusively popped into my head where Miracle Max says: "There is a big difference between mostly dead and all dead. Mostly dead is still alive." Then I spent a few minutes giggling before I could focus again.
@iLoveTurtlesHaha 6 years ago
I LOVE this man. I found this video from a search and didn't see the other 12 videos in the series and I am picking up everything he is saying. Also, it's so cool how he encourages class participation. Great teachers are amazing and a gift to humanity.
@sololife9403 1 year ago
Agree with you. And he is very calm.
@kingofgods898 3 years ago
Listening to my professor try to lecture on classification makes me nauseous and hate my life. Listening to this guy lecture on classification and I'm actually enjoying it and understanding it. People are not equal.
@naheliegend5222 5 years ago
Every time I see something like this, I wonder how brilliant a human can be, to break down complexity as simply as that.
@saveryd 6 years ago
Prof. Guttag and Prof. Grimson are really great! I wish I had those professors when I was in college!
@adiflorense1477 3 years ago
same here
@nomad_manhattan 6 years ago
Absolutely the best ML course I have encountered, and I have tried many. This is the only one that keeps me focused and intrigued :) Do get Prof. Guttag's book! It's a good companion for this class.
@w1d3r75 2 years ago
If only it weren't that expensive. All of the MIT books are expensive (the ones on the MIT publications page).
@haneulkim4902 3 years ago
Amazing lecture as always! Thanks for the great resources 👏
@aravindsankaran3778 6 years ago
Precision is positive predictive value, not specificity! 19:20
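
(For reference, a minimal sketch of the distinction this comment makes, written from confusion-matrix counts; the numbers below are illustrative, not from the lecture:)

```python
# Confusion-matrix counts: tp = true positives, fp = false positives,
# tn = true negatives, fn = false negatives.
def precision(tp, fp):
    # Positive predictive value: of everything labeled positive,
    # what fraction truly is positive?
    return tp / (tp + fp)

def specificity(tn, fp):
    # True negative rate: of everything truly negative,
    # what fraction was labeled negative?
    return tn / (tn + fp)

print(precision(tp=40, fp=10))    # 0.8
print(specificity(tn=90, fp=10))  # 0.9
```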
@creponnekarim2865 2 years ago
This man seems like a very wise old man who spent most of his time either doing research or with his grandchildren. Plus, he's a good teacher.
@sandeepgill2693 3 years ago
Hats off to you, sir, for the way you share your knowledge.
@markk6594 5 years ago
42:46 In the line "for i in range(len(probs)):", since you only need i as an index into testSet and probs, you could zip these lists, e.g. "for p_i, ts_i in zip(probs, testSet):", and then use p_i and ts_i instead of probs[i] and testSet[i]. All in all a really good lecture, thank you very much!
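
(A minimal sketch of the rewrite this comment proposes, assuming probs and testSet are parallel lists as in the lecture code; the data and the 0.5 threshold are illustrative:)

```python
probs = [0.9, 0.2, 0.7]              # illustrative predicted probabilities
testSet = ['alice', 'bob', 'carol']  # illustrative parallel list of examples

# Index-based loop, in the style of the lecture code:
predicted = []
for i in range(len(probs)):
    if probs[i] > 0.5:
        predicted.append(testSet[i])

# zip-based rewrite: iterate over both parallel lists together.
predicted = [ex for p, ex in zip(probs, testSet) if p > 0.5]
print(predicted)  # ['alice', 'carol']
```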
@henrikmanukyan3152 3 months ago
Stopped at the most important point 😀 it makes you go to the next lesson. Anyway, I am glad he mentioned it.
@isbestlizard 3 years ago
YES, this is what I need to do the Titanic challenge on Kaggle.
@mohanraj7697 2 years ago
I came here for the same. Your comment assures me that I can watch this, thank you.
@Jcastellanoss123 3 years ago
Thanks a lot for these classes; I learned not only about computational thinking, but also the reason Leonardo DiCaprio doesn't survive in the movie.
@Speed001 1 year ago
34:24 Fitting linear regression into a range: logistic regression, the machine learning model that's always visualized.
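
(A minimal sketch of the "fitting into a range" this comment points to: the logistic function squashes an unbounded linear score into (0, 1) so it can be read as a probability; the weights below are illustrative:)

```python
import math

def sigmoid(z):
    # Map an unbounded linear score to the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

w0, w1 = -1.0, 2.0                  # illustrative weights of a linear model
for x in (-2.0, 0.0, 2.0):
    z = w0 + w1 * x                 # unbounded linear score
    print(x, round(sigmoid(z), 3))  # always strictly between 0 and 1
```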
@guilhermeaguilar6477 7 years ago
These videos about machine learning are very nice.
@littlenarwhal3914 6 years ago
This is complicated, but the prof explains things well. Now I just need to learn more Python to be able to understand it fully...
@sandipdey2033 5 years ago
Can anyone here tell me where I can find the video for "Regression" in the same set of MIT videos? Under what name is it listed among the MIT lecture videos at the above-mentioned link?
@mitocw 5 years ago
Linear regression is covered in lecture 9: ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-0002-introduction-to-computational-thinking-and-data-science-fall-2016/lecture-videos/lecture-9-understanding-experimental-data/. Best wishes on your studies!
@batatambor 4 years ago
Why didn't the professor fall into the 'dummy variable' trap? He used classes C1, C2 and C3, but he shouldn't have used all three to create the regression model, since C1 = 1 - C2 - C3, which means the variables are linearly dependent. Does someone know the answer?
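
(A minimal sketch of the drop-one encoding that avoids the trap this comment describes, assuming three mutually exclusive classes; the class names follow the comment, not the lecture code:)

```python
# With mutually exclusive classes C1, C2, C3, keeping all three dummy
# columns makes them linearly dependent (C1 = 1 - C2 - C3).
# Dropping one column removes the redundancy; C1 becomes the baseline.
def encode(label):
    return (1 if label == 'C2' else 0,
            1 if label == 'C3' else 0)

print(encode('C1'))  # (0, 0) -> baseline class
print(encode('C2'))  # (1, 0)
print(encode('C3'))  # (0, 1)
```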
@TheJustinmulli 4 years ago
27:30 Wouldn't it be better to set label and k as keyword arguments instead of creating a separate knn function via lambda abstraction? He talks about using this to build much more general programs, yet he created two functions when you could just create one that does both, which would be more general than creating two.
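
(A sketch of the two styles this comment contrasts; the function bodies are elided, and the 'Survived' label and k=3 are illustrative of the lecture's Titanic example, not copied from its code:)

```python
# Style from the lecture: specialize a general classifier via lambda.
def kNearestClassify(training, testSet, label, k):
    ...  # body elided

knn = lambda training, testSet: kNearestClassify(training, testSet, 'Survived', 3)

# Style the commenter suggests: defaults on keyword arguments.
def kNearestClassifyKw(training, testSet, label='Survived', k=3):
    ...  # body elided

# knn(train, test) and kNearestClassifyKw(train, test) behave the same,
# but the second can still override: kNearestClassifyKw(train, test, k=5).
```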
@ChrisAdvena 3 years ago
Prof. Guttag talks about problems finding k nearest neighbors for large data sets due to the number of distances that need to be calculated. Good old-fashioned relational databases have had solutions to this for decades: for example, partitioning, multi-level indexing, and calculated columns. The calculated columns can be stored or cached. In fact, we want our database to live at a disk/cache balance that optimizes our multitude of parameters, which boil down to preprocessing time and real-time processing time as constrained by money. This makes finding the nearest neighbor, or any other math-based comparison, faster by multiple orders of magnitude for large data sets. Recognizing that much of this can be done in memory, my question is: at which key places in machine learning do we most apply what we have learned in other data science fields about quick data access? In other words, where can we largely mitigate these costs, and how do we decide whether it is preferable to maximize the performance of a function as opposed to utilizing a different ML approach?
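
(One concrete instance of the preprocess-for-fast-lookup idea this comment raises: a spatial index such as a k-d tree trades one-time build cost for much cheaper queries. A minimal sketch using SciPy's cKDTree on random, illustrative data:)

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((100_000, 4))   # illustrative feature vectors

tree = cKDTree(points)              # one-time preprocessing: build the index
query = rng.random(4)               # a single query point

# Find the 3 nearest neighbours without scanning all 100,000 distances.
dists, idxs = tree.query(query, k=3)
print(idxs, dists)
```

Note that tree-based indexes lose their advantage as dimensionality grows, which is part of why deciding when to reach for them is not trivial.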
@adiflorense1477 3 years ago
12:01 Why is the k-nearest-neighbors data separated into training and testing sets again?
@danielmelendrez1616 1 year ago
3:12 I believe this statement is wrong. He is ACTUALLY using the full representation, including the number of legs. If you do the math using the binary representation only, the distance matrix shown is incorrect. CORRECTION: They are NOT using the number of legs; however, they erroneously put a 2 in the last element of the binary data for the chicken when it should be 0. I tested my own algorithm with this number and I get the same result as shown in the video. Additionally, the last binary feature should be 'reptile', correct? In the Python data set the last element is zero in several of the reptile cases. Please let me know if I am missing something obvious...
@annakh9543 5 years ago
I'm already sad that I'm gonna finish this series of lectures soon :/
@fuzzyip 4 years ago
Wow, I wish you were my professor.
@Trazynn 3 years ago
"The more legs an animal has, the less likely it is to be a reptile."
@landrynoulawe1565 1 year ago
An animal with 4 legs has a better chance of being a reptile than an animal with 2 legs.
@jongcheulkim7284 2 years ago
Thank you.
@amishsethi1799 5 years ago
Is there any way to get access to the posted code?
@mitocw 5 years ago
The full course site on OCW has the lecture notes and code files: ocw.mit.edu/6-0002F16. Good luck with your studies!
@adiflorense1477 3 years ago
It turns out that both linear regression and logistic regression use the term 'coeff' to denote weights. That's interesting.
@pierreehibertcortezcortez5547 6 months ago
The best!
@rsd2dcc 1 year ago
Finally got applause for something 😂😂😂
@fabianusmonepatimonepati6721 2 years ago
Wow, interesting!
@TheRelul 4 years ago
tough crowd here..
@hannukoistinen5329 2 years ago
If this is the level of MIT, forget it!! There are much more useful courses, for example from Professor Gilbert Strang.