
Ali Ghodsi, Lec 19: PAC Learning 

Data Science Courses
21K subscribers
27K views

Published: 8 Sep 2024

Comments: 43
@lamis_18 · 2 years ago
THE BEST - BRIEF - DETAILED EXPLANATION I HAVE FOUND, THANK YOU VERY MUCH.
@trontan642 · 6 months ago
Much clearer than my professor.
@sudowiz9126 · 1 year ago
This lecture is pure gold. Thank you prof.
@maanasvohra8133 · 2 years ago
I wish I attended these lectures. Truly amazing!
@pranavchat141 · 4 years ago
And done. Thanks Prof. Your style of teaching agrees most with me.
@Aaaa-jp7cx · 4 years ago
Hi, it's a really clear and detailed lecture! Thank you sooo much! Could you please talk more about APAC learning and its sample complexity? Thanks!
@davidecremona3312 · 7 years ago
Wish my professor was as good as you =) great lesson!
@ahsin.shabbir · 3 years ago
That was an awesome explanation. It's starting to make more sense, but like he said, it's a topic that one could dedicate a whole semester to.
@nelya.kulch11 · 2 years ago
Thank you! Now it has become clearer for me.
@solalvernier4609 · 4 years ago
Thanks a lot for your course, I hope it will help me reach a high mark on my upcoming exam! :)
@MrAngelofGame · 4 years ago
I hope the same for my exam, bro!! This is a very good explanation of PAC learning, thanks a lot!
@alitabesh5362 · 3 years ago
I appreciate your very clear and simple explanation. Looking forward to watching more of your videos.👏🌸
@mionome501 · 1 year ago
Thank you, clear explanation!
@haniyek7811 · 3 years ago
Super clear, thanks! I guess I'm used to Persian professors' explanations and teaching styles!
@vectoralphaSec · 1 year ago
This guy looks like he's never slept in his entire life.
@shakesbeer00 · 4 years ago
Thanks very much for the video. I still think the setting is quite problematic. By stating yi = c(xi), we are presuming that the true model is deterministic rather than probabilistic; in other words, yi is fully determined by xi. This is a hugely unrealistic restriction in most applications.
@farshadnoravesh · 3 years ago
Great Professor.
@nandkishorenangre8244 · 5 years ago
Such a great prof!!
@lamis_18 · 2 years ago
Where can we get the slides?
@desiquant · 3 years ago
11:41 Why are we considering all of the m points? He clearly said that this classifier correctly classifies the m points from the training data. Then, he looked at the probability that it will classify a random point (from test set) correctly. P(classifying random point correctly) = 1 - P(misclassifying random point) = 1-epsilon. Now, we want the probability it will classify all the random points correctly. And these random points should be from the test set. Why does he do (1-epsilon)^m? Where am I going wrong?
@mohammadsadilkhan1875 · 3 years ago
Yes, the classifier classifies all the m points in the training data correctly. But he wanted to show: given any classifier with true error rate epsilon, what is the probability that it correctly predicts all m points?
@sidddddddddddddd · 3 years ago
@@mohammadsadilkhan1875 Does that mean we can set m to n? Because we are talking about how good our hypothesis is on the real data set. If we have n data points in total (of which the training set is just m points), and we are talking about true error, shouldn't we take n?
@winmintun2992 · 2 years ago
Well, I have the exact same question. I am also confused at that point.
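To make the (1-epsilon)^m step concrete, here is a minimal Python sketch (my own illustration, not from the lecture; the values of epsilon, m, and the trial count are arbitrary). It fixes one "bad" hypothesis whose true error is epsilon and estimates how often it still classifies all m i.i.d. training points correctly; the m in the exponent therefore refers to training points, and the question being answered is how likely a genuinely bad hypothesis is to look perfect on the training set.

import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.1      # assumed true error of the "bad" hypothesis
m = 30             # number of i.i.d. training points
trials = 100_000   # Monte Carlo repetitions

# Each point is misclassified independently with probability epsilon;
# "consistent" means the hypothesis makes zero mistakes on all m points.
mistakes = rng.random((trials, m)) < epsilon
consistent = ~mistakes.any(axis=1)

print("empirical P(bad h looks perfect on m points):", consistent.mean())
print("theory (1 - epsilon)^m                      :", (1 - epsilon) ** m)
print("upper bound exp(-epsilon * m)               :", np.exp(-epsilon * m))

The union bound over all bad hypotheses in H then multiplies this probability by |H|, which is where the finite |H| discussed in the later comments comes in.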
@taozhang7696 · 5 years ago
A really good explanation, thanks.
@samlaf92 · 5 years ago
Not sure I understand why the hypothesis class of planes in 2D has VC dimension 3. + - + can't be classified correctly by any plane. Am I misunderstanding the definition of shattering?
@samlaf92 · 5 years ago
24:56 answers my question. The "classifier" gets to choose the set. Why does this make sense, though? Why isn't it the labeller that also chooses the set?
@madhusudanverma6564 · 5 years ago
Hi Samuel, Suppose h(x) =1 if a
@KW-md1bq · 4 years ago
Are you describing a 1-dimensional example, with 3 alternating points on a line? (i.e. not linearly separable at all)
@alryoshakrylov7118 · 7 years ago
Excellent. Thank you very much.
@musasall5740 · 5 years ago
Excellent!
@soryahozhabr5675 · 4 years ago
I have a question. How can we shatter positive and negative cases that are located on a line? I mean, consider a straight line with three points on it, + - +.
@soryahozhabr5675 · 4 years ago
I mean with the linear class: for example, if we have + - + points located at (1,0), (2,0), and (3,0) respectively, how can we separate them with a line?
@san_sluck · 4 years ago
@@soryahozhabr5675 You could do this using kernels, even though I'm not sure what that has to do with this lecture.
@sidddddddddddddd · 3 years ago
@@soryahozhabr5675 I think three collinear points cannot be shattered.
@amangarg96 · 3 years ago
Shattering doesn't have to hold for ANY three points; there just has to exist some set of three points such that, however you label them, some classifier in the class can separate them. Think of it this way: you and I are playing a game. I place the n points. You label them any way you like. I then choose a classifier that fits your labeling. The maximum number of points n for which I can always win is the VC dimension.
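To play this game mechanically for linear classifiers in the plane, here is a small Python sketch (my own check, not from the lecture; it assumes numpy and scipy are installed). For a given point set it tries every +/- labeling and uses a tiny linear program to test whether some line w·x + b = 0 separates the labels. Three points in general position pass all 8 labelings, while three collinear points (the + - + example above) and any four points fail.

import itertools
import numpy as np
from scipy.optimize import linprog

def separable(points, labels):
    """Check whether some line w.x + b strictly separates the labeled points.

    Feasibility of y_i * (w . x_i + b) >= 1 for all i is tested with a
    linear program (any feasible point works, so the objective is zero).
    """
    pts = np.asarray(points, dtype=float)
    y = np.asarray(labels, dtype=float)
    # Variables: w1, w2, b.  Constraint rows: -y_i * (x_i1, x_i2, 1) <= -1.
    A_ub = -y[:, None] * np.hstack([pts, np.ones((len(pts), 1))])
    b_ub = -np.ones(len(pts))
    res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3, method="highs")
    return res.success  # feasible => a separating line exists

def shattered(points):
    """True if every +/-1 labeling of the points is linearly separable."""
    return all(separable(points, labels)
               for labels in itertools.product([-1, 1], repeat=len(points)))

print(shattered([(0, 0), (1, 0), (0, 1)]))          # True: 3 points in general position
print(shattered([(1, 0), (2, 0), (3, 0)]))          # False: collinear, the + - + labeling fails
print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))  # False: no 4 points can be shattered

This matches the comment above: the VC dimension of lines in the plane is 3 because some triple of points can be shattered, even though a collinear triple cannot.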
@shynggyssaparbek2209 · 5 years ago
Super prof
@KW-md1bq · 4 years ago
Minor quibble: in the proof of the error bounds, when you drew e^-epsilon and 1-epsilon, I think you drew the mirror image; they should slope up and to the left. I also think it seemed a bit arbitrary to point out that e^-epsilon is greater than 1-epsilon. There are infinitely many functions that are greater than 1-epsilon. What made Leslie Valiant pick e^-x specifically in the first place?
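On the second point, one reading of the standard PAC argument (not a claim about what Valiant originally did): any upper bound on 1-epsilon would be valid, but e^{-epsilon} is tight at epsilon = 0 and turns the product over m samples into an exponential that is trivial to solve for m:

\[
1-\epsilon \le e^{-\epsilon}
\;\Rightarrow\;
|H|\,(1-\epsilon)^m \le |H|\,e^{-\epsilon m} \le \delta
\;\Longleftrightarrow\;
m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right).
\]

Inverting (1-epsilon)^m directly would involve a logarithm of (1-epsilon), which is messier and gives essentially the same bound.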
@yanjenhuang · 8 years ago
Hi Prof, nice tutorial! I have two questions here. In PAC learning, is |H| the number of all hypotheses whose training error = 0? How can we count it?
@LuisFernandoValenzuela · 8 years ago
He answers that around 13:20: he is basically working with a finite hypothesis space. Think of n-class classification, where there are only n possibilities (classes) for the hypothesis.
@yanjenhuang · 8 years ago
Thanks! I understand it now! =)
@diegomabrary4974 · 5 years ago
No, the set of hypotheses with training error = 0 is the version space.
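For a concrete way to count |H| (an illustrative example, not one from this lecture): if the inputs are n binary features and H is the class of conjunctions of literals, then each feature can appear as a positive literal, as a negative literal, or not at all, giving |H| = 3^n syntactically distinct conjunctions. |H| counts every hypothesis in the class; the subset with training error = 0 is the version space mentioned above.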
@diegomabrary4974 · 5 years ago
hope he can think straigntly before speak it out. times & times recorrect his statements...
@smartyprerak · 4 years ago
You couldn't even write the spelling correctly before posting your comment. Irony!!