I really enjoyed this talk by Vladimir. Here's the outline:
0:00 - Introduction
0:46 - Overview: Complete Statistical Theory of Learning
3:47 - Part 1: VC Theory of Generalization
11:04 - Part 2: Target Functional for Minimization
27:13 - Part 3: Selection of Admissible Set of Functions
37:26 - Part 4: Complete Solution in Reproducing Kernel Hilbert Space (RKHS)
53:16 - Part 5: LUSI Approach in Neural Networks
59:28 - Part 6: Examples of Predicates
1:10:39 - Conclusion
1:16:10 - Q&A: Overfitting
1:17:18 - Q&A: Language
Thanks for noticing that. I just paid for English captions to be created. They should be done in 30-40 hours, and I'll update the video then. *Update:* The completed English captions have now been added to the video.
That's because YouTube probably doesn't use Vapnik's algorithms :) BTW I'm a native Russian speaker, and to me his English is much clearer than American or British English :)
@thewiseturtle you can support him in other ways. It is likely easier for Lex to manage this transactionally than to manage all the drama that comes with community involvement. And even though he is paying for a service, he is likely supporting an individual's small business, which is a wonderful way to share wealth with hard workers.
My master's thesis was on forecasting using SVMs. That was the first time I fell in love with machine learning, and even with math. Thank you, Vladimir, for living.
I'm studying SVMs in my MCS program. I was so surprised to find this video with Dr. Vapnik. We live in such blessed times, with easy access to this level of high-quality content. Thank you!
@Lex Fridman 1.5 years ago I listened to your first podcast with Prof. Vapnik and was blown away. Great man, great story. I love it. The funny thing is that while pursuing machine learning and deep learning myself, I hit the subject of learning curves, cross-validation, and other methods for learning more efficiently, and remembered the podcast in which he mentioned his Complete Statistical Theory of Learning. As a former math major, I appreciate his approach so much. Thanks for this opportunity.
His concept of predicates is intriguing: everything can be deconstructed to see what it consists of - the basic building blocks. With that, only one step remains: analyzing the structure. Excellent concept!
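If it helps anyone, here is a minimal toy sketch of how I understand the predicate idea from the LUSI part of the talk: a predicate is a function psi(x), and the model is asked to preserve the predicate's empirical average against the labels. The names (`psi`, `invariant_gap`), the synthetic data, and the indicator predicate are all my own illustrative assumptions, not code from the talk:

```python
import numpy as np

# Toy sketch of a LUSI-style "statistical invariant" built from a predicate:
#   (1/n) * sum_i psi(x_i) * f(x_i)  should be close to  (1/n) * sum_i psi(x_i) * y_i

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy 0/1 labels

def psi(x):
    """A simple predicate: indicator that the first coordinate is positive."""
    return (x[:, 0] > 0).astype(float)

def invariant_gap(f_values):
    """How badly a model's outputs f(x_i) violate the predicate's invariant."""
    lhs = np.mean(psi(X) * f_values)   # model side of the invariant
    rhs = np.mean(psi(X) * y)          # data side of the invariant
    return abs(lhs - rhs)

print(invariant_gap(y))                          # 0.0: the labels satisfy it exactly
print(invariant_gap(np.full(len(y), y.mean())))  # a constant model typically does not
```

In the talk, such invariants constrain the admissible set of functions during training; this snippet only shows how a single predicate turns into a measurable constraint.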
Amazing talk and amazing contributions to the field of statistical learning theory. This is definitely a piece of the puzzle that I feel is very underrepresented today.
Arriving here from the podcast. I must say that horizontal expansion will give us the models we need, and yes, even then it would be an imitation. Intelligence seems to be far from our reach as of now.