Aurélien Géron
Welcome to my channel on Machine Learning. My plan is to make one or two videos per month to clarify complex topics, dive into the code, offer tips and tricks about TensorFlow, Keras, Scikit-Learn, PyTorch, deployment, performance and more. Hope you'll enjoy it!

About me: I am the former lead of YouTube's video classification team, and author of the O'Reilly book Hands-On Machine Learning with Scikit-Learn and TensorFlow. I'm blown away by what Deep Learning can do, and I feel incredibly fortunate to call it my job. I hope I can help as many people as possible join the party!
Comments
@StanleySalvatierra
@StanleySalvatierra 21 days ago
I’m from the future, Transformers won the war.
@ronakraj
@ronakraj 23 days ago
You should write a book.
@MissPiggyM976
@MissPiggyM976 25 days ago
Very clear, thanks!
@tonywang7933
@tonywang7933 1 month ago
Still can't believe these concepts can be taught in such a short video.
@samyakbharsakle
@samyakbharsakle 1 month ago
Godly explanation. I wish I could have understood this on the first try.
@shy1992
@shy1992 2 months ago
Very interesting and informative talk, even though some time has passed and machine learning has probably changed since then. It's nice to hear from an expert how they put things together and to get some references where someone can start from scratch.
@sushilkhadka8069
@sushilkhadka8069 2 months ago
Wow, best explanation ever. I found this while I was in college, and I come back once a year just to refresh my intuition.
@sebstemmer
@sebstemmer 2 months ago
Awesome video, thank you!!!
@ramensusho
@ramensusho 3 months ago
The number of bits I received is way higher than I expected!! Nice video.
@taehyeokjang6951
@taehyeokjang6951 4 months ago
Fantastic explanation!
@AladinxGonca
@AladinxGonca 4 months ago
You are the most talented tutor I've ever seen
@jacky-824
@jacky-824 4 months ago
In the last slide of the red panda example, why does the sum of the predicted distribution exceed 100%? Both distributions should each sum to 100%, right? If that weren't required, how would we explain a situation where every class gets 100% predicted probability? The cross-entropy loss would be 0 in that case.
@AurelienGeron
@AurelienGeron 4 months ago
Oops, you're right, the sum of the predicted probabilities should indeed be 100%, good catch! Apparently I can't count. 😅 I'll add this error to the video description.
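A quick numeric check of the corrected picture (a minimal NumPy sketch; the three class labels are invented for illustration, not taken from the slide):

```python
import numpy as np

# True distribution: one-hot, the image really is a red panda.
p = np.array([1.0, 0.0, 0.0])   # [red panda, raccoon, fox]
# Predicted distribution: must sum to 1, e.g. the output of a softmax.
q = np.array([0.8, 0.15, 0.05])

eps = 1e-12                                    # guard against log(0)
print(-np.sum(p * np.log2(q + eps)))           # ≈ 0.32 bits of loss

# The loss reaches 0 only when the model puts probability 1 on the true class:
q_perfect = np.array([1.0, 0.0, 0.0])
print(-np.sum(p * np.log2(q_perfect + eps)))   # ≈ 0.0
```

So a model that assigned 100% to every class wouldn't be outputting a valid probability distribution at all, and the zero-loss case only occurs for a perfect one-hot prediction.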
@danielmyers76
@danielmyers76 5 months ago
Question: at about 4 minutes, where you talk about equivariance, would it be fair to assume that all the capsules should have moved when you rotate the image? All the smallest capsules stayed still, and only the larger ones rotated.
@anngladyo5668
@anngladyo5668 5 months ago
I bet he was really clear, but my ADHD brain still spaced out.
@robertbarta2793
@robertbarta2793 5 months ago
Super explanation!
@adaslesniak
@adaslesniak 7 months ago
What if we send only the changes? E.g. 0 bits are sent if there is no change. That should improve the information content. Would it be optimal to first send one bit for the direction the weather moved (0 = more sunny, 1 = more rainy), followed by the amount of change? Then, with 8 possible weather states in total, the maximum change is 7, so encoding it takes 3 bits plus the direction bit in the most extreme case. And quite often we send 0 bits, and it works for both weather patterns. Is that reasoning valid?
@AurelienGeron
@AurelienGeron 4 months ago
Interesting question. Indeed, only sending the changes would be quite efficient, especially if there are frequent repetitions (such as 10 sunny days in a row). In practice, it's a very commonly used optimisation in telecommunications. One drawback of this approach is that you can't tell the difference between "there's no change" and "the weather station is broken". 😃
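A minimal sketch of the scheme described above, assuming 8 weather states numbered 0-7 from sunniest to rainiest (the exact encoding is invented for illustration, not from the video):

```python
def encode_changes(states):
    """Delta-encode a sequence of weather states (integers 0-7):
    no change -> empty codeword; otherwise 1 direction bit
    followed by 3 bits for the magnitude of the change."""
    codewords = []
    prev = states[0]
    for s in states[1:]:
        delta = s - prev
        if delta == 0:
            codewords.append("")                   # nothing sent
        else:
            direction = "0" if delta < 0 else "1"  # 0 = sunnier, 1 = rainier
            codewords.append(direction + format(abs(delta), "03b"))
        prev = s
    return codewords

# Long sunny streaks cost nothing after the first day:
print(encode_changes([0, 0, 0, 0, 3, 3, 1]))  # ['', '', '', '1011', '', '0010']
```

Note that the empty codeword only works if transmissions happen on a fixed schedule (say, once per day); otherwise the receiver can't tell "no change" from "nothing arrived", which is exactly the broken-station caveat in the reply above.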
@frncsngy
@frncsngy 7 months ago
Thanks a lot!
@darth_c0der
@darth_c0der 8 months ago
Explained really well. Thank you.
@junkid3559
@junkid3559 10 months ago
Bro, this guy's ML book rocks. Very well written. Very clear. Though you need to be familiar with the necessary calculus and linear algebra, imo.
@francosoloqui9557
@francosoloqui9557 10 months ago
Thank you, the explanation was clear, and with the code, I can understand the Capsule Networks better.
@pratyushparashar1736
@pratyushparashar1736 10 months ago
What an amazing video!
@ikechukwuuchendu5801
@ikechukwuuchendu5801 11 months ago
Incredible explanations
@Gett37
@Gett37 11 months ago
Can you please give me a hint? You were a YouTube video classification PM. I can't find anything on how YouTube defines the topics of videos, or whether this info is available in the YouTube API. Can you give me a clue where to find this info? I can't find literally anything.
@AurelienGeron
@AurelienGeron 4 months ago
I did a video on YouTube video classification: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-zzTbptEdKhY.html I left YouTube in 2016, so this may be outdated information, but hopefully it will help you.
@camf1991
@camf1991 11 months ago
Thank you for this video. It's been helpful for my capstone
@colletteloueva13
@colletteloueva13 11 months ago
One of the most beautiful videos I've watched and understood a concept :')
@susdoge3767
@susdoge3767 1 year ago
I just came across this and holy shit, it is absolutely captivating!! Brilliant.
@yb801
@yb801 1 year ago
Thank you, I had always been confused about these three concepts; you made them really clear for me.
@Dr.Roxirock
@Dr.Roxirock 1 year ago
I really enjoyed the way you explain it. It's so inspiring to watch and learn difficult concepts from the author of such an incredible book in the ML realm. I wish you could teach other concepts via video as well. Cheers, Roxi
@willw4096
@willw4096 1 year ago
4:22 5:07 5:13 5:49 6:39
@rozhanmirzaei3512
@rozhanmirzaei3512 1 year ago
Anyone here from the Hands-On ML book?
@Finding.mlllll
@Finding.mlllll 1 month ago
Most of them!!
@VincentKun
@VincentKun 1 year ago
OK, maybe I should pay more attention when reading my books, but when I heard here that cross-entropy is entropy + KL divergence, it made sense. Then I read my notes and saw I had written something similar, without even realizing how big a deal it was.
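That identity, H(p, q) = H(p) + KL(p‖q), is easy to verify numerically; a quick sketch with made-up distributions:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])   # true distribution
q = np.array([0.7, 0.2, 0.1])     # predicted distribution

entropy       = -np.sum(p * np.log2(p))
cross_entropy = -np.sum(p * np.log2(q))
kl_divergence =  np.sum(p * np.log2(p / q))

# H(p, q) == H(p) + KL(p || q)
print(np.isclose(cross_entropy, entropy + kl_divergence))  # True
```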
@HadiseMgds
@HadiseMgds 1 year ago
In a word, it was great!
1 year ago
The best video on cross-entropy on YouTube so far.
@anonymous.youtuber
@anonymous.youtuber 1 year ago
Magnificent explanation! 👍
@AR-iu7tf
@AR-iu7tf 1 year ago
This is by far one of the best explanations of cross-entropy loss on YouTube. Another video complements this one by asking why weighting the predicted distribution by the true distribution can even be considered a loss: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-LOh5-LTdosU.html Also, how does one make a model output a probability distribution in the first place? The role of softmax: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-p-6wUOXaVqs.html
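For reference, a minimal sketch of softmax itself (plain NumPy, with invented example values; see the linked videos for the full story):

```python
import numpy as np

def softmax(logits):
    """Map arbitrary real-valued scores to a probability distribution."""
    z = logits - np.max(logits)   # shift by the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())         # non-negative values that sum to 1.0
```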
@wibulord926
@wibulord926 1 year ago
nll
@erezd8252
@erezd8252 1 year ago
Brilliant explanation.
@wibulord926
@wibulord926 1 year ago
cc
@forheuristiclifeksh7836
@forheuristiclifeksh7836 1 year ago
2:11
@Shady9
@Shady9 1 year ago
Thank you so much for this thorough and very clear explanation of a complex subject.
@clray123
@clray123 1 year ago
It is not really a clear explanation, because it assumes the viewer already knows what Shannon was trying to formalize. At 1:06 you jump into "dividing uncertainty by 2" without first defining what "uncertainty" means and how it could itself be measured. So you are kind of explaining one unknown with another.
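For anyone stuck on the same point: the quantity being halved is the number of equally likely possibilities, and the uncertainty in bits is the base-2 log of that count (a tiny worked example, not from the video):

```python
import math

# Each answered yes/no question halves the number of equally likely
# possibilities; the remaining uncertainty is log2 of that count.
for n in [8, 4, 2, 1]:
    print(f"{n} possibilities -> {math.log2(n):.0f} bits of uncertainty")
```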
@vladimirfokow6420
@vladimirfokow6420 1 year ago
Amazing video! Small corrections:
5:05 - there should be minuses in the formula, not pluses
7:39 - equal to the *negative* message length
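For reference, the standard forms with the minus signs (the video's exact notation may differ):

$$H(p) = -\sum_i p_i \log_2 p_i \qquad H(p, q) = -\sum_i p_i \log_2 q_i$$

And since the optimal message length for outcome $x$ is $L(x) = -\log_2 p(x)$, the log-probability $\log_2 p(x)$ equals the *negative* message length, matching the second correction.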
@AlpakaAntifa
@AlpakaAntifa 1 year ago
Wonderful explanation! Thank you!
@JoeVaughnFarsight
@JoeVaughnFarsight 1 year ago
I posted your presentation on my LinkedIn.
@JoeVaughnFarsight
@JoeVaughnFarsight 1 year ago
The world of takeover bids is brimming with possibilities, presenting a tapestry of complex and dynamic elements. Entropy, cross-entropy and the Kullback-Leibler divergence bring precision and accuracy, unlocking the tools to form a unified entity. Through the exchange of knowledge and the pursuit of harmony, optimal integration allows companies to achieve a shared vision of success. The language of information theory has the power to unlock a vibrant future.
@JoeVaughnFarsight
@JoeVaughnFarsight 1 year ago
Thank you Aurélien Géron, that was a very fine presentation!
@afsanarabeya4417
@afsanarabeya4417 1 year ago
I am struggling with the implementation in Python, as my distributions have different lengths (i.e. different numbers of rows). When using scipy.special's rel_entr I get a shape-mismatch error. Anyone? Any ideas?
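One common cause of that error, and a sketch of a fix (assuming the two distributions come from samples that should share the same support; the bins and data here are invented for illustration):

```python
import numpy as np
from scipy.special import rel_entr

# rel_entr works elementwise, so both arrays must have the same shape.
# If the raw samples have different lengths, histogram them over a
# *shared* set of bins first, then normalize the counts to probabilities.
samples_p = np.random.default_rng(0).normal(0.0, 1.0, size=1000)
samples_q = np.random.default_rng(1).normal(0.5, 1.2, size=800)  # different length

bins = np.linspace(-5, 5, 41)            # shared support for both histograms
p, _ = np.histogram(samples_p, bins=bins)
q, _ = np.histogram(samples_q, bins=bins)

eps = 1e-12                              # keep empty bins out of log(0)
p = (p + eps) / (p + eps).sum()
q = (q + eps) / (q + eps).sum()

print(rel_entr(p, q).sum())              # KL(p || q), in nats
```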
@jingwen2974
@jingwen2974 1 year ago
Super clear. Thank you so much!! <3
@paramn.pathak6036
@paramn.pathak6036 1 year ago
4:08 are those idiots sleeping??
@CowboyRocksteady
@CowboyRocksteady 1 year ago
I'm loving the slides and explanation. I noticed the name in the corner and thought, oh nice, I know that name. Then suddenly... it's the author of that huge book I love!
@agarwaengrc
@agarwaengrc 1 year ago
Haven't seen a better, clearer explanation of entropy and KL divergence, ever, and I've studied information theory before, in two courses and three books. Phenomenal. This should be made the standard intro to these concepts in all university courses.