Arxiv Insights
My name is Xander Steenbrugge, and I read a ton of papers on Machine Learning and AI.
But papers can be a bit dry and take a while to read. And we are lazy, right?

In this channel I try to summarize my core take-aways from a technical point of view while making them accessible to a broader audience.

If you love technical breakdowns on ML & AI but you are often lazy like me, then this channel is for you!
Comments
@soundninja99 7 days ago
I wanna try pretraining the RL model with supervised learning to see if it can circumvent some of the problems with reward shaping
@khansa1436 8 days ago
I'm glad I watched this video.
@HarutakaShimizu 9 days ago
Wow, this was a very clearly explained video, thanks!
@AryanMathur-gh6df 18 days ago
Thank you so much for this video, helped a lot
@sancelot88 21 days ago
You clearly master what you explain. However, you speak too fast for other people to follow, and it is even harder to understand when English is not one's native language.
@husseinalmansory7370 1 month ago
I think without knowing the math you will be diving into the sea.
@OriginalJetForMe 1 month ago
You should watch the section on dangers and politics now, six years later. I’d be curious to know your opinions now. 😂
@mister_meatloaf 1 month ago
This is brilliant. Thank you.
@yinghaohu8784 1 month ago
very good explanations
@luxliquidlumenvideoproduct5425 1 month ago
One must stress what you say at the end of the video at 28:20: although AlphaFold 2 can predict the native conformation of an amino acid sequence, there are other contributing factors, and the algorithm is not able to answer why, nor how, proteins find their native state out of the vast combinatorial complexity of possible conformations. Levinthal's paradox.
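The "vast combinatorial complexity" behind Levinthal's paradox can be reproduced with a quick back-of-the-envelope calculation (the 3-conformations-per-residue and picosecond sampling rate below are the usual illustrative assumptions, not numbers from the video):

```python
# Levinthal's paradox: naively enumerating backbone conformations would take
# astronomically long, yet real proteins fold in milliseconds to seconds.

residues = 100                 # a modest protein chain (assumed)
states_per_residue = 3         # assumed conformations per residue (illustrative)
samples_per_second = 1e13      # assumed sampling rate, ~one state per picosecond

conformations = states_per_residue ** residues
seconds = conformations / samples_per_second
years = seconds / (365.25 * 24 * 3600)

print(f"{conformations:.2e} conformations")    # ~5.15e+47
print(f"{years:.2e} years to enumerate them")  # ~1.63e+27 years
```

Since the universe is only about 1.4e10 years old, folding clearly cannot work by exhaustive search, which is exactly the point the comment makes: predicting the native state is not the same as explaining how proteins find it.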
@anishahandique4815 2 months ago
After going through most of the YouTube videos on this topic, this one was one of the best of all. Very clear and crisp explanation. Thank you ❤
@muhammadhelmy5575 2 months ago
4:00
@tugrulz 2 months ago
subscribed
@forheuristiclifeksh7836 2 months ago
1:00
@bishnuprasadnayak9520 2 months ago
Amazing
@conlanrios 3 months ago
Great breakdown and links for additional resources
@ViewsfromVick 3 months ago
Bro! You were so ahead of your time! Like Scooby Doo
@teegeevee42 3 months ago
This is so good. Thank you!
@noahgsolomon 3 months ago
GOAT
@lamborghinicentenario2497 3 months ago
12:28 What did you use to connect the machine learning to a 3D model?
@bikrammajhi3020 3 months ago
This is gold!!
@azizbekibnhamid642 3 months ago
Great work
@iwanttobreakfree701 3 months ago
6 years ago, and I now use this video as a guide to understanding Stable Diffusion
@commenterdek3241 3 months ago
Can you help me out as well? I have so many questions but no one to answer them.
@zzewt 4 months ago
This is cool, but after the third random jumpscare sound I couldn't pay attention to what you were saying; all I could think about was when the next one would be. Gave up halfway through since it was stressing me out.
@sELFhATINGiNDIAN 4 months ago
This guy too handsome, Italian hands
@BooleanDisorder 4 months ago
Rest in peace Tishby
@Matthew8473 4 months ago
This is a marvel. I read a book with similar content, and it was a marvel to behold. "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn
@LilliHerveau 4 months ago
Feels like beta should be decreased as training progresses, as the learning rate decreases too. Sounds like hyperparameter tuning, though.
@NoobsDeSroobs 4 months ago
Figuratively exploded*
@LuisFernandoGaido 5 months ago
Five years later and RL is still a dream product. Nothing was really solved in the real world. I think there are practical areas of AI better than that.
@p4k7 5 months ago
Great video, and the algorithm is finally recognizing it! Come back and produce more videos?
@user-xz6ld7nl2l 5 months ago
This kind of well-articulated explanation of research is a real service to the ML community. Thanks for sharing this.
@obensustam3574 5 months ago
Very good video
@erickgomez7775 5 months ago
If you don't understand this explanation, the fault is on you.
@SurferDudex99 5 months ago
Lmao, this must be a joke. Anyone who supports this theory has no understanding of the exponential nature of how AI learns.
@alaad1009 6 months ago
Excellent video
@infoman6500 6 months ago
Very interesting. It looks like Nature is alive, very much alive.
@infoman6500 6 months ago
Glad to see that the human biological neural network is still much more efficient than machines with artificial neural networks.
@infoman6500 6 months ago
Excellent educational video on artificial and deep neural network learning.
@infoman6500 6 months ago
Excellent video education on bio-molecular technology.
@alexanderkurz2409 6 months ago
Another amazing video ... thanks ... any chance of some new videos coming out on recent papers?
@alexanderkurz2409 6 months ago
5:03 "to test the presence and influence of different kinds of human priors" ... this is pretty cool ...
@alexanderkurz2409 6 months ago
3:12 This reminds me of Chomsky's critique of AI and LLMs. Any comments?
@yonistoller1 6 months ago
Thanks for sharing this! I may be misunderstanding something, but it seems like there might be a mistake in the description. Specifically, the claim at 12:50 that "this is the only region where the unclipped part... has a lower value than the clipped version". I think this claim might be wrong, because there could be another case where the unclipped version would be selected: for example, if the ratio is 0.5 (and we assume epsilon is 0.2), the ratio would be smaller than the clipped version (which would be 0.8), and it would be selected. Is that not the case?
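The commenter's arithmetic can be checked directly. Below is a minimal sketch of the standard PPO clipped surrogate objective, min(r·A, clip(r, 1-ε, 1+ε)·A), using the comment's hypothetical values (ratio 0.5, epsilon 0.2); it is not the video's exact code:

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Standard PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped_ratio = max(1 - epsilon, min(1 + epsilon, ratio))
    return min(ratio * advantage, clipped_ratio * advantage)

# The comment's case: ratio = 0.5 (below 1 - epsilon = 0.8), advantage A = 1.0.
# Unclipped term: 0.5 * 1.0 = 0.5; clipped term: 0.8 * 1.0 = 0.8.
# The min picks 0.5, so the unclipped part is indeed selected here too.
print(ppo_clip_objective(0.5, 1.0))   # 0.5
print(ppo_clip_objective(0.5, -1.0))  # -0.8 (negative advantage: clipped term wins)
```

So for a positive advantage the unclipped term is selected whenever the ratio falls below 1-ε, which supports the commenter's counterexample.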
@moozzzmann 6 months ago
Great Video!! I just watched 4 hours worth of lectures, in which nothing really became clear to me, and while watching this video everything clicked! Will definitely be checking out your other work
@bowenjing3674 6 months ago
I didn't forget to subscribe, but you seem to have forgotten to keep updating.
@hosseinaboutalebi9998 7 months ago
Why have you stopped making these wonderful tutorials? I wish you had continued your channel.
@kaiz6997 7 months ago
Extremely amazing, thanks for creating this incredible video.
@negatopoji7 7 months ago
The term "activation" in the context of neural networks generally refers to the output of a neuron, regardless of whether the network is recognizing a specific pattern. The activation is indeed a numerical value that represents the result of applying the neuron's activation function to the weighted sum of its inputs. Just posting here what ChatGPT told me, because the definition of "activation" in this video confused me
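That definition can be made concrete with a minimal sketch (the weights and inputs are hypothetical, and ReLU is chosen as the activation function purely for illustration; any nonlinearity could stand in):

```python
def neuron_activation(inputs, weights, bias):
    """A neuron's activation: the activation function applied to the weighted
    sum of its inputs, regardless of what pattern the network is recognizing."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, weighted_sum)  # ReLU as the activation function

# Hypothetical numbers purely for illustration:
# 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, and ReLU(0.1) = 0.1
print(neuron_activation([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1
```

The returned number is the neuron's output, i.e. its activation, whether or not that value happens to signal a recognized pattern.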
@davidenders9107 7 months ago
Thank you! This was comprehensive and comprehensible.