
The Future of Deep Learning Research 

Siraj Raval
770K subscribers
64K views

Published: 29 Aug 2024

Comments: 231
@tunestar · 7 years ago
Please start a "really" in-depth series on Reinforcement Learning. Nobody has done it in a way that's easy to understand, and I think it's an area where much remains to be done. It's the closest thing we have to how we humans really learn.
@SirajRaval · 7 years ago
seriously considering this.
@bjornsundin5820 · 6 years ago
Alejandro Rodriguez: his deep Q-learning video is about a reinforcement learning algorithm. I had a hard time learning from a video going that fast, though; I learned it from a bunch of different sites instead. (I agree)
@SirajRaval · 6 years ago
great feedback, will go slower
@normanheckscher · 6 years ago
Björn Sundin: RU-vid has a speed function and a pause button.
@bjornsundin5820 · 6 years ago
Norman Heckscher: yeah, but it's more about how much he explains different things. Sometimes he says just a few words about something important that I'd need explained a bit more clearly. Of course, it's different for different people, and I'm not telling him to change his teaching style.
@ortinsuez2052 · 7 years ago
Keep up the good work, Siraj.
@SirajRaval · 6 years ago
thanks Ntinda!
@KellenChase · 6 years ago
Every single time I think to myself "why isn't anyone talking about X or Y" in ML/AI/DL research, you come out with a well-done, entertaining, easily explained video summarizing and waxing poetic on the subject, giving me ever more rabbit holes to go down. You are awesome; thank you for doing what you do. I started a meetup in my city to discuss AI, and many of your videos will be shared. Thank you for the book reco as well; I will be listening to it on Audible at 3x. By the way, I just finished Max Tegmark's Life 3.0 and would highly recommend it if you haven't read it yet.
@justsomerandomguy933 · 7 years ago
Geoffrey Hinton demonstrated the use of the generalized backpropagation algorithm for training multi-layer neural nets, but he didn't invent it. The approach was developed by Henry J. Kelley and Arthur E. Bryson.
@SirajRaval · 6 years ago
yes, he just popularized it
@alzheimancer · 7 years ago
The best explanation of backpropagation I've ever seen
@dishonmwabashfano3627 · 2 years ago
Everything is a function! Math is everywhere; math is all around us. Math is beautiful. Siraj, you are a genius, bro.
@whickked · 7 years ago
Really appreciate you breaking down deep learning and AI concepts, as well as recommending blogs, books, and articles to check out. You're the man!
@BocaoLegal · 7 years ago
Love you Siraj, you are doing a job that no one else does.
@asleepius · 7 years ago
Much love Siraj, you put a lot of work into everything you do. If I were your parent and you came to visit, I would slightly lower my newspaper, pull my reading goggles to the tip of my nose, and give you a nod of unapologetic validation.
@SirajRaval · 6 years ago
hah thanks Jordan!
@MarkJay · 6 years ago
Great talk Siraj. I always get inspired after watching your videos. Keep it up!
@I77AGIC · 7 years ago
I really enjoy videos like this. You can't really find these kinds of discussions often.
@Fuckutube547465 · 7 years ago
I had a feeling you would have this view on Hinton's comments. While back-propagation is extremely useful with today's processing capabilities, its real-world applications weren't too great in '86. I hope that the success of the 'next backprop' won't be dependent on brute-force capabilities 30 years later. Thanks for bringing attention to this topic; looking forward to the next one!
@math_in_cantonese · 7 years ago
Thanks Siraj, your channel is my source (no "s") for recent news about the field of I.T.
@TheAizaz420 · 6 years ago
This is one of the best videos on the research side of deep learning and AI. Siraj, you are awesome.
@douglasoak7964 · 7 years ago
It's not sparse, it's focused. Babies start with a pre-programmed classifier (faces), positive/negative. That's why they are so focused on faces. From this simple classifier, the baby moves on to build its own classifiers. Essentially, a general AI will be a classifier fed by another simple classifier, with the ability to build off the initial classifier to build new classifiers.
@sortof3337 · 6 years ago
What about babies who are born blind? How do they function and become conscious?
@zacharykeener1990 · 6 years ago
I believe this example can be expanded to a number of "pre-programmed" classifiers, e.g. those things that the senses respond to: blind/deaf/touch as the senses, and the physical world as the pre-programmed classifiers.
@sgaseretto · 7 years ago
I'd really love to see videos of you talking more about this, like the other, more experimental learning algorithms out there. For example, more about:
- Synthetic Gradients (which you already mentioned in this video)
- Feedback Alignment
- Target Propagation
- Equilibrium Propagation
- and others
@tthtlc · 6 years ago
You make me go crazy again... emotionally motivating. Good for my neural network.
@Majorityy · 6 years ago
Siraj, it's a pleasure listening to you. I like your energy, what you're saying is very clear, and it's very motivating to sense such interest and passion for what you're explaining in the video. Please keep the good vibes going, awesome work. Milan
@tonycatman · 7 years ago
Excellent video. I've been thinking about Hinton's comments over the last week too, and also in the context of learning from tiny data sets. I'm thinking that humans are actually rubbish at classification from small data sets, but we kid ourselves that we are good at it.
@shreyashervatte5495 · 7 years ago
Hey Siraj! Thank you so much for making videos like this... Your videos really inspire me to do more with my life... The amount of information that's out there... So much to learn... You're influencing lives out here... Keep up the good work!!
@SirajRaval · 6 years ago
awesome thanks!
@mihaitensor · 6 years ago
The backpropagation algorithm was invented in the 1960s. Hinton showed in 1986 that backpropagation can generate useful internal representations of incoming data in the hidden layers of neural networks. He didn't invent backpropagation.
@moejobe · 7 years ago
This might be your best video in terms of depth.
@lourensed · 7 years ago
I'm claiming this as my favorite video on your channel. Maybe it's because I'm getting a better grasp of machine learning, but I got a real concentration flow during this video, only being happily interrupted at 14:00 and 16:00. Keep it up Siraj!
@derasor · 6 years ago
IMO best video from this great channel to date. Thank you Siraj, keep improving!
@SirajRaval · 6 years ago
thanks derasor!
@giraudl · 7 years ago
Really love how you explain the chain rule. 10^∞ kudos!
@SirajRaval · 6 years ago
thanks Luc!
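
The chain-rule explanation praised above reduces to one identity: for a loss L(y(z(w))), dL/dw = (dL/dy)(dy/dz)(dz/dw). A minimal numeric sketch for a single sigmoid neuron (the values are arbitrary, chosen only for illustration):

```python
import math

# One sigmoid neuron: y = sigmoid(w*x + b), loss L = 0.5 * (y - t)^2
x, t = 1.5, 0.0          # input and target
w, b = 0.8, 0.1          # parameters

z = w * x + b
y = 1 / (1 + math.exp(-z))
L = 0.5 * (y - t) ** 2

# Chain rule, factor by factor:
dL_dy = y - t                   # derivative of the loss w.r.t. the output
dy_dz = y * (1 - y)             # derivative of the sigmoid
dz_dw = x                       # derivative of the pre-activation w.r.t. w
dL_dw = dL_dy * dy_dz * dz_dw   # dL/dw = dL/dy * dy/dz * dz/dw
dL_db = dL_dy * dy_dz           # dz/db = 1
```

Backprop over a deep network is exactly this, applied layer by layer from the loss backward.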
@GlassScissors · 7 years ago
Siraj, you are an inspiration! I actually took a lecture on NNs, and this is a great 40-minute summary. Don't be afraid to tell us more :D I like how you surf on the edge of going too deep into an explanation and giving us just enough :)
@sandeepsrikonda7352 · 6 years ago
"it's got to be more than just gradient based optimization" ......been waiting for this
@jakobsaadbye5309 · 7 years ago
Amazingly explained. It feels like you connected almost every one of your videos into one video, making it understandable and coherent.
@rougegorge3192 · 7 years ago
Big future for Siraj, big...
@hypersonicmonkeybrains3418 · 6 years ago
Also you have to consider morphic resonance. Morphic resonance, Rupert Sheldrake says, is "the idea of mysterious telepathy-type interconnections between organisms and of collective memories within species."
@guitarheroprince123 · 7 years ago
C'mon Siraj, tomorrow is my computer architecture test and I have to study, but you dropped an awesome video.
@guitarheroprince123 · 7 years ago
well fkit cause internet > college.
@chrismiles3838 · 5 years ago
Artificial Life is the closest field studying the topic you concluded is the most promising direction. It's an exciting field! I would love to see more interaction between AI and AL. Siraj, could this be an interesting topic for a video?
@bernardofn · 7 years ago
Thanks Siraj, very insightful video. I watched GH's interview with Andrew Ng two weeks ago, and it kept me wondering about the new directions! :-)
@SirajRaval · 6 years ago
thanks Bernardo!
@harishshankam6268 · 6 years ago
No words for your video. Excellent!
@shirshanyaroy287 · 7 years ago
I watched this video on 2x speed just like you suggested. I feel like a god.
@y0d4 · 6 years ago
I liked your video because of the passion you show at ~16 min :)
@inamothosan · 7 years ago
Learned something new today... thanks Siraj
@elektronik2000 · 7 years ago
Siraj, you've become much better at explaining!!
@billykotsos4642 · 5 years ago
Hinton is one of the GOATs
@apachaves · 7 years ago
Excellent video again, thank you Siraj. I definitely agree we should do more exploration, and I hope one day to contribute in that direction.
@rahul.chandak · 7 years ago
Nice video. I just started learning ML. Can you make a video explaining how to decide the number of hidden layers, when to assign a bias value and how to decide it, and also how to assign initial weights, with some practical examples? If any link is already available, that will help too :)
@RAJATTHEPAGAL · 6 years ago
Thanks, that's all the inspiration I needed... And as for backprop, yeah, just like you I too started to think maybe we're overusing it, and more and more models are arching towards it. And I'm just learning now.
@yourfriendlyrider · 7 years ago
Hey Siraj, awesome video as always. I have been studying a lot about ML these days and it amazes me, but there is one thing that still keeps me, and I guess a lot of people, from trying and exploring ML: computation power. I think people are not aware of what amount of computation is generally required for these operations, and because of that they are afraid of trying things out, because they just don't have any idea how much time it takes to train a model. So I request you make a video explaining what computation power these ML models require to train, and whether anyone with a decent i5 laptop with, say, a 930M Nvidia GPU can do it or not :)
@shafeeza136 · 6 years ago
Thank you for your videos. I am learning a lot from them :)
@larryteslaspacexboringlawr739 · 7 years ago
thank you for the deep learning research video
@natesh31588 · 6 years ago
Allow me to add one more option, from a hardware perspective. Building on that requires clearing up some fundamental issues. You say everything is a function. The more accurate statement is that everything can be described as a function; the key word being description. Neural network algorithms are computational descriptions of how learning can be achieved to satisfy an input-output mapping. The option I propose is trying to understand the underlying physical (thermodynamic) process that we end up describing as learning. For example, a refrigerator taking in electrical energy to cool things down can be described computationally using an input-output function and implemented using a transistor circuit. I can also always build a refrigerator to take in power and cool things down. Both my circuit and the refrigerator are now doing the same thing computationally, but only one of them will actually cool things down. So why not attack the question of general intelligence the same way? Is it possible to build a hardware system that satisfies specific thermodynamic (energy/power/work/heat) conditions so that its dynamics can be described as learning? For fun, let's call this system a thermodynamic computer.
@MrFaxt · 7 years ago
you da baddest Siraj
@KatySei · 6 years ago
Amazing video, Siraj.
@whiteF0x9091 · 6 years ago
Great presentation! Thanks.
@arzoo_singh · 6 years ago
Siraj, great work. Just a small piece of feedback: speak slowly and give it some time. Let's say you're explaining regression: speak slowly and pause in between.
@icyrich4456 · 7 years ago
thx for another portion of knowledge
@fabfan12 · 7 years ago
Awesome video. I love the enthusiasm!
@solid8403 · 7 years ago
Love is a function. Awesome stuff.
@MultiSaitox · 7 years ago
Excellent video as usual Siraj, thank you so much !
@y__h · 7 years ago
Man, you should totally cover the impact of Google's TPU on machine-learning-specific hardware. Heck, Nvidia recently started an open-source, TPU-like ML accelerator called NVDLA.
@SirajRaval · 6 years ago
will consider
@bosepukur · 7 years ago
hope you reach a million subscribers :)
@milanpospisil8024 · 6 years ago
But unsupervised and supervised learning are sometimes tied together. For example, prediction of the future is classification on unlabeled data (you just want to predict the next state of a system using an unlabeled sequence of states in time). And I think that's what the brain does.
@debarokz · 7 years ago
wow... great talk!!! I wish I had enough money to become a patron... makes me feel bad
@cdrwolfe · 7 years ago
Great vid and interesting discussion points. For those interested in evolution and its application in the brain, I always recommend Chrisantha Fernando (now of Google) and his past work on 'Darwinian Neurodynamics'.
@WillTesler · 7 years ago
Excellent video my friend
@SirajRaval · 6 years ago
thanks Will!
@Barnardrab · 7 years ago
I didn't know Steven Pinker wrote books. I think I remember seeing him in a few Big Think videos. I may be picking up How the Mind Works.
@harshtiku3240 · 7 years ago
16:25: whenever I see a Siraj video!
@SirajRaval · 6 years ago
lol
@camilogallardo6923 · 7 years ago
I love your content; I'm looking forward to getting into your deep learning series :D
@vergelimit8654 · 6 years ago
You're the best, keep it up!
@grainfrizz · 7 years ago
16:22 get high on math, not on drugs
@CheapBurger · 7 years ago
hahahahhahahha
@GuillaumeVerdonA · 7 years ago
math is a helluva drug
@SirajRaval · 6 years ago
hahah always
@ortinsuez2052 · 6 years ago
lol. I agree.
@zinqtable1092 · 6 years ago
go even higher with drugs
@451shail · 6 years ago
Such an interesting video!
@HarshColby · 6 years ago
Within backpropagation, is there a way to prune the hidden nodes? If a node isn't actually relevant, can it be eliminated automatically? I'm thinking of an analogy to the brain, where layers are sparse; a computer equivalent would be more efficient if it needed less memory/processing for unnecessary intermediate nodes. I wish I had more time in my day. I'm one of those people who started in the 70s with AI, but my career took me in a different direction. Love your videos. Keep it up.
@getrasa1 · 7 years ago
Siraj, you talked a lot about those functions and how they are everything in life. Where can I find more about them and how they relate to neural networks?
@Extruder676 · 7 years ago
Massive upvote!
@davidrhodus6849 · 7 years ago
Nice work. Thanks
@arthdh5222 · 6 years ago
Great talk man!
@leajiaure · 6 years ago
Maybe in the short term we will use computers to augment our abilities (as we have always done with technology), but machines absolutely can and will be capable of creativity that far exceeds ours. There is no task that cannot be done better by an AI.
@RoulDukeGonzo · 7 years ago
Have you seen the early stuff from Hofstadter? The concept-network and workspace ideas are really cool.
@squirrel2770 · 7 years ago
Awesome, I appreciate your work Siraj, inspiring! I need to get friendlier with Khan Academy and get back into math... *shudders*. Would you be able to recommend things to cherry-pick and learn for these purposes, or an order of topics, perhaps? Or would you consider most concepts to have dependencies that make cherry-picking questionable? For example, would you jump to trying to figure out the derivative and partial derivative?
@andraskasznar1678 · 6 years ago
Amazing.
@BalbirSingh-qp2we · 7 years ago
Total G, your videos are.
@tamerius1 · 6 years ago
Very nice video, thanks.
@Magenta1593 · 7 years ago
Really good video!
@nikhilsoni7037 · 7 years ago
Amazing videos. You are an inspiration. Are you coming to India soon?
@alienkishorekumar · 7 years ago
I almost thought Geoffrey Hinton was in this video.
@SirajRaval · 6 years ago
he is in spirit
@AinunNajib-ec2ainun · 7 years ago
WOW, just wow. I had lost faith in deep learning just before watching this video. Actually, there are many more things to optimize! Thanks for making this video.
@SirajRaval · 6 years ago
so awesome!
@eduardmart1237 · 6 years ago
Please make a video about text (document) classification!
@victorocampo5263 · 7 years ago
Notice me senpai!!!!
@SirajRaval · 6 years ago
hi Victor!!
@jordanparker211 · 7 years ago
Could back-propagation be viewed as a CompSci implementation of an adaptation of linear regression with a transformation applied? (1) The gradient descent optimization of the loss function is analogous to a least-squares estimator; (2) the weights equal matrix B and the bias equals matrix A in the equation Y = A + BX + E, where E is a matrix of error terms; (3) the application of the non-linearity is the transformation function in Z = g(Y). It seems like there is similar thinking in linear regression and auto-regressive time-series modeling? Please tell me if I'm wrong! P.S. love the vidz brah
@ivy3420 · 6 years ago
yeah couldn't agree more. math is freaking beautiful. so is physics and biology and chemistry and computer science and engineering and artificial intelligence and deep learning.
@realcygnus · 6 years ago
Great channel! Raj, just out of pure curiosity, do you have a computer science degree? Or are you still a student? Or are you just a natural, self-taught DIY type!?
@gangadharasai9372 · 7 years ago
Siraj, you are awesome!!!! And one question: is backpropagation inspired by the neural activity of the brain? Does backpropagation happen in our brain? I am not able to find an answer.
@Leon-pn6rb · 6 years ago
Yo, I legit thought in the beginning that the bald guy behind him was a co-host, standing behind him, waiting to speak next. I looked at him for far too long before realizing he was just a photo in the article.
@alienkishorekumar · 7 years ago
Learning deep learning from deeplearning.ai, and soon I'll do the fast.ai course too.
@tunestar · 7 years ago
Cool, but neither of them will teach you RL, which is what you should be doing according to this video.
@alienkishorekumar · 7 years ago
Alejandro Rodriguez I'm planning to take it step by step, going from first principles.
@xiobus · 6 years ago
They're all good tools to learn... learn it all.
@SirajRaval · 6 years ago
keep it up
@Erilyth · 7 years ago
Great video Siraj! One small question though: in self-organizing maps, could we just extend the map to higher dimensions instead of just a 2D map? Following the algorithm, I don't see any issues extending it to higher dimensions.
@Anonymous-lw1zy · 7 years ago
Hinton contributed a massive amount to NNs, but not backprop. During the 1980s upswing in NNs there was a lot of discussion (and conflicting claims) about who first used backprop for NNs. The consensus finally pretty much settled on Werbos. See this for a good historical summary: people.idsia.ch/~juergen/who-invented-backpropagation.html
@ladjiab · 7 years ago
Hi Siraj, I have a question: do you think it's safe to train our models on Google's Machine Learning Engine? Wouldn't that allow Google's AI to know everything about our products, since it's kind of learning from our models?
@randcontrols · 7 years ago
Awesome video Siraj. What I'd like to see is more collaboration between the deep learning and complex adaptive systems worlds. This video focuses on the deep learning world, so let me give a one-sentence intro to complex adaptive systems. In a complex adaptive system, complexity arises, almost from nowhere, from simple interactions between "agents". NetLogo is a very simple modeling environment with simple examples demonstrating the concept, for example how the complex flocking behavior of birds can be simulated using very simple rules. On the other end of the scale is the very ambitious www.futurict2.eu project, which aims to simulate the world to solve global socio-economic problems. Deep learning is really very exciting, and I am truly amazed by its achievements. My 2 cents on the subject: 20 percent of the 20 percent that you want to move from exploitation to exploration should explore combining deep learning with complexity science.
@hanyuliangchina · 6 years ago
very cool.
@santicomp · 6 years ago
Hi Siraj, I love your videos; they are very inspiring. I hope I can finish software engineering and continue with AI applications. I have a question: what do you think about a general AI that could think like a human and generate programs automatically? Do you think a human (programmer) would be somehow excluded completely and lose their creative value? I understand automating things that are monotonous or that a machine could do better, but if we get replaced by having a machine do everything, where would we fit? This question came from a debate I had with a probability and statistics teacher who asked about the future of AI. I responded by talking about present-day AI, and his response was grim, as if there were no hope in the future and we were contributing to the despair of humanity. He also said few people would be in control of AI, as in the feudal ages, and we would all be like slaves. Of course I think it's a bright future; I tried to respond positively, but he was hard to convince. I'm really intrigued by your opinion; maybe I can change his belief about the grim future he thinks is coming. Keep it up. Cheers from Uruguay, South America.
@pranabsarkar · 6 years ago
Thank you :-)
@BFBCIE2 · 6 years ago
Where can I find the document he is using in the video? The info would be super helpful!
@dhiahassen9414 · 5 years ago
It is simple how babies learn: (1) there is the supervision of instincts, and (2) they use a feedback loop.
@mashmoompathan2052 · 6 years ago
Hey Siraj, I need to select a deep learning algorithm for images (the image is basically of a computer screen, which has characters, objects of different shapes, etc.). Which algorithm do you think will be most suitable in my case? Please reply!
@cupajoesir · 7 years ago
But is our acceleration in learning being outstripped by the acceleration of the accumulation of information, thus making it impossible to ever grasp full understanding?