
Efficient Computing for Deep Learning, Robotics, and AI (Vivienne Sze) | MIT Deep Learning Series 

57K views
1,496 likes

Lecture by Vivienne Sze in January 2020, part of the MIT Deep Learning Lecture Series.
Website: deeplearning.mit.edu
Slides: bit.ly/2Rm7Gi1
Playlist: bit.ly/deep-learning-playlist
LECTURE LINKS:
Twitter: eems_mit
RU-vid: ru-vid.com/show-UC8cviSAQrtD8IpzXdE6dyug
MIT professional course: bit.ly/36ncGam
NeurIPS 2019 tutorial: bit.ly/2RhVleO
Tutorial and survey paper: arxiv.org/abs/1703.09039
Book coming out in Spring 2020!
OUTLINE:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks
CONNECT:
- If you enjoyed this video, please subscribe to this channel.
- Twitter: lexfridman
- LinkedIn: www.linkedin.com/in/lexfridman
- Facebook: lexfridman
- Instagram: lexfridman

Science

Published: 23 Jan 2020

Comments: 46
@lexfridman 4 years ago
I really enjoyed this talk by Vivienne. Here's the outline:
0:00 - Introduction
0:43 - Talk overview
1:18 - Compute for deep learning
5:48 - Power consumption for deep learning, robotics, and AI
9:23 - Deep learning in the context of resource use
12:29 - Deep learning basics
20:28 - Hardware acceleration for deep learning
57:54 - Looking beyond the DNN accelerator for acceleration
1:03:45 - Beyond deep neural networks
@gggrow 4 years ago
Looking forward to watching this, but shouldn't the Vladimir Vapnik lecture be coming first?
@createchannel8815 4 years ago
Me too. Invite her again.
@createchannel8815 4 years ago
Great talk. The Speaker Vivienne was clear and concise. Very informative.
@NomenNescio99 4 years ago
Thank you for sharing the lecture, this is the type of content I really enjoy.
@gonzalochristobal 4 years ago
thank you lex, the amount of information you already shared is invaluable, eternally grateful
@UglyG82 4 years ago
Great stuff Lex. Thank you !
@UglyG82 4 years ago
And thank you, Vivienne, for the fantastic insight.
@pierreerbacher4864 4 years ago
The density of neurons in this channel is incredibly high.
@colouredlaundry1165 4 years ago
Vivienne is incredibly smart, it is a pleasure to listen to her.
@davidvijayramchurn1860 4 years ago
Ironically, if you call someone 'dense' in English slang, it would imply the opposite.
@samuelec 4 years ago
Impressive amount of information delivered by this lady! To watch such a densely packed, informative video I had to take more than a few breaks. I wonder how she managed to go through those 80 slides so fast, and whether anyone watched it all in one go without losing focus!
@JousefM 4 years ago
Thanks for a rather "exotic" topic I need to learn about as an AI newbie, much appreciated Lex!
@jayhu6075 4 years ago
Many thanks for sharing with the people who can't afford to study at MIT. Respect.
@colouredlaundry1165 4 years ago
Agree with you. Respect.
@summersnow7296 4 years ago
Excellent lecture 👏👏👏. Things that we don't usually think about as ML practitioners, but highly important. Great insights.
@colouredlaundry1165 4 years ago
I am not an expert in Vivienne Sze's field; however, she was an extremely good lecturer. Every concept was extremely clear.
@JonMcGill 3 years ago
I used to be a Field Apps engineer for telecom, and she's certainly correct about the power problem with respect to chip technology. Very likeable lecturer!!
@nikhilpandey2364 4 years ago
I was researching this on my own. I have been doing network pruning wrong. I wouldn't mind a hit in accuracy if my latency budget were met, but now I think I can be far more frugal with the decrease in accuracy. Thanks a lot.
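The trade-off the comment above describes, giving up a little accuracy for a smaller and faster network, is the idea behind magnitude-based weight pruning. A minimal NumPy sketch, assuming unstructured pruning by absolute value (the function name and threshold choice are illustrative, not from the lecture):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.9, -0.05, 0.4],
              [0.01, -0.7, 0.2]])
pruned = magnitude_prune(w, 0.5)  # zeroes roughly the smallest half
```

Raising `sparsity` removes more weights, and hence more MACs and memory traffic at inference time, at a growing cost in accuracy, which is exactly the latency-versus-accuracy budget the comment mentions.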
@warsin8641 4 years ago
I love this. I will rewatch everything when I'm older and hopefully understand it better and more deeply. I'm only a junior in high school 😖
@tomfillot5453 4 years ago
Maybe start by looking at Crash Course Computer Science. They give a good overview of how a computer actually works, and should give you more context for what the different types of memory and operations are, and things like that. Then 3Blue1Brown has an excellent video series on neural networks. A lot of the understanding comes from calculus, but fortunately he also has an excellent video series on that!
@alterna19 4 years ago
Warsin I like your avatar
@thusspokeshabistari 4 years ago
Try to watch the video slowly in segmented chunks, and then write down what you understand and don't understand about the particular segment of the video(s), and then you can Google what you don't understand and then get back to viewing the video again later.
@ayushdutta8050 4 years ago
Haha .. senior year here 😅 . . AI has no age cutoff thank god haha
@BlackHermit 4 years ago
FastDepth is really interesting. Could be useful for many people.
@ganeshdongari7098 3 years ago
Excellent
@Happy-wi7ml 4 years ago
Brilliant thank you
@merlinmystique 4 years ago
Thank you, every video you post is incredibly useful. Though it is really hard to enter this field from scratch: everything you learn forces you to go learn a thousand other things, and it gets really frustrating sometimes. I hope this gets better with time.
@Soulixs 3 years ago
thanks lex
@XCSme 4 years ago
Great video and an interesting problem. Why stop at the architecture? What about using different materials for specialized DNN hardware? Maybe use lower-power transistors that are less accurate but good enough for inference. I don't think the brain's neurons are always 100% accurate and consistent, yet the brain seems to be somewhat fault tolerant.
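The "less accurate but good enough for inference" idea in the comment above is close in spirit to reduced-precision inference. A rough sketch of symmetric int8 quantization of a weight matrix, assuming per-tensor scaling (names and the error tolerance are illustrative):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ q * scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))   # full-precision weights
x = rng.standard_normal(64)         # one input activation vector

q, s = quantize_int8(w)
exact = w @ x                            # full-precision result
approx = (q.astype(np.float64) * s) @ x  # low-precision result
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
```

The relative error stays small even though each weight now occupies 8 bits instead of 32, which is one reason inference hardware tolerates cheaper, less precise arithmetic units.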
@punyaslokdutta4362 4 years ago
What is the trade-off between the number of filters in a 3D convolution and a 4D convolution? Convolution is a matrix operation (w * Imap + bias), and the ReLU activation is mostly used to provide non-linearity. I feel the number of filters is needed to see more abstract features. For instance, the first layer of a CNN mostly captures pixel-level information: edges, cuts, depths. The layer after it understands shapes and structures, and the further layers help us understand the semantic meaning of eyes, skin, ears, nose, face. But how would the model perform if, instead of multiple filters, we had more layers; that is, if the information in the filters were stuffed into the CNN layers themselves? Or is it done to ease computation during training?
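For concreteness, here is a naive sketch of the operation being discussed: F filters sliding over an input map, each producing one output channel, followed by ReLU. This is a toy illustration of the comment's (w * Imap + bias) notation, not code from the lecture:

```python
import numpy as np

def conv2d_relu(image, filters, bias):
    """Naive 'valid' convolution: each of the F filters yields one output
    channel; ReLU then supplies the non-linearity."""
    F, kh, kw = filters.shape
    H, W = image.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.zeros((F, oh, ow))
    for f in range(F):
        for i in range(oh):
            for j in range(ow):
                # weighted sum over one kh x kw window, plus the filter's bias
                out[f, i, j] = np.sum(filters[f] * image[i:i + kh, j:j + kw]) + bias[f]
    return np.maximum(out, 0.0)

img = np.ones((4, 4))
filt = np.ones((2, 3, 3))   # two 3x3 filters -> two output channels
out = conv2d_relu(img, filt, np.zeros(2))
```

Each extra filter adds one output channel (one more "feature detector") at that layer, whereas each extra layer composes features; the two are not interchangeable, which is the crux of the question above.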
@adamsimms8528 4 years ago
I'm trying to imagine how much of this structure will stay relevant as we move to UltraRAM, which is close to DRAM speed but NOT volatile like memory-stick-type RAM. What are the implications if the data can be laid out and accessed in place, where it is saved? Suddenly the whole structure is no longer useful.
@machinimaaquinix3178 4 years ago
This was a great talk; thank goodness RU-vid has a 0.75x speed mode. She talks fast!
@colouredlaundry1165 4 years ago
She has very high knowledge throughput: 10 Gb of information per second xD
@masbro1901 2 years ago
1:10:37 It's 100x faster than an FPGA?? Wow, that blows my mind. I thought designing custom hardware for a specialized algorithm on an FPGA was the fastest way on the planet. Is it really?
@minhongz 4 years ago
So essentially power consumption and speed are almost equivalent concerns for AI chips. Does anyone know what architecture Tesla's chips employ?
@paulrautenbach 4 years ago
While watching this I was seeing parallels with what I know about the Tesla chips from their Autonomy Investor Day presentation. The Tesla chips were designed with an energy budget in mind and so address many of the same things. One advantage the Tesla chips have is they do not need to be general purpose - so, to a large extent, only need to support a single architecture or configuration. In some cases the Tesla chips avoid storage and retrieval of intermediate data by passing outputs directly to inputs via hardware channels between successive computational stages implemented as separate hardware. A large proportion of the Tesla chips are used for static memory to implement what she called global memory. This avoids going off-chip for most values.
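The intermediate-data point in the comment above can be illustrated in software terms: "fusing" two stages so the intermediate result is never materialized as a full tensor, mimicking hardware that streams one stage's outputs straight into the next. A toy NumPy analogy, not Tesla's actual design:

```python
import numpy as np

def unfused(x, w1, w2):
    # The full intermediate tensor t is materialized; in hardware this means
    # writing it out to (and re-reading it from) off-chip DRAM, which
    # dominates the energy cost.
    t = np.maximum(x @ w1, 0.0)
    return np.maximum(t @ w2, 0.0)

def fused_rowwise(x, w1, w2):
    # Stream one row at a time: stage-1 output feeds stage 2 immediately,
    # so the intermediate only ever exists as a single row ("on chip").
    out = np.empty((x.shape[0], w2.shape[1]))
    for i in range(x.shape[0]):
        t_row = np.maximum(x[i] @ w1, 0.0)
        out[i] = np.maximum(t_row @ w2, 0.0)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))
w1 = rng.standard_normal((16, 32))
w2 = rng.standard_normal((32, 4))
```

Both functions compute the same result; the fused version simply never holds the whole intermediate at once, which is the data-movement saving the comment describes.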
@samoha0812 1 year ago
This is exactly what I wanted to hear. Thank you. Attracted by the presentation title, I expected to hear about how AI chips are designed to minimize energy consumption; a lot of the content focused on computing algorithms rather than hardware design, but it was a great presentation providing a comprehensive understanding of computing and energy consumption. Thank you.
@kitgary 4 years ago
Genius!!!!!
@jkobject 4 years ago
what about neuromorphic computing?
@jessenochella4309 4 years ago
REVERSIBLE computing uses less power! But you need new chip architectures and algorithms.
@fayssalelansari8584 4 years ago
not bad
@vuththiwattanathornkosithg5625 4 years ago
Tesla hardware 3.0???
@alexanderpadalka5708 4 years ago
🗽
@dapdizzy 4 years ago
I don't think the content presented by those youngsters is on par with the talks by the legends. You know what I mean.