DeepMind x UCL | Deep Learning Lectures | 8/12 | Attention and Memory in Deep Learning 

Google DeepMind
499K subscribers
64K views

Attention and memory have emerged as two vital new components of deep learning over the last few years. This lecture by DeepMind Research Scientist Alex Graves covers a broad range of contemporary attention mechanisms, including the implicit attention present in any deep network, as well as both discrete and differentiable variants of explicit attention. It then discusses networks with external memory and explains how attention provides them with selective recall. It briefly reviews transformers, a particularly successful type of attention network, and lastly looks at variable computation time, which can be seen as a form of 'attention by concentration'.
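As a concrete companion to the description, here is a minimal NumPy sketch of differentiable ("soft") attention of the kind the lecture builds up to: scores compare a query with keys, a softmax turns the scores into attention weights, and the output is a weighted sum of values. The function names and shapes are illustrative assumptions, not code from the lecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_attention(query, keys, values):
    # query: (d,), keys: (n, d), values: (n, m).
    # Scores compare the query with every key; the softmax turns them
    # into attention weights; the read-out is a weighted sum of values.
    scores = keys @ query / np.sqrt(keys.shape[-1])  # (n,) scaled dot products
    weights = softmax(scores)                        # (n,) attention distribution
    return weights @ values, weights                 # (m,) read vector + weights

# Toy usage: 4 memory slots, 8-dim keys, 3-dim values (made-up numbers).
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
values = rng.normal(size=(4, 3))
query = rng.normal(size=8)
read, weights = soft_attention(query, keys, values)
print(weights.round(3), read.round(3))
```

Because every memory slot contributes to the output, gradients reach all of them, which is what makes this soft variant trainable with ordinary backpropagation, unlike the discrete "hard" attention the lecture also covers.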
Download the slides here:
storage.google...
Find out more about how DeepMind increases access to science here:
deepmind.com/a...
Speaker Bio:
Alex Graves completed a BSc in Theoretical Physics at the University of Edinburgh, Part III Maths at the University of Cambridge and a PhD in artificial intelligence at IDSIA with Jürgen Schmidhuber, followed by postdocs at the Technical University of Munich and with Geoff Hinton at the University of Toronto. He is now a research scientist at DeepMind. His contributions include the Connectionist Temporal Classification algorithm for sequence labelling (widely used for commercial speech and handwriting recognition), stochastic gradient variational inference, the Neural Turing Machine / Differentiable Neural Computer architectures, and the A2C algorithm for reinforcement learning.
About the lecture series:
The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, providing us with the ability to learn complex functions from raw data with unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications are touching all of our lives in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.
In this lecture series, research scientists from DeepMind, a leading AI research lab, deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks via advanced ideas around memory, attention, and generative modelling to the important topic of responsible innovation.

Published: 29 Aug 2024

Comments: 46
@kimchi_taco · 11 months ago
Alex Graves invented CTC and RNN-T, which by 2013 already formed the basis of modern end-to-end ASR. That work created tens of thousands of research jobs, and yet he left to pursue his own path. His journey is inspiring: he doesn't seek fame or money or status, he seeks answers to his own curiosity. I wanna live like him.
@menesun · 2 years ago
From Lei Xun's comment (I added a 0:00 timestamp so the chapters show up in the video):
0. Opening 0:00
1. Introduction 1:24
1.1 Attention, memory and cognition 1:28
1.2 Attention in neural networks 2:50 (implicit attention; can be checked through the Jacobian)
1.3 Explicit attention: hard attention, non-differentiable 17:00
- Advantages over implicit attention: computational efficiency; scalability (e.g. a fixed-size glimpse for any size of image); sequential processing of static data (e.g. a moving gaze); easier to interpret
- Neural attention models 19:24
- Glimpse distribution 20:25
- Attention with reinforcement learning 21:12
- Complex glimpses 22:46
2. Explicit attention: soft attention, differentiable 26:27
2.1 Basics 28:15
2.2 Attention weights 29:22
2.3 An example: handwriting synthesis with RNNs 32:40
2.4 Associative attention 38:38
2.5 Differentiable visual attention 45:30
3. Introspective attention 49:23
3.1 Neural Turing Machine 51:02
3.2 Selective attention 52:53
3.3 Content-based and location-based attention 55:28
3.4 Differentiable Neural Computer 1:12:04
4. Further topics 1:13:51
4.1 Self-attention in Transformers 1:14:00
5. Summary 1:34:14
@drpchankh · 3 years ago
A no-nonsense, detailed attention lecture. Very well prepared for everyone, beginners and experienced deep learning practitioners alike. Highly recommended for anyone who wants context on how attention was first thought through in the research world. Thank you Alex. Enjoyed the lecture.
@leixun · 4 years ago
DeepMind x UCL | Deep Learning Lectures | 8/12 | Attention and Memory in Deep Learning
My takeaways:
1. Introduction 1:24
1.1 Attention, memory and cognition 1:28
1.2 Attention in neural networks 2:50 (implicit attention; can be checked through the Jacobian)
1.3 Explicit attention: hard attention, non-differentiable 17:00
- Advantages over implicit attention: computational efficiency; scalability (e.g. a fixed-size glimpse for any size of image); sequential processing of static data (e.g. a moving gaze); easier to interpret
- Neural attention models 19:24
- Glimpse distribution 20:25
- Attention with reinforcement learning 21:12
- Complex glimpses 22:46
2. Explicit attention: soft attention, differentiable 26:27
2.1 Basics 28:15
2.2 Attention weights 29:22
2.3 An example: handwriting synthesis with RNNs 32:40
2.4 Associative attention 38:38
2.5 Differentiable visual attention 45:30
3. Introspective attention 49:23
3.1 Neural Turing Machine 51:02
3.2 Selective attention 52:53
3.3 Content-based and location-based attention 55:28
3.4 Differentiable Neural Computer 1:12:04
4. Further topics 1:13:51
4.1 Self-attention in Transformers 1:14:00
5. Summary 1:34:14
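Section 3.3 of the outline above (content-based and location-based attention) is the part that benefits most from a concrete example, so here is a minimal, illustrative NumPy sketch of the two Neural Turing Machine addressing modes in the spirit of the lecture. The shapes, the beta value, and the three-way shift kernel are assumptions for illustration, not code from the lecture or the NTM paper.

```python
import numpy as np

def content_weighting(M, key, beta):
    # Content-based addressing: cosine similarity between the key and each
    # memory row, sharpened by beta and normalised with a softmax.
    sim = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    e = np.exp(beta * sim - (beta * sim).max())
    return e / e.sum()

def shift_weighting(w, s):
    # Location-based addressing: circular convolution of the weighting w with
    # a distribution s = (p_left, p_stay, p_right) over shifts {-1, 0, +1}.
    n = len(w)
    return np.array([s[0] * w[(i + 1) % n] + s[1] * w[i] + s[2] * w[(i - 1) % n]
                     for i in range(n)])

# Toy demo: 5 memory slots of width 4. Look up row 2 by content, then
# shift the resulting weighting one slot to the right.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 4))
w = content_weighting(M, M[2], beta=5.0)             # peak at slot 2
print(w.round(3))
print(shift_weighting(w, (0.0, 0.0, 1.0)).round(3))  # peak moves to slot 3
```

Because both steps are differentiable, the whole read path can be trained end to end with backpropagation, which is exactly what separates this from the hard, RL-trained attention in section 1.3.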
@softerseltzer · 3 years ago
22:25: I'm checking all my tabs for notifications
@leixun · 3 years ago
@jawad mansoor You're welcome
@barisdenizsaglam · 4 years ago
Great lecture! I really appreciate how he explains the thought process behind the new ideas.
@skySanter · 3 years ago
Hello, fellow Turk
@stephennfernandes · 3 years ago
Great lecture! I highly recommend this video to anyone looking for an in-depth understanding of attention and the different families of attention mechanisms. It's the best explanation of attention available on the entire web.
@pw7225 · 2 years ago
This is sooooooo good. So well explained. It's like a Neuralink knowledge upload to my brain. Thanks, Alex!
@peterdavidfagan · 2 years ago
This is one of my all-time favorite lectures, thanks for making this available. DNCs are very interesting.
@agamemnonc · 1 year ago
I like the "Thank you very much for your attention" punch line at the end.
@Letsfeelthenaturee · 4 years ago
You are really brilliant, sir. I am from your friendly country Bangladesh 🇧🇩. I hope you will keep being more and more helpful.
@lukn4100 · 3 years ago
Great lecture and big thanks to DeepMind for sharing this great content.
@BlackHermit · 4 years ago
This is only the beginning.
@Naghaas · 8 months ago
Another high-quality course from DeepMind, thanks!
@ansh6848 · 2 years ago
I was looking for a lecture on attention mechanisms, and this was the best.
@marcelomanteigas · 4 years ago
Wonderful! Thanks for putting these lectures out!!
@kaymengjialyu5086 · 4 years ago
Dear DeepMind, the link for the slides seems to be invalid. Can anyone fix that?
@ProfessionalTycoons · 3 years ago
Thank you for this lecture, I learned a lot about attention.
@robertfoertsch · 4 years ago
Excellent, Added To My Research Library, Sharing Through TheTRUTH Network...
@GrigorySapunov · 4 years ago
Thanks Alex for the cool lecture and research!
@siyn007 · 4 years ago
For anyone who watched both this lecture and his lecture from two years ago: is the difference large enough to make the older one worth watching too? Thanks
@mabbasiazad · 4 years ago
The section after 1:14:00 (Further topics) is new.
@siyn007 · 4 years ago
@mabbasiazad thanks!
@stanislavjirak2894 · 1 year ago
Splendid!! 🎉
@YaroslavVolovich · 4 years ago
Thanks Alex for a great lecture!
@PresidentGollumSmeag · 7 months ago
hey! i really enjoyed the machine lecturing! BUT!!!! your name is graves but i dont see a scar and i played u quite a bit in aram and also no cigar and also no shotgun and also no collector in ur item list in the background! PROPS FOR THE BEARD!!! FRAUD!!!!!!!
@Adrixin · 7 months ago
totally agree. needless to say he was camping base the entire time afk
@AnonymousIguana · 1 year ago
Fantastic :)
@hosseinsheikhi5596 · 4 years ago
Amazing lecture!
@Letsfeelthenaturee · 4 years ago
Really... so interesting!
@priancho · 4 years ago
Thank you for such a good lecture! :=)
@amniasalma307 · 4 years ago
Thanks for sharing this
@pratik245 · 2 years ago
These things seem eerily similar to an idea I had 5 years ago, when I wrote an innocuous LinkedIn article around the same time transformers delved into the attention mechanism. But only those who are actually at Harvard, MIT or DeepMind can implement it, with the resources that are required for it.
@robensonlarokulu4963 · 1 year ago
No worries! Jürgen Schmidhuber already invented all that stuff and all the relevant ideas at least 30-40 years ago. Maybe earlier, just around the time he was a suckling infant.
@pratik245 · 1 year ago
@robensonlarokulu4963 Yeah.. also true that losers will be losers right from the time they are born.
@pratik245 · 1 year ago
Also, understand the difference between me and you: I never want credit for anything, but I don't like stealing. Do you know who Rudra is? That is what I become when I see too much injustice. So better stay away from sucking any future generation's blood. If I see such injustice, believe me, you will know God's wrath.
@quentinpaden1481 · 5 months ago
Revive YouTube
@rogerab1792 · 3 years ago
Memory-Augmented Neural Networks are the next big thing.
@deeplearningpartnership · 4 years ago
Amazing.
4 years ago
"Attention is all you need"? What a missed opportunity to call the paper "Give me some attention and I'll do everything you want". ;)
@quentinpaden1481 · 5 months ago
Police YouTube