
Spiking Neural Networks for More Efficient AI Algorithms 

WaterlooAI
1.7K subscribers
62K views

Spiking neural networks (SNNs) have received little attention from the AI community, although they compute in a fundamentally different -- and more biologically inspired -- manner than standard artificial neural networks (ANNs). This can be partially explained by the lack of hardware that natively supports SNNs. However, several groups have recently released neuromorphic hardware that does support SNNs. I will describe example SNN applications that my group has built that demonstrate superior performance on neuromorphic hardware compared to ANNs on ANN accelerators. I will also discuss new algorithms that outperform standard RNNs (including GRUs, LSTMs, etc.) in both spiking and non-spiking applications.
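To make the "fundamentally different manner" of computing concrete, here is a minimal sketch (not from the talk; all parameter values are illustrative) of a leaky integrate-and-fire (LIF) neuron, the basic unit of an SNN: it integrates its input over time and communicates through discrete spikes rather than continuous activations.

import numpy as np

def lif(current, dt=1e-3, tau_rc=0.02, tau_ref=0.002, v_th=1.0):
    """Simulate one LIF neuron driven by an input current; return spike times."""
    v, refractory, spike_times = 0.0, 0.0, []
    for step, u in enumerate(current):
        if refractory > 0:
            refractory -= dt               # silent during the refractory period
        else:
            v += (dt / tau_rc) * (u - v)   # leaky integration toward the input
            if v >= v_th:                  # threshold crossing emits a spike
                spike_times.append(step * dt)
                v = 0.0
                refractory = tau_ref
    return spike_times

spikes = lif(np.full(1000, 1.5))  # constant input for 1 s
print(len(spikes), "spikes")      # a regular spike train, rate set by the input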
Speaker Bio:
Professor Chris Eliasmith is currently Director of the Centre for Theoretical Neuroscience at the University of Waterloo and holds a Canada Research Chair in Theoretical Neuroscience. He has authored or co-authored two books and over 90 publications in philosophy, psychology, neuroscience, computer science, and engineering. His book, 'How to Build a Brain' (Oxford, 2013), describes the Semantic Pointer Architecture for constructing large-scale brain models. His team built what is currently the world's largest functional brain model, 'Spaun,' for which he received the coveted NSERC Polanyi Prize. In addition, he is an expert on neuromorphic computation, writing algorithms for, and designing, brain-like hardware. His team has shown state-of-the-art efficiency on neuromorphic platforms for deep learning, adaptive control, and a variety of other applications.

Science

Published: 30 Jan 2020

Comments: 46
@lachlangray8120 3 years ago
Very excited for the future of hardware!
@arefpar3465 1 year ago
I've always thought about new types of hardware that perform tasks the way the brain does. This exciting and wonderful talk gave me the impression that my thinking wasn't off base. Looking forward to hearing more about it.
@PedramNG 3 years ago
Truly, a fascinating talk! I enjoyed it.
@jayp6955 1 year ago
This is the single most important problem in ML right now. Data uncertainty and a lack of generalizing power make traditional ML brittle. Online learning (OL) is done at the data-pipeline level rather than intrinsically in the model, which won't scale or get us closer to AGI. In the future we'll look back at OL pipelines and see them as primitive. A sound basis for AI must incorporate time/OL, which is something traditional ANNs ignore, as they are stationary solutions. ANNs need to be re-evaluated from first principles where time/OL are baked in. Time-dependent networks like spike/phase oscillators are a promising way forward if time/OL is intrinsic, but the ML community has been seduced by traditional ANNs.
@jayp6955 1 year ago
Interestingly, I just came across this clip where Hopfield talks about the limitations of offline-first networks, and what I perceive as a flaw in simple feed-forward ANN design. The ideas behind Hopfield networks are extremely fascinating. Hopfield also touched on emergent large-scale oscillatory behavior in this talk. There are differential equations that can be used to study this (Kuramoto). ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-DKyzcbNr8WE.html
@mywaimarketing 2 years ago
Great video intro for our research fellows starting hardware-based spiking neural network research at MYWAI Labs.
@SuperGhostRider 3 years ago
great lecture!
@phasor50 2 months ago
Every time someone says "compute" instead of computation or computing, I say to myself "life, liberty and the pursuit of happy".
@roryhector6581 3 years ago
I wonder how SNNs on Loihi compare to just an iteratively pruned ANN on something like EIE from Han et al. (2016). Is it mainly the sparse network and dedicated hardware that give it good performance with less energy vs. a GPU? Or does the efficiency benefit come more from spikes and asynchrony?
@mapy1234 2 years ago
great work...
@kanesoban 2 years ago
I am wondering what training algorithm is used for these networks. Does backpropagation work with SNNs?
@BrianMantel 3 years ago
You're doing absolutely amazing work here. Can you point me to a simple example or information about how to perform online learning in a spiking neural network?
@solaweng 3 years ago
I believe the Nengo package that Prof. Eliasmith mentioned is capable of online training. Training with spiking neurons is tricky, though, as backpropagation is not usually available (I guess you can still do it with backprop if you are working with a rate code). The only biologically feasible rule that I know of is PES.
@BrianMantel 3 years ago
@@solaweng Do you think maybe they're using Hebbian learning? Thanks for responding; I was waiting 5 months for that. :-)
@solaweng 3 years ago
@@BrianMantel I just checked the new Nengo docs, and it seems there are different learning rules now. The Oja learning rule has something to do with Hebbian coactivity, so I guess the answer is yes. You can check it out here: www.nengo.ai/nengo/examples/learning/learn-unsupervised.html
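For reference, here is a minimal sketch of online learning with the PES rule mentioned above, modeled on the learn-communication-channel example in the Nengo documentation (the ensemble sizes and learning rate are illustrative choices, not code from the talk):

import numpy as np
import nengo

model = nengo.Network(seed=0)
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # target signal
    pre = nengo.Ensemble(100, dimensions=1)             # spiking LIF neurons by default
    post = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(stim, pre)

    # Decoders start at zero; PES adapts them online from an error signal.
    conn = nengo.Connection(
        pre, post,
        function=lambda x: 0.0,
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # Error = actual output - target, fed into the learning rule.
    error = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(10.0)  # post learns to reproduce stim while the simulation runs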
@lucamaxmeyer 1 year ago
Is it possible to download the slides somewhere? Thanks for the video!
@postnubilaphoebus96 3 years ago
Very good lecture. I was looking for material for my master's thesis, and I found lots of interesting pointers.
@PedramNG 3 years ago
What is your master's thesis about, if you don't mind?
@postnubilaphoebus96 3 years ago
I'm still discussing with my supervisors, so I cannot say yet. But I can come back to comment here once I know 😄
@PedramNG 3 years ago
@@postnubilaphoebus96 good luck with it 😁
@postnubilaphoebus96 3 years ago
Thanks! The project also only starts around January, so there's still some time.
@PedramNG 3 years ago
@@postnubilaphoebus96 Do let me know about it. 😁 Btw, I have a BCI mind map; the link is available in the description of my RU-vid channel. I recently started adding some stuff to its computational neuroscience section.
@abby-fichtner 3 years ago
SO helpful. Thank you so much! Other than the math part (which I fear I may possibly have fallen asleep for), everything made sense to me except for the part about your shirt. hmmmm. 🙃
@oraz. 6 months ago
It's weird; it seems like there are parallels between the LMU and HiPPO.
@ritikanarwal8147 2 years ago
Sir, I want to write a research paper on spiking neural networks. Would you please suggest some applications? That would make it easier to choose a focus within this area.
@henrilemoine3953 2 years ago
If this is all true, why isn't everyone working on neuromorphic computing and SNNs?? This is really confusing to me, because I would expect researchers to turn toward this area if they knew it to be accurate.
@nullbeyondo 2 years ago
Because I don't think neuromorphic chips are easily debuggable, since the neurons become physical. Also, any use of backpropagation (which is the industry's focus right now) defeats the purpose of spiking neural networks in the first place, IMO; how are you going to calculate a gradient for neurons whose synapses form loops? The only choice is probably to have no loops, which takes SNNs further and further away from how the brain actually works. It's worth noting that the brain doesn't use backpropagation.
@TruePeaceSeeker 1 year ago
Because to get traction, one must be viable in industry as well.
@zheyuanlin6397 4 months ago
Amazing talk. Absolutely no way it's free.
@stanislav4607 4 months ago
48:50 that aged well
@parsarahimi335 3 years ago
I came here to see spiking networks, and then he finishes by presenting LMUs. How is that even related?
@chriseliasmith5387 3 years ago
LMUs (unlike many other RNNs, esp. LSTMs etc.) run well on spiking hardware.
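Some context for why that is: the LMU's memory is a small linear time-invariant system with fixed (A, B) matrices derived from Legendre polynomials (Voelker, Kajić & Eliasmith, NeurIPS 2019), so it can be implemented with the same recurrently connected neural populations that spiking hardware provides. A minimal NumPy sketch of those matrices and their discretization, based on the paper (illustrative; not the authors' code):

import numpy as np
from scipy.linalg import expm

def lmu_matrices(order, theta):
    """Continuous-time (A, B) of the LMU's Legendre delay system
    (Voelker, Kajic & Eliasmith, NeurIPS 2019)."""
    q = np.arange(order, dtype=np.float64)
    r = (2 * q + 1)[:, None] / theta
    i, j = np.meshgrid(q, q, indexing="ij")
    A = r * np.where(i < j, -1.0, (-1.0) ** (i - j + 1))
    B = r * ((-1.0) ** q)[:, None]
    return A, B

def discretize_zoh(A, B, dt):
    """Zero-order-hold discretization: m[t] = Ad @ m[t-1] + Bd * u[t]."""
    Ad = expm(A * dt)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd

order, theta, dt = 6, 1.0, 1e-3       # 6-D memory of the last 1 s of input
A, B = lmu_matrices(order, theta)
Ad, Bd = discretize_zoh(A, B, dt)
m = np.zeros((order, 1))
for t in range(1000):
    u = np.sin(2 * np.pi * t * dt)    # example input signal
    m = Ad @ m + Bd * u               # m holds a Legendre compression of the window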
@diffpizza 1 year ago
This is fucking amazing and I really want to do research on it