
ISAAC: An Analog Convolutional Neural Network Accelerator (Part I) 

Rajeev Balasubramonian

This is the first of two videos on the ISAAC analog accelerator for deep neural networks. The video is based on the ISCA 2016 paper "ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars" by A. Shafiee et al. Part I introduces the memristor crossbar unit, its inherent challenges, and how ADC overheads and precision can be managed.
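The in-situ dot-product idea the video covers can be sketched as a toy model. This is a minimal, idealized sketch: the function names, the numeric values, and the uniform-ADC model are illustrative assumptions, not details from the paper. Input voltages drive the crossbar rows, programmed memristor conductances act as weights, and the per-cell currents sum on each bitline (Kirchhoff's current law); an ADC then digitizes the analog sum.

```python
# Idealized memristor crossbar column (illustrative sketch, not ISAAC's circuit).

def crossbar_column_current(voltages, conductances):
    """Ideal bitline current: I = sum_i V_i * G_i (Ohm's law + KCL)."""
    return sum(v * g for v, g in zip(voltages, conductances))

def adc_quantize(current, full_scale, bits):
    """Toy uniform ADC: clamp the current, then map it to an n-bit code."""
    levels = (1 << bits) - 1
    clamped = max(0.0, min(current, full_scale))
    return round(clamped / full_scale * levels)

voltages = [0.2, 0.5, 0.1, 0.4]       # DAC-driven row voltages (made up)
conductances = [1.0, 0.5, 2.0, 0.25]  # programmed cell conductances (made up)

i_out = crossbar_column_current(voltages, conductances)
print(i_out)                                         # ≈ 0.75
print(adc_quantize(i_out, full_scale=2.0, bits=8))   # → 96 (8-bit code)
```

The ADC step is where the overheads discussed in the video come from: its resolution must cover the full range of the analog sum, which is why ISAAC spends so much effort managing ADC precision and cost.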

Published: Oct 7, 2024

Comments: 12
@RebeccaSejungPark · 4 years ago
The best intro for people outside the field of system design! (I'm personally more familiar with semiconductor devices.) Thank you so much. I understood everything very clearly :)
@carletpierre1895 · 1 year ago
Do you think someone focused on VLSI and circuit design would benefit from this?
@mustafafayez1784 · 6 years ago
Thanks for the clear explanation.
@kindpotato · 4 years ago
Wow this is really cool
@Max-ge7sv · 2 years ago
If you have a network like this, the currents will not simply add. In fact you have a complex current divider with multiple voltage sources: the output current is the superposition of all the current dividers, which depends on the resistor values, and it gets more and more complex as the input vector grows. Furthermore, the memristors change their values when a voltage is applied. How is it possible to get a consistent result?
@nabhay583 · 1 year ago
What if we ground the lines? Won't we then easily be able to add the currents due to superposition?
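The exchange above can be sketched numerically. In this toy model (the values and the single load resistor are my simplifications, not ISAAC's circuit), holding the bitline at virtual ground makes each cell see its full input voltage, so the column current is exactly Σ V·G; with a finite load, the bitline floats to a voltage Vb and the cells interact through it, which is the coupling the first comment describes.

```python
# Toy bitline model (illustrative; ignores wire resistance and device drift).
# With the bitline at voltage Vb and a load resistance R_load to ground,
# KCL gives: sum_i (V_i - Vb) * G_i = Vb / R_load
#        =>  Vb = (sum_i V_i * G_i) / (1/R_load + sum_i G_i)
# As R_load -> 0 (virtual ground), the load current -> sum_i V_i * G_i.

def bitline_current(voltages, conductances, r_load):
    g_sum = sum(conductances)
    i_ideal = sum(v * g for v, g in zip(voltages, conductances))
    vb = i_ideal / (1.0 / r_load + g_sum)   # floating bitline voltage
    return vb / r_load                      # current delivered into the load

V = [0.2, 0.5, 0.1, 0.4]      # row voltages (made up)
G = [1.0, 0.5, 2.0, 0.25]     # cell conductances (made up)

print(bitline_current(V, G, r_load=1e-6))  # ≈ 0.75: near-ideal dot product
print(bitline_current(V, G, r_load=1.0))   # attenuated: cells couple via Vb
```

In practice the virtual ground is provided by the sensing circuitry (e.g. a transimpedance stage), which is why the ideal current-summing picture is a reasonable first-order model.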
@prateek22sri · 3 years ago
Do you by any chance have some publicly available code for this architecture?
@weirdsciencetv4999 · 2 years ago
Why would we need to keep the resolution as opposed to just making a network which is tolerant of error accumulation?
@peters972 · 4 years ago
You know how convolutional neural nets can have a dozen hidden layers. Can you keep all the hidden layers in analog, before the ADC? And are there applications where the analog output is used directly, e.g. to drive a servo?
@JeffersonRodrigoo · 3 years ago
Maybe two problems would arise: noise and saturation of precision.
@jaysiddhapura · 2 years ago
Is DA and AD conversion needed between each layer?
@shinnychinni4408 · 5 years ago
Sir, could you please explain memristor-based hardware accelerators for image compression in a video?