
Razvan Pascanu: Improving learning efficiency for deep neural networks (MLSP 2020 keynote) 

Steven Van Vaerenbergh
363 views

Published: 2 Oct 2024

Comments: 2
@X_platform · 4 years ago
I could not agree more. The top k gradient approach would greatly reduce the tug of war issue. Loving the speaker and the content 😊 Thank you!
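(Editor's note: the comment does not define the "top k gradient approach", and the talk itself is not quoted here. The snippet below is only a minimal sketch of one common reading of the idea, i.e. keeping the k largest-magnitude gradient entries and zeroing the rest so that fewer parameters compete, or "tug", on each update. The function name top_k_gradient is hypothetical.)

    # Minimal sketch, assuming "top k gradient" means magnitude-based gradient sparsification.
    import numpy as np

    def top_k_gradient(grad: np.ndarray, k: int) -> np.ndarray:
        """Return a copy of `grad` with all but the k largest-magnitude entries zeroed."""
        flat = grad.ravel()
        if k >= flat.size:
            return grad.copy()
        # Indices of the k entries with the largest absolute value.
        keep = np.argpartition(np.abs(flat), -k)[-k:]
        masked = np.zeros_like(flat)
        masked[keep] = flat[keep]
        return masked.reshape(grad.shape)

    # Example: sparsify a gradient before applying an SGD step.
    grad = np.random.randn(4, 4)
    sparse_grad = top_k_gradient(grad, k=3)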
@nguyenngocly1484 · 4 years ago
You can turn artificial neural networks inside-out by using fixed dot products (weighted sums) and adjustable (parametric) activation functions. The fixed dot products can be computed very quickly using fast transforms like the FFT, and the overall number of parameters required is vastly reduced. The dot products of the transform act as statistical summary measures, ensuring good behaviour. See Fast Transform (fixed filter bank) neural networks. Since dot products are so statistical in nature, only weak optimisers are necessary for neural networks: you can use sparse mutations and evolution. Then the workload can be very easily split between GPUs with little data movement needed during training. See Continuous Gray Code Optimization.
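(Editor's note: the following is a minimal sketch of the layer the comment describes, under my own assumptions rather than anything stated in the talk: a fixed orthonormal Walsh-Hadamard transform stands in for the "fast transform" that replaces the weight matrix, and the only trainable parameters are per-element slopes of a PReLU-style parametric activation. The names fwht and FixedTransformLayer are hypothetical.)

    # Sketch: fixed "weighted sums" via a fast Walsh-Hadamard transform (O(n log n)),
    # with adjustable per-element activation slopes as the only trainable parameters.
    import numpy as np

    def fwht(x: np.ndarray) -> np.ndarray:
        """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
        x = x.copy()
        n = x.shape[0]
        h = 1
        while h < n:
            for i in range(0, n, h * 2):
                for j in range(i, i + h):
                    a, b = x[j], x[j + h]
                    x[j], x[j + h] = a + b, a - b
            h *= 2
        return x / np.sqrt(n)  # orthonormal scaling

    class FixedTransformLayer:
        """Fixed dot products (WHT) followed by adjustable two-slope activations."""
        def __init__(self, n: int):
            # Only 2*n trainable parameters: positive/negative slope per element.
            self.pos = np.ones(n)
            self.neg = 0.1 * np.ones(n)

        def forward(self, x: np.ndarray) -> np.ndarray:
            z = fwht(x)  # the fixed, fast "weighted sums"
            return np.where(z >= 0, self.pos * z, self.neg * z)

    # Example: one layer applied to an 8-dimensional input.
    layer = FixedTransformLayer(8)
    y = layer.forward(np.random.randn(8))

With the transform fixed, parameter count drops from n*n weights to 2*n slopes per layer, which is the reduction the comment alludes to.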