
Were RNNs All We Needed? (Paper Explained) 

Yannic Kilcher
264K subscribers
29K views

Published: Oct 13, 2024

Comments: 57
@Bikameral · 1 day ago
It's great having you back!! Thank you, and please don't leave us again.
@FranksWorldTV · 1 day ago
💯
@novantha1 · 1 day ago
Imagine how influential this paper could have been if it had been released in 2014, lol. It would have been revolutionary.
@fireinthehole2272 · 1 day ago
Next paper: were NAND gates and registers all we needed?
@ickorling7328 · 1 day ago
Wait, literally...
@yurona5155 · 1 day ago
Shame on you, NERF (NOR-exclusionary reductive functionalist)!
@achunaryan3418 · 1 day ago
Was Newton all we needed?
@Cereal.interface · 20 hours ago
are organic molecules and nucleotides all we needed?
@ickorling7328 · 16 hours ago
@@Cereal.interface was DNA as central dogma all we needed?
@wolpumba4099 · 1 day ago
*Were RNNs All We Needed? Revisiting the Power of Minimal Recurrent Networks*
* 0:00 Introduction: The video explores a paper questioning the necessity of complex recurrent neural network (RNN) architectures like S4 and Mamba, suggesting that simpler RNNs might achieve comparable performance.
* 0:16 RNNs vs. Transformers: RNNs handle sequences efficiently with constant memory requirements compared to Transformers' quadratic memory needs, but suffer from backpropagation through time (BPTT).
* 3:52 BPTT Limitations: BPTT requires backpropagating gradients through all intermediate steps, limiting the length of sequences RNNs can effectively handle.
* 5:30 State Space Models: Newer models like S4 and Mamba address BPTT by removing hidden-state dependencies from input computations, allowing for parallel processing and training.
* 9:06 Minimal RNNs (minGRU, minLSTM): The paper introduces minimal versions of GRUs and LSTMs that eliminate hidden-state dependencies in the gating mechanisms, further simplifying computation.
* 12:54 Parallel Scan: These minimal RNNs can be trained efficiently using a parallel scan algorithm, similar to S4 and Mamba.
* 14:56 Trade-offs: While simpler, minimal RNNs are less powerful than traditional RNNs in a single layer. However, this can be mitigated by using multiple layers.
* 19:55 Experimental Results:
  * 19:57 Selective Copying Task: Minimal RNNs struggle with long-range dependencies in a single layer, but improve significantly with multiple layers.
  * 21:02 Reinforcement Learning Benchmarks: Minimal RNNs perform well, but the benchmarks are considered too simple to draw strong conclusions.
  * 23:59 Language Modeling (Shakespeare): Minimal RNNs perform comparably to Mamba on this small character-level dataset, where Transformers struggle due to the task's local nature.
* 26:45 Conclusion: The paper's hypothesis that minimal RNNs can achieve performance comparable to complex state-space models is plausible, but requires stronger experimental evidence. However, their potential scalability and efficiency make them promising candidates for future research.

I used gemini-1.5-pro-exp-0827 on rocketrecap dot com to summarize the transcript. Cost (if I didn't use the free tier): $0.03. Input tokens: 21161. Output tokens: 467.
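To make the minGRU recurrence mentioned in the summary concrete, here is a minimal NumPy sketch (a paraphrase of the paper's equations with assumed shapes and names, not the authors' code). Because the gate and the candidate state depend only on the current input, the recurrence over the hidden state is linear and can be computed with a parallel scan at training time; the loop below is the readable sequential equivalent.

```python
import numpy as np

# Minimal sketch of a minGRU cell (assumed shapes/names; not the authors' code).
# Key point: z_t and h_tilde_t depend only on x_t, never on h_{t-1},
# so h_t = (1 - z_t) * h_{t-1} + z_t * h_tilde_t is a *linear* recurrence in h
# and can be evaluated with a parallel (associative) scan during training.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def min_gru_sequential(x, Wz, Wh):
    """x: (T, d_in); Wz, Wh: (d_in, d_hidden). Returns hidden states (T, d_hidden)."""
    z = sigmoid(x @ Wz)          # gates, computed from inputs only (parallel over T)
    h_tilde = x @ Wh             # candidate states, also input-only
    h = np.zeros(Wh.shape[1])
    out = []
    for t in range(x.shape[0]):  # a parallel scan replaces this loop at training time
        h = (1.0 - z[t]) * h + z[t] * h_tilde[t]
        out.append(h)
    return np.stack(out)

# Toy usage
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
Wz, Wh = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(min_gru_sequential(x, Wz, Wh).shape)  # (5, 4)
```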
@onlyms4693 · 1 day ago
Is their solution to the attention head just more layers? If I'm not wrong, even Mamba has the limitation that it uses Transformer multi-head attention to mitigate it. What we need to find is a replacement formula for the attention head, because I feel it's the biggest compute cost: the bigger the context, the more has to be processed in the attention head, since every token is compared against every other token whenever there is more than one.
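The quadratic cost this comment refers to is visible directly in a naive attention implementation: the score matrix alone has one entry per pair of tokens. A rough single-head sketch, illustrative only, ignoring masking and multi-head details:

```python
import numpy as np

# Rough single-head attention sketch to show where the T x T cost comes from.
def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # (T, T): grows quadratically with context length
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                   # (T, d)

T, d = 1024, 64
Q = K = V = np.random.randn(T, d)
out = attention(Q, K, V)
print(out.shape, "score matrix entries:", T * T)
```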
@Neomadra · 1 day ago
Excellent analysis of the benchmarks. Especially the analysis of character level tasks makes so much sense.
@Mordenor · 1 day ago
Thank you Mr Yannic for discussing whether RNNs are all we needed.
@lizardy2867 · 1 day ago
TLDR: It would have been more experimentally interesting to see results on an ensemble of minGRUs. It is hard for me to say there is much takeaway here besides confirmation of the Mamba architecture's success. Perhaps they were a bit too excited about releasing the paper and decided not to focus on its stronger aspect: the minGRU and the concept of ensembling that Mamba also relies on.
@maccloud8526 · 1 day ago
Use a dark theme, then you won't have to wear sunglasses.
@achunaryan3418 · 1 day ago
Even then, how is he going to enter the Matrix, Neo?
@GNARGNARHEAD · 1 day ago
I was looking at doing something similar last week, but compressing the layers of a transformer into the weights of the RNN to get around the training inefficiencies.
@elpepemandioca · 1 day ago
In spite of not getting good results right now, I'd like more research to go this way, attempting to synthesize the plethora of models.
@danielsautot4521 · 14 hours ago
Welcome back. Can you make a video on the architecture of the Liquid Foundation Model?
@black-snow · 1 day ago
5th! Finally able to leave a high-quality comment.
@lifeofcode · 1 day ago
Wonderful overview, thanks!
@the_primal_instinct · 19 hours ago
Next paper: "Can multiplications be replaced with multiple additions?"
@box-mt3xv · 1 day ago
Missed your videos
@xelaxander · 1 day ago
25:55 Constant gate decay might actually be interesting for surrogate models of physical systems. Ignoring damage accumulation, a system's response is independent of its history.
@xelaxander · 1 day ago
You still need knowledge of the past though, since you can't include the entire phase space in your input, making you lose higher-order information.
@testboga5991 · 1 day ago
I think they're onto something, but I also think that in the strict sense it is impossible to demonstrate unless it can be mathematically proven (likely not possible for a human anyway, if at all). They're basically trying to prove a negative, which strictly doesn't work.
@shikhars4816 · 1 day ago
If I understand correctly, selective copying of a token depends on the current input token alone (?). In that case, why does a single layer perform so badly on the task?
@achunaryan3418 · 1 day ago
A single selection layer cannot be used optimally in an RNN for selective token copying based on the current input token. Recurrence needs more than one layer to produce outputs with less error, even when the gating is input-dependent. Maybe a CNN or MNN can produce a better result.
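For context on this exchange, here is a toy instance of the selective copying task as it is usually described (my own construction, not the paper's data generator): the decision to store a token can be made from the current input alone, but emitting the stored tokens later requires state carried across many steps.

```python
import random

# Toy selective-copying instance: a few data tokens are scattered among noise
# tokens, and the target is the data tokens in their original order.
def make_example(seq_len=12, n_data=3, vocab=("A", "B", "C", "D")):
    positions = sorted(random.sample(range(seq_len), n_data))
    data = [random.choice(vocab) for _ in range(n_data)]
    seq = ["-"] * seq_len  # "-" is the noise/blank token
    for pos, tok in zip(positions, data):
        seq[pos] = tok
    return seq, data

random.seed(0)
seq, target = make_example()
print("input: ", " ".join(seq))
print("target:", " ".join(target))
```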
@ensabinha · 1 day ago
The fact that one of the experiments is run on a simple benchmark is not the issue. As long as all architectures were run on it, that is not an argument against using it as a benchmark. Good architectures should perform well on simple problems as well. However, they should be run on hard problems too.
@r.alexander9075 · 23 hours ago
Why were the benchmarks chosen to be RL tasks instead of sequence-modelling tasks? And why would we then compare them to Decision Transformers?
@DanFrederiksen · 1 day ago
Wouldn't it be straightforward to try it on the GPT-2 training set and compare? Or is that inconvenient?
@-E42- · 1 day ago
damn I wish I was at flying altitude to fly with you through these papers ahah :D
@alekseyburrovets4747 · 1 day ago
Randomly stumbled. Subscribed.
@Timotheeee1 · 1 day ago
can you review the nGPT paper?
@davidlearnforus · 20 hours ago
But I don't get who decided that better performance of a compositional bare-bones element means anything. An amoeba has far more capabilities than a single human neuron, but there is a big "BUT".
@mrpocock · 1 day ago
RNNs are effectively map-reduce.
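In the sense this comment means, an RNN is a fold/reduce over the sequence. A rough sketch with a toy linear cell, which also happens to be the special case the paper's minimal RNNs exploit to become an associative scan:

```python
from functools import reduce

# An RNN step is a fold/reduce over the sequence: state' = f(state, x).
# A generic non-linear f forces a sequential reduce; the minimal RNNs in the
# paper restrict f so the whole computation becomes an associative scan instead.
def rnn_step(h, x, decay=0.9):
    return decay * h + (1 - decay) * x   # toy linear "cell" for illustration

xs = [1.0, 2.0, 3.0, 4.0]
final_state = reduce(rnn_step, xs, 0.0)
print(final_state)
```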
@seanoconnor1984 · 1 day ago
I guess Schmidhuber is dancing around over the Nobel.
@tresuvesdobles · 1 day ago
I doubt it (answering the question in the title)
@andrewaverbah4809 · 1 day ago
Please review REPA paper
@crassflam8830 · 1 day ago
yes
@andytroo · 3 hours ago
How many layers are in minGRU? Current transformers have >20 complex layers...
@zrmsraggot · 4 hours ago
I just saw the title and I laughed.
@makhalid1999 · 1 day ago
GPRNN when?
@tallwaters9708 · 1 day ago
What do people use RNNs for these days? I thought they went the way of GANs.
@chickenp7038 · 1 day ago
GANs are definitely still widely used. All of the VAEs in the LDMs use a GAN loss.
@novantha1 · 1 day ago
Well, the problem with deep learning seems to be that you can do most tasks with most architectures given enough scale, data, and training compute. RNNs are kind of nice in that paradigm because they have stable memory allocation with large sequences compared to Transformers, but they're also a lot easier to optimize, because you effectively just need efficient kernels for the linear transformations, activation functions, and parallel scan algorithms, which is quite a bit simpler than, for instance, a full Transformer. As for what you'd use them for? Presumably the same things you could use a Transformer for, essentially. It appears that for a lot of the things you would use a smaller LLM for (i.e. 1.3B and below) it really doesn't matter which architecture you have. I've also thought about extending the context length of a Transformer LLM with some sort of RNN adapter for the ultra-long-range dependencies, but I'm not even sure what that would look like exactly.
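A back-of-the-envelope illustration of the "stable memory allocation" point above: a Transformer's KV cache grows with the number of tokens, while a recurrent model carries a fixed-size state per layer. The dimensions below are made-up example values, not any particular model:

```python
# Back-of-the-envelope memory comparison (made-up example dimensions).
# Transformer inference keeps a KV cache that grows with sequence length;
# a recurrent model keeps a fixed-size hidden state per layer.
def kv_cache_bytes(seq_len, n_layers=24, n_heads=16, head_dim=64, bytes_per=2):
    return seq_len * n_layers * n_heads * head_dim * 2 * bytes_per  # keys + values

def rnn_state_bytes(n_layers=24, d_hidden=1024, bytes_per=2):
    return n_layers * d_hidden * bytes_per  # independent of sequence length

for T in (1_000, 100_000):
    print(f"T={T:>7}: KV cache ~ {kv_cache_bytes(T)/1e6:8.1f} MB, "
          f"RNN state ~ {rnn_state_bytes()/1e6:.3f} MB")
```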
@AM-yk5yd · 1 day ago
I think Translatotron still uses LSTMs. It's mentioned in the Translatotron 2 paper; IIRC the third paper doesn't explicitly go into what the decoder is, only vaguely, and says that its backbone is Translatotron 2. There was also xLSTM; I think Yannic covered it. RWKV is still alive and being developed. Still weak, but one day... I would not be surprised if it gets used in time-series prediction. Mamba would fit perfectly, and an RNN is generally the first thing I'd probably try for modeling a time series from scratch.
@AM-yk5yd · 1 day ago
The simplest form of adapter would probably just insert extra layers of RNN. Memory Transformers (or RETRO, I don't remember which) found that inserting a kv-lookup near the end, around layer 10 in a 12-layer network, gives very good results. We could replace the kv-lookup with RNN layers and either add their output as prefix tokens, RMT-style, or just add the values.
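A very rough sketch of the adapter idea described above, with entirely hypothetical functions and dimensions: compress older context with an RNN and prepend its state as an extra prefix token for the layers that follow.

```python
import numpy as np

# Hedged sketch of the adapter idea from the comment above: run an RNN over
# older context and prepend its state as an extra "prefix token" for later
# transformer layers. All functions and dimensions here are hypothetical.
def rnn_summarize(old_tokens, d_model, decay=0.95):
    h = np.zeros(d_model)
    for x in old_tokens:                 # compress arbitrarily long history into one vector
        h = decay * h + (1 - decay) * x
    return h

def layer_with_prefix(hidden_states, prefix):   # stand-in for a transformer layer input
    return np.concatenate([prefix[None, :], hidden_states], axis=0)

d_model = 8
old_context = np.random.randn(500, d_model)     # history beyond the normal attention window
recent = np.random.randn(16, d_model)           # tokens inside the window
prefix = rnn_summarize(old_context, d_model)
augmented = layer_with_prefix(recent, prefix)
print(augmented.shape)  # (17, 8): recent tokens plus one RNN-summary prefix token
```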
@MartinDxt · 1 day ago
Say whaaaat?
@lanessarosel · 1 day ago
Right - I'm still wondering what Borealis AI is - sounds like a reset machine.