Rotary Positional Embeddings: Combining Absolute and Relative 

Efficient NLP
8K subscribers
33K views

Published: 26 Sep 2024

Comments: 65
@kindness_mushroom 2 months ago
Thank you for such an intuitive explanation of a pretty complex paper.
@theunconventionalenglishman 9 months ago
I've watched a few videos trying to wrap my head around this concept and yours is by far the best. Thanks!
@laurentiupetrea3726 3 months ago
Finally! My 4th video and I was lost but this one did the trick!
@cmbbqrpb9737 1 year ago
Thanks for creating and sharing this vid! I was still confused by the math, so I read through the paper and wrote down some notes: the rotation matrix R_m rotates a query vector q of the m-th token by mθ, while R_n rotates a key vector k of the n-th token by nθ. For any rotation (orthogonal) matrix R, R^T = R^-1 holds, so R_m^T is R_m's inverse, rotating by -mθ in the opposite direction. This means (R_m q)^T (R_n k) = q^T R_m^T R_n k = q^T R_(n-m) k, i.e. the score rotates q^T k by (n-m)θ in total. This ultimately associates the interaction between the m-th query and the n-th key with their relative distance n - m, naturally and interpretably.
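The relative-offset property described in the note above is easy to check numerically. Below is a minimal sketch for a single 2D pair (illustrative values and helper names, not taken from the video or the comment above):

```python
import math
import torch

def rot(angle: float) -> torch.Tensor:
    """2D rotation matrix R(angle)."""
    c, s = math.cos(angle), math.sin(angle)
    return torch.tensor([[c, -s], [s, c]])

theta = 0.3                         # base frequency for this 2D pair
m, n = 5, 9                         # absolute positions of the query and key tokens
q = torch.tensor([1.2, -0.7])
k = torch.tensor([0.4, 2.1])

# Rotating q by m*theta and k by n*theta ...
lhs = (rot(m * theta) @ q) @ (rot(n * theta) @ k)
# ... gives the same dot product as rotating k alone by the relative offset (n - m).
rhs = q @ (rot((n - m) * theta) @ k)
assert torch.isclose(lhs, rhs)
```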
@dy8576 6 months ago
Genius
@ItsRyanStudios 1 year ago
This is amazing, thank you! I just wrapped my mind around sinusoidal embeddings and came across RoPE and was really struggling to grasp it. Definitely going to refer back to this video. I love in-depth NLP content like this.
@snehotoshbanerjee1938 1 month ago
Excellent explanation!! Thank you!
@roomo7time 4 months ago
your explanation is amazing. thank you for your work
@sammcj2000 2 months ago
Great explanation. Thank you for making this.
@vixguy 1 year ago
You make it easy to learn even for a high school student
@garylai5174 2 months ago
Nice video. Thanks for this. I could be wrong but one potential error I see: in this video, you said that "You can’t do KV cache because you change the embeddings with every token you add." I don't think this is necessarily true, at least not for decoder architectures like GPTs. The previous tokens don't attend to the new tokens -- they only attend to tokens to their left (there's a causal mask). When you add a new token, the relative position between the previous tokens doesn't change. For example, if you add a 6th token to a sequence, the distance between token 1 and token 4 hasn't changed at all; therefore, the KV cache is still valid. It seems to me that yes, relative position embedding is inefficient, but not because it invalidates the KV cache; rather, it's because every time we add a new token, it needs to attend to all previous tokens twice: once for the regular attention calculation, and once for the relative positional embedding.
@EfficientNLP 2 months ago
Yes, that is correct. The KV cache can still be used in T5 relative positional embeddings, but it is less efficient because the relative position needs to be recalculated - so this is an extra step that cannot be cached, making the KV cache not as effective compared to absolute positional embeddings.
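To make that extra step concrete, here is a minimal sketch of one decoding step with a simplified relative bias (illustrative only, not from the video; `bias_table` here is a single learned bias per distance, whereas T5 actually uses bucketed, per-head biases):

```python
import torch

def decode_step_with_relative_bias(q_t, K_cache, V_cache, bias_table):
    """One autoregressive decoding step with a (simplified) relative position bias.

    q_t:        (d,)        query of the new token at position t
    K_cache:    (t+1, d)    cached keys for positions 0..t
    V_cache:    (t+1, d)    cached values for positions 0..t
    bias_table: (max_len,)  learned bias indexed by relative distance
    """
    t = K_cache.shape[0] - 1
    scores = (K_cache @ q_t) / K_cache.shape[-1] ** 0.5  # reuses the cached keys
    # The extra, uncacheable step: this bias row depends on t and has to be
    # rebuilt and added for every new token (distances t, t-1, ..., 0).
    rel = torch.arange(t, -1, -1)
    scores = scores + bias_table[rel]
    attn = torch.softmax(scores, dim=-1)
    return attn @ V_cache                                 # reuses the cached values
```

With absolute or rotary embeddings the bias line disappears, which is why the cache is more effective there.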
@hw5622 8 months ago
Thank you so much. Your explanation is very clear and succinct.
@weekendwarrior7933 5 months ago
Absolutely amazing explanation! Keep it up man
@MrOnlineCoder 10 months ago
Amazing video, intuitive explanations with examples.
@muyanfeng2082 6 months ago
Really good introduction, thanks
@marshallmcluhan33 1 year ago
Good work, I look 'forward' to the ReRoPE video. 😎
@SahilDua 10 months ago
Thanks for the in-depth explanation of RoPE. A couple of questions: 1. How is the KV cache used/built in the RoPE case? RoPE is applied to q and k - does this change anything in how K and V are cached? 2. Where can I find the intuition behind why RoPE works? I usually find it harder to jump into the mathematical equations directly to find the proof.
@EfficientNLP 10 months ago
Yes, the KV cache can be used normally with RoPE, because the rotation is applied to a token depending on its position from the start of the sequence, and this does not change as more tokens are generated. I hope this video provides a good intuition of why this works!
@varunsaagars 9 months ago
🎯 Key Takeaways for quick navigation:
00:14 🆕 In 2022, a new architectural improvement called "Rotary Positional Embeddings" (RoPE) was proposed and adopted by various language models.
03:27 🔄 Relative positional embeddings represent token pairs' distances but face engineering challenges like slower processing for longer sequences.
06:01 🔄 Rotary positional embeddings propose rotating word vectors based on positions, combining the advantages of both absolute and relative positional embeddings.
08:04 🔢 Rotary embeddings are implemented using rotation matrices for the 2D case and a more general approach for higher-dimensional vectors.
10:48 ⚙️ Experiments show that models using rotary positional embeddings train faster than those using sinusoidal embeddings and are relatively robust across various model architectures and training setups.
@ddobokki 6 months ago
OMG!! Very good teaching!!!
@pierreenel1516 6 months ago
Excellent video, thanks!
@kevon217 1 year ago
Thanks for this overview!
@ml.9106 6 months ago
Very clear~~thanks!
@abdelrahmanhammad1020 11 months ago
Thanks @Bai for the great explanation. I still have a question: mathematically, why would the positional embeddings of other positional embedding techniques (maybe absolute?) change when adding more tokens to the sentence? Around minute 7:00 of this video. Thanks!
@EfficientNLP 10 months ago
This is a property of most absolute positional embeddings, but generally not for relative positional embeddings. For example, T5's relative embeddings change at every step as different bias values need to be added to the attention matrix. Thus, rotary embeddings are the first to combine the benefits of both absolute and relative embeddings.
@mineword2771 1 month ago
@naklecha 💪 and a legend was born ‼️
1 year ago
very good explanation.
@gemini_537 5 months ago
Gemini: The video is about a new method for positional embeddings in transformers called rotary positional embeddings. The Transformer architecture is a neural network architecture commonly used for various natural language processing tasks. A key challenge for Transformer models is that they are invariant to the order of words by default. This means that the model would not be able to distinguish between a sentence and its scrambled version.

To address this challenge, positional embeddings are added to the Transformer model. There are two main types of positional embeddings: absolute positional embeddings and relative positional embeddings. Absolute positional embeddings assign a unique vector to each position in a sentence. This approach, however, cannot handle sentences longer than the training data. Relative positional embeddings, on the other hand, represent the relationship between two words. While this method can handle sentences of any length, it requires additional computations in the self-attention layer, making it less efficient.

Rotary positional embeddings address the limitations of both absolute and relative positional embeddings. The core idea is to rotate the word vector instead of adding a separate positional embedding vector. The amount of rotation is determined by the position of the word in the sentence. This way, rotary positional embeddings capture the absolute position of a word while also preserving the relative positions between words. The video also mentions that rotary positional embeddings have been shown to improve the training speed of language models.
@manikantabandla3923 27 days ago
Thanks for the crisp explanation. But I'm curious about the source of the claim at 7:36; I couldn't find it in the paper. Can you share the source for more information?
@EfficientNLP 26 days ago
I'm not sure if this is what you're asking, but a property of rotations is that they preserve the dot product between vectors. The dot product remains the same if you apply the same rotation to both vectors, so it only depends on the relative position difference between the two tokens, and not on their absolute positions.
@manikantabandla3923 20 days ago
@EfficientNLP If I'm not wrong, RoPE preserves this only at the first layer of the transformer, because after the first layer the angle between the representations of the words "pig" and "dog" will be different for the two prompts.
@EfficientNLP 20 days ago
@manikantabandla3923 That is correct - the angle between 'pig' and 'dog' is only the same in the first layer, as in later layers the embedding incorporates information from the entire sentence. In the later layers, the angle-preserving property of RoPE lets it capture relative positional information better than absolute positions.
@wolpumba4099 1 year ago
Video Summary: Rotary Positional Embeddings: Combining Absolute and Relative
- Introduction: Discusses the importance of positional embeddings in Transformer models.
- Absolute Positional Embeddings: Explains how absolute positional embeddings work. Highlights limitations like fixed sequence length and lack of relative context.
- Relative Positional Embeddings: Introduces the concept of relative positional embeddings. Discusses the computational challenges and inefficiencies.
- Rotary Positional Embeddings (RoPE): Combines the advantages of both absolute and relative embeddings. Uses rotation to encode position, preserving relative distances.
- Matrix Formulation: Explains the mathematical formulation behind RoPE.
- Implementation: Shows how RoPE can be implemented efficiently in PyTorch.
- Experiments and Conclusion: Shares results of experiments showing RoPE's effectiveness and efficiency compared to other methods.
The video provides a comprehensive overview of Rotary Positional Embeddings, a new method that combines the strengths of both absolute and relative positional embeddings. It delves into the mathematical details and practical implementation, concluding with experimental results that validate its effectiveness.
@naubull2 10 months ago
Thanks for a great explanation! By the way, I was curious: from the initial explanation and the rotation equations, consecutive pairs of coordinates seem to be rotated, i.e. (x_1, x_2), (x_3, x_4), ... are each rotated. However, in most of the implementations suggested in the video, the code pairs up coordinates not by adjacent indices but with an offset of half the dimension, i.e. (x_1, x_{d/2+1}), (x_2, x_{d/2+2}), ..., since the code splits the hidden dim in half and swaps the order of the halves. Did I understand correctly, or am I missing something?
@EfficientNLP 10 months ago
You are correct. In many implementations, rather than rotating each pair of adjacent dimensions, they choose to split the entire vector in half and rotate the two halves. Ultimately, this does not matter because the dimensions of vectors are interchangeable and do not affect vector addition and multiplication. This is likely to be more efficient from an implementation standpoint and is equivalent to the original formula.
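For anyone who wants to verify that claim numerically, here is a small sketch (illustrative code, not from the video): it checks that the interleaved-pair rotation and the split-half rotation produce the same attention score once the input dimensions are permuted consistently.

```python
import torch

def apply_rope_interleaved(x, pos, theta):
    # Rotate adjacent pairs (x_0, x_1), (x_2, x_3), ... by pos * theta_i.
    angles = pos * theta                                  # (d/2,)
    cos, sin = angles.cos(), angles.sin()
    pairs = x.view(-1, 2)
    out = torch.stack((pairs[:, 0] * cos - pairs[:, 1] * sin,
                       pairs[:, 0] * sin + pairs[:, 1] * cos), dim=-1)
    return out.reshape(-1)

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope_half(x, pos, theta):
    # Rotate pairs (x_0, x_{d/2}), (x_1, x_{d/2+1}), ... by pos * theta_i.
    angles = pos * theta
    cos = torch.cat((angles.cos(), angles.cos()))
    sin = torch.cat((angles.sin(), angles.sin()))
    return x * cos + rotate_half(x) * sin

d = 8
theta = 10000 ** (-2 * torch.arange(d // 2) / d)          # RoFormer-style frequencies
q, k = torch.randn(d), torch.randn(d)
m, n = 3, 7                                               # positions of query and key

# Permutation mapping the interleaved layout to the split-half layout.
perm = torch.cat((torch.arange(0, d, 2), torch.arange(1, d, 2)))

score_interleaved = apply_rope_interleaved(q, m, theta) @ apply_rope_interleaved(k, n, theta)
score_half = apply_rope_half(q[perm], m, theta) @ apply_rope_half(k[perm], n, theta)
assert torch.allclose(score_interleaved, score_half)
```

The values that get negated do differ between the two conventions, but the difference is just a fixed reshuffling of dimensions, which the learned W_q and W_k projections can absorb, so the resulting dot products (and attention scores) come out the same.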
@qwerty_and_azerty 1 year ago
Great vid! Nice explanation! Question: why is it termed “rotary” and not “rotational” position embeddings?
@EfficientNLP 1 year ago
It’s the name given in the paper. I think it’s quite catchy!
@buh357 7 months ago
Thank you for such a clear explanation; it helped me understand this concept. Rotary positional embedding is such an elegant way to do positional embedding, and it intuitively makes sense to me. Curious how this embedding technique works for vision transformers - anyone have experience?
@EfficientNLP 7 months ago
Rotary embeddings may be applied to a vision transformer, just as they can be for any other transformer; I'm not aware of any reports that it improves performance in this case. It would be an interesting experiment, though!
@einsteinsapples2909 9 months ago
I just smashed the like button.
@amortalbeing 10 months ago
thanks a lot
@hussainshaik4390 1 year ago
Great video, but I have one question: you are referring to the EleutherAI blog, right? In that PyTorch implementation, instead of rotating every 2 elements of the dim vector, they rotated the half vectors like this:

```python
def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)
```

but in the JAX implementation they rotated every two elements. Any idea on this?
@EfficientNLP 1 year ago
Yea that's possible, there are multiple ways to implement this but they should be logically equivalent.
@hazemessamm 1 year ago
@EfficientNLP Hi, thank you for this great video, but I wanted to ask how they are logically equivalent. The values that get negated are not the same, so how are they logically equivalent?
@ziqichen5902 11 months ago
@hazemessamm Same question... have you figured out the reason yet? 😅
@harshmittal63 4 months ago
@guanxi99 9 months ago
Thanks for the good explanation! How do you actually make sure that the result of applying a positional embedding algorithm does not coincidentally represent another token? E.g., how do you avoid that the positional embedding of "dog" at position i ends up meaning "cat" at position j?
@EfficientNLP 9 months ago
Indeed, it is possible for a word at position i to have the same embedding as a different word at position j, since both positional information and non-positional semantic information are represented in the same embedding space. The model learns to use them appropriately during training.
@jasonjones4236 1 year ago
Why is the KV cache difficult to implement in the case of relative embeddings?
@EfficientNLP 1 year ago
The KV cache saves the K and V matrices during autoregressive decoding to avoid recomputing them for every token. But for relative embeddings, when a new token is generated, the relative distance between the new token and previous tokens changes. So there is an extra step (adding the relative biases) that cannot be cached, making the KV cache not as effective.
@jasonjones4236 1 year ago
@EfficientNLP Ah, so to be precise, the cache can work but we need to fully compute the attention matrix and add the relative embedding matrix to it. But isn't the attention matrix computed when we torch.matmul q and k in the other cases too?
@EfficientNLP 1 year ago
That is correct. In summary: there are several steps that are required in relative positional embeddings that aren't needed for absolute & rotary embeddings, which make them slower. Determining precisely which step causes the slowdown is an interesting question and would require some benchmarking experiments.
@pratik6447 6 months ago
What is the W (q, k) matrix and how is it calculated?
@EfficientNLP 6 months ago
These are the W_q and W_k matrices in self-attention, which are used to generate the Q and K matrices.
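For context, a minimal sketch of where those projections sit before the rotation is applied (illustrative shapes, not from the video):

```python
import torch

d_model, d_head = 512, 64                 # illustrative sizes
W_q = torch.randn(d_model, d_head)        # learned query projection
W_k = torch.randn(d_model, d_head)        # learned key projection

x = torch.randn(10, d_model)              # embeddings of 10 tokens
Q = x @ W_q                               # (10, d_head) queries
K = x @ W_k                               # (10, d_head) keys
# RoPE then rotates row m of Q and row n of K by angles proportional to m and n.
```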
@dylstuart 10 months ago
Great video! What value is used for Theta?
@EfficientNLP 10 months ago
Theta_i = 10000^(-2i/d), i.e. 1/10000^(2i/d). I didn't cover this in the video, but it is mentioned in the RoFormer paper.
@gemini_537 7 months ago
@EfficientNLP That seems to be the same as the one used in the paper "Attention Is All You Need".
@GeYin-k9r 3 hours ago
Thanks! Hoping for a Chinese version!!!
@davidlee327 3 months ago
dude you are the mf goat
@akshaydevkarama3277 3 months ago
Great explanation, really helped me!
@csbarathi 1 year ago
Why not positionally embed based on the sentence and paragraph rather than just the position of the word in the overall prompt? I understand that it adds more computation, but it would yield better results, wouldn't it?
@EfficientNLP 1 year ago
The transformer doesn't distinguish between sentences and paragraphs; they are treated like any other token, so the position encoding doesn't refer to them specifically.
@csbarathi 1 year ago
@EfficientNLP I guess I have something in mind that I'm unable to express in words now. Will try it out and let you know what I ran into.
@Prashantkumar-hy1no 5 days ago