After a year, coming back to this video I finally fully understand what is going on (or at least I watched the entire video in one sitting)!! Maybe one more watch to lock it in, and then on to the next part!! Thanks
A crystal-clear explanation of transformers. Papers are in many cases very difficult to follow. Pointing out the important omitted details that are critical to the model, even if not explained, is very useful. Many out there try to explain transformers without having a clue what they are. Clearly, this is not the case here. Thanks, in its deepest tokenized meaning, for sharing your knowledge. BTW, the last programming tip is really helpful. A small hands-on demo of using BERT (or any flavor of BERT) with a classifier for a particular application would be amazing for another video.
Hands down the best explanation, and this after watching so many videos. Terrific! Looking forward to some videos on understanding BARD and its fine-tuning.
Great video. The first transformer explanation that (correctly) does not use the encoder/decoder diagram from the Transformer paper, well done! Additionally, talking about the exact outputs (using only one output for predictions) was very helpful.
Great overview and explanation of the Transformer network. I am just starting my exploration into NLP, and this talk has saved me lots of time. I now know that this is where I need to be focusing my attention. Thank you 👍🙏😍
Mapping to geometry is a pro move. I have thought, ever since my own education about 40 years ago, that mathematics is currently taught incorrectly. Here is a prime example of how math should be taught!
She's so good! I've watched a few videos attempting to explain this self-attention version of transformers, and this one is by far the best in so many aspects: actual deep understanding of the architecture at the top, followed closely by coherently communicated concepts, a good script, presentation, and graphics! I hope she narrates more videos like this... I'm about to search and find out lol! 🧐🤞 🤓
I expected some wishy-washy feel-good "explanation", but I'm pleasantly surprised. So far the best explanation. Goes after the relevant distinguishing key features of the transformers without getting bogged down in unnecessary details.
This is a fairly good presentation. There are some areas where it summarizes to the point where it becomes almost misleading, and at least very questionable:
1. Several other sources that I read claim that the BERT layers have to be frozen during fine-tuning, so I think it is still open for debate what the right thing to do is there.
2. This presentation glosses over the outputs of the pretraining phase. I think the output corresponding to the [CLS] token is pretrained with the next-sentence-prediction task. So, is this output layer dropped entirely in the fine-tuning task? Otherwise I don't see how the [CLS] token output would be a good input for sentiment classification.
3. The presentation suggests that the initial non-contextual token step is also trainable and fine-tunable. Isn't it just fixed byte-pair encodings? I know that these depend on frequencies of letters in the language, but can they be trained in-process with BERT?
4. This presentation very silently equates transformers with transformer encoders, and thus drops the fact that transformers can also be decoders. I think all initial transformers were trained on sequence-to-sequence transformation; then the decoders were trained on next-token prediction, giving rise to things like GPT, whereas the encoders were trained on a combination of masked-token prediction and next-sentence prediction, giving rise to the BERT-like models.
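On point 2 above: the usual pattern is indeed that the pretraining NSP head is discarded and a fresh, randomly initialised classification head is trained on the final [CLS] vector. A minimal numpy sketch of that head (not from the video; the shapes and the random [CLS] vector are illustrative stand-ins for a real encoder output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: BERT-base hidden size 768, 2 sentiment classes.
hidden_size, num_classes = 768, 2

# Stand-in for the encoder's output vector at the [CLS] position
# (in real BERT this comes from the final transformer layer).
cls_vector = rng.standard_normal(hidden_size)

# Fine-tuning drops the pretraining NSP head and trains a fresh,
# randomly initialised linear head on top of the [CLS] output.
W = rng.standard_normal((num_classes, hidden_size)) * 0.02
b = np.zeros(num_classes)

logits = W @ cls_vector + b

# Softmax over the class logits gives class probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.shape)  # (2,)
```

Only `W` and `b` here are new task-specific parameters; in full fine-tuning the encoder weights producing `cls_vector` would be updated as well.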
Julia, your presentation has triggered a Eureka moment in me. What makes a great training video? Can AI help answer that? Here is a suggestion: get a collection of videos and rank them by review comments. Using a large language model, find patterns and features and see whether there are correlations between the features and the views and review rankings. The model should be unsupervised. Some of the features can be extracted from the comments.
At minute 35 the video describes transfer learning, and it is said that during the fine-tuning phase ALL the parameters are adjusted, not only the classifier parameters. Is that right? In contrast, when using a pre-trained deep network for a specific image classification task, I froze all parameters belonging to the CNN and allowed only the classifier parameters to vary.
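Both regimes mentioned above exist; the difference is just which parameters receive updates. A toy numpy sketch of one SGD step under each regime (the "backbone"/"classifier" names and values are hypothetical stand-ins for BERT's layers and a task head):

```python
import numpy as np

# Toy model: a "backbone" weight (stands in for the pretrained layers)
# and a "classifier" weight (the new task head).
params = {
    "backbone": np.array([1.0, 2.0]),
    "classifier": np.array([0.5, -0.5]),
}
grads = {
    "backbone": np.array([0.1, 0.1]),
    "classifier": np.array([0.1, 0.1]),
}

def sgd_step(params, grads, lr, frozen=()):
    """Update every parameter except those listed in `frozen`."""
    return {
        name: value if name in frozen else value - lr * grads[name]
        for name, value in params.items()
    }

# Full fine-tuning (as described in the video): everything moves.
full = sgd_step(params, grads, lr=0.1)

# Feature-extraction style (as in the CNN workflow): backbone frozen.
feat = sgd_step(params, grads, lr=0.1, frozen=("backbone",))

print(full["backbone"])  # updated
print(feat["backbone"])  # unchanged
```

In a real framework the `frozen` set corresponds to disabling gradients on those parameters (e.g. PyTorch's `requires_grad = False`); which regime works better is an empirical question per task.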
Okay, what you are saying is completely vague. Like for the query matrix you mentioned ("some other representation"): why do we need another representation at all?
Sorry to say, but this was not very good. Key information is missing, mostly the WHYs: why is there a need for Query and Key matrices? What is the main function of these matrices? How does the attention function alter the feed-forward NNs?
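On the "why separate Query and Key matrices" question raised in the two comments above: one answer is that a token's query ("what am I looking for?") need not equal its key ("what do I offer to others?"), which makes attention asymmetric. A minimal numpy sketch of scaled dot-product attention (shapes and random weights are illustrative, not from the video):

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_model, d_k = 4, 8, 8

X = rng.standard_normal((seq_len, d_model))  # token embeddings

# W_q and W_k are separate learned projections. With a single shared
# representation, the score matrix X @ X.T would be symmetric: token i
# would attend to j exactly as much as j attends to i, which is too
# restrictive ("bank" should attend to "river" differently than vice versa).
W_q = rng.standard_normal((d_model, d_k))
W_k = rng.standard_normal((d_model, d_k))
W_v = rng.standard_normal((d_model, d_k))

Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len) compatibility scores

# Row-wise softmax turns scores into attention weights.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

output = weights @ V  # new, context-mixed token representations

print(weights.shape)                           # (4, 4)
print(np.allclose(weights.sum(axis=-1), 1.0))  # True
```

The `output` rows then feed the per-token feed-forward network, which is how attention "alters" what the FFN sees: each token's input is now a weighted mix of the whole sequence.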
I always find the presenter's face distracting when it is on the slides... can you just talk over the slides instead of covering them with the presenter's face?