
Understanding Mixture of Experts 

Trelis Research · 11K subscribers
9K views

Published: 29 Aug 2024

Comments: 34
@TripUnico-personalizedTrip · 4 months ago
This is one of the best explanations of MoE. It goes into enough depth to give a good idea of the internal workings, problems, and evaluation results. Great work!
@maybmb_ · 8 months ago
One of the more approachable videos on the concept in RU-vid.
@HuxleyCrimson · 6 months ago
You made a complex topic appear simple by giving just the right insight at the right time, hitting the sweet spot between indigestible and oversimplified. I was really wondering about the training process and you gave invaluable insight into that; it is not made clear in the paper, and the code was also somewhat confusing. So thanks for that, buddy.
@TrelisResearch · 6 months ago
Appreciate that! Thanks.
6 months ago
Agreed. This was an impressive explanation.
@pilotgfx · 7 months ago
Thank you for this accessible explanation of a somewhat complex subject.
@troy_neilson · 10 months ago
Great video and a really clear description. Thanks a lot!
@TrelisResearch · 10 months ago
You're welcome!
@Shaunmcdonogh-shaunsurfing · 8 months ago
Incredibly well made video. Thank you.
@keeganpenney169 · 10 months ago
I like how you think; you found a new sub.
@brandon1902 · 8 months ago
12:20 I heard that there is a minimum size for an expert to become reasonably functional. It worked for GPT-4 because it had 1,800B parameters, which was more than it needed considering the size of the dataset used. However, splitting a 7B-parameter LLM like Mistral into 8 would make each expert less than 1B parameters. As a result it may have ~8x faster inference, but the performance of even the best expert chosen by the router would be much worse than the original 7B Mistral, or even a half-sized 3.5B Mistral. Even at 70B parameters (Llama 2), a mixture of experts would perform significantly worse on every prompt than the original 70B LLM, or even a half-sized 35B Llama 2.

It's not until the parameter count starts to exceed what is ideally required for the size of the input corpus that an MoE becomes reasonable. And even then, a 1,800B-parameter non-MoE GPT-4 would perform ~10% better than an MoE, but such a small bump in performance isn't worth the ~8x inference cost, and a 225B non-MoE GPT-4 would perform much worse than the ideally chosen 225B expert. So in the end you get a notable bump in performance at the same inference cost. Yet at 180B or less, a corpus capable of capturing a web dump, thousands of books... is too big to be reasonably split into an MoE. Each expert needs to be larger than some minimum size (~100B or more) to capture the nuances of language and knowledge every expert requires as a base in order to respond as reasonably and articulately as GPT-4 does.
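As a side note on the inference-cost arithmetic in this thread, here is a small illustrative sketch (my own, with hypothetical layer sizes and Mixtral-style assumptions: attention weights shared across experts, feed-forward weights replicated per expert, top-2 routing). It shows why an MoE's memory footprint grows with the number of experts while compute per token grows much more slowly.

```python
# Illustrative arithmetic only (hypothetical layer sizes, not numbers from the video):
# compare total vs. active parameters for a dense model and an 8-expert MoE with
# top-2 routing, holding the attention parameters fixed and shared.

def moe_param_counts(attn_params, ffn_params, num_experts=8, top_k=2):
    total = attn_params + num_experts * ffn_params   # stored on disk / in VRAM
    active = attn_params + top_k * ffn_params        # touched per token at inference
    return total, active

attn, ffn = 2.0e9, 5.0e9                             # hypothetical: 2B attention, 5B FFN
dense_total = attn + ffn                             # a 7B-ish dense model
moe_total, moe_active = moe_param_counts(attn, ffn)

print(f"dense: {dense_total/1e9:.1f}B total and active")
print(f"MoE:   {moe_total/1e9:.1f}B total, {moe_active/1e9:.1f}B active per token")
# -> with these assumed sizes the MoE is ~6x larger in memory but only ~1.7x more
#    compute per token than the dense model, which is where the speed/cost
#    trade-off discussed in this thread comes from.
```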
@user-de8ro3tk5f · 7 months ago
Very nice explanation
@akashgss4976 · 6 months ago
This video's insane!
@ResIpsa-pk4ih · 9 months ago
GPT-3 came out in the summer of 2020. Maybe you meant that ChatGPT came out in November of '22?
@sampathkj · 5 months ago
Isn't MoE good at multi-task learning and multi-objective scenarios? Isn't that one of the main reasons to employ MoE? That was my understanding; it would be great to get your thoughts.
@TrelisResearch · 5 months ago
MoE doesn't have anything in particular that makes it good at learning. It's probably slower to learn, because you end up training multiple experts on some similar content. The benefit is in cutting inference time, so it's really a cost/speed improvement rather than a quality improvement.
@jeffg4686 · 5 months ago
Last time I had to deal with tokens, I was putting them in the skeeball at Chuck E. Cheese, lol. That was the last time. Oh, no, there's macros. Nm. I came to learn about MoE, but got some interesting training on fast feedforward networks. Pretty cool. Might have to watch this again. From what I'm learning, this can't use ControlNet or LoRA adapters, right? Seems like MoE is only for the big boys: only someone able to afford a Blackwell or another recent big-dog GPU.
@TrelisResearch · 5 months ago
Haha, love it. Yeah, I don't know ControlNet, but LoRA works fine on MoE (so long as you only apply it to the attention layers, which are shared, and not the sparse feed-forwards). MoE is for the big boys, yeah, I tend to agree. 7B and 34B models are just better imo if running locally or even on rental machines... To do MoE you need to be renting at least 2 GPUs.
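For anyone who wants to try the "LoRA on the shared attention layers only" approach mentioned above, here is a minimal sketch (not from the video) using the Hugging Face peft library. The checkpoint name and target module names are assumptions based on the public Mixtral layout and may need adjusting for other MoE models.

```python
# A minimal sketch of attaching LoRA adapters to a Mixtral-style MoE model
# while leaving the sparse expert feed-forwards and the router untouched.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    # Only the shared attention projections; the per-expert feed-forward
    # weights and the routing gate stay frozen.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```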
@jeffg4686 · 5 months ago
@TrelisResearch - thanks, yeah, I guess rental would be the way to go these days. It's getting ridiculous. But they are damn good...
@jeffg4686 · 5 months ago
@TrelisResearch - I missed the punchline, "Attention is all you need!" Of course, that's Transformers, but so what, close enough. Same field.
@AkshayKumar-sd1mx · 7 months ago
Loved your presentation... Mixtral mentions using TopK() for routing. How can such a method work if they use fast feedforward (where all decisions are binary)?
@TrelisResearch · 7 months ago
Howdy! Thanks, appreciate that. In fast feedforward, one option is to just use the top expert. I believe you can still calculate the probability of each leaf being chosen, so it should still be possible to do a top-k approach. It just means you activate all of the decision-tree nodes (of which there are few relative to the linear-layer weights anyway).
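As a concrete illustration of the reply above, here is a small sketch (my own, not code from the video or the fast-feedforward paper) of computing a probability for every leaf of an FFF-style binary routing tree and then taking the top k leaves. The breadth-first node ordering and the shapes are assumptions.

```python
import torch

def leaf_probabilities(node_logits: torch.Tensor) -> torch.Tensor:
    """node_logits: (batch, num_nodes) logits for the internal tree nodes in
    breadth-first order (num_nodes = num_leaves - 1).
    Returns (batch, num_leaves) leaf probabilities that sum to 1 per row."""
    batch, num_nodes = node_logits.shape
    num_leaves = num_nodes + 1
    p_right = torch.sigmoid(node_logits)  # probability of branching right at each node
    probs = torch.ones(batch, 1, dtype=node_logits.dtype, device=node_logits.device)
    node, level_size = 0, 1
    # Walk the tree level by level; each level doubles the number of paths.
    while level_size < num_leaves:
        level = p_right[:, node:node + level_size]             # (batch, level_size)
        left, right = (1 - level) * probs, level * probs       # split each path's mass
        probs = torch.stack((left, right), dim=-1).flatten(1)  # interleave children
        node += level_size
        level_size *= 2
    return probs

# Usage: pick the top-k leaves (experts) per token and renormalise their weights.
logits = torch.randn(4, 7)                  # 7 internal nodes -> 8 leaves
probs = leaf_probabilities(logits)          # (4, 8), each row sums to 1
weights, experts = probs.topk(2, dim=-1)    # top-2 leaves per token
weights = weights / weights.sum(dim=-1, keepdim=True)
```

Note that, as the reply says, every internal node's decision is evaluated here, but the number of nodes is tiny compared with the expert weights themselves.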
@AkshayKumar-sd1mx · 7 months ago
@TrelisResearch Thank you!
@user-ux4gp3bk6m · 4 months ago
Why 8 experts? Is there any structural consideration behind the choice?
@TrelisResearch · 4 months ago
Well, typically it's a power of two, because computing is binary and a lot derives from that. As to why 8 and not 4 or 16: if you do 2, that's only a 2x increase in speed, but if you do 16, then you have load-balancing issues at inference because the experts may not all be used roughly equally. That's my best guess.
@franktfrisby · 8 months ago
Isn't a mixture of experts similar to a GAN, in having two networks that use each other to improve?
@TrelisResearch · 8 months ago
The experts don't use each other to improve. They don't see each other's outputs.
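To make the contrast with a GAN concrete, here is a minimal sparse MoE layer sketch (my own illustration, not the video's code): each expert is an independent feed-forward network, the router only selects and weights them, and no expert ever sees another expert's output, so there is no adversarial interplay.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)   # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                               # x: (tokens, d_model)
        weights, idx = F.softmax(self.router(x), dim=-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens routed to expert e in slot k
                if mask.any():
                    # Each expert only sees its own routed tokens; outputs are
                    # combined by a weighted sum, never fed to another expert.
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

layer = SparseMoELayer()
y = layer(torch.randn(10, 64))                          # 10 tokens in, 10 tokens out
```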
@konstantinlozev2272 · 9 months ago
Very interesting! Would it not be worth testing with an introductory sentence pointing to the subject of the chat vs. no such leading sentence?
@TrelisResearch · 9 months ago
Yeah, I think performance could definitely be improved with prompt engineering, although I was trying to keep things simple so devs know what works plug-and-play or not. Long summaries are hard because there is so much information feeding forward into the next-token prediction that smaller models will either refuse, respond with a blank, or fall into repetition. Refusing or responding with a blank makes sense because that's a common occurrence in text. I'm less sure what drives repetition; probably that's to do with mistuning of length parameters. Anyway, bigger models can handle more of the attention information and generate meaningful text above the baseline probability of refusal/blank responses.
@konstantinlozev2272 · 9 months ago
@TrelisResearch Have you seen that priming technique? ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-piRMk2KIx2o.htmlsi=ZjdtMd-idT29QKA4
@ernststravoblofeld · 6 months ago
Why not intentionally train each expert in a topic? To make it an expert in something?
@TrelisResearch · 6 months ago
You could, but it may not be the most efficient way. Most likely, a lot of the semantics and statistical relationships would be repeated in the experts, so it is best to let gradient descent do the segregation.
@ernststravoblofeld · 6 months ago
@TrelisResearch Most likely, things get repeated anyway. No one ever said neural networks are efficient; they just fit a curve reasonably well when a human doesn't necessarily know how to do it.
@TrelisResearch · 6 months ago
@ernststravoblofeld Yup, I think both of those things are true too 👍