
What is an LLM Router? 

Sam Witteveen
20K views

In this video I take a look at RouteLLM, a new open-source framework and accompanying paper from LMSys that helps you automate LLM selection based on the input query. A minimal usage sketch follows the links below.
Blog : lmsys.org/blog/2024-07-01-rou...
Github: github.com/lm-sys/RouteLLM
Paper : arxiv.org/pdf/2406.18665
Models and datasets: huggingface.co/routellm
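
For a sense of the API, here is a minimal sketch based on the RouteLLM README at the time of this video (assumed: the "mf" router name, the example model names, and the calibrated threshold baked into the model string may have changed; check the GitHub repo above for the current API):

```python
# Minimal RouteLLM usage sketch, based on the project README (assumed;
# see github.com/lm-sys/RouteLLM for the current API).
import os
from routellm.controller import Controller

os.environ["OPENAI_API_KEY"] = "sk-..."  # strong-model provider key

client = Controller(
    routers=["mf"],  # matrix-factorization router
    strong_model="gpt-4-1106-preview",
    weak_model="anyscale/mistralai/Mixtral-8x7B-Instruct-v0.1",
)

# The model string names the router and a cost threshold: queries the router
# scores above the threshold go to the strong model, the rest to the weak one.
response = client.chat.completions.create(
    model="router-mf-0.11593",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```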
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨‍💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials
⏱️Time Stamps:
00:00 Intro
01:15 LMSys RouteLLM Blog
03:24 RouteLLM Paper
03:46 RouteLLM Github
08:31 RouteLLM Hugging Face

Science

Published: 5 Jul 2024

Comments: 67
@JohnBoen 2 days ago
Haha. Last night I was chatting with someone about how to come up with data to solve this problem. I have a story-writing engine that can blow through $10 of tokens in minutes; it is getting really expensive just to develop it. This morning I was going to look around to see if anybody had something like this, and the solution to my quest is in the first video I watched this morning. I hope the rest of my day is this awesome.
@toadlguy 2 days ago
You could probably modify the code as well so that you have both a "debug" context and a "production" context, so that cheaper LLMs could be used when the final output doesn't require the most expensive tokens.
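
A sketch of how that split might look in practice (all names here are hypothetical, not part of RouteLLM):

```python
import os

# Hypothetical config: a cheap fixed model while developing, routed models
# in production, selected by an APP_ENV environment variable.
MODELS = {
    "debug": "gpt-3.5-turbo",           # cheap enough to iterate freely
    "production": "router-mf-0.11593",  # router decides weak vs. strong
}

def pick_model() -> str:
    context = os.environ.get("APP_ENV", "debug")
    return MODELS.get(context, MODELS["debug"])
```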
@JohnBoen 2 days ago
@toadlguy Thanks. You have pointed me to solutions several times. We were trying to figure out how to get prompt-plus-user-choice data; there are plenty of sites that run your prompt against two LLMs and ask which you prefer, and that is some really valuable data... Now I don't need to, and that would have blocked me. My current algorithm involves looking at the size and complexity of the prompt. Most of my prompting is highly structured and templatized, so I expect RouteLLM will pick ChatGPT 4o most of the time, but later today I will find out :) Now I suspect I need to find a way to generalize the use of tokenizers and different embeddings and such... Do you have a quick and easy solution to this, too? Thanks for all the help.
@JohnBoen 2 days ago
@toadlguy It was a nice thought... The application I am making uses prompts that contain RDF: state/node/conditional-edge names, descriptions, pass messages, fail messages... The prompt then acts as a state engine, outputting the proper messaging based on the non-templatized stuff passed in. This is ChatGPT 4o all the way.
@anantgupta3285 1 day ago
@JohnBoen Do post how it went once you test it.
@JohnBoen 1 day ago
@anantgupta3285 Sure. I get different results - undesired results - when I try using lesser models. My initial thoughts:
* Templatized agent prompts work differently in different LLMs.
* It would be quite a bit of work to revise things so I could dynamically select the LLM - this is not something I considered when I started playing with LangChain... Not gonna work for this project.
However, I am writing an auto-tester to execute my standard test suite against new LLMs, so I will plug it into that. Also: the ChatGPT 4o $20 plan supports 8k tokens. I assume 750 words per 1000 tokens, so about 5600 words in the context window. Realistically this limits my story engine to about 10 275-word pages of text - a couple of chapters before it loses track of the story metadata and rules. A team account would get me 32k tokens, perhaps 50 pages... I think I might just need to use Google's million-token context window...
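
The commenter's back-of-envelope context math, as a small worked example (the 50% of the window reserved for the prompt template and story rules is an assumption chosen to roughly reproduce his page counts):

```python
def pages_of_story(context_tokens: int, words_per_page: int = 275,
                   words_per_token: float = 0.75, overhead: float = 0.5) -> float:
    # words_per_token ~ 750 words per 1000 tokens; overhead is the share of
    # the window spent on the template, metadata, and rules (assumed).
    usable_words = context_tokens * words_per_token * (1 - overhead)
    return usable_words / words_per_page

print(pages_of_story(8_000))   # ~10.9 pages on an 8k window
print(pages_of_story(32_000))  # ~43.6 pages on a 32k window
```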
@StephenRayner 2 days ago
Now that is OPEN 😮 wow. Great work!
@bayesian7404 10 hours ago
Great work and excellent explanation. Thank you
@thirdreplicator 1 day ago
I'd like to see more examples of applications of LLMs.
@jeffg4686 2 days ago
Would be interesting to see how this and MoA (mixture of agents) could be used together. Perhaps the route could go to a different model that uses several smaller agents (models) together, medium agents together, or larger agents together, and/or mixed with smaller agents.
@toadlguy 2 days ago
Wow, it is great that this was released with the entire framework open source, as I believe that this (or something like it) will be part of the interface we will all be using soon. The other component is determining what data is required to respond. For instance, does the query require proprietary or personal data? This would first create a context (through RAG) for that data, but also determine which LLMs would be available to that context based on the required security (do you even want to send the proprietary data to a commercial LLM?). Also, with Llama 3 8B, this could be done locally (at almost no cost). BTW, this is part of the framework that Apple will be implementing, but it can be tailored for many other applications now using this framework and LangChain (for instance).
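
A rough sketch of that sensitivity gate (hypothetical names throughout; the RAG layer is assumed to flag queries that touch private data):

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    touches_private_data: bool  # e.g. set by the RAG/retrieval layer

def choose_backend(q: Query) -> str:
    # Keep proprietary/personal data on a local model; route everything
    # else through the usual weak-vs-strong router.
    if q.touches_private_data:
        return "local/llama3-8b"    # runs on-device, data never leaves
    return "router-mf-0.11593"      # routed commercial models

print(choose_backend(Query("Summarise our internal Q3 numbers", True)))
```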
@YuleiSheng 1 day ago
This makes so much sense
@armans4494 2 days ago
Wonderful video. Thank you
@RolandoLopezNieto 2 days ago
Great open-source release, thanks for the video
@thesimplicitylifestyle 1 day ago
Very helpful! Thank you! 😎🤖
@KevinKreger 2 days ago
Great one, Sam. So, to make this all about me :-), I've been using GPT-4x as the router/manager, under the theory that it is the smartest (this is a mixture of agents); the agents themselves are cheaper. I can see this is much better. Thanks!
@hqcart1 2 days ago
It's a good idea, but not practical in real-life apps. I use multiple LLMs for my app: I manually test them first to make sure the weaker models are suitable for a task, then I route each of the different tasks to a different LLM based on intensive test results. I am unsure how or where this router AI would be useful.
@tvwithtiffani 2 days ago
Maybe it can help you iterate faster: help you manually test your prompt + models quicker and see which of the cheapest models are suitable for each task.
@JohnBoen 2 days ago
@hqcart1 Initial thoughts, after poking around with example prompts: highly structured and templatized prompts seem to always suggest frontier models. Not too useful for me either.
@hqcart1 2 days ago
@tvwithtiffani No, all it does is add another layer of complexity and latency: a judge (router LLM) that is weak by nature, so it will guess which model to use based on unknown criteria. A manual route is way better: no complex system, and reliable results.
@tvwithtiffani 2 days ago
@hqcart1 Oh well, don't use it then 🤷🏾‍♀️ And it's not unknown criteria; the video said it uses their LLM Arena data.
@hqcart1 2 days ago
@tvwithtiffani Exactly, that's the unknown.
@Viki_vigneshwaran 2 days ago
Good insight
@Mzulfreaky 2 days ago
Interesting 😮
@Leo-ph7ow 2 days ago
Thanks!
@AdrienSales 2 days ago
This is actually very interesting. Concretely, when you use LangChain and have statically linked LLMs on some custom tools, how could we redirect this directly from LangChain so the routing is made afterwards?
@user-wu6me3nf1d 2 days ago
Worth comparing how well it performs vs. the semantic-router lib, which is also free to use.
@reza2kn 2 days ago
❤❤❤
@amj-9421 2 days ago
What about the latency impact? Wouldn't this preclude a lot of production use cases?
@jarail 2 days ago
The router would be a very small and fast model, and the cheaper model would also be a smaller model. Since cheaper models are smaller, they respond much faster. Latency would only go up a tiny bit for the responses that get routed to the most expensive model; overall, you'd see a massive improvement in latency. Consider cost to be a proxy for compute time spent: they say they save 85% of the cost while maintaining 95% of the benchmark score, so estimate an 85% latency reduction (not counting the fixed networking latency). This doesn't actually play out exactly, as expensive models are more parallel as well, but you get the idea.
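
The expected-latency argument as a two-line calculation (all timings and the routing fraction are illustrative assumptions, not measured numbers):

```python
p_strong = 0.2   # fraction of queries routed to the strong model (assumed)
t_strong = 8.0   # seconds per strong-model response (assumed)
t_weak   = 1.5   # seconds per weak-model response (assumed)
t_router = 0.05  # router overhead: one small-model forward pass (assumed)

expected = t_router + p_strong * t_strong + (1 - p_strong) * t_weak
print(f"{expected:.2f}s average vs {t_strong:.2f}s if everything hit the strong model")
# -> 2.85s vs 8.00s: most queries take the fast path, so mean latency drops.
```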
@longboardfella5306 2 days ago
I would think so. Perhaps that's another parameter to be optimised for: the latency required for the type of query. E.g. conversational voice input would route to a low-latency model like the upcoming 4o voice mode.
@nickludlam 2 days ago
I don't know whether this data-oriented way of evaluating where to direct a query is going to be better than a task-based one. For my app, it would be far easier to route for summarisation vs. data-extraction tasks, vs. other tasks.
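
For comparison, the task-based alternative the commenter describes can be as simple as a lookup table (model choices here are hypothetical):

```python
# Static task-to-model routing: no learned router, just a table maintained
# from your own manual test results.
TASK_MODELS = {
    "summarisation":   "gpt-3.5-turbo",
    "data_extraction": "gpt-3.5-turbo",
    "other":           "gpt-4o",
}

def model_for(task: str) -> str:
    # Unknown tasks fall back to the strong model to be safe.
    return TASK_MODELS.get(task, TASK_MODELS["other"])
```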
@MattJonesYT 1 day ago
Requesting a vid on GraphRAG
@jarail 2 days ago
Haha, how is this new? I started doing this about 20 minutes after trying GPT-4, though I appreciate the formal framework and improvements they've made. That said, I use GPT-3.5 to filter first, and yeah, it saved me a ton of money: not only is it the cheaper model, it also uses simpler (short) prompts, like "Respond IGNORE if this message is not asking for a response." Then I'll only send the messages that need responses to GPT-4 with a full prompt. Use a simple model first. Save tokens. Save 20-50x of LLM costs (my use case). Profit. Also worth noting that ChatGPT has something similar: people have long known that some responses get routed to GPT-3.5 vs. 4.0+.
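
A compact sketch of that filter-first pattern (the gating prompt follows the comment; model names are illustrative, and the client calls follow the openai>=1.0 Python SDK):

```python
from openai import OpenAI

client = OpenAI()

def maybe_respond(message: str) -> str | None:
    # Stage 1: a cheap model with a short prompt decides if a reply is needed.
    gate = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content":
                   "Respond IGNORE if this message is not asking for a "
                   f"response:\n{message}"}],
    )
    if "IGNORE" in gate.choices[0].message.content:
        return None  # skip the expensive call entirely
    # Stage 2: send only messages that need answers to the strong model.
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": message}],
    )
    return answer.choices[0].message.content
```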
@GregoryMcCarthy123 2 days ago
Haha, nice man, yeah, I've been doing the same thing too; I also use the cheap guys for filtering. It's very cool to bump into someone else who does the same thing!
@93simongh 2 days ago
Does it work for languages other than English?
@samwitteveenai 2 days ago
This is a good question. It may do ok out of the box, but you could certainly train one of their models to handle other languages.
@ATH42069 2 days ago
this is a super sexy topic
@lydedreamoz 2 days ago
Claude 3.5 Haiku with this framework is gonna be insane. Nice video, as always!
@bastabey2652 2 days ago
Flash is a better and cheaper reranker than the rerankers on the market (including Cohere).
@husanaaulia4717 2 days ago
I guess this idea is similar to the CoE that SambaNova uses.
@BorisHrzenjak 2 days ago
That's basically like mixture of agents, or am I wrong?
@tvwithtiffani 2 days ago
They used their own data from the Arena to produce the framework, so theoretically it should be better versed in which queries are OK for each of the models you select for MoE (mixture of experts) / MoA (mixture of agents).
@BorisHrzenjak 2 days ago
@tvwithtiffani So, same concept, just a bit more polished :)
@longboardfella5306 2 days ago
Am I right that this could be combined with MoA to let you optimise for cost/performance and accuracy?
@tvwithtiffani 2 days ago
@BorisHrzenjak I think so. More like a mixture of models, though, because agents and experts are typically preset: a specific task or set of tasks for agents, and a specific set of preset knowledge for each of the experts in a mixture.
@husanaaulia4717
@husanaaulia4717 2 дня назад
​@@BorisHrzenjakSambaNova call it CoE I guess
@robcz3926 2 days ago
Isn't a semantic router easier and faster?
@SR-zi1pw 2 days ago
LiteLLM?
@samwitteveenai 2 days ago
This is different from LiteLLM; it is dynamically choosing between the two model options.
@guanjwcn 2 days ago
No code walkthrough this time?
@samwitteveenai 2 days ago
I linked their GitHub with the code etc. in the description
@enriquebruzual1702 1 day ago
I have a huge problem with how every solution must be a "framework"; at best this is a library, or even a function. Not saying you, but companies/developers.
@samwitteveenai 18 hours ago
Yeah, I do feel like that about a bunch of these things. This one I would look at as more of a proxy you go through.
@MrJinwright 2 days ago
Most people don't need this, because you'll know whether you can use the cheaper model beforehand.
@gabrielkeith3189 2 days ago
Not very accurate. I have built a tool for my org that works on a custom-built SQL agent and could use a cheap model for over half of the questions being asked. I am building a "router" to check complexity and context needs.
@jeffsteyn7174 2 days ago
They could have saved a load of time by just using GPT-3.5 and function calling.
@christosmelissourgos2757 2 days ago
Nice content, but too much complexity if you want to build a product that scales.
@samwitteveenai 2 days ago
The good thing is it's out there and open-sourced, so others can work on improving it for lots of use cases.
@nosult3220 2 days ago
😢
@davidw8668 2 days ago
Clearly not for most production cases, but it could be useful in dev as a heuristic.
@WeirdoPlays 2 days ago
First Comment should get likes 😅
@nAme-bf9uz 2 days ago
This is like a really old idea.
@designstudiohq7764 2 days ago
❤❤❤