
RouteLLM achieves 90% GPT-4o Quality AND 80% CHEAPER

Matthew Berman
330K subscribers
56K views

Published: Oct 7, 2024

Comments: 228
@matthew_berman 3 months ago
My "AI Stack" is RouteLLM, MoA, and CrewAI. What about you?
@craiggriessel1872 3 months ago
AISheldon 🤓
@shalinluitel1332 3 months ago
It would be best to have alternatives to all these which are free and open source. Maybe later down the line.. The video is really cool tho! Thanks Matthew
@santiagomartinez3417 3 months ago
Is MoA mixture of agents?
@AIGooroo 3 months ago
Matthew, please do a full tutorial on how to set this up. Thank you!
@smokewulf 3 months ago
RouteLLM, MoA, and Agency Swarm. Should do a video on Agency Swarm. I think it is the best agentic framework
@davtech 3 months ago
Would love to see a tutorial on how to set this up.
@AlexBrumMachadoPLUS 3 months ago
Me too ❤
@bamit1979 3 months ago
I think some other AI enthusiast covered it a few days back. It was quite easy. Check YouTube.
@ChristianNode 3 months ago
get the agents to watch it and do it.
@sugaith 3 months ago
On how to set this up IN THE CLOUD as well, preferably.
@averybrooks2099 3 months ago
Me too but on a local machine instead of a third party service.
@velocityerp 3 months ago
Matthew - for those of us who develop line-of-business apps for SME businesses - local LLM deployment is a must. Would certainly like to see you demo RouteLLM with orchestration - Thanks!
@clapppo 3 months ago
it'd be cool if you did a vid on setting it up and running it locally
@anubisai 3 months ago
Ollama or LM Studio?
@josephremick8286 3 months ago
I am a cybersecurity analyst who knows very little about coding, so between your videos and just straight asking ChatGPT or Claude, I am ham-fisting my way through getting AI to run locally. Please keep making tutorial videos - I am excited to see how to implement RouteLLM!
@s2turbine 3 months ago
I agree, I'm pretty much in the same boat as you. The problem is that my knowledge is outdated by the time I finally figure things out because there is so much advancement in so little time. I think we need a "checkpoint" how-to on how to do things now, as opposed to 3 months ago.
@DihelsonMendonca 3 months ago
If you don't know much about anything, like me, but want to run LLMs locally, you just need to install LM Studio. No need to understand anything - the software even has an option to download, install, and run models. That's what I use. Now that I've learned a bit more, I will try to install Open WebUI, Ollama, and Docker; those are way more complicated. 🎉❤
@cool1297 3 months ago
Please do a tutorial for local installation for this. Thanks
@camelCased 3 months ago
What exactly? As I understand, RouteLLM is not an LLM itself but just a router. You can install local LLMs very easily using Backyard AI.
@m8hackr60 3 months ago
Sign me up for the full tutorial!
@DihelsonMendonca 3 months ago
​@@camelCased Or LM Studio
@bigglyguy8429 3 months ago
@@camelCased But how to use the router with Backyard?
@camelCased 3 months ago
@@bigglyguy8429 Why would you want to use the router at all, if running LLM models locally?
@bernieapodaca2912 3 months ago
Yes! Please show us a comprehensive breakdown of this great tool! I’m also interested in your sponsor’s product, LangTrace. Can you possibly show us how to use it?
@aiforculture 3 months ago
Great breakdown, much appreciated. I definitely foresee local LLMs becoming dominant for organisations as soon as next year. My advice during consults is for them not to invest a massive amount in high-end data secure cloud systems, but just to hang on a little, work with dummy data on current models to build up foundational knowledge, and then once local options exist they can start diving into more sensitive analytics.
@AshishKumar-hg2cl 3 months ago
Hey Matt, yes, it would be great if you could show a demo of how to set this up on Azure OpenAI or Azure Databricks and then use it in an application.
@caseyvallett8953 3 months ago
Absolutely do a detailed tutorial on how to get this up and running!
@AngeloXification 3 months ago
I feel like everyone is realising things at the same time. I started two projects: the first an LLM coordination system, and the second chain-of-thought processing on specific models.
@jamesvictor2182 3 months ago
Just popping up to say thanks Matthew. You have become almost my only required source for AI news because your take is right up my street every time. Great work, keep it coming
@MichaelLloydMobile 3 months ago
Yes, please provide a tutorial on setting up the described language model.
@mrbrent62 3 months ago
I also saw that they will have 20TB M.2 drives in a couple of years. Running this locally will be really cool.
@dezigns333 3 months ago
It's time people admit that benchmarking off GPT-4 is stupid. When GPT-4 came out it was amazing; now it's no better than any other LLM. Ever since OpenAI introduced the cheaper Turbo models, the quality has gone downhill. They sacrificed intelligence for speed to the point where they have plateaued in quality, and it's not getting better no matter how many new models they release.
@orthodox_gentleman 3 months ago
Thanks for being real bro. I absolutely agree with you. I barely even use ChatGPT anymore because it sucks.
@irql2 3 months ago
"Now it's no better than any other LLM" -- do you really believe this? Seems like you do. That's certainly a take.
@kyleabent 3 months ago
I agree, man. I don't care about speed as much as I care about accuracy. I'll happily wait for a better response rather than rapidly go through 2-3 quick responses that need more time in the oven.
@CookTheBruce 3 months ago
Yes! The tutorial. Great vid. Sharing with my crew... Just starting an AI consulting agency, and cost is an existential threat!!
@wardehaj 3 months ago
Thanks for this video, very informative. Please make a full tutorial about setting up RouteLLM and what the recommended specs for a local PC should be. Thank you in advance!
@joe_limon 3 months ago
There seems to be a holdup on the highest-end models, as the leading companies continually try to improve safety while watching their competition. Nobody seems to want to jump in and release a new/better model at the risk of the "dangerous" label being applied to them. So a lot of the progress remains hidden in the lab, waiting for competition to finally engage.
@steveclark9934 3 months ago
Improve safety really means neuter.
@davidk.8686 3 months ago
So far with LLMs, "data is code"... it is inherently unsafe unless something fundamentally changes.
@MarcvitZubieta 3 months ago
Yes! please we need a full tutorial!
@madelles 3 months ago
It would be interesting to see how this will work on your AI benchmark. Please do a setup and test
@D0J0Master 3 months ago
How would this affect mixture of agents? Could we have multiple RouteLLMs combined, since they use so much less compute?
@danielhenderson7050 3 months ago
I think you misrepresented the graph. The "ideal router" point on the graph is likely just that - the ideal. I don't think it's claiming actual results.
@antonio-urbanculture 3 months ago
Yes, I really like your idea of a complete install-and-run tutorial. Go for it. 🙏 Thanks 👍
@kamilnowak4329 3 months ago
The only channel where I actually watch the ads. Very interesting stuff.
@johngrauel1661 3 months ago
Yes - please do a full tutorial on setup and use. Thanks.
@parimalthakkar1796 2 months ago
Would love a local setup tutorial! Thanks 😊
@jlwolfhagen 3 months ago
Would love to see a tutorial on setting up RouteLLM! 🙂
@environments9 1 month ago
Would also love a tutorial. Thanks for all you do!
@limebulls 3 months ago
Yes please, full setup!
@socialexperiment8267 3 months ago
Thanks! Great as always! 🎯👍
@galdakaMusic 3 months ago
We need something local for non-difficult purposes - for example, local Home Assistant control.
@solifugus 3 months ago
Yes please... Full tutorial on setting this up to run locally. Also, I'd like to know how to set up multi-modal so I can show it my images and casually talk to it (locally).
@MoadKISSAI 3 months ago
Always yes for full tutorial
@Alice_Fumo 3 months ago
I really don't find this to be a big deal. I expect people select the model themselves on a per-task basis, based on what they believe is most appropriate for the task. For me the decision process is really simple:
1. Is it code, or does it require complex problem-solving? -> Claude 3.5 Sonnet
2. Do I want a deep conversation with a creative partner? -> Claude 3 Opus
3. Is it anything the other models would refuse? -> GPT-4o
4. Is it too private for any of the above? -> Local LLM
I don't need a router for this, and I wouldn't trust it to reliably choose the same way I would either.
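That hand-rolled decision list is essentially a router written by a human. A toy sketch of it as code, where the keyword heuristics and model names are purely illustrative assumptions (this is not how RouteLLM routes):

```python
# Toy per-task model selector mirroring the commenter's decision list.
# Keywords and model names are illustrative assumptions only.
def pick_model(prompt: str, private: bool = False) -> str:
    text = prompt.lower()
    if private:
        return "local-llm"                         # 4. too private for any cloud model
    if "```" in prompt or any(k in text for k in ("code", "bug", "refactor", "prove")):
        return "claude-3-5-sonnet"                 # 1. code / complex problem-solving
    if any(k in text for k in ("story", "brainstorm", "philosoph")):
        return "claude-3-opus"                     # 2. deep, creative conversation
    return "gpt-4o"                                # 3. everything else

print(pick_model("Refactor this function to be async"))      # claude-3-5-sonnet
print(pick_model("Let's brainstorm a novel premise"))         # claude-3-opus
print(pick_model("Summarize my diary entry", private=True))   # local-llm
```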
@xhy20x 3 months ago
Please do a demonstration
@leonwinkel6084 3 months ago
For coding this would be insane - mixing local and API endpoints.
@MagusArtStudios 3 months ago
The first thing I did a year and a half ago was routing between different LLMs via a zero-shot classifier. Looks like RouteLLM has done the same kind of thing, lol. I figured it was common sense.
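For readers curious what that looks like in practice, here is a minimal sketch of zero-shot routing with an off-the-shelf NLI model. This is the commenter's approach, not RouteLLM's trained routers; the model choice and candidate labels are illustrative assumptions:

```python
# Zero-shot "difficulty" routing sketch using Hugging Face transformers.
# Not RouteLLM's method (RouteLLM trains routers on preference data).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def route(prompt: str) -> str:
    """Return 'strong' or 'weak' based on a zero-shot difficulty guess."""
    labels = ["simple question", "complex reasoning or coding task"]
    result = classifier(prompt, candidate_labels=labels)
    top_label = result["labels"][0]  # labels are returned sorted by score
    return "strong" if top_label == "complex reasoning or coding task" else "weak"

print(route("What's the capital of France?"))            # likely 'weak'
print(route("Refactor this async Python service ..."))   # likely 'strong'
```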
@Idea-LabAi 3 months ago
Please do a tutorial. We'd also need to measure performance to validate the performance-cost graph.
@NNokia-jz6jb 3 months ago
So, how do you run it? And on what hardware?
@threepe0 3 months ago
Lmgtfy
@AseemChishti 2 months ago
Yes, give a walkthrough video for RouteLLM
@dantfamily9831 3 months ago
I'd be interested in what hardware is needed to run something like this locally. I was waiting until late fall or early next year to buy, but I might need to get an intern system to train up. I am big on local control except when needed to reach out.
@ralfw77 3 months ago
Hi Matthew, I love your channel. I'm curious whether you would be willing to explore Pi AI? It doesn't compare to the others in the same way - maybe it's hard to test - but it's very interesting. It's trained to be empathetic, and you can actually have a voice conversation that feels satisfying.
@harshshah0203 3 months ago
Yes do make a whole tutorial on it
@mafo003 3 months ago
I've seen you do techdev before and would love to see you do this one as well, please.
@aleksandreliott5440 2 months ago
I would love to see a tutorial on how to get this running locally.
@knecting 3 months ago
Hey Matt, please do a tutorial on setting this up.
@rafaeldelrey9239 3 months ago
The article used GPT-4, not GPT-4o, which is already 50% of GPT-4's cost. Or am I missing something?
@Ed-Shibboleth 3 months ago
That's good stuff. I will take a look at the codebase. Thanks for sharing
@sophiophile 3 months ago
After developing exclusively on GPT models, then joining an org with a ridiculous amount of free GCP credits and being pushed to use the Gemini family instead, I can honestly say that while differences on benchmarks may seem small, they end up being really extreme in practice. I spent days smashing my head against a wall trying to get Gemini to provide quality responses, and after switching to 4o, I was literally ready to deploy. There still don't seem to be great benchmarks that represent the performance of generative models well.
@imramugh 2 months ago
I’d love to see a demo if possible.
@martingauthier5245 3 months ago
It would be really cool to have a tutorial on how to implement this with Ollama.
@phieyl7105 2 months ago
The problem with this method is that there are some trade-offs. While it may be cheaper at answering a question directly, you sacrifice social intelligence. Even if you get the right answer, the way the answer is phrased can be the difference between a toddler and a graduate student. Personally, I would want to talk with the graduate student.
@geekswithfeet9137 2 months ago
Every single time I’ve seen a claim like this, the output in real usage never compares
@3enny3oy 3 months ago
You should consider including Semantic Kernel and GraphRAG in that ideal stack
@nate2139 3 months ago
This sounds interesting, but does it offer the same capability that the OpenAI API offers with customizable assistants, RAG, and function calling? I still have yet to find anything that compares. Would love to see something open source that can do this.
@MattReady 3 months ago
I’d love a guide to easily set this up for myself
@davieslacker 3 months ago
I would love to catch a tutorial of you setting it up!
@PatrickWriter 3 months ago
Yes, please make a tutorial on RouteLLM.
@rilum97 3 months ago
You are so consistent bro, keep it up 🙌
@executivelifehacks6747 3 months ago
I suspect these techniques, plus dedicated non-GPU hardware, will eventually reduce energy cost per "thought" to less than the human brain's. Currently, Perplexity (using Sonnet 3.5) estimates GPT-4 uses 25x more.
@macjonesnz 3 months ago
I think they are saying the brown dot is where an ideal router would be placed; I'm not sure that RouteLLM is better than Claude 3 Opus. So I'm not sure where on that chart their router actually is - probably down with Llama 3 8B, because its only job is to route.
@monnef 3 months ago
Promising, but a bit of a mess with naming. They use "GPT-4" to mean at least GPT-4 Turbo and GPT-4 Omni in various places. I'm not even sure whether in some places they actually mean the older GPT-4.
@thecatsupdog 3 months ago
Does your local model search the internet and summarize a few web pages? That's what ChatGPT does for me, and that's all I need.
@KingMertel 3 months ago
Hey Matt, what exactly are these routers? (They are not LLMs, I understand.) And how do they determine where to route to?
@audiovisualsoulfood1426 2 months ago
Would also love to see the tutorial :)
@ashtwenty12 3 months ago
Could you do a tutorial on RAG (retrieval-augmented generation)? I think it'll be a pretty massive thing in agentic architecture. Also, I think RAG might soon be more than just text and PDFs 😂 in the not-too-distant future.
@mikezooper 3 months ago
It doesn’t change anything. LLMs are good at certain tasks (most of which aren’t as useful as we need, and most don’t help us earn money). AI has plateaued. They haven’t replaced software engineers.
@ritviksinghal9190 3 months ago
An implementation would be interesting
@orthodox_gentleman 3 months ago
This wasn't just released - it has been around for a while. Now that GPT-4o and Claude 3.5 Sonnet exist, things are much cheaper. I can understand using a local LLM with these two, but overall the cost savings are not as big a deal as before.
3 months ago
The APIs for Claude and GPT are still expensive.
@HawkX189 3 months ago
Let me launch this... Online models are saving themselves yet because of context.
@hipotures 3 months ago
Reading and watching anything about AI is like a live broadcast of the Manhattan Project in 1942. The current year is 1944?
@MEvansMusic 2 months ago
Can this be used to route between agents as opposed to model instances? For example, routing to a chain-of-thought agent vs. a simple Q&A agent?
@sapito169 3 months ago
Wonderful - now you can offer a low-cost service and a premium service at different prices.
@davidk.8686 3 months ago
When "data = code", how can you have security while having an actually useful/powerful AI?
@heltengundersen 2 months ago
Claude 3.5 Sonnet is missing from the chart.
3 months ago
While this looks promising, it is just a router that forwards simple queries to weak models and hard queries to strong models. This assumes queries can be cleanly divided between strong and weak models. If your workload is truly intensive, I don't see much reduction here, as it would still require querying strong models most of the time.
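That weak/strong split is exactly what the open-source package exposes: you pick a strong and a weak model, and a threshold on the router's score decides how many queries go to the strong one. A minimal sketch roughly following the project's README; the router name, threshold value, and model identifiers here are examples and may differ in current versions:

```python
# Minimal RouteLLM usage sketch (OpenAI-compatible interface).
# Router name "mf", the 0.11593 threshold, and the model IDs are example
# values from the project's docs and may have changed.
import os
from routellm.controller import Controller

os.environ["OPENAI_API_KEY"] = "sk-..."  # needed for the strong (and/or weak) model

client = Controller(
    routers=["mf"],  # matrix-factorization router
    strong_model="gpt-4-1106-preview",
    weak_model="anyscale/mistralai/Mixtral-8x7B-Instruct-v0.1",
)

response = client.chat.completions.create(
    # The suffix after the router name is the cost threshold: lower values
    # send more traffic to the strong model, higher values to the weak one.
    model="router-mf-0.11593",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```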
@parthwagh3607 3 months ago
Yes, we need a detailed video.
@opita 3 months ago
Can you please look into alloy voice assistant
@angelwallflower 3 months ago
Yes, I vote for a setup tutorial please, thank you.
@BradleyKieser 3 months ago
Yes please, do the tutorial.
@samuelopoku4868 2 months ago
If I could like and subscribe harder I would. Tutorial would be fantastic thanks 👍🏿
@keithhunt8 3 months ago
Yes, please.🙏
@mickmickymick6927 3 months ago
Even GPT-4o or Sonnet 3.5 can't answer 95% of my queries, so I don't know what your queries are that local models usually handle fine.
@tytwh 3 months ago
Do you and Wes Roth collaborate? They uploaded an identically titled video 2 hours ago.
@matthew_berman 3 months ago
No. He just copied me exactly... again.
@andresfelipehiguera785 3 months ago
A tutorial would be great!
@MussawirIftikhar 3 months ago
Dear Matt, you could create a presentation to show all this rather than just reading from the website. Please take a bit more time when creating videos; I watch your videos to learn things faster, not the opposite. Thank you.
@RaedTulefat 3 months ago
Yes please, a tutorial!
@MPXVM 3 months ago
If it runs on a local machine, why does it need OPENAI_API_KEY?
3 months ago
Because it still needs to query weak models (like Mistral) and strong models (like GPT).
@jackbauer322 3 months ago
don't ask in the comments each time JUST DO IT !!!
@woszkar 3 months ago
Is this an LLM that we can use in LM Studio?
3 months ago
It's just a proxy that sends each query to one of two models, weak vs. strong. It's not a new LLM.
@calvingrondahl1011 3 months ago
Thank you Matt🖖🤖👍
@rawleystanhope3251 3 months ago
Full tutorial pls
@nemonomen3340 3 months ago
90% quality and 80% cheaper? I'm actually not sure if I should be impressed or not. Sure, on the surface that seems like a small decrease in quality for a massively reduced cost, but isn't it normal for that last ~10% of quality to be a lot harder to achieve? I think I'd be more impressed to see a model that's just 5% better quality for an 80% increase in cost.
@mdubbau 3 months ago
Please do a tutorial on setting up.
@chrismann1916 3 months ago
Now, who has this in production?
@keithycheung 2 months ago
Please do a tutorial !
@MS-qy4sx 3 months ago
"GPT-4o" and "quality" shouldn't be used in the same sentence. I find GPT-3.5 Turbo more useful.
@trafferz 3 months ago
Batch API - 50% cheaper!! Can you, or have you, talked about batch processing? I see OpenAI is 50% cheaper for batch processing on all models, input and output!!! These are asynchronous groups of requests that don't require immediate turnaround, complete within 24 hours, and include higher rate limits. Links are available on the OpenAI API pricing page.
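For reference, a minimal sketch of that batch flow with the official openai Python client; the JSONL contents, custom_id, and model name are illustrative, and current limits/parameters should be checked against the docs:

```python
# OpenAI Batch API sketch: upload a JSONL of requests, create a batch,
# then poll and download the results. Illustrative values only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file where each line is one request, e.g.:
# {"custom_id": "q1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hi"}]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

# 2. Create the batch; results arrive within the completion window.
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# 3. Later: check status and fetch the output file once completed.
status = client.batches.retrieve(batch.id)
if status.status == "completed":
    output = client.files.content(status.output_file_id)
    print(output.text)  # one JSONL result line per request
```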
@MdotO1 1 month ago
What's up with ChatLLM? Is it the same as RouteLLM?
@nashad6142 3 months ago
Yessss! Go open source