
The 4 Big Changes in LLMs 

Sam Witteveen
67K subscribers · 18K views

Published: 4 Oct 2024

Comments: 87
@petargolubovic5300 · 3 months ago
5:20 - it's Groq. Grok is the Twitter chatbot. (I have no clue why they're named so similarly.) Great video though! Hope you stay more active, Sam.
@NickAubert · 3 months ago
I'm pretty sure this is a reference to Robert Heinlein's science fiction novel Stranger in a Strange Land. It's spelled "grok" in the novel, and the term roughly means to understand something on a deep metaphysical, or even magical level. If you're building an AI startup, it's automatic nerd cred.
@samwitteveenai · 3 months ago
Whoops, I didn't even notice that when I checked the edit. Yes, I was talking about Groq and LPUs.
@unclecode · 3 months ago
@samwitteveenai No worries, it's a common mistake! Whenever I post about them or help in Discord as their ambassador, I have to double-check to avoid writing "Grok" by mistake. They've corrected me a few times 😄😄
@MeinDeutschkurs · 3 months ago
I was really shocked, because I thought that Elon had fixed the dumb bot. 😂😂 Gosh, what luck, the world still rotates in the same direction. 😆
@puneet1977 · 3 months ago
Brilliantly put-together points, Sam. I have been seeing them coming, and you've clearly articulated all of them well.
@novantha1 · 3 months ago
I wonder if infinite context windows might be how we end up doing continual learning. There's this idea of a model that learns at inference so it can adapt to new problems dynamically, but if you're able to do something like TextGrad, where you can backpropagate through text (essentially it's the same as self-reflection, but packaged like PyTorch), you could have an LLM dynamically build its own in-context learning notes at inference time.
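The idea in the comment above, an LLM that accumulates its own in-context notes at inference time, can be sketched roughly as follows. This is a hypothetical illustration, not TextGrad's actual implementation: `call_llm` is a stand-in stub for any chat-completion API.

```python
# Minimal sketch of inference-time "note taking": after each attempt the
# model critiques itself in text, and the critique is appended to a notes
# buffer that is fed back in on the next attempt. A real system would call
# an LLM here; call_llm is a stub so the loop structure is runnable.

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call (assumption, not a real API).
    return f"answer to: {prompt[:40]}"

def solve_with_notes(task: str, rounds: int = 3) -> tuple[str, list[str]]:
    notes: list[str] = []  # the model's accumulated in-context "learning"
    answer = ""
    for i in range(rounds):
        context = "\n".join(notes)
        answer = call_llm(f"Notes so far:\n{context}\nTask: {task}")
        # Self-reflection step: ask the model to critique its own answer in text.
        critique = call_llm(f"Critique this answer briefly: {answer}")
        notes.append(f"round {i}: {critique}")
    return answer, notes

final_answer, notes = solve_with_notes("sort a linked list")
```

With a long (or effectively infinite) context window, the `notes` buffer never needs to be truncated, which is what makes the continual-learning angle interesting.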
@Tom_Neverwinter · 3 months ago
What's stopping you with limited context? Couldn't you just onload and offload to do the same thing?
@DanielLeachTen · 2 months ago
Quite literally one of the most underrated channels on YouTube. Thank you!
@samwitteveenai · 2 months ago
Thanks Daniel, much appreciated
@IanScrivener · 3 months ago
Thanks Sam!! Good to be reminded that we haven’t arrived at our AI destination… we are just starting the journey. Yes Sam Altman could have phrased the 5% / 95% thing better. I’d hate to have his job…
@WillJohnston-wg9ew · 3 months ago
I'd like to hear a lot more about 'reflection' and what Reflexion is. This, for me, seems like the big miss right now with the technology.
@eightrice · 3 months ago
Anyone else noticing how Altman is equating AI with OpenAI?
@NoidoDev · 3 months ago
1:00 - Altman's argument doesn't fit people who want to have their own system, and make it faster and independent from the internet.
@ri3m4nn · 3 months ago
Right. He's detached from the real world.
@BizAutomation4U · 3 months ago
@ri3m4nn Not to mention that, in my opinion, there is ample evidence that Sam is not to be trusted under any condition. I'm referring to what others who have left the company have said, not to mention some of the key decisions being made, such as requiring signatures approved by OpenAI for GPU cards. Like, WTF kind of big-brother dystopia mission is this guy on? I think he's the second coming of Bill Gates, who was notorious for stealing IP from startups, only much, much worse.
@Dom-zy1qy · 3 months ago
Because he doesn't make money off open-weight models that can be hosted in such a manner.
@john_blues · 3 months ago
That's going to be a small percentage of people.
@ri3m4nn · 3 months ago
@john_blues That's incorrect. Private AI is a real market.
@eightrice · 3 months ago
You know what would've been great to have from Altman? An example of a startup in this "giant space" that will not get steamrolled by them. He didn't provide one because there isn't one.
@samwitteveenai · 3 months ago
Agreed, it would be interesting to get his opinion on that.
@ryanscott642 · 3 months ago
It's a winner takes most kind of situation.
@micbab-vg2mu · 3 months ago
Great video! I am amazed by the recent Claude 3.5. I hope Google's and OpenAI's updates will be as good as Anthropic's. I use LLMs from these three companies for different tasks :) and I only use the best one. Price is not a problem for me, as at the moment I use LLMs just for experimentation, not for scaling.
@hqcart1 · 3 months ago
There's a pattern if you haven't noticed: Claude releases a killer model, then Gemini kills it, then OpenAI kills both, and so on. The span is 1-2 months between each release.
@john_blues · 3 months ago
Altman is absolutely right on his point. If you plan your product around a flaw or shortcoming, and the shortcoming gets fixed, you're screwed. It's not good long term business planning.
@mehmetnaciakkk3983 · 3 months ago
C'mon, Sam Altman! A startup (or any startup) needs to build with the tools that are available. You can't sit and wait for the next, better version; that would mean waiting forever. What should be said, I suppose, is that one should of course expect the models to become better. As Sam Witteveen says.
@guanjwcn · 3 months ago
Thanks, Sam. Great video.
@1MinuteFlipDoc · 3 months ago
* Models are getting smarter
* Tokens are getting faster
* Tokens are getting a lot cheaper
* Context windows are going infinite
@bastabey2652 · 3 months ago
Nice, informative video. Thanks for posting.
@Challseus · 3 months ago
Great video, as always.
@SonGoku-pc7jl · 3 months ago
Thanks! Good reflection :)
@proflead · 3 months ago
Thanks for sharing, Sam 😀🙏
@oswaldohb · 3 months ago
Hey Sam, one question: you mention LLMs and then different modalities, so is there any difference between LLM and LMM? At least in terms of terminology?
@samwitteveenai · 3 months ago
People seem to be sticking to the terms LLM and VLM. I haven't seen LMM, even though it is technically correct. Most of the VLMs actually bolt on an image encoder and are not multimodal in the way some other models are that are trained end to end.
@olimiemma · 3 months ago
Hey Sam, I love the content you produce! I have a question for you. I've been learning LLMs and AI for close to 5 years now, and I'm finding it really interesting. Your channel has been really instrumental too. I haven't been this excited about anything in software or computer science in a long time. I have a software engineering background, been at it since 2010. But I'll be honest, I haven't loved any new revolution in the field for a while - until recently. So here's my question: with all this knowledge I'm gaining about LLMs, I don't know what role, position, or title to give someone with such expertise. I'm finding it hard because I don't want to just call it "software engineering" - that gets lost in the sea of other things. It's not really computer science. Not really a prompt engineer either, because that's not yet an established role. At the same time, it's knowledge that's distinct from the mainstream roles that exist in the software/CS world, like backend dev or full stack or whatever. So what is this field? What are the roles and titles? How does someone with this knowledge present themselves or describe their role? To put it plainly, what will one's LinkedIn page say? 😄 If anyone in the comments has any ideas, please guide me as well. Thanks!
@samwitteveenai · 3 months ago
Two years ago it would have been ML Engineer vs. ML Ops, etc. Now I am seeing "LLM Engineer" for people who actually run and work on models, and "AI Engineer" for people who use APIs.
@olimiemma · 3 months ago
@samwitteveenai Oh, so I guess I'm an AI engineer then. Hehe. Thank you. I could be an LLM engineer as well, but I rarely run models. As you know, most models are resource-heavy, and my laptop can't handle some of them, but I have run the lighter models. So I'm treading between LLM engineer and AI engineer. I also have a feeling "AI engineer" is going to be overused and abused very soon by anyone who even knows how to use ChatGPT, calling themselves an AI engineer. Hehe. But thanks again for this. I love the content you create; keep making it. You are helping a lot.
@SirajFlorida · 3 months ago
I love the idea of using ASIC-powered MoA frameworks. However, the state of using ASICs is that one still has to stay within the API rate limits, which basically isn't usable for group-discussion or iterative-query applications. Rate limits aren't just per minute; they're also per day. In many projects where I've used ChatGPT to grind, I'd say I've done over 1,500 queries in a day, easily. Using an agent library so that it's better than GPT seems to take between 15 and 20 messages per query. So I've run into rate-limit problems often enough to find myself just returning to the GPT web UI, because it costs less money and doesn't run out of queries in a day. When 4 first came out it was a really big pain in the neck, because I'd run out of queries in a couple of hours, but that hasn't happened since 4o, for sure.
@hqcart1 · 3 months ago
What???? Modern ASICs, especially those designed for high-performance tasks like running large language models (LLMs) or handling API calls for chatbots, are capable of parallel processing. They can handle multiple operations simultaneously, similar to how GPUs operate.
@IanScrivener · 3 months ago
I wonder if we'll see new classes of chips… like FPGAs years ago… or RAG drives…
@timothywcrane · 3 months ago
To paraphrase Sam: "There are two types of startups: startups that use OpenAI with the right mindset toward us... and those that use OpenAI with the wrong mindset toward us." No one else, huh?
@ChronicleContent · 3 months ago
Event and focus: coverage of Google I/O, held in May in San Francisco; the evolution of Large Language Models (LLMs) and their impact on startup strategies.
Key topics discussed: instruction finetuning; use of synthetic data; integration of multimodality to enhance model performance; significant reduction in token costs; improvements in LLMs; expansion of context windows; evolution of Retrieval-Augmented Generation (RAG) systems.
Detailed look at RAG system design: embedding models, context learning, caching, dynamic example selection.
Designing applications utilizing LLMs: in-context learning, chunking, embedding, promptability.
Actionable strategies: preparing for smarter AI models; adapting products to leverage current and future capabilities; integrating these technologies into products and services.
@samwitteveenai · 3 months ago
Your summarizer is off. I didn't do any detailed look at RAG and don't think I mentioned embedding models etc.
@ChronicleContent · 3 months ago
@samwitteveenai You mentioned embeddings at 10:31.
@kirilchi · 3 months ago
Gemini 1.5 Pro is much more expensive than 1.0 Pro at this moment. It costs 7-14 times more per million tokens.
@dr.mikeybee · 3 months ago
Good job!
@Gho73t · 3 months ago
Well, I found it a strange claim that models are getting better, because they just aren't. I mean, benchmarks get +1% and you get crazy stuff like Phi-3 3.8B outperforming bigger models, but let's be honest: GPT-4o is not better than Turbo, which isn't better than the base model. Even Claude 3.5 doesn't feel better than the previous model. What makes a big difference are things like Artifacts. But let's be honest, if we just use the raw model GPT-4 as it came out and Claude 3.5 as it is now, I don't see a noticeable difference. It still has the same issues that are bound to how an LLM works (it's still just text prediction, and without proper RAG skills you're not going to get good results). So, who really claims that models are getting better? What can Claude 3.5 do that base GPT-4 can't? (Aside from the 1-million-token bullshit; let's be honest, that's just API providers dreaming of people doing 1-million-token calls xD)
@avi7278 · 3 months ago
Most of Groq's speed advantages have already been negated. The quality-to-speed ratio is no longer a decisive win for Groq.
@hqcart1 · 3 months ago
What are you talking about??? Groq has the same quality with higher speed and low wattage.
@avi7278 · 3 months ago
@hqcart1 Same quality? As what? SOTA models?
@hqcart1 · 3 months ago
@avi7278 Yes.
@funkytaco1358 · 2 months ago
@avi7278 Why don't you clarify whether you mean Grok by Elon or Groq the AI chip?
@toromanow · 3 months ago
Dumb question: what good is a 1M-token window if you still have to pay for (literally) 1M tokens???
@samwitteveenai · 3 months ago
1. Lots of people will happily pay. (I am often surprised when I see some people's OAI bills; many companies are spending six figures a month on GPT models, which is very annoying when they could achieve the same elsewhere for a fraction of the price.) 2. With things like context caching (I made a video on this) you can preload most of the tokens, and you just pay a flat fee based on time of use for the prefixed tokens.
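The economics of the caching point can be sketched with rough arithmetic. All prices below are made-up placeholders for illustration, not any provider's actual rates:

```python
# Illustrative cost comparison: resending a large prompt prefix on every
# call versus loading it once into a context cache and paying a small flat
# fee per call. Both prices are hypothetical placeholders.

PRICE_PER_M_TOKENS = 1.00   # assumed $ per million input tokens
CACHE_FEE_PER_CALL = 0.05   # assumed flat $ per call for the cached prefix

def cost_without_cache(prefix_tokens: int, calls: int) -> float:
    # Every call pays full input-token price for the whole prefix.
    return calls * prefix_tokens / 1_000_000 * PRICE_PER_M_TOKENS

def cost_with_cache(prefix_tokens: int, calls: int) -> float:
    # Pay full price once to load the prefix, then a flat fee per call.
    one_time = prefix_tokens / 1_000_000 * PRICE_PER_M_TOKENS
    return one_time + calls * CACHE_FEE_PER_CALL

# 100 calls against a 1M-token prefix:
print(cost_without_cache(1_000_000, 100))
print(cost_with_cache(1_000_000, 100))
```

Under these assumed numbers the uncached path costs 100 calls at full prefix price, while the cached path pays the prefix once plus a small per-call fee, which is where the savings come from.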
@avi7278 · 3 months ago
Sonnet 3.5 has just been announced? When did you record this?
@samwitteveenai · 3 months ago
I gave the first version of this as a talk back at the end of May and then in June, then recorded this a couple of weeks ago.
@rock3tcatU233 · 3 months ago
I can't take Scam Altman seriously.
@am0x01 · 3 months ago
Hi @Sam, I follow your videos, but lately it seems they're getting over-edited, which can be a bit distracting. ;-)
@wanfuse · 3 months ago
Moore's law of tokens!
@JustAThought01 · 3 months ago
Q: Why are computer programs becoming smarter than the typical human? A: Because programmers are focused on the thinking process. If we were to apply this same focus to teaching humans to think, we could greatly improve the average intelligence of all humans.
@ps3301 · 3 months ago
We still have a long way to go. Only idiots say GPT-4 is amazing.
@husanaaulia4717 · 3 months ago
DeepSeek's pricing is cheap.
@ri3m4nn · 3 months ago
Not surprisingly, Sam Altman just proved that he's completely detached from how businesses actually function... there's a reason why 95% use the "former" Sam.
@fkxfkx · 3 months ago
As if you know better.
@ri3m4nn · 3 months ago
@fkxfkx I do. My avatar is from my desk in Palo Alto...
@fkxfkx · 3 months ago
@ri3m4nn Means absolutely nothing.
@zacboyles1396 · 3 months ago
You’re saying “businesses” but Sam said “startups”. I guess you don’t understand that those are completely different things.
@ri3m4nn · 3 months ago
@zacboyles1396 They're literally not. Looks like you have no meaningful experience in this.
@MrMoonsilver · 3 months ago
Mainly very obvious points, not worth watching.
@NoidoDev · 3 months ago
Not obvious if you don't follow the news all the time.
@ri3m4nn · 3 months ago
Are you under 18?
@MrMoonsilver · 3 months ago
@@ri3m4nn Not sure if you're on the right platform asking these kinda questions
@ri3m4nn · 3 months ago
@MrMoonsilver So that's a no. Go get an adult to help you understand how to be helpful.
@MrMoonsilver · 3 months ago
Well, you seem to be the one bringing in totally unrelated dimensions into a conversation about whether this video has relevant content or not. Like a teenager would do. Besides, you seem to be looking for underage people on the internet, which is concerning.
@12345idiotsluggage · 3 months ago
Cheers to @samw over @sama. Sama is a charlatan grifter; can we please move on from him? @samw is a much better representative and explainer of what is going on.