
GPT-4o: What They Didn't Say! 

Sam Witteveen
58K subscribers
31K views

While yesterday's GPT-4o announcement has been covered in detail in lots of places, I want to not only cover that but also talk about some of the things they didn't say and what the implications are for GPT-5.
openai.com/index/hello-gpt-4o/
openai.com/index/spring-update/
🕵️ Interested in building LLM Agents? Fill out the form below
Building LLM Agents Form: drp.li/dIMes
👨‍💻Github:
github.com/samwit/langchain-t... (updated)
github.com/samwit/llm-tutorials

Science

Published: May 13, 2024

Comments: 70
@Aidev7876 25 days ago
So what was it that they didn't tell us? This is the only reason I listened...
@JG27Korny 25 days ago
clickbait
@rikschoonbeek 25 days ago
I think from 8:00 you'll hear it
@pluto9000 25 days ago
@rikschoonbeek I didn't hear it. But you saved me from watching it all.
@Merlinvn82 24 days ago
GPT-4o is not actually a fine-tune of GPT-4; it's a new model trained on the same GPT-4 datasets.
@fellowshipofthethings3236 24 days ago
Congratulations on being baited...
@winsomehax 25 days ago
Why free? I think it's the same reason they removed the login. The more people using it, the more data they get to train on. They couldn't stay ahead of Google in the data game otherwise; Google has gigantic amounts of people's data. This explains why Google has been so stingy with their bots. They don't need more of your data.
@ondrazposukie 25 days ago
I think they just want to be as open as possible to make many people use their AI.
@neoglacius 25 days ago
Exactly. Why are Facebook and Google free? Because YOU'RE THE PRODUCT, now including logistical and operational data from all companies on the planet.
@rikschoonbeek 25 days ago
Can't say there is a single motive. But data seems extremely valuable, so that's probably a big one.
@kamu747 25 days ago
That's not the reason. Well, not the main one; there might be something there, but... 1) It's a competition. Meta changed the pace when they decided to provide their AI for free. Others will need to offer better for free to stay in the game. 2) It has always been part of their mission to provide free services. There are altruistic reasons behind the intentions of those involved. OpenAI isn't really a company as you know it, it is a movement. A changed world is their ultimate goal. If they don't do this, the global implications are catastrophic: AI risks creating an irreversibly massive divide between classes, because believe it or not, a lot of people can't afford to pay $20. This is why OpenAI started as an NGO, but their mission was too expensive; they needed to placate some investors and monetise a little in order to be sustainable for the time being while compute was expensive. Compute just got cheaper with Nvidia's new H200, which allows them to afford to offer services to more people. However, there are certainly more advanced capabilities that paid users will benefit from later on. 4) As for user data, they no longer need it as much as you think they do.
@4l3dx 25 days ago
Their actual product is the API; ChatGPT is like the playground
@kai_s1985 25 days ago
It makes sense that this model is based on a different transformer (or tokenizer), because they were calling it gpt2 (gpt2-chatbot or something like that).
@chrisconn5649 25 days ago
I am not sure they were ready for launch: "Sorry, our systems are experiencing high volume". Shouldn't that have been expected?
@hemanthakrishnabharadwaj6127 25 days ago
Great content as always, Sam! Excited by how this could be a teaser for GPT-5; totally agree with what you said.
@ChrisBrennanNJ 20 days ago
Great reviews. Love the channel! Voice IN! Voice OUT! Human doctors learning (better) bedside manners from machines! (Film @ 11).
@micbab-vg2mu 25 days ago
Great update. I am waiting for the audio version in GPT-4o; so far I use it for coding and image analysis.
@BiztechMichael 25 days ago
Re the 1.5 models and the new tokenizer but still GPT-4, I see this as comparable to the Intel “tick-tock” pattern of CPU upgrades - you’ve got a new process node, first you port the old CPU architecture to run on it - that’s the tick - and once that’s proved out, then you get your new CPU architecture running on it, and that’s the tock. Then repeat. This let them split the challenge into two different phases, and gave them something good to release at each phase.
@seanmurphy6481 25 days ago
To me it would seem OpenAI is using a multi-token prediction method with this new model, but I could be wrong. What do you think?
@MeinDeutschkurs 25 days ago
GPT-4o was able to refactor complex JavaScript code. I was impressed.
@Luxcium 25 days ago
This is very interesting and informative for anyone who has not seen the presentation, and I am also watching just because your voice is so calming and engaging… Thanks for bringing this to the AI Community 🎉🎉🎉🎉
@SierraWater 25 days ago
Been playing with it all last night and this AM... this is a world changer.
@mickelodiansurname9578 25 days ago
The default setting I saw in their API docs for GPT-4o was 2 FPS... however, it can be increased... I'm thinking there's a sweet spot, but I hope it's not 2 FPS! Also, the audio API controls are not integrated yet, and you have to use the old 'whisper' rigmarole of TTS and ASR.
@willjohnston8216 25 days ago
Wow, what an interesting time to be alive. I think it's an improvement in many ways, but only around the edges and not the core 'intelligence'. I'm seeing very similar answers to previous versions and other LLMs. Also, I see that it now does more web searches to include in the results, and it is telling me that it can store persistent information from our sessions, which seems like a big enhancement. I don't see the 10x improvement that 3.5 to 4 showed, and I suspect that they are quite a ways from achieving that in a version 5, but I'd love to be proven wrong.
@lucianocontri239 25 days ago
Don't forget that GPT depends on deep-learning advancements in the scientific field to deliver something better; it's not like a regular company.
@Emerson1 24 days ago
Are you going to any fun events in the Bay Area?
@bennie_pie 21 days ago
I found it incredibly quick with a simple text completion, but it didn't actually read or do what I asked. It needed reminding to visit the URL I gave it (tool use), which I had to do several times, and it still seemed to prioritise its own out-of-date knowledge over the content it had just fetched. I need to try out all the features fully (limit hit after a few messages), but it came across as a bit too quick to churn out code without reading the initial prompt properly... it felt a bit lazy. Perhaps I just need to learn how to prompt it to get the best from it (as was the case with Claude).
@samvirtuel7583 24 days ago
If the audio and image are integrated into the model and use the same neural network, how did they manage to dissociate them in the version currently available?
@samwitteveenai 23 days ago
The model will have different output heads, and they can just turn one off, etc.
@user-me7xe2ux5m 23 days ago
Something that I haven't seen largely discussed yet is the opportunity for __personalized tutoring__ that was demoed at OpenAI's GPT-4o announcement event. Imagine a world where every student struggling with a subject like math or physics has a personal tutor at hand to help them grasp a difficult subject. Not solving a homework problem for a student, but guiding them step by step through the solution process, so they can derive the solution on their own with minimal help. IMHO this will make the entire (on-site as well as online) tutoring industry to some degree obsolete.
@samwitteveenai 23 days ago
I agree this is huge. I know there are people working on it, but agree it is going to be one of the biggest areas for all these models.
@ahmad305 25 days ago
I wonder if DALL-E will be available to free users?
@Decentraliseur 25 days ago
They understood the ongoing competition for market share.
@buffaloraouf3411 25 days ago
I'm a free user; how can I try it?
@Evox402 25 days ago
I tested it with the mobile app. It's quite amazing how fast it can respond. But the whole thing with different emotions (sounding sad, happy, excited) did not work at all. The voice was using the same tone and "emotion" every time. Did anyone have a different experience with it? Could anyone re-create what they showed in the live demo?
@XerazoX 25 days ago
The voice mode isn't updated yet.
@darshank8748 25 days ago
Actually, Ultra 1.0 is able to do image in and out. But as usual with Google, we will witness it next year.
@jondo7680 25 days ago
It's simple: if GPT-3.5 got replaced by this, GPT-4 will most likely be replaced by something better for paid users.
@maciejzieniewicz4301 24 days ago
I have a feeling GPT-4o was trained using a knowledge-distillation teacher-student framework, with 4o being the student, and Arrakis or whatever else as the multimodal teacher. 😅 I have no proof of it, though. Also good to mention the optimized tokenization process.
@CaribouDataScience 25 days ago
So is it only free on the desktop?
@markhathaway9456 25 days ago
An Apple machine first, and others over a couple of weeks. They said the API is also free, so we'll see some apps for iOS and Android.
@nyambe 25 days ago
My GPT-4o does not do any of the new things. It is just like GPT-4.
@carlkim2577 25 days ago
That's it, I think. It's a midpoint to version 5. Sam talked before about how multimodal training would lead to better reasoning. They were running out of text data, so they clearly shifted focus to video plus audio. That resulted in Sora. Now the audio gets us this.
@Anuclano 24 days ago
If they ran out of text data, why does it have no idea what Pushkin wrote?
@digzrow8745 24 days ago
The free 4o access is pretty limited, especially for conversations.
@trixorth312 25 days ago
When does it roll out in Australia?
@gavinknight8560 24 days ago
Already here
@74Gee 25 days ago
Can't build much on the free tier. 16 messages per 3 hours
25 days ago
Why would you build something on a free tier?
@74Gee 25 days ago
Well I wouldn't, but at 1:20 that's the suggestion.
@ps3301 24 days ago
The world doesn't have enough chips to make AI cheap. The technology still requires a lot more innovation.
@clray123 25 days ago
Who are these guys? Free Ilya!
@J2897Tutorials 22 days ago
One thing they didn't say is that you can only ask GPT-4o about 5 questions before being blocked for the day, unless you pay up.
@user-ff6cs5em7f 19 days ago
The one eye is just an Illuminati thing.
@yomajo 25 days ago
Spoiler: they told everything. Move on to the next video.
@OnigoroshiZero 24 days ago
I would bet that GPT-4o is 3-5 times smaller than the original GPT-4, if not even smaller. There have been so many advances in the field since GPT-4 was released, especially from Meta, which it would be stupid not to take advantage of. And the model being completely free backs this up: if it was similar to the original, going free would completely bury the company financially. And I would guess that GPT-5 will be similar in size to GPT-4, but taking advantage of every new known innovation in the field, plus dozens more that OpenAI will most likely have made internally, will make it a couple of times better; with true multimodality and better memory, it will likely be the first glimpse of AGI by the summer of next year.
@user-ix1je3sp4k 25 days ago
The deal announced by Perplexity and SoundHound ($SOUN): the platform is being used by GPT-4o.