
GPT-4o is WAY More Powerful than Open AI is Telling us... 

MattVidPro AI
260K subscribers
240K views

OpenAI just unveiled their new GPT-4o model, and it's more powerful than we ever imagined! In this video, we dive deep into what makes GPT-4o truly multimodal, capable of generating text, images, audio, and even video. Discover the groundbreaking features and hidden capabilities that OpenAI didn't fully reveal. From stunning image creation to lifelike audio generation, GPT-4o is set to revolutionize the AI landscape. Watch now to uncover the full potential of this game-changing model!
▼ Link(s) From Today’s Video:
GPT-4o Page: openai.com/index/hello-gpt-4o/
Min Choi's Awesome Thread: / 1790416703404302463
Open AI YT channel: / @openai
Greg Brockman GPT-4o image gen: / 1
Smoke away prediction: / 1791142705244127481
► MattVidPro Discord: / discord
► Follow Me on Twitter: / mattvidpro
► Buy me a Coffee! buymeacoffee.com/mattvidpro
-------------------------------------------------
▼ Extra Links of Interest:
AI LINKS MASTER LIST: www.futurepedia.io/
General AI Playlist: • General MattVidPro AI ...
AI I use to edit videos: www.descript.com/?lmref=nA4fDg
Instagram: mattvidpro
Tiktok: tiktok.com/@mattvidpro
Second Channel: / @matt_pie
Let's work together!
- For brand & sponsorship inquiries: tally.so/r/3xdz4E
- For all other business inquiries: mattvidpro@smoothmedia.co
Thanks for watching Matt Video Productions! I make all sorts of videos here on YouTube! Technology, Tutorials, and Reviews! Enjoy your stay here, and subscribe!
All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them.
Timestamps:
00:00 Introduction and Initial Reactions
00:36 Overview of GPT-4o and Multimodal AI
01:42 Comparison with GPT-4 Turbo
03:22 Text Generation Capabilities
07:22 Audio Generation Capabilities
12:22 Image Generation Capabilities
19:04 Advanced Features
23:27 Video Understanding Capabilities
27:34 Conclusion

Science

Published: 5 Jun 2024

Comments: 1.1K
@MattVidPro 20 days ago
I think the image editing is one of THE most mind blowing pieces of this... What do you guys think?
@CallMeThyme 20 days ago
I think it's amazing. Also, love your videos. I've watched for almost 2 years!
@muuuuuud 20 days ago
I'm wondering how far off we are from a universal real-time translator between humans and some animals. O.O We might get an earful soon. X3
@The_MostHigh 20 days ago
When are these image capabilities being released? I tried recreating the samples with ChatGPT-4o by copying the prompts and steps, but could not generate consistent characters.
@nathanbanks2354 20 days ago
I think it's the latency of audio -> GPT-4o -> audio (around 200ms) vs audio -> whisper -> GPT-4-turbo -> elevenlabs (around 800-1200ms).
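The comparison above is just additive latency: a chained pipeline pays for every hand-off, while an end-to-end model pays once. A toy sketch of that arithmetic (all per-stage numbers are illustrative assumptions taken from the comment's rough figures, not measurements):

```python
# Rough latency comparison: end-to-end speech model vs. a chained
# ASR -> LLM -> TTS pipeline. All stage timings are illustrative
# placeholders, not benchmarks.

def pipeline_latency_ms(stages):
    """Total latency of a sequential pipeline is the sum of its stages."""
    return sum(stages.values())

# Chained pipeline: each hop adds its own delay (assumed values).
chained = {
    "asr_whisper": 300,      # speech -> text
    "llm_gpt4_turbo": 500,   # text -> text
    "tts_elevenlabs": 300,   # text -> speech
}

# Single multimodal model: no intermediate hand-offs (assumed value).
end_to_end = {
    "gpt4o_speech_to_speech": 200,
}

print(pipeline_latency_ms(chained))     # 1100
print(pipeline_latency_ms(end_to_end))  # 200
```

The point is structural: even if each chained stage got faster, the hand-offs still sum, whereas a single model collapses them into one step.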
@Fytyny 20 days ago
@@The_MostHigh The 4o available to users currently only outputs text. They said they're going to release it step by step, and for the next step they'll release audio output for pro users in a couple of weeks. So we'll have to wait for all that.
@itsallgoodaversa 20 days ago
14:17 Matt, the multiple whiteboards/chalkboards at the top ARE realistic. This is actually how chalkboards in older classrooms used to work. They would have multiple chalkboards on sliders that you could pull up and down.
@DeceptiveRealities 20 days ago
Note that it also inset the top one inside the bottom one, as one would expect.
@neelmodi6693 20 days ago
Most chalkboards I've seen are still of this variety--several overlapping chalkboards that slide up or down depending on which one you want to write on in the moment.
@jnxmaster 20 days ago
Yes, these are still commonplace at universities.
@AmazingArends 19 days ago
I never saw a multiple chalkboard like that… 🤔
@82NeXus 19 days ago
It might be 'meant' to be a multi-blackboard, but if you look at it, its structure isn't at all realistic. I wonder if current models such as GPT-4o use their understanding of basic physics, structure and mechanics when they create images, like a human who's used to living in this world would? They do display some understanding of those things in their text output. But unlike humans, they don't have tactile experience of the world to draw on. And does GPT-4o have 3D vision? Most of its training images will be 2D!
@reifuTD 20 days ago
One of the things I would have to try with GPT-4o is to take a photo of a page from a manga or comic book, or even a novel, and ask it to read back the text in the voice of the characters as they speak.
@ReLapseJunkie 20 days ago
Nice
@fynnjackson2298 20 days ago
...and then to generate sidequests, and with Sora to convert them into Marvel-style video images while GPT reads it in an emotionally dramatic voice.
@1x93cm 20 days ago
Bruh, with Sora you could have it animate its own anime.
@justinwescott8125 20 days ago
Don't forget sound effects and background music
@ClayMann 20 days ago
I'd like to see how Sora-level AI could re-imagine comics. Imagine if each panel was fully animated, so trees blow in the wind, characters breathe and of course speak what's in their bubbles. A running character would have the scenery fly by, and all the animation would be derived from the panels. I'm not even sure how you would read such a thing. As one long flowing video going from panel to panel? Or have panels play as video as you hover over them? Maybe something far more bizarre, where what a comic is melts away, replaced by some fusion of photorealism and motion translating the comic's intention into actual little movies. This kinda sounds crazy, but seeing what is coming, I don't think it's beyond Sora-level engines from Google and OpenAI.
@chrisbtr7657 20 days ago
I don't know about everyone else, but most of the people I come in contact with have no clue about the rapid developments in AI. Kind of eerie...
@SignumEternis 20 days ago
Yeah, I've been saying for a while now that a lot of people are going to be completely blindsided by how much things are going to change soon with how fast AI is advancing. Even as someone actively following it I find myself being blown away fairly often. The future is gonna be wild.
@chrisbtr7657 19 days ago
@@SignumEternis Oh yeah, big time, and if you follow it and have a somewhat tech-savvy / biz mind, then there are so many oh sh$! moments. On my end, most are not paying attention and going on with business as usual. That is, unless they are in an industry that is suddenly being directly impacted.
@jros4057 19 days ago
I tried showing family that GPT-4o vid and they didn't get it and turned it off halfway through.
@CJ-jf9pz 19 days ago
I somewhat follow it, or try to, and even I feel blindsided by how far they've come. Then I imagine how far they have actually gone but haven't shown us yet.
@chrisbtr7657 19 days ago
@@jros4057 Yeah, and just one of the scenes from that video: the AI teaching the kid math. That is a major paradigm shift. To think teachers could soon be replaced with a much smarter and more efficient system in AI. Not saying that's a good thing, but it is what it is and we have to deal with it. Just that piece alone is normalcy-shattering news. But yeah, most people don't seem that interested. It's wild.
@MikeWoot65 20 days ago
Idk if I'm more impressed with the lifelike sound of the voice, or how human it feels to interact with (i.e. it understands our emotions).
@dmitrysamoilov5989 20 days ago
It doesn't actually work when you use it; the demo must be a better model.
@The1QwertySky 20 days ago
@@dmitrysamoilov5989 It's not fully out yet.
@DeceptiveRealities 20 days ago
@@dmitrysamoilov5989 That hasn't been released yet. It's coming in the new app.
@DeceptiveRealities 20 days ago
I hope it has a changeable voice and can have that over-the-top expression dialled down. To my non-American ears it sounds raucous and emotively fake.
@tracy419 20 days ago
@@dmitrysamoilov5989 It's being released over several weeks.
@fynnjackson2298 20 days ago
Services like Audible should release AI that reads the books but also allows you to talk about the topics, do quiz tests, and more, making the entire book library an instant interactive homeschooling study resource for anyone wanting to level up in life. In contrast to just 'consuming' audiobooks as we do in today's passive, one-way relationship dynamic.
@Ahm.elzain 20 days ago
I have indeed been saying it’s the inhabitants of digital by spiritual beings jins to interact and communicate with human through “ technology “ tree frame ayyyy!! The final form set but rolling out gradually in order to be accepted normalise it.. collect consciousness
@Suhita-ys6hd 20 days ago
Pretty sure there's a PDF-reader ChatGPT bot; you don't even need Audible to do this, just need your book as a PDF file.
@JBDuncan 20 days ago
@@Suhita-ys6hd Do you know the name of it?
@AmazingArends 19 days ago
That would be cool, but they have to get rid of the bias first, so if you read a book with a conservative point of view, the AI won't lecture you for engaging in political incorrectness! 😂
@fynnjackson2298 19 days ago
@@Suhita-ys6hd Nah, I'd like the low latency and choice of reading tone with GPT-4o. Other current apps still feel like talking to a robot, so to speak.
@helge666 20 days ago
GPT-4o is also A LOT more reliable when it comes to long-form text processing. Not even comparable to either GPT-4 or Gemini. It follows the prompt much better, doesn't get lazy so easily, and doesn't start to hallucinate so quickly. I tried four hours to get GPT-4 and Gemini to do what I wanted, and they failed miserably. GPT-4o completed the whole damn task in 40 minutes without so much as a hiccup.
@ronilevarez901 20 days ago
How come? I got kicked back to 3.5 after 4 messages. I can hardly do anything in that time. And having to wait 4 hours to continue the chat is not convenient.
@helge666 20 days ago
@@ronilevarez901 Good question. GPT-4 threw me out after countless attempts to get it to do what I wanted, and GPT-4o just did it. I'm in Germany; maybe it's a time-zone thing, less traffic at my CEST time, and therefore fewer bandwidth/token restrictions? I gave it this prompt (in German, because I was working with German PDF documents): Please read the attached PDF document in full and format the content according to the following instructions: - Remove all hyphens (-) from the text. - Correct the spacing of words written in letter-spaced type so they display normally (example: turn "R a u m s c h i f f" into "Raumschiff"). - Remove all superfluous section identifiers (e.g. "B-20" or "C-1"). - Avoid duplicate headings and make sure every section has a clear, unique title. Do not change or invent any words or content. Please do not create summaries. Use only the original text. Format the text as clean body text, paying attention to correct paragraphing and punctuation. Please carry out the edit in a single pass and present the complete result.
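The mechanical steps in that prompt (join words hyphenated across line breaks, collapse letter-spaced words, drop section identifiers) can also be sketched with plain regexes. A minimal Python approximation, my own illustration rather than anything the model does internally, and deliberately narrower than the prompt (it only removes hyphens at line breaks, not every hyphen):

```python
import re

def clean_pdf_text(text):
    """Approximate the PDF cleanup steps from the prompt above."""
    # Join words hyphenated across line breaks: "Raum-\nschiff" -> "Raumschiff"
    text = re.sub(r"(\w)-\n(\w)", r"\1\2", text)
    # Collapse letter-spaced words: "R a u m s c h i f f" -> "Raumschiff"
    # (three or more single characters separated by spaces)
    text = re.sub(r"\b(?:\w ){2,}\w\b",
                  lambda m: m.group(0).replace(" ", ""), text)
    # Drop section identifiers like "B-20" or "C-1"
    text = re.sub(r"\b[A-Z]-\d+\b", "", text)
    return text

print(clean_pdf_text("R a u m s c h i f f"))  # Raumschiff
```

The letter-spacing rule is intentionally greedy and would also collapse runs of genuine one-letter words, which is exactly the kind of ambiguity the commenter was delegating to the model instead of handling by hand.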
@V-ob5zf 20 days ago
@@ronilevarez901 He probably had a Plus account. The rate limits are 5 times higher on a Plus account.
@matiascoco1999 19 days ago
@@ronilevarez901 Probably using the API, so different rate limits.
@therainman7777 19 days ago
@@matiascoco1999 Or just a Plus subscriber.
@evil1knight 20 days ago
Chalkboards often have multiple boards that slide on top of each other.
@fobusas 20 days ago
My old middle school used to have ones that swing over to the side
@ta1k2t0ny 19 days ago
My thought exactly.
@SpikyBlade 20 days ago
Man, the image understanding of GPT-4o is crazy
@Angel-Azrael 19 days ago
Yes, I asked it to transcribe scanned handwritten birth certificates from the 1800s, in Portuguese, where I can't read most of the words. It works, with some errors, but it's mind-blowing.
@peterlang777 18 days ago
At this level of functionality, hooked up to a global database like the internet, it would be able to do 80% or more of human jobs.
@dot1298 17 days ago
Is there a risk the US gov could confiscate it away from OpenAI and use it for the Pentagon etc.?
@dot1298 17 days ago
…under Trump?
@peterlang777 17 days ago
@dot1298 Yes, see the Invention Secrecy Act of 1951. It's unlikely though, as the public knows about it already.
@johannesdolch 20 days ago
Honestly regarding images: What we really need IS multi-modality. The images produced by common models like SD are good enough. The problem is that it doesn't really understand what it is doing. If they can keep the quality of current models and just add a deep understanding to it, that multiplies the actual quality of the outcome by orders of magnitude in the sense that you get what you actually want AND can change specific things instead of getting images that so-so follow a prompt somewhat and then inpainting and hoping for the best.
@jaredf6205 20 days ago
No other image AIs have access to language models that good.
@antonystringfellow5152 20 days ago
Yes, I've been saying this all along. The human brain isn't separate modules, trained separately then cobbled together. It does have specialized regions but it learns together, as one. In doing so, it makes many associations. Most of our knowledge/memory is formed through multiple associations. For any AI to have truly general intelligence, it must be able to do the same. This is how we are able to transfer one set of knowledge/skills to a new area or novel task. Other image generating AIs often screw up the hands because they don't understand what fingers are, let alone that we have eight fingers and two thumbs. If you watch AI generated videos, you'll see similar strange things happening like people walking into walls then disappearing. They can generate photo-realistic videos but don't understand what the images represent. A truly multi-model model solves these problems.
@14supersonic 20 days ago
These aren't really LLMs anymore. ​@@jaredf6205
@LAIDBACKMANNER 20 days ago
In order for it to have true "understanding" it would have to become conscious... which, in the field of AI, will inevitably happen someday. Hopefully later rather than sooner, lol.
@minimal3734 20 days ago
It seems that when learning multiple modalities, they reinforce each other and interact in a way that increases intelligence in a non-linear way.
@kfrfansub 20 days ago
The most mind-blowing thing is the speed. With that speed and variety of natural voices you could make a real RPG game with AI NPCs.
@markmuller7962 20 days ago
Can't wait
@JaBigKneeGap 5 days ago
Even an entire game made by it. I've already been trying to get it to make me a JS RPG; the visuals are stunning.
@kfrfansub 5 days ago
@@JaBigKneeGap If you have a video of this running as an RPG, I'd love to see it.
@starblaiz1986 20 days ago
15:53 Actually no, the image generation didn't screw up. If you look, that's actually EXACTLY what is written, including capitalisation (or lack thereof). What's even more impressive is that it actually split the word "sound's" across multiple lines and did it completely correctly! Actually mind-blowing! 🤯🤯🤯
@JonasMcDonald 20 days ago
This
@Omii_3000 20 days ago
FR Mattvidpro failed English 101 Lollll
@freecivweb4160 20 days ago
No, hyphenation happens between syllables of multisyllabic words; that's the rule.
@JREinaNutshell331 20 days ago
I'd even say it's more impressive than it seems. They deliberately made a mistake with "sound's" and ChatGPT-4o didn't correct the mistake (which it should have done per its training). So ChatGPT-4o did exactly what the prompt said even though it's against its training. Or am I wrong here?
@quarksandaces2398 19 days ago
It got "everything" wrong
@MattVidPro 20 days ago
Timestamps for y'all:
00:00 - Introduction and Initial Reactions. Introduction to the video. Reaction to OpenAI's real-time AI companion.
00:36 - Overview of GPT-4o and Multimodal AI. Explanation of GPT-4o. What does "multimodal" mean?
01:42 - Comparison with GPT-4 Turbo. Differences between GPT-4o and GPT-4 Turbo. Audio capabilities of GPT-4o.
03:22 - Text Generation Capabilities. Speed and quality of GPT-4o's text generation. Examples of high-speed text generation.
07:22 - Audio Generation Capabilities. Demonstration of GPT-4o's audio generation. Examples of emotive and natural voice outputs.
12:22 - Image Generation Capabilities. Explanation of GPT-4o's image generation. Examples of high-quality image outputs.
19:04 - Advanced Features. Image recognition and video understanding. Examples of practical applications and scenarios.
23:27 - Video Understanding Capabilities. Discussion on GPT-4o's video capabilities. Potential future developments and limitations.
27:34 - Conclusion. Final thoughts on GPT-4o's impact and potential. Invitation to viewers to subscribe and join the community.
@mylittleheartscar 20 days ago
Can't wait till they crack their own 1M+ tokens.
@ouroborostechnologies696 20 days ago
"yall" is not a word
@LAIDBACKMANNER 20 days ago
@@ouroborostechnologies696 Yeah it's "Y'all", you fuckin' grammar Nazi, lol.
@AmazingArends 19 days ago
@@ouroborostechnologies696 neither is "gentleladies" but they now use that in Congress 😂
@NinetooNine 16 days ago
I'm curious: what do you think about OpenAI getting rid of the Sky voice (the one that sounds like the voice from "Her") from their ChatGPT-4o model?
@nathanbanks2354 20 days ago
An odd thing about GPT-4o is that it's better at poetry than it used to be. It has a better idea of the meter of a limerick or a sonnet than it did before it had a multimodal understanding of what words sounded like. Words like "love" and "prove" don't rhyme any more. You can see this by asking GPT-4 turbo and GPT-4o to produce poems using the existing text interface. It's also the first time I found a model that can reliably produce a Petrarchan/Italian sonnet instead of a Shakespearean/Elizabethan sonnet--previous models always used the much-more-common Elizabethan rhyming scheme.
@Rantarian 20 days ago
There's only a handful that can do poetry properly. GPT-4o is one of them. I've experimented with having non-rhyming poems, mixed meters, and a focus on a variety of poetic techniques. It is absolutely capable of creating a poem using metaphor at a distance to talk about something apparently unrelated to what it seems on the surface.
@82NeXus 20 days ago
@@Rantarian That's incredible. But I can believe it. I think maybe these models have more understanding than a lot of people think. People often say they don't understand things the way humans do. I don't get it. To me a thing is either understood or it is not. The mode or mechanism of understanding of ML models vs humans may be very different, but to me that's irrelevant! Understanding is an abstract capability that has nothing to do with physical process or mechanism. I'm sure it is in AI companies' interests to downplay the intelligence / understanding / power of these models, so that they can get on with developing, releasing and in some cases commercializing them, without too much pushback or regulation!
@minimal3734 20 days ago
@@82NeXus I agree with that. The statement that AI models don't “really” understand is absurd. Understanding cannot be simulated. It is there, or it is not.
@alexmin4752 19 days ago
It makes sense, since rhyme is basically sound. If a model has no comprehension of what sound is at all, it can't generate poetry; it can only roughly mimic the writing style of real poets. It's the added sound modality that made it better at rhyming.
@nathanbanks2354 19 days ago
@@alexmin4752 Precisely.
@Sky_flying2024 20 days ago
It's a strange time. Back in the day, when companies released something, it was just a given that that was the latest state-of-the-art thing. These days, it seems like no one really knows how far AI has progressed, and it feels more like a poker game where the different players (companies) are holding their cards close, wondering what cards everybody else has.
@82NeXus 20 days ago
Also, I'm sure it is in AI companies' interests to downplay the intelligence / understanding / power of these models, so that they can get on with developing, releasing and in some cases commercializing them, without too much pushback or regulations!
@Sky_flying2024 20 days ago
@@82NeXus bingo
@markmuller7962 20 days ago
So true; there's so much behind the scenes that literally anything could come out at any moment. Pretty exciting times. Edit: Also, AI computing power is growing so rapidly that "everything is possible" is quite literal now.
@piteshbhanushali1140 19 days ago
And all of this was trained on older Nvidia GPUs... imagine how powerful it will be on H200 GPUs 😮
@SarcasticTruth77 13 days ago
Now, what sorts of AIs are governments training in private?
@fabiankliebhan 20 days ago
About the chalkboard. I think the dual chalkboards are not unrealistic. We had those a lot when I was studying. You could move them up and down to have more space.
@yt45204 20 days ago
Our lecture halls had high ceilings and triple chalkboards
@jean-francoiskener6036 1 day ago
Can't wait until a superintelligence whispers to me what to do and say so I can pick up any girl. Lol
@iamjohnbuckley 20 days ago
Cracked me up at "I wouldn't even be able to tell you this was a missile in the first place! This thing's a professional!" 😂
@82NeXus 20 days ago
Has anyone verified that it got the missile picture right? Coz ChatGPT-3 could've convinced you that that missile came from anywhere 😂
@apache937 18 days ago
I don't even see a missile.
@stevelamb6720 2 days ago
I thought the missile was actually part of the question, which is what triggered it to think it was.
@user-xj6ke4qk8t 20 days ago
The ability to read your screen / screen-share your desktop and dictate is a game changer for context, as you can demonstrate what you want done rather than just trying to describe it.
@allanshpeley4284 18 days ago
It really is. This is the main tech I've been waiting for. Unfortunately it's only rolling out for Mac initially. And I'm not sure we'll be able to screen-share with it in real time and train it to use our programs/tools. But that can't be far off.
@moxes8237 20 days ago
I remember reading Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies," and in one of the chapters something stuck with me that goes somewhat like this: "I can see a scenario where any one entity being six months ahead of everybody else is enough to win the game."
@antonystringfellow5152 20 days ago
Less than 6 months ahead is probably more than sufficient.
@1x93cm 20 days ago
Yeah, but the game of money is soon coming to an end. Once you make AGI, ASI is a step away. How long can the current system function when nobody is necessary? They just released a Chinese robot that costs 16K and can do almost anything. Add in this GPT-4o and it BTFOs all low-skill wagies.
@Brax1982 20 days ago
@@1x93cm Ah... did China tell you that they did that?
@14supersonic 20 days ago
@1x93cm I think you misunderstand how AGI and ASI will actually change the necessity of humans. Even with the most advanced AI and robotics, humans will always be necessary. Resources and work are needed, and if anything, human intelligence will become even more of a commodity. Machines can't replace our creativity no matter how smart they might get. Getting rid of human labor as we think of it now would be beneficial, but removing human power from the equation entirely would be foolish. Don't forget the greedy people who will not allow the machines to take their resources and money away from them to begin with. What do you think all the regulations are for? It's to protect them from AI, not us.
@1x93cm 20 days ago
@@14supersonic If there is an economic incentive for something, it happens. If there is an economic incentive to replace most if not all human labor, it'll happen, and nobody will care about the consequences. After seeing drone videos from Ukraine, it would be very easy to put down any uprisings that result from mass unemployment or unlivable conditions. The solution will be the creation of a sideways economy similar to the localized economies of favelas.
@lukepurse9042 14 days ago
Are you a shareholder?
@alansmithee419 20 days ago
14:10 many university blackboards like this come in sets of three at different depths above the wall. You can slide them up and down to access the other boards. It allows the lecturer to keep writing on new board while allowing students to still see previous steps in the lesson if they need to look back and also means the professor doesn't have to waste time erasing the whole board every 5/10 mins.
@WordsInVain 14 days ago
12:27 Unless it's an app specific feature, GPT-4o in the ChatGPT interface explicitly states that it generates images using DALL-E 3.
@canwegonowhereanyfaster2958 12 days ago
So can it watch a YouTube video and transcribe its entire script, including descriptions of the visuals? I kind of need that.
@TrueTake. 20 days ago
Its understanding of the world is next level. That understanding translates to, as OpenAI themselves said, abilities still being discovered... They don't shy away from saying AGI is imminent. I think if you give it video and indefinite memory, that WILL be AGI.
@fynnjackson2298 20 days ago
Yup, pretty much. Just add in memory and video and it's AGI. However, I'd love it to have, say, a 160 IQ as well.
@JohnSmith762A11B 20 days ago
@@fynnjackson2298 Could Einstein speak 50 languages? IQ cannot capture what an intelligence like GPT-4o really can do. No, it's not perfect, but perfection isn't required for AGI.
@TrueTake. 20 days ago
@@fynnjackson2298 I think it's already at child level now and will shoot past 160 pretty fast after the AGI level, on to SGI, depending on the guardrails.
@minimal3734 20 days ago
@@TrueTake. "at child level"? I think it's light years ahead of average human capabilities in most areas.
@AtliOddsson 14 days ago
The 4.0 supposedly had a calculated IQ of 155
@wannaBtraceur 20 days ago
This is the first AI model that I feel the urge to use. The capabilities are incredible.
@fynnjackson2298 20 days ago
Just the fact that we have to rethink the trajectory of our lives and how we operate because of all this new tech is so awesome. AI + humanoid robots at mass scale, plus robotaxis, plus compounding technical advancements in all areas. The future is coming, and it's coming faster and faster. What a trip!
@ihatekillerclowns 16 days ago
You sound like a tech slave to me.
@kingofkings652 20 days ago
Last time I was this early there weren't even animals on land.
@brexitgreens 20 days ago
I know that feeling. 🥇🏆
@echoes28 20 days ago
Ha ha you're old
@choppergirl 20 days ago
I know right, the first time we visited this planet everyone was dinosaurs. So we interfaced with you as dragons. Then there were people in togas playing lyras so we interfaced as Greek gods. Now there are little men everywhere playing video games, so we interact with you as AI deep fakes.
@nuttysquirrel8816 20 days ago
🤣😂😆
@Paranormal_Gaming_ 20 days ago
You were not, we know it's bullshit.
@I-Dophler 20 days ago
The image editing capabilities are truly mind-blowing. With music, video, and audio generation advancements on the horizon, the creative possibilities are endless. Many thanks.
@levonkenney 20 days ago
What's scary is that Sora AI video generation is this good now. Imagine AI video in 1, 2, or even 3 years; it's going to be crazy.
@fynnjackson2298 20 days ago
Film remakes on demand, but so good they all get a 9.0 IMDb rating.
@robertonery202 20 days ago
Dude, you are killing the other YouTubers with your reviews. Keep it up, brother, and thanks for keeping us super informed.
@allanshpeley4284 18 days ago
Huh? I get something out of all of them I follow.
@johnwilson7680 19 days ago
Another great video. It's so strange that they didn't mention any of these breakthroughs during the demonstration. Can't wait for it to be fully rolled out.
@4l3dx 20 days ago
GPT-4o is the checkpoint 0 of GPT-5 🤯
@Edbrad 20 days ago
They totally already have GPT-5. I firmly believe a lot of the work at these companies is just packaging up a small increase in ability when they feel like it. Like Google always goes too far with their lobotomies. OpenAI also has a history of this. When ChatGPT 3.5 came out it was much better than what 3.5 turned into. As soon as they first updated ChatGPT it was a downgrade, and then when GPT-4 came out it was like some of that increase in ability was just getting back some of what they'd taken away. They've taken away some of these abilities in the recent GPT-4o since 3 days ago! It can't understand sound now, like birds, dogs, heavy breathing, emotional expression, and it tells me in multiple new sessions that it can't sing. So we know they can easily just turn off some of this. Sora is also WAY WAY WAY too good, and I think that's because they have an EXTREMELY good model behind the scenes.
@PrimaDel 20 days ago
We are so fucked...
@binyaminramati3010 20 days ago
I don't agree at all. GPT-5 would need to be much, much smarter, which is a far greater challenge than creating a multimodal model, which is about efficiency. The scientific research of these two domains is very different.
@xitcix8360 20 days ago
@@Edbrad Yes, they do have GPT-5; this has been confirmed. Also, you aren't using the new audio and you never did; that's not even released yet, you're still using Whisper. Also, the new image generation is better than Sora in some cases.
@crawkn 20 days ago
Yes, I'm thinking this is just an early version of what was intended to be GPT-5, but strategically they needed to pre-empt some other developers' releases. Which raises the question: if this is only a GPT-5 beta, how good will GPT-5 be?
@alansmithee419 20 days ago
The moniker "omni" implies to me something bigger too, though I doubt it's true: "omni" meaning "all" suggests that the AI is capable of using literally any modality and working with all modalities together. Since this is clearly not the case, it may instead mean that it's in some way modular, or easy to retrain to add extra modalities that it currently can't use, without hindering its ability to work with previously learned modalities. Again, I very much doubt it, but that's what the name would suggest. OpenAI probably just thought it sounded cool.
@blisphul8084
@blisphul8084 20 days ago
Mixture of experts with some experts having additional modalities perhaps?
@brexitgreens
@brexitgreens 20 days ago
"Omni" instead of "multi" because seamless and arbitrarily generalisable to any modality. A prelude to embodied GPT.
@FredPauling
@FredPauling 20 days ago
Maybe there are some modalities it trained on that are not yet exposed. I can imagine robot joint angles, torques, velocities, and accelerations being important for their robotics partners using end-to-end learning.
@antonystringfellow5152
@antonystringfellow5152 20 days ago
I believe it is true. They even give a strong hint of this on their website. "Since this is clearly not the case": can you explain this for me? I must have missed something.
@therainman7777
@therainman7777 19 days ago
I think Omni simply means “all” as in “all commonly used modalities.” I don’t think it’s much deeper than that.
@DougieBarclay
@DougieBarclay 20 days ago
That multiple blackboard was intentional. Lots of lecturers use rolling multiple blackboards, like the one depicted.
@Omii_3000
@Omii_3000 20 days ago
12:04 how could a deaf person hear GPT 4o say "hey you have to get out of here" 😂
@HauntedCorpse
@HauntedCorpse 19 days ago
LITERALLY, WHAT THE HECK
@420Tecknique
@420Tecknique 17 days ago
Light strobing and vibrations, which it definitely can do.
@chill_yall6439
@chill_yall6439 20 days ago
I showed it a pic of my product to help me with an Etsy listing, and it perfectly identified the item, all the materials used, who would use it, and for what purpose. I was truly speechless.
@brexitgreens
@brexitgreens 20 days ago
I told you, Matt, that we were going to have GPT-4.5 before GPT-5, but you didn't believe me. Turns out GPT-4.5 is named GPT-4 Omni.
@IceMetalPunk
@IceMetalPunk 20 days ago
I don't know if that's fair. The token space is entirely different, as is the training data. I think the only reason they're not calling it GPT-5 is because they seem to be reserving numerical iteration for size increases. In other words, every GPT model they make, no matter how different, will be called a version of GPT-4 until they scale up the number of parameters significantly. But to say it's just "4.5" -- like it's fundamentally the same with minor upgrades -- is a bit reductive.
@brexitgreens
@brexitgreens 20 days ago
@@IceMetalPunk OpenAI have declared from the outset that GPT-5 will/would be embodied.
@IceMetalPunk
@IceMetalPunk 20 days ago
@@brexitgreens Did they? I missed that. Interesting... so they won't call new models GPT-5 until they're in the Figure 0x.
@brexitgreens
@brexitgreens 20 days ago
@IceMetalPunk Also, a recent US Department of Defense report states that OpenAI have not even begun training GPT-5.
@IceMetalPunk
@IceMetalPunk 20 days ago
@@brexitgreens But what does *that* mean? They clearly have been training GPT-4o. Saying "they're not yet training GPT-5" just means "they haven't yet decided to call a model GPT-5", but as Shakespeare famously said, what's in a name?
@michaelmcwhirter
@michaelmcwhirter 20 days ago
Great video Matt! Thank you for all the helpful information 🔥
@tsforero
@tsforero 18 days ago
Matt, you ponder the question a few times whether the answer to these new capabilities is really just the multimodal aspect. I absolutely think that this is the case. The key, as we now all understand, is context and memory. With a greater diversity of context clues (modalities), it makes sense that the contextual understanding of the model becomes more complex. And we now know greater complexity = greater intelligence. We now have the following levers for increasing intelligence in AI: 1. Neural connections 2. Context, memory, attention 3. Input training data 4. Diversity in modalities. Would love to see what happens when these models really start getting placed into robotics and gain additional modalities (temperature, EM, proprioception, touch, spatial, balance, etc.).
@FrotLopOfficial
@FrotLopOfficial 20 days ago
Wow, your scripting and structuring have gotten insanely good. I would be super curious how the retention rate of this video compares to your last one about GPT-4o.
@MrTk3435
@MrTk3435 20 days ago
Matt, Ideogram opened up a new world for me! It's so dope, and so is the new GPT-4o. Thank you for your work ✨✨😎✨✨
@javebjorkman
@javebjorkman 20 days ago
I gave it items and rates from our video production company. I asked it what the prices were for certain items; it still gave me the wrong rates. I asked it to create a budget for a 1-day production, and the prices it used were not what I gave it. I still think we've got a long way to go.
@andre_0413
@andre_0413 20 days ago
Did you upgrade your camera setup? Video is looking crisp!
@SanctuaryLife
@SanctuaryLife 19 days ago
GPT-4o did it for him
@gpierce6403
@gpierce6403 20 days ago
Great video, thank you for keeping us all informed on the latest AI!
@DJ-Illuminate
@DJ-Illuminate 20 days ago
The Android app no longer has audio in either 4 or 4o. I was hoping the website version had audio, but nope.
@febilogi
@febilogi 20 days ago
Wow, you are right. After I read your comment, I could no longer access it on my Android phone.
@brexitgreens
@brexitgreens 20 days ago
Sometimes it has the old audio system; at other times, just basic voice typing.
@johnshepard5121
@johnshepard5121 20 days ago
I created a new, free account and it works there. It doesn't work on the paid subscription. The optimist in me is hoping that's because they're updating it to the new version? Though I know they're probably just fixing something.
@davidyoung623
@davidyoung623 20 days ago
My app still has the old "voice mode"... I've seen a lot of people saying it disappeared for them, but still there for me 🤷‍♂️
@DumPixels
@DumPixels 20 days ago
@johnshepard5121 On my free account it's not there either.
@chrisweidner7527
@chrisweidner7527 20 days ago
Wow, I saw this on my GPT tab and didn’t really use it, but now I know it’s THIS powerful! I’ll definitely use this from now on!
@Serifinity
@Serifinity 19 days ago
Another great video. There is definitely something going on at OpenAI, the way they manage to stay ahead of the curve. I think they are using an in-house GPT-5 to help run R&D, possibly even to sit in board meetings and help run the business. They seem to have something no one else has.
@EchoMountain47
@EchoMountain47 17 days ago
OpenAI = Cyberdyne Systems lol
@Glowbox3D
@Glowbox3D 20 days ago
It just keeps comin'! Very cool. Can't wait to play with this one when it fully rolls out.
@sportscommentaries4396
@sportscommentaries4396 20 days ago
Hopefully we get access to the real-time stuff soon, I can't wait for that.
@alexatedw
@alexatedw 20 days ago
We have it
@sportscommentaries4396
@sportscommentaries4396 20 days ago
The voice stuff? I don't see it on mine, and I'm on GPT Plus.
@okaydetar821
@okaydetar821 20 days ago
@sportscommentaries4396 We don't have it yet; the old voice mode has confused a lot of people into thinking it is the new one.
@alexatedw
@alexatedw 20 days ago
@@sportscommentaries4396 update your app
@ReLapseJunkie
@ReLapseJunkie 20 days ago
@alexatedw No we don't
@francofiori926
@francofiori926 20 days ago
Maybe GPT-4o can solve the mystery of the Voynich manuscript
@DariusNmN
@DariusNmN 20 days ago
Great overview. One of the best I’ve seen. Thx
@rockapedra1130
@rockapedra1130 20 days ago
This channel should have a lot more followers! Excellent work!
@brexitgreens
@brexitgreens 20 days ago
The first major flaw I was able to spot: while GPT-4o can read long transcripts in a split second, it still fails to associate fragments with respective timestamps correctly.
@ronilevarez901
@ronilevarez901 20 days ago
In my tests it is good at summarizing and adapting text style. But it totally failed to reason about what it was writing, and about itself, in many ways. GPT-3.5 turned out to be better, or at the same level, in that aspect. It might have more functionality, but it is not "more", sadly.
@Uthael_Kileanea
@Uthael_Kileanea 20 days ago
Tell it that and it will correct itself.
@ronilevarez901
@ronilevarez901 20 days ago
@Uthael_Kileanea I did. 3.5 corrects itself properly. 4o kept rewriting the text, instead of its interpretation of the text, until I explicitly told it not to. And even then, it failed to do the right thing. Good for essay writing; bad for a more interesting chat.
@brexitgreens
@brexitgreens 20 days ago
@@Uthael_Kileanea The transcript in question was a standard SRT (subtitle) file. When GPT-4o failed to provide the correct timestamp for a random quotation, I asked it to provide the turn index number instead - which should be easier because it's incremental. It failed that too.
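For reference, the lookup GPT-4o fumbled here is trivial to do deterministically, since SRT blocks carry their own incremental index and timestamp. A minimal Python sketch (the `SRT` sample and the `find_timestamp` helper are made up for illustration, not anything from the video):

```python
def find_timestamp(srt_text, quote):
    """Return (turn index, timestamp line) for the SRT block containing `quote`."""
    # An SRT file is blank-line-separated blocks: an incremental index,
    # a "HH:MM:SS,mmm --> HH:MM:SS,mmm" line, then the subtitle text.
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        idx, stamp, text = lines[0], lines[1], " ".join(lines[2:])
        if quote in text:
            return int(idx), stamp
    return None

SRT = """1
00:00:01,000 --> 00:00:03,000
Hello there.

2
00:00:03,500 --> 00:00:05,000
Every sound is like a secret.
"""

print(find_timestamp(SRT, "secret"))  # (2, '00:00:03,500 --> 00:00:05,000')
```

Exactly the kind of mechanical association a plain parser gets right every time, which makes the model's failure at it notable.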
@Jc8.05
@Jc8.05 16 days ago
bru we’re so cooked
@corsoandcanvas
@corsoandcanvas 18 days ago
Such a great breakdown! I really appreciate how deep you went on all the elements that weren’t presented. Big fan!
@markmuller7962
@markmuller7962 20 days ago
I'm so happy about this progress. OpenAI really is doing an amazing job at staying ahead of everyone else.
@primalplasma
@primalplasma 20 days ago
We are all characters in a Twilight Zone episode.
@matteverlove
@matteverlove 20 days ago
The Brain Center at Whipple’s, starring GPT-4o as Robby the Robot
@dankron_
@dankron_ 20 days ago
14:25 Multiple blackboards are pretty standard in universities; they rotate around so you can have multiple "pages" of blackboards.
@gizmomismo7071
@gizmomismo7071 20 days ago
Fantastic analysis of the capabilities of GPT-4o. I can't wait to see what they're going to show us next this year!!!
@evil1knight
@evil1knight 20 days ago
I want the glasses wearables built on top of this
@GothicDragonX
@GothicDragonX 20 days ago
Give it time, it's coming :)
@AmazingArends
@AmazingArends 19 days ago
In the movie "Her", all you had to do was put your smart phone in your pocket with the camera sticking out and Samantha the AI could see everything you did. She talked to you through a wireless earpiece.
@andrewrozhen513
@andrewrozhen513 20 days ago
How do you guys access it for free? I've explored the app, also the OpenAI platform, the Playground, everything. There is no free option unless I subscribe to the paid "Plus" option.
@zoesays3830
@zoesays3830 14 days ago
24:29 Did I really see what I just saw? The capability to scan old books at such speed is mind-blowing!
@nintishia
@nintishia 18 days ago
Thanks a lot for bringing all these additional features into focus that OpenAI chose to underplay during its demonstration session. The realisation that this could be a new kind of LLM altogether with way advanced multimodal capabilities is a bit unsettling.
@iixotic-
@iixotic- 20 days ago
Matt, are you doing a live stream again this weekend? Or?
@MattVidPro
@MattVidPro 20 days ago
probably not :( I will try and schedule one next week
@iixotic-
@iixotic- 20 days ago
@@MattVidPro all good. Thank you bro 🙏🏾❤️
@markjackson1989
@markjackson1989 20 days ago
I have used GPT-4o today. It doesn't work at all like the demo. It can't change inflection, sing a song, or hum a tune. It had no concept of my own inflection either. It also did not support real interruption. It spoke, then you spoke. And for everyone wondering, it was 4o, because I reached the rate limit. Tl;dr: it doesn't work anything like the demo. At least right now.
@allanshpeley4284
@allanshpeley4284 18 days ago
Yeah, that's because you're not using the complete version. I think it was a mistake on their part to allow GPT-4o in accounts without releasing all the technology, which apparently is happening in the next few weeks.
@verb0ze
@verb0ze 14 days ago
I'm not impressed by these demos until I get the product in my hands so I can test these features myself. Too much faking it until you make it these days...
@someperson9998
@someperson9998 5 days ago
Because it isn't exactly out yet. They only introduced some people to the text version, not the voice. The voice you used was likely just GPT 3.5.
@someperson9998
@someperson9998 5 days ago
@@verb0ze Open AI faking a demo would be a horrible business move. Who's faking it till they make it?
@kennyjohnson63
@kennyjohnson63 2 days ago
@@allanshpeley4284 Ah, yeah I agree. I think it was a mistake too. I have GPT-4o in my account but had the same experience as @markjackson1989. I keep seeing all these videos about all the stuff GPT-4o can do but then it doesn't work for me. I think they should have called it something different to avoid the confusion.
@BapaG33
@BapaG33 19 days ago
It's way past time to upgrade Amazon Echo, Apple Siri, and Google Home Smart Assistant. I've used Echo Buds headphones for years. The sound isn't the greatest, but being able to ask for any song or ask any question hands-free has been great. Having a super smart assistant in there would be incredible.
@smellthel
@smellthel 20 days ago
Whoa! This is miles ahead of what I was expecting this year! I guess multimodality is the future because it leads to a deeper understanding of the world. I love it. We live in the future!
@iminumst7827
@iminumst7827 20 days ago
I can confidently say this is the first real AGI. Like, I know they don't want to say it because it's a big claim, but the amount of context it has allows it to solve so many diverse problems. This is not just natural-language mimicry anymore: it can code, write, sing, understand human tone of voice, create images, etc. It's not superhuman yet, but it is clearly competitive with humans.
@sjcsscjios4112
@sjcsscjios4112 20 days ago
Imagine when the base model gets updated to gpt-5
@Killingglorie
@Killingglorie 20 days ago
So why is it cheaper if it’s the most powerful version of ChatGPT? Will the other models be even cheaper than 4o now?
@MattVidPro
@MattVidPro 20 days ago
Because something… else is coming..
@groboclone
@groboclone 20 days ago
Just a guess, but the fact that it is so fast and responsive would imply to me that it is actually smaller and LESS computationally expensive than former models, yet performs better. Could be due to some combination of better training data, algorithmic breakthroughs, etc.
@nathanbanks2354
@nathanbanks2354 20 days ago
After trying Mixtral 8x7B and Mixtral 8x22B, which run at about the same speed as Llama 3 8B and Llama 3 70B, I'd guess that it uses a mixture-of-experts-type approach that allows most of the calculations for any query to run within the 80GB limit of a single H100 GPU, though a different query would run on a different H100 GPU. Maybe I'm wrong, and it's the same server not the same GPU, or a pair of GPUs, but some sort of sharding/mixture-of-experts approach. They probably also overtrained it like they did with Llama 3. Plus various other tricks, such as improving the embeddings, though I'm not sure those would make it faster/cheaper... this is my best guess.
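The gating idea behind a mixture of experts can be sketched in a few lines. This toy router (everything here is hypothetical and has nothing to do with OpenAI's actual architecture) scores each expert against a token, softmaxes the scores, and keeps only the top-k experts, which is why most of a model's parameters can sit idle on any given query:

```python
import math

def softmax(xs):
    # subtract the max for numerical stability
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(token, experts, top_k=2):
    """Pick the top_k experts for one token and renormalize their gate weights."""
    # Score each expert by a dot product between the token and the expert's gate vector.
    scores = [sum(t * w for t, w in zip(token, gate)) for gate in experts]
    gates = softmax(scores)
    # Keep only the top_k experts; their weights are renormalized to sum to 1.
    top = sorted(range(len(gates)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in top)
    return [(i, gates[i] / norm) for i in top]

# Three toy experts; only two run for this token.
experts = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(route([1.0, 0.0], experts))  # experts 0 and 2 are selected, 1 stays idle
```

In a real MoE layer the "experts" are feed-forward sub-networks and the router is learned, but the compute saving comes from exactly this top-k selection.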
@IceMetalPunk
@IceMetalPunk 20 days ago
For a couple years now, I've said there are three main obstacles between current GenAI and human-level GenAI: multimodality, size, and continual learning. The size of models, I expect, will continue to grow, especially as NVIDIA pumps out better hardware for them. Continual learning is tough on these massive models, but if I understand correctly, Google's "Infini-attention" paper introduces something very similar to -- if not an actual form of -- continual learning for massive Transformers. And as we see here, multimodality in the token space does *amazing* things for the capabilities of these models, and we're getting them, one new modality at a time. At this rate, I suspect we'll have all these three issues more or less solved within the next two or so years, and after that it's just about scale to hit human-level AGI. As culty as it sounds, I do, in fact, feel the AGI. (RIP to Ilya's tenure at OpenAI, by the way.)
@DamielBE
@DamielBE 19 days ago
And on the font thing, can you save it in a font format for PC or Mac?
@itubeutubewealltube1
@itubeutubewealltube1 20 days ago
People didn't write this model... it was mostly written by AI itself. That's the difference. So in Terminator, it was predicted that 2027 was the year Kyle Reese was sent back in time... three years to go, baby...
@yoyoclockEbay
@yoyoclockEbay 20 days ago
Just letting you know, this is not a good thing.
@fluffyspark798
@fluffyspark798 20 days ago
SO HYPED
@angrygary122
@angrygary122 20 days ago
You're insane
@fluffyspark798
@fluffyspark798 20 days ago
@@angrygary122 username checks out xD
@minimal3734
@minimal3734 20 days ago
@@angrygary122 You're not hyped?
@420Tecknique
@420Tecknique 17 days ago
20:20 Something you missed is that GPT-4o cleaned up the coaster in the generated image, only removing the stains but leaving the coaster the same. Basically sprucing up the product unprompted. Just a little thing, but cool if you understand what that means for reasoning about the images it sees. It shows rudimentary understanding and reasoning about the physical world.
@rogerc7960
@rogerc7960 20 days ago
Understanding video has uses in robotics and CCTV monitoring.
@Thicolate
@Thicolate 20 days ago
Present
@6axy3
@6axy3 20 days ago
So is all of this live and available, or? My phone says 4o mode is selected, but nothing of what was shown works; it still behaves like the same old 4.
@jeffwads
@jeffwads 20 days ago
I look forward to testing the vision aspect of GPT-4o on the card game MTG. Show it the "board" of cards and see if it can find the best play. It does perfectly well with a text game-state setup, so if the vision works, that is a game-changer.
@Allplussomeminus
@Allplussomeminus 20 days ago
It seems like everyone else is "trying" to AI... OpenAI "is" AI. I think everyone should drop the act and funnel all resources to them to get this ball rolling.
@entropy9735
@entropy9735 20 days ago
I still dislike this release strategy. A few days after the event, we still have a text model that's a near-equivalent of GPT-4, without any of the extra features they did like 50 demos of.
@DeceptiveRealities
@DeceptiveRealities 20 days ago
Better than Google. All pre-made demos. No live demo. Promises promises. 😄
@Skeeva007
@Skeeva007 20 days ago
I can't freaking wait to get access to everything shown! Let's goooo!!!!
@missoats8731
@missoats8731 20 days ago
I wouldn't necessarily say they were "hiding" these features from us. They made a detailed blog post about them at the top of their page 😅
@Airbender131090
@Airbender131090 20 days ago
$5 per million tokens?! This is ridiculous! Gemini's million-token context is free and unlimited. You can put huge books and videos into it.
@whataworld8378
@whataworld8378 16 days ago
Excellent combination of features. The persistence of AI models and renderings means that it can generate quality videos now.
@Heyworld21
@Heyworld21 3 days ago
This is what I would call a prototype of AGI. This AI has general intelligence in all subjects, and I know I ain't the only one to notice.
@t.1.0.17
@t.1.0.17 18 days ago
Do you record with your headset microphone?
@centurionstrengthandfitnes3694
@centurionstrengthandfitnes3694 17 days ago
Good breakdown, Matt. Like many, I was guilty of being impressed on a surface level but not really grasping the deeper meaning of the demonstrated abilities. Watch the recent TED talk by Fei-Fei Li, by the way. Worth it.
@Calupp
@Calupp 4 days ago
I see that Bold and Brash painting in the back. You cultured man
@humunu
@humunu 18 days ago
“When AGI is developed, the company will have to work to pretend it’s less capable than it is”…”Why did they hide these capabilities in the website?”
@stevedavis1437
@stevedavis1437 20 days ago
I appreciate how you are mind-blown by the right advances. So I feel I can trust your take when you see something new that I hadn't seen. Your posts are always really interesting Matt.
@BerryTheBnnuy
@BerryTheBnnuy 10 days ago
I've been working on a personal project that uses Whisper v3 (hosted locally) and it CAN tell the difference between a human and a bird chirping or a dog barking. While I was testing it, my dog started barking and it output "[dog barking]". Any non-human sounds it hears go into [square brackets]. So I would be typing code while the project was running in the background and it would output [typing]. There are other issues, like it doesn't detect color and tone of voice, like you were saying (color and tone refer to emotional content).
@madambovarix8922
@madambovarix8922 20 days ago
I'm guessing this model has feedback from the images it generates; that (w/ also multimodality ofc) would explain why it's so good, if it can see the images it generates (like we do when we draw something), it can then correct them properly.
@leslieviljoen
@leslieviljoen 20 days ago
15:53 It didn't screw up. "Sound's" is short for "sound is": every sound is like a secret.
@ianPedlar
@ianPedlar 18 days ago
So when do we get it? Currently in the Android ChatGPT app I can choose to use 4o but I still have the delay, I can't interrupt and can only choose one of the preset voices.
@theindubitable
@theindubitable 20 days ago
I just tried it; the images and the text logos it creates are beyond sick!
@lostpianist
@lostpianist 20 days ago
I think they've basically plugged in lots of Microsoft Research models and lots of function calling. What's impressive is the speed and seamless flow of all these interacting models.
@DamielBE
@DamielBE 19 days ago
Can you imagine using that for, say, streamed panels from conventions, to have a transcript or summary for the hearing impaired or those for whom English isn't their native language? So much potential!
@CommentGuard717
@CommentGuard717 20 days ago
I used to think open source would never reach something like Sora or GPT-4, but given the past 3 years, I'll give it a few months. That's how fast things are going.
@NatureFreak1127
@NatureFreak1127 18 days ago
I plan to use it for language learning. I will feed it vocabulary and grammar for that lesson and instruct it to converse with me using those structures.