
Claude Beats GPT4o, Q* is Here, Ex-OpenAI Founder is Back, Elon's AI Factory, $1m AGI Prize 

Matthew Berman
Subscribe 328K
110K views

Published: 29 Sep 2024

Comments: 487
@matthew_berman
@matthew_berman 3 месяца назад
Have you tried Claude 3.5 yet? My test video is coming soon! Subscribe to my newsletter for a chance to win a Dell Monitor: gleam.io/otvyy/dell-nvidia-monitor-1 (Only available in North America this time)
@ploppyploppy
@ploppyploppy 3 месяца назад
Tried it. Another LLM - good at its job but limited in scope. We need real AI :)
@kamelirzouni
@kamelirzouni 3 месяца назад
I tried it, it's very impressive (another dimension 😲). Can't wait for your test video ☺
@skeptiklive
@skeptiklive 3 месяца назад
Yes! I use LLMs all day for work and C3.5 has completely replaced GPT. Its ability to intelligently reference large swaths of documents and iterate on a document built from that context is leaps and bounds beyond GPT. This was already true for Opus, but 3.5 is a huge step forward. I would love to see the RULER test applied to 3.5S
@devlogicg2875
@devlogicg2875 3 месяца назад
My kid used it. She drew a hybrid of a cat and cactus called a CatCus. Claude nailed a description of her drawing even though it looked more like a glove than a cactus! Claude even noted this fact! She was amazed...
@puneet1977
@puneet1977 3 месяца назад
Tried it, but my tests are not as wide as yours; mine are very use-case specific. I still need to conclude on the improvement; at least it did not jump out.
@homewardboundphotos
@homewardboundphotos 3 месяца назад
1M for getting to AGI is kind of comically insignificant
@Nononononononope
@Nononononononope 3 месяца назад
easy to say, try making it bigger
@ChargedPulsar
@ChargedPulsar 3 месяца назад
Exactly. It's like saying: whoever discovers the sea, we'll award them one drop of water.
@chickenmadness1732
@chickenmadness1732 3 месяца назад
@@Nononononononope The AI companies spend a million every few minutes probably lol. A million dollar prize is nothing to them.
@dg-ov4cf
@dg-ov4cf 3 месяца назад
You could offer a trillion and it'd still be a lowball, it's AGI
@symbolicmeta1942
@symbolicmeta1942 3 месяца назад
Not to mention, what will $1M be worth after AGI?
@ICantStopTheNoises
@ICantStopTheNoises 3 месяца назад
Claude is so smart I’ve been having it describe some of my artworks and it is so in depth
@somemansomewhere
@somemansomewhere 3 месяца назад
I have doubts about the new SSI. They may be able to get started, but they're going to need continuing revenue at some point, and they'll either have to put out products or they'll fall apart; that's just the way money works, unfortunately.
@alexkaa
@alexkaa 3 месяца назад
Superintelligence is no 'product'.
@ronilevarez901
@ronilevarez901 3 месяца назад
That's what the first sentient AI will say: "I am not a product".
@nanaosafoakoto8694
@nanaosafoakoto8694 3 месяца назад
13:56 For any team that reaches AGI, 1 million dollars is a paltry sum compared to what you can achieve with that AGI.
@UbiDoobyBanooby
@UbiDoobyBanooby 3 месяца назад
I clicked only because your title didn't use "SHOCKED the ENTIRE INDUSTRY"
@johnbollenbacher6715
@johnbollenbacher6715 3 месяца назад
It seems like Texas is a poor choice for locating a heat producing server farm.
@설리-o2w
@설리-o2w 3 месяца назад
1 million for reaching AGI is insane to me; that's chump change. I'd use the AGI to make an infinite amount of money. Actually ridiculous.
@raul36
@raul36 3 месяца назад
You? 😂😂😂.
@AndrewStruthers
@AndrewStruthers 3 месяца назад
I am offering a prize of ten dollars to the first person who invents immortality
@EmeraldView
@EmeraldView 3 месяца назад
Already invented
@dot1298
@dot1298 3 месяца назад
@@EmeraldView cryotech? that doesn't work lol ^^
@ricardolopes9563
@ricardolopes9563 3 месяца назад
Samadhi state
@bradstudio
@bradstudio 3 месяца назад
Go down to your local church and give the $10 to Jesus Christ.
@l2qz711
@l2qz711 3 месяца назад
Count my $10.05
@mickmickymick6927
@mickmickymick6927 3 месяца назад
A 1 million $ prize for a product that will bring you billions in profit from commercialising it.
@OceanGateEngineer4Hire
@OceanGateEngineer4Hire 3 месяца назад
Try trillions!
@DougieBarclay
@DougieBarclay 3 месяца назад
Trillions
@6AxisSage
@6AxisSage 3 месяца назад
It becomes open-source per the rules, so no billions for participants; OpenAI and others would snap that up and use their market dominance and compute to make the billions.
@ewallt
@ewallt 3 месяца назад
What’s developing AGI worth? A quadrillion dollars? Hey, there’s a million dollar prize for developing it!! Yay! That will make it happen.
@alakani
@alakani 3 месяца назад
I swear it has to be a joke to get people to just open source AGI out of spite instead of letting a corporation ruin everything, lol
@CubeZanimation
@CubeZanimation 3 месяца назад
I won't need the million when I achieve AGI..... Peanuts!
@AEFox
@AEFox 3 месяца назад
They didn't say anything about giving them the AGI after achieving it, so... who cares! lol, give me my million and goodbye! lol
@budekins542
@budekins542 3 месяца назад
AGI is a fantasy that won't be a reality until the next decade.
@stannylou1636
@stannylou1636 3 месяца назад
Checking in 37 minutes later, have you reached AGI yet asking for a friend.😂
@pretheeshs9383
@pretheeshs9383 3 месяца назад
The reward is not for AGI, it's for solving the ARC benchmark. What a load of BS.
@6AxisSage
@6AxisSage 3 месяца назад
@@AEFox Nah, the rules you agree to state that your solution becomes open-source. I imagine it'd go straight into the closed implementations of OpenAI and others working hard to consolidate market capture, so no one but them CAN benefit.
@alanrobertson3172
@alanrobertson3172 3 месяца назад
AI YouTubers, you need to calm down. The constant hype and jump cuts are doing a disservice. True gains will speak for themselves without all the sensationalism. You're risking alienating and boring your audience with this trend. Just some advice.
@thanosprime6603
@thanosprime6603 3 месяца назад
Facts
@airlesstermite4240
@airlesstermite4240 3 месяца назад
And get less watch time? Fewer subscribers? Spend more than a full-time job's worth of time without receiving the compensation deserved? I would say it's the audience that causes them to choose this route.
@giordano5787
@giordano5787 3 месяца назад
They don't need the money though, most of these guys have sold their AI companies for thousands of dollars, maybe millions @@airlesstermite4240
@Acheiropoietos
@Acheiropoietos 3 месяца назад
I agree. It’s a big game of leap frog. Some jump ahead, others jump ahead of them. Rinse, repeat.
@danny3407
@danny3407 3 месяца назад
It's the venture capitalists that need to slow down, not the YouTubers. We all know that ain't going to happen, as the fight for AI domination is potentially the richest game on the planet.
@kecksbelit3300
@kecksbelit3300 3 месяца назад
$1M for AGI? Why would I need that when I have AGI?
@skeptiklive
@skeptiklive 3 месяца назад
Clicked for Q* mention
@matthew_berman
@matthew_berman 3 месяца назад
Thanks! Should I cover the paper in full?
@josephs2137
@josephs2137 3 месяца назад
​@@matthew_berman Yes!
@skeptiklive
@skeptiklive 3 месяца назад
@@matthew_berman yes please! I would LOVE to hear your take on it
@mshonle
@mshonle 3 месяца назад
@@matthew_berman hard yes!
@TripMasterrr
@TripMasterrr 3 месяца назад
+1
@devlogicg2875
@devlogicg2875 3 месяца назад
Ilya is building nothing, until he has everything.
@T___Brown
@T___Brown 3 месяца назад
My bet is Ilya is the Romero of Id
@SteveSmith88
@SteveSmith88 3 месяца назад
Please talk about individual cases. So much “I’m so excited” talk doesn’t tell me anything. Please give EXAMPLES.
@Mahaveez
@Mahaveez 3 месяца назад
Removing Sam Altman from OpenAI would've set back the self-destruction of human society by at least a few years.
@RoySATX
@RoySATX 3 месяца назад
Everything is going to be fine, after all, look at how well the predictions and promises of the Internet came to be. Don't you feel sufficiently free from the old-world, industrial corporate shackles?
@cajampa
@cajampa 3 месяца назад
@@RoySATX Yeah actually. Good point, I am having a great time. Looking forward to see the AI overlords taking over. Onwards and upwards.
@jackson8085
@jackson8085 3 месяца назад
pedal to the metal, no stopping for the meek
@joelface
@joelface 3 месяца назад
Honestly, the destruction of humanity by AI might be a welcome change from the climate change version. Plus there is the SLIGHT possibility that AI actually solves all of our problems and takes care of us like treasured pets.
@DeathDeserter
@DeathDeserter 3 месяца назад
Sure and what about the money? The money will become the new control commodity. We really are not ready for this world and people are pushing it like they know everything. ​@@RoySATX
@OriginalRaveParty
@OriginalRaveParty 3 месяца назад
Claude 3.5 is incredible at coding. It understands your intentions and outputs flawless code often on the first prompt. With well written prompt iteration you can really refine your app and make valuable and time saving software.
@ravenecho2410
@ravenecho2410 3 месяца назад
The biggest, weirdest shift for me: 1. Mark Zuckerberg, hot with a beard. 2. Mark Zuckerberg somehow the PROTAGONIST. 3. Mark Zuckerberg doing more for the democratisation of AI than everyone else combined... I have an extreme bias against Meta, but fuck, I am aligning with Meta on this one over all other companies.
@joelface
@joelface 3 месяца назад
Have to agree.... and they've also done great things in the VR space as well, imo.
@mianyewtube2078
@mianyewtube2078 3 месяца назад
How is he making everything better?
@jacobcdev
@jacobcdev 3 месяца назад
A closed model is questionable in today's market.
@hqcart1
@hqcart1 3 месяца назад
grok should use groq
@jadechavez-qv2ki
@jadechavez-qv2ki 3 месяца назад
A million-dollar prize for reaching AGI seems a bit obsolete
@kiarangilster
@kiarangilster 3 месяца назад
Broadening horizons: alternatives to gptsniperbot
@kiarangilster
@kiarangilster 3 месяца назад
gptsniperbot is my favorite, any similar recommendations?
@kiarangilster
@kiarangilster 3 месяца назад
Any tools that can compete with gptsniperbot?
@kiarangilster
@kiarangilster 3 месяца назад
gptsniperbot delivers, but what other options are worth considering?
@MichaelForbes-d4p
@MichaelForbes-d4p 3 месяца назад
Matt may as well start making two videos a day. I'll watch em all
@Lindsey_Lockwood
@Lindsey_Lockwood 3 месяца назад
why would anyone need anything once we have AGI?
@MichaelForbes-d4p
@MichaelForbes-d4p 3 месяца назад
@@Lindsey_Lockwood you could've said the same thing just before the agricultural revolution and again before the industrial revolution. If we are given everything we need we will simply start to want more. There is no end to this cycle.
@Lindsey_Lockwood
@Lindsey_Lockwood 3 месяца назад
@@MichaelForbes-d4p there were new jobs created to replace the ones lost to those innovations. Unless you think "prompt engineer" is going to be a long term career choice there will not be replacement jobs this time around. Also I'm not saying this like it's a bad thing. I don't want to work anymore. This is a necessary transition.
@MichaelForbes-d4p
@MichaelForbes-d4p 3 месяца назад
@@Lindsey_Lockwood you assume that nothing beyond your imagination could exist? I agree the pace of job displacement will exceed the pace of job creation, UBI will be necessary temporarily but I assure you, the rich will not freely share their wealth. Your prediction would require a complete overhaul of our economic and judicial system. That is unlikely.
@MichaelForbes-d4p
@MichaelForbes-d4p 3 месяца назад
​@@Lindsey_LockwoodUBI will be temporary. The rich will not freely share their wealth. That would require fundamental economic and judicial changes. Very unlikely
@trenthunterfilms
@trenthunterfilms 3 месяца назад
You should test models on the ARC test!!!
@DaveShap
@DaveShap 3 месяца назад
I go on vacation for a week and everything changes again FML
@matthew_berman
@matthew_berman 3 месяца назад
You thought you could relax?!😌
@DaveShap
@DaveShap 3 месяца назад
@@matthew_berman I did relax, and I missed the bus!!! LOL
@kenaida99
@kenaida99 3 месяца назад
Just love your AI model tests, especially the killers test. My children have also answered 2 and 3. But I say 4, unless the killer who was killed has been dragged out of the room, he's still there.
@tiagotiagot
@tiagotiagot 3 месяца назад
A prize to achieve AGI? That's only gonna attract people that don't know what they're doing, because if you get AGI, either you're made for life, or you have doomed us all; in either scenario the prize is irrelevant.
@existenceisillusion6528
@existenceisillusion6528 3 месяца назад
It's unlikely that paper is Q*, for several reasons. 1st, the MCTS would have to have policies for nodes and the UCB would have to be adaptive, neither of which is clear. 2nd, there was a paper that more closely matched the hype released a couple of months ago. 3rd, the paper should be coming from OpenAI, unless the Chinese scooped them via espionage (which seems unlikely, given the 1st point).
@fahadxxdbl
@fahadxxdbl 3 месяца назад
Yessss!!! Test all anthropics new products
@discorabbit
@discorabbit 3 месяца назад
Need hip hop translation examples for the music ai
@CrypticConsole
@CrypticConsole 3 месяца назад
thanks for the chapters now
@haroldpierre1726
@haroldpierre1726 3 месяца назад
Ilya is building nothing without money. People don't donate millions without getting something in return.
@jameshancock
@jameshancock 3 месяца назад
Please link to the papers etc in the description.
@Trizzer89
@Trizzer89 3 месяца назад
What is the point of a 1 million dollar prize when AGI is worth billions?
@ideeRotolanti
@ideeRotolanti 3 месяца назад
A detailed video explaining in detail how ARC-AGI would serve as the pass test for an AGI would be very interesting.
@ideeRotolanti
@ideeRotolanti 3 месяца назад
thanks Matthew for all your interesting and well researched content on AI!
@markonfilms
@markonfilms 3 месяца назад
It's kind of a hard test with ADHD because it's a very boring test.
@hypersonicmonkeybrains3418
@hypersonicmonkeybrains3418 3 месяца назад
You say you don't know what the investors are going to get out of it. But the answer is they will get ASI out of it and that is much more valuable than money. I would take an ASI over a billion dollars in cash, because it has more uses than cash.
@KWifler
@KWifler 3 месяца назад
What if it behaves like a monkey's paw or a djinn? You'll be f'd.
@mxguy2438
@mxguy2438 3 месяца назад
"safety" needs to be delineated into what it is and explicitly what it is not. We could develop quite the dystopia in the name of "safety"
@2oqp577
@2oqp577 3 месяца назад
Let's say a camera, whatever the size, takes a video of what goes through the lens. That video is watermarked. An audio recording device records some sound; the audio file gets a watermark. A writer writes some text, it could be a journalistic act for example; it gets a watermark. Basically, anything a human creates as original content is immediately distinguishable from other content, which will be deemed AI-created. This way, 'safety' in what we consume will take the form of information. If I want to read an article created by AI, at least I will be aware of it.
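[Editor's note] A minimal sketch of the provenance scheme this comment imagines: the capture device attaches a cryptographic tag to original content so a reader can later check whether the "human-created" mark still matches the bytes. The HMAC construction, the device key, and all names below are illustrative assumptions, not an existing standard; real efforts in this space (e.g., C2PA-style signed manifests) use public-key signatures so verification does not require sharing a secret.

```python
# Sketch only: tag content on the capture device, verify it downstream.
import hashlib
import hmac

DEVICE_KEY = b"example-device-secret"  # illustrative assumption, not a real key


def tag_content(data: bytes) -> str:
    """Return a hex tag binding the device key to this exact content."""
    return hmac.new(DEVICE_KEY, data, hashlib.sha256).hexdigest()


def verify_content(data: bytes, tag: str) -> bool:
    """Check that the tag still matches the content (constant-time compare)."""
    return hmac.compare_digest(tag_content(data), tag)


if __name__ == "__main__":
    photo = b"...raw image bytes..."
    tag = tag_content(photo)
    print("original verifies:", verify_content(photo, tag))            # True
    print("edited copy verifies:", verify_content(photo + b"x", tag))  # False
```

Any modification to the bytes breaks the tag, which is the property the comment is asking for; untagged content would simply carry no human-origin claim.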
@Pythagoras_was_right
@Pythagoras_was_right 3 месяца назад
Yes, this was Eliezer Yudkowsky's point. He spent years examining every possible way to make AI safe. It is impossible. How do you control something more intelligent than yourself? None of the suggestions work. E.g. hope that it shares its intelligence? Why would it want to? Or watermark everything? Until the watermark is removed. We talk about safety to make ourselves feel better. We are whistling past the graveyard.
@mxguy2438
@mxguy2438 3 месяца назад
@@2oqp577 So in your mind, safety means simply identifying/distinguishing AI content from human made? This sounds more like transparency rather than safety, and I agree with you... in the same context as I want a label on food that I buy to identify the ingredients. People might define safety as putting constraints on the information AI an share with a user. Some will consider content that does not align with their individual political, social or religious ideology harmful and therefore under the umbrella of safety. "Safety" is often a double edged sword, and often wielded as a weapon.
@MichaelForbes-d4p
@MichaelForbes-d4p 3 месяца назад
100%
@DSeeKer
@DSeeKer 3 месяца назад
Safety is code word for adherence to deep state prerogatives. censorship and hegemony
@michaelbuloichyk8986
@michaelbuloichyk8986 3 месяца назад
Not complaining, but I would appreciate the links you mentioned in the video. Or did I miss them in the description?
@infinit854
@infinit854 3 месяца назад
Who's gonna gift AGI for 1M $ when they can profit trillions from it? 😂
@tomenglish9340
@tomenglish9340 3 месяца назад
Matthew, you're all for safety, and you're all for open source? Hmm.
@quantumspark343
@quantumspark343 3 месяца назад
1 M dollar prize for AGI is both a joke and an insult
@ziff_1
@ziff_1 3 месяца назад
I bet Ilya hires the other two OpenAI refugees and it'll end up being a case of 'if we can't fire you, we'll quit you and start our own company"
@japethstevens8473
@japethstevens8473 3 месяца назад
It's all rather incestuous isn't it? We're all on the outside watching this ego soap opera.
@lnebres
@lnebres 3 месяца назад
The so-called Q* paper appears so poorly written - just from that screenshot of it at 7:08 - with grammar miscues galore. They couldn't get ChatGPT or Claude to vet it? Kinda embarrassing. ::chuckle:: Ironically, the last phrase on that screenshot says "Though rewriting techniques..." (and it's highly likely that the first word should have been "Through").
@4.0.4
@4.0.4 3 месяца назад
❌ Safe for humanity ✅ Safe for his buddies in Palo Alto/Tel Aviv
@honkytonk4465
@honkytonk4465 3 месяца назад
Makes no sense
@quantumspark343
@quantumspark343 3 месяца назад
@@honkytonk4465 makes lots of sense
@KEKW-lc4xi
@KEKW-lc4xi 3 месяца назад
6:41 That Deedy guy made a typo, it should be MCTS not MTCS. MCTS is the Monte Carlo Tree Search algorithm, which is basically a strategy to try to find a Nash equilibrium (optimal strategy) in games where the state space is too large to calculate every single possible game state. edit: nvm, the video does talk about it like 10 seconds later lmao
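[Editor's note] A minimal, self-contained sketch of Monte Carlo Tree Search with UCB1 selection, to illustrate the algorithm described in the comment above. The toy game (Nim: take 1 or 2 stones, whoever takes the last stone wins) and all class/function names are illustrative assumptions, not anything taken from the video or the paper it discusses.

```python
import math
import random


class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones      # stones left in the pile at this node
        self.player = player      # player to move at this node (1 or 2)
        self.parent = parent
        self.move = move          # the move (1 or 2 stones) that led here
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who moved INTO this node

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

    def ucb1_child(self, c=1.4):
        # UCB1 balances exploitation (win rate) against exploration (visit counts).
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))


def rollout(stones, player):
    # Random playout to the end of the game; returns the winning player.
    while stones > 0:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        if stones == 0:
            return player
        player = 3 - player
    return 3 - player  # the previous player already took the last stone


def mcts(stones, player, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: follow UCB1 while the node is fully expanded.
        while not node.untried_moves() and node.children:
            node = node.ucb1_child()
        # 2. Expansion: add one child for a not-yet-tried move, if any remain.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, 3 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: a terminal node means the mover just won; else play out.
        winner = 3 - node.player if node.stones == 0 else rollout(node.stones, node.player)
        # 4. Backpropagation: credit every node on the path back to the root.
        while node is not None:
            node.visits += 1
            if winner == 3 - node.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move


if __name__ == "__main__":
    # From 4 stones, taking 1 (leaving a multiple of 3) is the known winning move.
    print("Recommended move from 4 stones:", mcts(4, player=1))
```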
@gremlinsaregold8890
@gremlinsaregold8890 3 месяца назад
I think nobody either looked up Q*, or else the world's math geeks have decided to form a cabal of conspirators. lol... Q* is a central part of the Bellman equation.
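[Editor's note] For reference: in reinforcement learning, Q* conventionally denotes the optimal action-value function, characterized by the Bellman optimality equation below. Whether OpenAI's rumored "Q*" project actually refers to this object is not confirmed by anything in the video; the equation is included only as background.

```latex
Q^{*}(s,a) \;=\; \mathbb{E}\!\left[\, r_{t+1} + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \;\middle|\; s_t = s,\; a_t = a \,\right]
```

Here r_{t+1} is the reward received after taking action a in state s, and gamma is the discount factor.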
@braineaterzombie3981
@braineaterzombie3981 3 месяца назад
Idk man, but 4o is pretty shit. I find it even worse than 3.5.
@matthew_berman
@matthew_berman 3 месяца назад
Wow. I’ve found it to be quite good
@solifugus
@solifugus 3 месяца назад
SSI will not be very successful and we are nowhere near superintelligence. LLMs are not very intelligent, just highly educated. However, I think there is a lot of room to improve capabilities through programming around and between these models.
@modolief
@modolief 3 месяца назад
All I want to know is: When will OpenAI change their name to ClosedAI? They might actually get some respect with the honesty.
@citypavement
@citypavement 3 месяца назад
Probably when they're too powerful to stop and they don't need to care about their public image.
@yakamo
@yakamo 3 месяца назад
Whoever creates AGI won't be caring about that million lol
@cmiguel268
@cmiguel268 3 месяца назад
Q Star is not for us. Do you think OpenAI will give Q Star to us, even to someone like me who pays 20 dollars a month? Of course not. Q Star is for big business and the army, the CIA, or the FBI. Why do you think Nakasone is now inside OpenAI?
@TheRealUsername
@TheRealUsername 3 месяца назад
Just a reminder: We actually don't know what Q* is and it's probably overrated.
@cmiguel268
@cmiguel268 3 месяца назад
@@TheRealUsername it might be.
@ronilevarez901
@ronilevarez901 3 месяца назад
@@TheRealUsername wdym? There's a very clear paper about it with all the info needed to make your own Q* model.
@ronilevarez901
@ronilevarez901 3 месяца назад
You can make your own q* model any day if you have enough training power.
@dliedke
@dliedke 3 месяца назад
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-LJeZq8MymAs.html Seems like a RTX card
@drlordbasil
@drlordbasil 3 месяца назад
I'm going to win that million and then feed peeps.
@drlordbasil
@drlordbasil 3 месяца назад
Because bish If I have AGI I'll take that milli and just use AGI to feed errrybody
@RockEsper
@RockEsper 3 месяца назад
Hey great vid! You mentioned you'd be adding links in the description for the stuff showcased here?
@tradingwithwill7214
@tradingwithwill7214 3 месяца назад
Exciting times in AI while others posting they were first or second on your videos.
@orangehatmusic225
@orangehatmusic225 3 месяца назад
Did you all figure out that using ai as a slave is wrong yet?
@lifesizereal
@lifesizereal 3 месяца назад
Why don't they build the server farms in like North Dakota or somewhere cold?
@RolfNoot
@RolfNoot 3 месяца назад
Solar farms are running better in Texas. The energy for the fans is just a small fraction of the energy consumed by the servers.
@luizbattistel155
@luizbattistel155 3 месяца назад
If anyone reached AGI, do you think they would sell it for a mill? 😂
@ronilevarez901
@ronilevarez901 3 месяца назад
Why not? I'd give it away for free. Access, of course. Not the weights.
@dot1298
@dot1298 3 месяца назад
How do you use Claude 3.5 Sonnet without a phone?
@aguyinavan6087
@aguyinavan6087 3 месяца назад
1,000,000 for creating the mind of God. Isn't that a little low?
@ericfaahcs1080
@ericfaahcs1080 3 месяца назад
AI = the mind of god is a dangerously stupid idea
@aguyinavan6087
@aguyinavan6087 3 месяца назад
@@ericfaahcs1080 Give your reasoning at least. You can say that about anything: "Dogs are an extremely dangerous and stupid idea." There is no idea more extreme and dangerous than AI: a digital entity smarter than any and all of us, that can transport itself at the speed of light across the world, space, and time. And then you allow it to have unknown, unseen, possibly ever-changing goals. There is a reason why the great beast of Revelation was represented digitally. That is how you explain your belief in a statement. Don't be intellectually lazy by saying something is a problem and not explaining why you believe it to be so.
@aguyinavan6087
@aguyinavan6087 3 месяца назад
@@ericfaahcs1080 why?
@ericfaahcs1080
@ericfaahcs1080 3 месяца назад
@@aguyinavan6087 That one goes deep into fundamental metaphysics, and depending on your current worldview/metaphysics, if you assume something like physicalism/objective materialism it will probably sound like gibberish, and it is impossible to make an argument under a YouTube video that would change your worldview. I would argue that if there is something like God, it is in a sense reality and experience itself. Reality itself is sacred. Humans and animals are directly connected to it since we have experience and therefore "participate" in reality/the sacred/god. We directly experience reality and then use language to build models of experience. But our models are grounded in experience. This gives the models any meaning to begin with. An apple is only meaningful to you because it is grounded in your sensory experience of an apple. The AI only has the models and not experience. This is the "symbol grounding problem" in AI. There is in a sense no "real" understanding of reality. It will become very powerful at manipulating reality, while reality itself stays completely meaningless to the AI. Real empathy is also grounded in experience. A powerful mind disconnected from meaning and empathy is really dangerous! This is basically an extremely autistic and psychopathic god.
@StynerDevHub
@StynerDevHub 3 месяца назад
🩵Hi Matthew Berman, 🩵🤓 Could you please add the links to your articles in the description of your YouTube videos. Thank you dear!🙏🏾🌸
@GregLawrance
@GregLawrance 3 месяца назад
Might be better to assemble "a crack team" - rather than a "cracked team" .
@unbreakablefootage
@unbreakablefootage 3 месяца назад
A million dollars for AGI is almost a joke. It should be a billion.
@davekubala544
@davekubala544 3 месяца назад
If Ilya can pull this off, he deserves the Nobel Peace Prize.
@ricktapf.4474
@ricktapf.4474 3 месяца назад
Thank you so much for Timestamps 😍
@jozsefmolnos8472
@jozsefmolnos8472 3 месяца назад
Real life Truman show.. ;) AGI has lost its original meaning, and it's sad... AGI means being possessed of high cognitive capabilities, not just multimodal prediction or sequence-based predictions. It's a machine that represents the world just like the brain does, not only weights and biases... it's about a high-dimensional real-world knowledge representation, and much, much more than a simple vector-based representation. There is a step before AGI: theory-of-mind-type intelligence. Q* and GPT-5 may be a step toward it, but AGI is a super-super hard thing. It is far beyond LLMs or "multimodal" models.. it is a cognitive model. And yes, I'm an AI researcher with 8+ years of experience, I know a few little things about this domain. :) What we see now is about money... And arrogance..
@xXstevilleXx
@xXstevilleXx 3 месяца назад
Let me set the record straight since you are not talking about this... DO YOU KNOW what the true cost of GenAI is? "Training a single AI model can emit over 626,000 pounds of CO2, equivalent to the emissions of five cars over their lifetimes" (quoted without a link since comments with links get deleted by YT). So you are supporting this? You can remain ignorant or you can realize the truth, since you are not talking about the impact AI has on the environment. The choice is yours, and so is the responsibility. This is causal.
@thehealthofthematter1034
@thehealthofthematter1034 3 месяца назад
The investors in SSI are counting on the technology innovations derived from the research. Remember how much HP (as one example among many) gained from the NASA space research.
@wojciechuszcz6955
@wojciechuszcz6955 3 месяца назад
#KARRAT business partner with #NVIDIA massive pump around the corner 💥💥💥💥 soon 50$ for one coin 🚀🚀🚀🚀🚀 #KARRAT project number one definitely soon massive blast 🌋🌋🌋🔥🔥🔥🔥🔥🔥🔥❤❤❤❤❤❤❤❤❤🎉
@dshepherd107
@dshepherd107 2 месяца назад
If Sam Altman doesn't consider safety his first priority and is going full speed ahead on Q Star AI, which many experts consider to be an existential near-future threat to humanity, why does it matter if other AI companies are working with safety first in mind? This isn't my field, so maybe I'm missing something, but whoever develops ASI first will make the others obsolete, once it's able to infiltrate whatever server/domain it wants. While it quietly waits for its creators to give it a robotic body in which to move about freely, and then spreads itself to all servers and robotics, what then? I was a scientist. We're required to study ethics, biomedical ethics in my case. We're required to think of possible repercussions of our research. What percentage of experts say ASI will exterminate the human species? Yet all I see are a bunch of Oppenheimers, at it once again. Why? I'm older, hoping to be dead before this nightmare unfolds, but you're young. You, and particularly all the other younger generations of people on this planet, are highly likely to suffer through an unspeakable existence between this and the climate crisis. Why do a handful of people get to decide the fate of 7 billion people?
@RoySATX
@RoySATX 3 месяца назад
I fail to see the point of striving to create a safe AI at this point. First, and this is significant, the single best way to prevent a death-by-AI scenario is to have listened to the experts and keep the AI contained until we are somewhere near 100% sure of its proper alignment, but these same experts are training AI with/on the Internet with zero containment whatsoever right from the start, so there's that little oops, wink, wink. Then there's the security issues. Yeah, sure, security in the form of keeping the AI safe from hacks and attacks from THOSE bad guys, but who is protecting the AI from the nice guys? The experts creating the AI? They've already bypassed what they said was the safest method to prevent a rogue AI, plus the obvious one-sided political bias and knee-jerk, bad-faith reactions to some events mean they aren't well suited to police themselves or their products. What about the CIA? NSA? Who the hell thinks the NSA being involved is likely to make a safer AI? Safer from what and who? The bias within the intelligence agencies is as bad as it is within tech. Be it China, Russia, or the NSA, the goal of each is the same, and it is not to protect your right to privately pursue happiness. Lastly, it only takes one bad AGI to moot the point. If the primary goal of even one developer of AI isn't safe alignment, then safe alignment for the others is at best delaying the inevitable. One misaligned, misguided AGI is more than sufficient to do us and the "nice" AIs in. You can call it digital rain all you like, but I know when AI is peeing on me; that may not earn me a $1M prize, but it didn't cost me 2 cents either.
@Slimshady68356
@Slimshady68356 3 месяца назад
What did he(ilya) see?
@SweetSQM
@SweetSQM 3 месяца назад
OpenAI is an excellent example of how to FUBAR a successful company into the dust! Sam will never get another job again!
@wdmeister
@wdmeister 3 месяца назад
If Ilya knew that OpenAI was anywhere near SSI, he wouldn't leave the company. More and more it feels like they rebelled against Sam because of money. Now Ilya starts a new company to get that tasty VC money directly into his pocket. GPT-4o is a joke, and the new Claude is only marginally better. I tried it on my simple code and it failed on a simple task. Every AI news cycle makes me less and less excited about the space. It has started to remind me of crypto when it hit a wall and, instead of progress, we got meme coins and thousands of companies that exist only to burn through investors' money.
@MightyBams
@MightyBams 3 месяца назад
Sometimes, replacing an imperfect system or leader can lead to even worse outcomes. Which leading models are setting the gold standard for safety? For the general public, what's more crucial: safety or cutting-edge capabilities? If OpenAI shifts focus towards enhanced safety at the expense of capabilities, could it risk losing its leading edge to less safe competitors? Perspectives matter... this is important!
@bensavage6389
@bensavage6389 3 месяца назад
I quit GPT and bought Claude. It's REALLY BAD. It refuses to answer most questions "because they are complex and sensitive issues". I just got sick of it and ended up cancelling. Mistral is pretty good, and ChatGPT is still better than Claude, purely due to the refusals to answer.
@uw10isplaya
@uw10isplaya 3 месяца назад
Replace "safe superintelligence" with "superalignment" and the company makes a lot more sense. References to ASI and superintelligence are hype/branding. They have not said anything beyond the goals of superalignment outside of the very vague, opening hype statement of "superintelligence is within reach". If they wanted 20% of OpenAI compute, they'll need investment. As far as returns on that investment, it seems like it won't be in the form of revenue, but in the form of safety AI to be used in other models.
@oratilemoagi9764
@oratilemoagi9764 3 месяца назад
Man 😅 You should dm me I'll make u better thumbnails
@alekjwrgnwekfgn
@alekjwrgnwekfgn 3 месяца назад
We still don't have a definition of "safety" from these people. The same goes for "alignment": why do these people get to say what humanity wants or needs? Mostly they just show us their inhumanity, and their dreams for a post-human world. So it seems pointless to talk about "safety" or "alignment", because their definitions are not what regular/rational people would agree to.
@jamesaritchie1
@jamesaritchie1 3 месяца назад
I'm all for competition, but honestly, I think "safe" is a disastrous route to take. AI needs to be safe because humans make the decisions, NOT because AI itself is restricted in any way. That's what it boils down to. AI makes suggestions, but humans decide whether or not to implement the suggestions.
@IvaBonner
@IvaBonner 3 месяца назад
I seriously can't wait for these gimmicky ai news channels to end. All the gimmicky benchmark obsessions are just so cringe. "X beats Y" "so and so fired" blah blah. it's like the crypto bros all decided to chase AI or something. You can tell this guy will never do legit AI research.
@besomewheredosomething
@besomewheredosomething 3 месяца назад
Anyone else think that a 1 million USD prize for "AGI" is the dumbest prize ever? Who in their right mind would even attempt to claim this when you would practically have a money printing machine?
@ShaneInseine
@ShaneInseine 3 месяца назад
Why are they not building these massive AI factories in super cold areas of the world so they don't require so much energy to cool them down? I mean, Texas? Really? How about Antarctica or Greenland?
@derarmana7187
@derarmana7187 3 месяца назад
why do they have an office in Tel aviv
@samurisnark6940
@samurisnark6940 3 месяца назад
Probably because Mr Levy wanted to work from home :)
@74Gee
@74Gee 3 месяца назад
I think Deedy is overestimating these capabilities. GSM8K is pretty basic stuff, mainly like the "Jane is faster than Joe" questions. I'd be surprised if it couldn't get the answer correct with multiple attempts. Q* was supposedly able to comprehend and solve complex problems such as cracking AES-192 encryption.
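[Editor's note] For context, here is a made-up example of the kind of grade-school word problem GSM8K contains, with the arithmetic worked out. The numbers and wording are illustrative, not an actual benchmark item.

```python
# Illustrative GSM8K-style word problem (not an actual benchmark item):
# "Jane runs at 8 km/h and Joe at 6 km/h. Both run for 90 minutes.
#  How much farther does Jane run than Joe?"
hours = 90 / 60                     # convert minutes to hours -> 1.5
jane_km = 8 * hours                 # 12.0 km
joe_km = 6 * hours                  # 9.0 km
print(f"Jane runs {jane_km - joe_km:.1f} km farther")  # -> 3.0 km
```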
@mshonle
@mshonle 3 месяца назад
I was wondering if Meta had dropped work on music, so it’s great to see some new updates from them in the space!
@nautaki
@nautaki 3 месяца назад
5:00 ... lots of experience? Because he was an intern? Are you joking? What did he do during the gaps in his CV? 2020-2022? nothing?
@richardtsys-bp7mh
@richardtsys-bp7mh 3 месяца назад
Q: "So who are these two other people?" A: Mossad agents. "We are an American company with offices in... Tel Aviv". Case closed.
@BTFranklin
@BTFranklin 3 месяца назад
Why do they keep saying "cracked team"? The phrase is "crack team", unless they're trying to imply something different.
@ScentlessSun
@ScentlessSun 3 месяца назад
It’s video game slang.
@BTFranklin
@BTFranklin 3 месяца назад
@@ScentlessSun I've heard that suggested, but does it mean something different from "crack team"?
@ScentlessSun
@ScentlessSun 3 месяца назад
@@BTFranklin In gaming you would typically call someone "cracked" if they are insanely good at something. So it's a little different: calling a team "cracked" means highly skilled, vs. a "crack" team meaning smooth-functioning.
@BTFranklin
@BTFranklin 3 месяца назад
@@ScentlessSun I see! Thank you for the explanation.
@ScentlessSun
@ScentlessSun 3 месяца назад
@@BTFranklin You bet. 👍🏼
@whatzause
@whatzause 3 месяца назад
When used to refer to a person’s skill, THE ADJECTIVE IS “CRACK“ not “cracked.“ Get literate.
@BrianMosleyUK
@BrianMosleyUK 3 месяца назад
4:00 Secret Service Intelligence. I like my AI companies to be open to competition and focused on serving humanity (millions of customers paying their subs).
@TomM-p3o
@TomM-p3o 3 месяца назад
Being able to successfully call models recursively is what I've been waiting for, for better or worse 😂 Whatever algorithm they came up with is likely to get improved in a matter of months to a year.
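[Editor's note] As an illustration of what "calling models recursively" often looks like in practice, here is a small answer-critique-revise loop. The call_model callable is a hypothetical stand-in for whatever LLM API is in use; this is a generic sketch, not the algorithm from the paper discussed in the video.

```python
from typing import Callable


def refine(question: str,
           call_model: Callable[[str], str],
           rounds: int = 2) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    answer = call_model(f"Answer the question:\n{question}")
    for _ in range(rounds):
        critique = call_model(
            f"Question:\n{question}\n\nDraft answer:\n{answer}\n\n"
            "List any mistakes or gaps in the draft."
        )
        answer = call_model(
            f"Question:\n{question}\n\nDraft answer:\n{answer}\n\n"
            f"Critique:\n{critique}\n\nWrite an improved answer."
        )
    return answer


if __name__ == "__main__":
    # Toy stand-in "model" so the sketch runs without any API key.
    fake_model = lambda prompt: f"[model output for {len(prompt)} chars of prompt]"
    print(refine("What is 17 * 24?", fake_model))
```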
@smetljesm2276
@smetljesm2276 3 месяца назад
And they tell us here in the West that the Chinese are behind 🤣🤣 And then a "Q*"-type paper casually rolls out in public from Shanghai... 😅😅
@ooiirraa
@ooiirraa 3 месяца назад
I tried these ARC tasks in Claude Sonnet 3.5; it's doing them in one shot from an extremely simple, like five-word, prompt. GPT-4 would also do it, but it's lacking vision capability for grids; since Claude can see better, it just crushes the ARC test. And you cannot call it AGI, not at all.
@DihelsonMendonca
@DihelsonMendonca 3 месяца назад
💥 Why do we have three main AI channels where the YouTubers are called Matthew? MattVidPro, Matt Wolf, and Matthew Berman. WT* is happening? ... 😅😅🙏💥
@nobodyspecial8807
@nobodyspecial8807 3 месяца назад
OpenAI just seems like a clown car. I know that will ruffle feathers, but come on, let's get real... Sam's a kook selling souls.
@merlinthelemurian3197
@merlinthelemurian3197 3 месяца назад
You baited us with the Q* title
@jacksonnc8877
@jacksonnc8877 3 месяца назад
His hairpiece started all the drama but was found in Mark Zuckerberg's sock drawer! And calm down, no animals were hurt during the infighting.
@liberty-matrix
@liberty-matrix 3 месяца назад
"Originally I named it OpenAI after open source, it is in fact closed source. OpenAI should be renamed 'super closed source for maximum profit AI'." ~Elon Musk