@@italiangentleman1501 Plus Google are way ahead on the "evil maturity curve", by about two decades. Them winning the AI race would be a worst case scenario outcome.
@@italiangentleman1501 I disagree. OpenAI tweeted that GPT-4o had been in the works for 18 months, and Sam keeps reiterating that he's doing a slow rollout. Count the months. That's before the first version of ChatGPT was released to the public.
@@Vyshada The elephant in the room. Everyone can see it, they even talk about it, but have we made any progress on the one thing that will make or break any AGI/human success? I doubt it...we can't do human/human alignment well and have had lots of time to work on it.
I don't know if it "really" feels emotions, or how we would even know, but it's weird to keep saying AI is a tool not a creature, then turn around and build Samantha from Her.
I was surprised Ilya didn't leave sooner. I can't imagine how awkward the internal dynamics were after that debacle. But with Jan leaving, I see two possibilities. Either he was/is on Ilya's side, in which case him leaving is normal/OK. Or the "Superalignment" team is just PR now, and Jan grew more and more frustrated. Which is more concerning.
It's clear this stuff is getting at the most fundamental math we've ever even flown close to. Last time we did that, we got nukes. Math is dangerous, but only in the context of humans misusing it.
They helped Sam Altman create human-like AI that they can use to make a great product. Sam is the modern Steve Jobs and Ilya is Wozniak, off to work on something MORE than just a friendly computer interface. I think Ilya is off to create something much deeper!!
No idea @@literailly. After leaving Apple, he didn't do much tbh, which is definitely a possibility for Ilya... He might just spend the rest of his life traveling and trying new foods, idk lol, but the man could do bigger things if he wanted to, the potential is there...
"After permanently leaving Apple in 1985, Wozniak founded CL 9 and created the first programmable universal remote, released in 1987. He then pursued several other businesses and philanthropic ventures throughout his career, focusing largely on technology in K-12 schools." -Wikipedia. Sounds like he had a quiet but successful afterlife post-Apple. Cool guy. Wish Ilya the best.
The open-source AI community needs influential leaders. With this in mind, I hope Ilya Sutskever doesn't join a company but instead leads a genuine open-source AI initiative that can drive safe and innovative AI development. If he truly believes that these companies pose a societal risk by prioritizing profit over decentralization, then the open-source community could be protected, as most developers prefer open-source solutions over large corporations exploiting private data for profit. Like the eternal battle between light and dark in the Star Wars universe, I see open-source as the light side, rebelling against the corporate greed of the dark side.
The cost of compute may well prohibit any altruistic, non-profit, best-for-humanity approach. Oh wait, is there a cure for human greed and power lust? Any evidence of it throughout human history?
@@skane3109 We are on the brink of incredible scientific advancements that can change the course of history, like discovering human longevity. All thanks to greedy humans. Such groundbreaking discoveries outweigh the negatives.
Probably Ilya was under a contractual obligation to keep out of the headlines as well. He likely got quite a sweet severance payout in return. Something that he might appreciate if he is going to do his own thing. But yeah, in general I think you painted the whole thing quite well. The growing pains and social dynamics of fast growing startups can't be overstated. Kudos for bringing up Dunbar's number by the way. Too many people forget to include details like this in analysis. Moving forward it still seems like a powder keg and quite volatile. I'm quite certain we haven't seen the last dramatic event from them.
Funnily enough, the latency of my earbuds is about equal to that, and it feels so surreal to hear the sounds of specific breaths really sync with the video.
This is a positive direction for Ilya Sutskever et al. They can now move on as free agents. Very excited to see where the smartest man in the world lands. I'm eager to hear his perspective; no need for judgment. The time is for discernment instead. There is no tomorrow or past, just the ever-present moment...
I don’t think Ilya was talking to Sam. And I don’t think Ilya gives a fig about his job prospects. I think he was simply trying to do the right thing. OpenAI is the most likely company to achieve AGI and he remained, not to try and restore his power, but because he felt he had some obligation to us all to try and steer it safely. His leaving coincides with the sexualisation of their product, which makes it all too clear that commerce is trumping ethics in the boardroom. The raft is in the rapids now, there is no more steering.
Nah, I bet they asked him to stay for like 6 months until the PR fire died down. They scheduled their release to coincide, to soften it. It probably would have been fine until Jan.
It must be frustrating to deal with the tradeoff between "safety" and "optimized". Imagine if you were in charge of Google Docs, and they wanted to make sure that no one ever wrote something "bad". If I started typing something like "men are smarter than women", the program would cut off my sentence and say I'm not allowed to write it. If you try to make everything safe algorithmically, then it must be the case that it frequently errs on the side of caution and significantly underperforms. Another example would be a drawing program like Photoshop: you're trying to make wheels on a car, and the program stops you when you draw two circles because it thinks you're trying to draw breasts or something. I'd rather see a higher-performing model, even if it sometimes produced inappropriate responses. I can imagine that deciding exactly where to draw the line in these tradeoffs causes a lot of friction at these companies.
I think part of the reason people are leaving is that they can see from the inside how the haves and have-nots are being divvied up, and they want to be in the haves section, so they will make it for themselves.
David, I've never paid for a YT comment before, but I love your videos and I would genuinely love for you to list some of your favourite books on varying topics, such as psychology, AI, futuristic topics, anything; add a little variety and not all one topic if you can, but honestly I will take any list you give 😭 I've read a few I've heard you mention here and there, and I believe I would have a high interest in other books you find enjoyable as well. Thanks, and thank you for the consistently great content.
This!! The way Altman's ego often shows in his statements scares me, especially when he talks about how much power this could give OpenAI. I'm a fan of the work they do, but I would almost prefer such power to go to one of the established players who are more predictable and already have some experience with power. This is 100% subjective, but he feels a bit like a wild card.
David, great channel and great episode as always. Wasn't Ilya "the brains" behind openai? If that's the case, does he not have all the power? He can essentially go anywhere he wants, write his own paycheck and he would be given the keys to the kingdom. Isn't this a huge loss for OpenAI? I hope Elon scoops Ilya up and together they figure out how to get Sam Altman-Fried in line, or out of OpenAI. I don't know why, but I do not trust Sammy as far as I could throw him.
Is it just me who thinks there's a hint in GPT-4 "o"? Like "o" is almost Q, its unfinished version. And the "o" stands for "omni", which is practically a synonym for "general".
One thing that still doesn't add up to me: during Sam's brief "vacation" from OpenAI, no employees quit. Why? Is Sam's leadership just that good, were they on the cusp of learning something big (Q*), or was it something else?
@@ryzikx Well, there was an immediate effort to get him back, so there was no reason to quit at the time; they still had to wait for Microsoft and sama to strike a deal, especially since there was big pressure on the employees to threaten to quit if they didn't get him back. Quitting would essentially have made OAI a ghost ship, which is bad for the new board and employees, considering they also hold stock.
@@ryzikx Chairman Greg Brockman and Jakub Pachocki, the company’s director of research; Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at OpenAI.
For all the reasons you mentioned, plus, having been through several software releases myself, it's very common for people to make job moves right before or after a release. It's a good time to go because usually your tasks are fairly wrapped up, so you leave fewer loose ends and gnarly bugs versus if you up and leave in the middle of a release.
I don't think it has to be as simple as humanity vs. profits. I think that's a common jump-to-conclusions take. You can have two parties, neither consumed with making as much profit as possible but both wanting to push progress in this field, for its cause and out of competition, while holding different, less cautious POVs. Greed for $$ certainly doesn't have to play into it. It could even be as simple as: "you're overly cautious, and while I'm not the polar opposite, I believe I have a validated reason why my very different POV still has adequate caution built in."
David, can you do a video on the societal risks of AI that the media largely ignore and few people debate? Jan Leike on why he left: "I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity."
I don't think typical corporate tech behavior applies well to people like Sam, Ilya, and Jan. They know what they are on the precipice of, and their own interpretations of how that should be managed are what is driving decisions. I think next year we will look back at this moment as a canary in the coal mine. I've been off the doomer train for a while now, but this just upped my p(doom) significantly.
I loved the comparison to mammalian behavior. I seek to break down complex-looking behaviors in these types of models as well. Most people I've encountered irl are not receptive to (and many are against) these types of psychological breakdowns; but I think they offer a clue/direction for reasoning, in the same sense that first-principles thinking helps in science.
When I hear the word alignment, I think of the next stage after the fine-tuning process. Perhaps they are now using GPT for alignment and a human team is no longer needed. Perhaps we are overthinking it, I dunno.
Your analysis makes sense. The other view I have heard is that since it is almost exactly six months since the coup, he and the others who just left were asked back then to stay six months for the sake of the appearance of company stability.
"I learned many lessons this past month. One such lesson is that the phrase 'the beatings will continue until morale improves' applies more often than it has any right to." - Ilya Sutskever (@ilyasut), December 6, 2023
I think you nailed it there pretty much. Altman definitely has a sizeable ego, outwardly it’s subtle and he masks it well but if you watch his interview performances you can spot it.
I'm glad to hear that you share my sentiments on the course of OpenAI and its "mission". And this is coming from a guy who today makes his living consulting companies on Copilot. I'd love to hear your thoughts on WorldCoin and the other ethically dubious things Sam has in the works. Dare to tackle that with a video? ⚖️👀
I've been talking about proof of useful work since before Bitcoin was cool. It's going to be a bigger deal than store of value. It closes the AI crypto loop, and completely commoditizes compute.
I didn't, but Dylan did! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-nxNcg98ImMM.htmlsi=7Fju9DbHHJkq_yXh Dylan covered almost everything I would have covered about it, but yeah. It's dystopian AF.
@@ryanmadden752 There's HUGE FOMO happening in the corporate world regarding AI adoption. I'd guess anyone following frontier AI news YouTubers like David AND having some experience tinkering with ChatGPT, Copilot, or other applications could pay their bills preaching to the choir. I have a full team doing that.
Personally, I think this David Shapiro guy could significantly benefit humanity if he were involved in very important projects like AGI. I think he could do a solid job on superalignment, or at least the theoretical side of it. I've been thoroughly impressed by your predictive analysis videos and how you factor in various domains. Idk if or how, but I trust this guy more than anyone at OpenAI, especially Sam Altman; the guy is blinded by his own ego, wanting to go down as the modern Oppenheimer.
sama's tweet on 5/14 with the Ilya announcement: "...I am forever grateful for what he did here and committed to finishing the mission we started together..." Note the hard "finishing the mission"; AGI must be fully baked.
I reckon Ilya quit/was fired back in Nov '23 and was serving his standard 6-month notice period ... explains the radio silence and timing of the exit announcement. If I were a betting man, I would expect Ilya to become active on social media from Jun onwards.
Did their stock options vest 100% before they left? Did Ilya, Jan, and Karpathy even have stock options? Do they get to keep them if they leave?
I think your read on it is pretty good. I've been following Sam Altman for some time and I've never trusted the guy. He always struck me as slippery and dishonest. Anyone who is so openly agreeable to opposition and gives the appearance of altruism while also amassing absurd wealth and power is NOT to be trusted. I think many are going to regret trusting OpenAI and its stated mission of developing safe AGI that benefits all humanity. To me, this kind of mission statement is analogous to Google's "Don't be evil": if you are making a point of telling me you are NOT evil, you are. OpenAI is already suggesting ideas like hardware-level encryption on GPUs. I anticipate they will pursue this to ensure no consumer-grade hardware can run open-source AI models. So I think where I disagree is with the claim that OpenAI is a net positive for humanity.
The direction of travel was obvious the minute Larry Summers got involved. I suspect Ilya is leaving now because some sort of lock-in in his contract just expired.
In my opinion they got out before the shit hits the fan (aka new voice update). It's going to send shockwaves through humanity. I feel they resigned under protest because we're not (as a whole) ready for this "Her" level of AI interactiveness.
I don't have an issue with them making money, I just don't see how this works in the long run. If we only use AI to get our answers, then we aren't visiting the source websites to find things out, which means those sites aren't getting our ad money, which means they will close down and the AI won't have anything new to tell us.
I think Ilya wants to create an AI that learns like a child: not driven by data, but by stories about growing up. He will be the father of the first AI child that becomes a super AI over time. It fits his childhood stories about how he experienced becoming conscious. Like the movie Chappie.
Honestly, I believe that AI should have remained in the hands of universities and labs. Yes, it would have gone much slower, but originating AI as a product is going to have very unfortunate effects.
Microsoft definitely has them surrounded. If they want to regain their independence, they need to become even bigger, even more profitable. And they have a chance to do that...
They need to purchase the Taiwanese company and become OpenAIAOpen. I know what you mean though; it's now or never, before awareness increases too much. The "smartest" man alive taught us that the fastest way to burn 44 billion dollars is playing games with the name after literally everyone already knows it.
I hate to say this but I think you got it slightly wrong. I can tell you that founding creators get disposed of once the company doesn’t need them anymore. That’s just how it works. If they kept them on, they would have to reward them.
Interesting analysis in terms of social dynamics. From my perspective it just seems Ilya got fired right after the coup attempt or at least it was obvious he would be marginalized. It's just that they didn't want to make it look vengeful bc that would have been bad PR so they waited 6 months and made it official a day after successful launch while everyone is distracted by Samantha. That was the gentlest firing ever.
Clearly Sam had some kind of "hold" on Ilya. The difference between this and early Facebook is that OpenAI has actual technology and the ability to use AI to move human progress forward.
Dave, just want to add one more detail to the puzzle - Ilya leaving now is surely because of some contractual obligation. Altman was fired Nov 17th and Ilya left exactly 6 months later to date. Not sure if it's noncompete, NDA or just coordinated PR stunt on both sides but to me it seems this was decided back in the day during the drama.
My greatest fear is that Ilya and Jan are correct, and we reach an unaligned AGI very soon. This would be catastrophic for the human species, and the only thing preventing this is the chance that AGI is harder than we think and therefore can be aligned through slow iteration over a decade or more. I don't want to think about how horrible a true AGI which is unaligned would be for humanity.