Full podcast episode: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-jvqFAi7vkBc.html Lex Fridman podcast channel: ru-vid.com Guest bio: Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.
Dude, it's happening now. They take more than ever, they waste more than ever. Everything is more expensive. War in Europe, war in the Middle East. A drug epidemic worldwide. No regular person in the Western world can afford a reasonable lifestyle. Shit is hitting the fan as I type this. Why, you may ask... Money. It's all about money, and human beings these days are CHEAP. We live in disgusting times, when we could all be brothers on this little planet, together. Yet we fight and kill each other over nothing more than silly things. All the time, constant war and destruction... because some guy in power said to do it. A lot of humans are animals. I don't want to kill a man. This world is turning us against each other once again. I don't like it.
00:00:08 Talk about building systems with specific capabilities rather than aiming for AGI. 00:00:52 Expect the development of quite capable systems by the end of this decade. 00:02:50 Consider major transitions like significant advancements in scientific discovery as impressive AI achievements. 00:04:11 Imagine interacting with AGI and asking fundamental yes or no questions about scientific theories or extraterrestrial life. 00:05:00 Prioritize gathering data and inventing technology to enable AGI to provide insightful answers.
I'll just say it: what we used overseas, and what most of our FSRs (these were folks from Stanford, Vanderbilt, Carnegie Mellon) used, were algorithmic tool sets, and AGI felt like it was around the corner in late 2010 [Palantir - Gotham - Metropolis] deployments. Reflect back on IBM: we already had Deep Blue and Watson, just precursors to melding the technology further, more so quantum compiling and processing. The reservation then, and I'm confident now, is what you allow the box to rip through data-set-wise. Look, when one box speaking with another, self-learning, created their own language to communicate with greater brevity and speed, the lab was shut down.
You can prompt GPT-4 to include questions if it would help answer your question, and it does. Or maybe you meant asking questions in a consciousness type of way.
@@krisjanispetrucena2642 "Why did the apple fall to the ground?" was the question Newton asked that led him to gravity. If AI could ask such internal questions and work on answering them itself, then that's AGI to me.
@@quicklaughs2092 Curiosity is not the same as the capability to ask questions. Questioning is already employed in autonomous AI feedback cycles; however, it is not yet considered AGI.
Should we expect an honest answer from him while this is part of Elon's lawsuit against him, and given that he has terms with Microsoft that depend on the definition of AGI?
To state the obvious: there has not been increasing skepticism towards science in recent years. What there has been, however, is increasing politicization of science, with one party in particular attempting to present itself as the "enlightened" party, the party of science, and calling on science to justify choices that were never scientific but purely political in nature. This has been met, in the sounder part of the population, with the degree of skepticism that it deserves.
The more I watch Sam Altman interviews, the more I get the impression he doesn't actually have any idea what he's talking about. I feel like he's going for mysterious intellect but missing the mark heavily.
@@raul36 Talk is cheap. It's achievements that count. Maybe they have already figured out how to mimic the human brain and just lack the necessary bandwidth. What he is trying to say is that, at the current rate, by the end of the decade we'll have enough processing power to simulate the human brain.
@@miroslavstevic2036 Having computational power has nothing to do with being able to simulate the human brain, starting with the fact that the brain is not just neurons. When he says sufficient processing power, he means we will have the computational power to simulate several million neurons connected in parallel, but the human brain operates not only with neurons but with several other types of cells, as well as neurotransmitters and much more, much of which is beyond the reach of current technology. The synapse between neurons is so "slow" not because it is slower than a processor, but because it transmits far more electrical and electrochemical information than any current device. There is a widespread myth that the brain is "slow." It is not: any biological neuron transmits more information in real time than any current device of comparable size. Remember: it is one thing to be able to simulate a set of neurons, and quite another to be able to simulate the human brain. They are two totally different things. The brain is not, by any means, just neurons.
I wonder how many scientific breakthroughs are essentially already there, lying in wait in data that has already been acquired, but that nobody has yet thought of connecting the right way. AI is good at things like that, as I understand it.
If Sam Altman's prediction of achieving impressive AGI by the end of the decade holds true, then many knowledge-based jobs could be significantly impacted. This would create a world where the nature of work itself is fundamentally different from what we know today.
Physicists would be cooked. Mathematicians would be cooked. Coders would be cooked. Engineers and architects would be cooked. Even physical labor would be gone once AGI designed the right robots to do any task. Total human redundancy.
@@aidandraper4096 For the most part yes. I don't believe all 8 billion people find much meaning in life outside of work. Most people need to feel useful and don't want to endlessly pursue hobbies and consume products. Many old people end up getting part-time jobs because of this.
If we are ever able to create real AI (which is now called AGI for some reason), we will not need experiments to verify new physical theories provided by this system. Why? Because that would mean we are able to model mind, consciousness, and freedom (free will and choice). However, this requires a transcendental jump from us: understanding understanding, and modelling the only thing that is able to model (our own mind). This is meta-understanding and meta-modelling, which is nothing but induction of induction (meta-induction). In other words: if we are able to model and understand our own mind (and thus copy it in the form of an AGI), then we have also automated understanding and logical induction. That would actually mean bypassing the Gödelian limit (i.e. eliminating the consistency-completeness trade-off).
I had exactly this thought yesterday and posted it on Facebook. It occurred to me after watching the Nvidia video showing all their new projects. You don't have to have a general-purpose AI to make a robot that flips hamburgers, takes your order, or does customer service. I'd say that 90% of jobs can be replaced within the next two to five years by LLMs and GR00T: a different special-purpose AI for each job, specially trained. Maybe the other 10% will require intelligent generalist AI.
There is no dysfunction. Your assumptions are wrong. You think mankind is supposed to be happy. But it doesn’t work that way. Only the strong deserve to be happy. The rest of you can lick our boots.
The human condition isn't a dysfunction, nor can it be fixed. We just have to correct ourselves as individuals and help others directly, but only when they directly ask. Nothing more and nothing less.
We build society, and everything in it, both as a whole and as individuals; because of that duality, we build AGI both together and as individuals.
🎯 Key Takeaways for quick navigation: 00:03 *Speculating on when AGI will be created is difficult due to varying definitions of AGI.* 00:44 *By the end of the decade, we may have highly capable AI systems, but AGI marks a milestone, not an endpoint.* 01:23 *ChatGPT-3.5 may not have changed the world, but it shifted expectations and trajectories for AI.* 02:08 *AGI implies a significant transition, akin to the impact of Google search or the internet.* 03:03 *Sam Altman sees AI's impressive achievement when it significantly accelerates scientific discovery.* 03:55 *Interacting with AGI raises questions about what one would ask, possibly starting with fundamental scientific inquiries.* 04:38 *Initial questions to AGI might involve fundamental physics inquiries or speculation about extraterrestrial life.* 05:19 *AGI might require further technological advancements or data before providing meaningful answers.* Made with HARPA AI
Altman talking about "different definitions" of AGI makes me think it's already here. This allows him to not call it AGI without technically being dishonest. Also notice how his standards (definition) for AGI seem quite high.
lmao he's wrong - technological advancement gave us cars, but you still need plain old discipline and courage to drive them, and we are eroding the economy of those.
Good rapport does not equate to the good found in a person. Hitler, Stalin, and Mao were incredibly charismatic before and while they held power. That is a very silly consideration.
I can't decide which is more annoying: the guys who talk super fast to show off how smart they are, or the guys who talk super slow to show off how smart they are. I also can't stand vocal fry.
I don't think anyone grasps what AGI will actually mean for the world. A certain country in the middle east is developing AI drones to target other people using facial recognition. Boston Dynamics is working with the same country to develop a robot that can target people using AI. When we have AGI, millions will be out of work. But protests will be met with AI drones to ensure nobody gets out of line. T3 the movie is just a small insight into what we will see in the future. Good luck!
Could you explain to a layman why it is far off? I find it hard to read in between the lines of all the hype and big business and where the actual tech is at.
I straight up dislike how this human being “operates”. He seems like a dangerous near-genius, and I find him bland, boring, calculated, and utterly untrustworthy in general appearance and overall demeanor. I wish people would stop giving him a spotlight.
I have a disease which I really hope it can solve. Once again, there probably isn't enough data, but I might know what data it needs to collect. Or maybe the answer is already out there in the thousands of unread published papers that no human could possibly read.
All right guys, time and time again we've seen these predictions, so when he says end of the decade or possibly sooner, it means it will be here in at least 9 to 15 years.
For the longest time, the definition of AGI included self-awareness/consciousness being achieved as part of the AGI threshold. Because of the huge hype and gold rush toward "AGI" being achieved, Silicon Valley has of course done what it often does, and now there's a push to take the self-awareness aspect out of the definition... To achieve human-level intelligence, the system needs to be able to: ponder, formulate its own questions, seek answers to those questions, and self-train on those answers. Traditionally this is AGI... Now when you search for AGI definitions, you'll find all of those critical features have been removed, just to pave the way for the race to achieve what is actually unachievable, at least with current AI and compute... Silicon Valley is truly disgusting. It's all about the greed.
I can't stand these tech bros (Lex, I like). I mean, Sam Altman, I've never seen an ego like this! Like we are morons and he is levitating above us... The way he talks is so condescending... He doesn't talk like a scientist, he talks like a salesman. I would have fired him right away!
Will AI be able to upgrade its own algorithms? If so, what capabilities will it want to have? Will it ever think about the meaning of its own existence?
What am I missing? It should already be improving diagnostic medicine and treatments drastically! Could it possibly be useless bureaucrats stifling real progress?
Life is a reflection of what we see. When we watch movies that are totally out of this world, it gets our brains working, and what powerful brains we have, even though somehow we can only harness 20 percent of our brains' power. We create what we see: as in every science fiction movie ever made, we build the stuff in those movies. So now let's reflect on this: our planet is finite. There is no more room on Earth for another top species; regardless of what it is, it needs energy to survive, just like us. And since we can't share the Earth comfortably with just humans at this moment, how can we expect to share it with beings that require tons of power to survive? It's not feasible. If you watch any movie about humans sharing Earth with an intelligent mechanical species, well, you know it doesn't end well. And since all we do is create what we see, you should be able to see the horrible path we have put ourselves on!