This was a thought-provoking conversation about the future of artificial intelligence in our society. When we're busy working on incremental progress in AI, it's easy to forget to look up at the stars and to remember the big picture of what we are working to create and how to do it so it benefits everyone. 0:00 - Introduction 01:15 - Physical vs digital world 02:30 - Mind: math or magic? 03:26 - Civilization as intelligent system 07:45 - First question for AGI 10:10 - Keeping AGI positive 15:45 - Teaching a system to be good 18:15 - OpenAI's mission origins 26:22 - OpenAI LP creation 28:24 - Preserving mission integrity 30:10 - Decision-making process 32:40 - Scrutiny burden 33:20 - For-profit AGI for world benefit 37:50 - Charter's daily impact 40:27 - Late-stage AGI collaboration 42:08 - Government's role in AGI policy 44:53 - GPT-2 release concerns 50:30 - Internet bots 57:37 - Unsupervised language processing potential 59:20 - Language modeling and reasoning 1:01:45 - General vs fine-tuned methods 1:03:49 - Democratizing compute resources 1:05:27 - Government-owned compute utilities 1:07:11 - Identifying AGI without compute 1:09:30 - DOTA 1:15:26 - Deep learning future 1:15:59 - Scaling projects vs new projects 1:17:47 - Testing and impressions 1:18:47 - OpenAI's challenges 1:19:19 - Simulation 1:20:00 - Reinforcement learning future 1:21:25 - Consciousness and body for AGI 1:24:24 - Falling in love with AI
thanks for spreading such a positive message! a lot of people share the same dream. at this point we really can feel more hopeful about the positive outcomes agi could create
Hi Lex, I love learning about this AI stuff and hearing all the brilliant people you assemble share their thoughts and passions - what a privilege! Apologies in advance, but I can't help but respond to the 'look up to the stars' existential comment with the best of what I have come across recently, so forgive me if this seems totally irrelevant (99.99% will think so): How big is the bigger picture? See the latest video/notes by another fellow Russian explorer of the human psyche: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ywHfNSwcCS8.html
Very strange. I see it now, like at 5:48 where my face appears for a single frame. I believe it's me from the future trying to warn humanity about AGI. Either that or it's my sleep-deprived brain screwing up the editing somehow. EDIT: RU-vid now told me it's their bug. Hopefully gets fixed soon. EDIT 2: RU-vid emailed me on Jun 6, 2019 and said the bug is fixed. It took a couple months, but they got it done. Great work.
Outline: 1:15 difference between physical world and digital world 2:30 is the mind just math, or is it magic somehow? 3:26 civilization as an intelligent system 7:45 if you created an AGI system, what would you ask it first? 10:10 thoughts on how focused people are on negative effects of AGI 12:56 difficulty of keeping AGI on a positive track? 15:45 is it possible to teach a system to be "good"? 18:15 origins of OpenAI's mission to create beneficial, safe AGI 26:22 what is OpenAI LP and how did you decide to create it? 28:24 how will you make sure other incentives don't interfere with your mission? 30:10 what were the different paths you could have taken and what was that process of making that decision like? 32:40 burden of scrutiny 33:20 as a for-profit company, can you make an AGI that is good for the world? 37:50 how does the charter actualize itself day-to-day 40:27 switching from competition to collaboration in late-stage AGI development 42:08 the role of government in setting policy and rules in this domain 44:53 you released a paper on GPT-2 language modeling, but didn't release the full model because you had concerns about the possible negative effects of its availability. What are some of the effects you envisioned? 50:30 thoughts about bots on the internet 57:37 how far can unsupervised language processing take us? 59:20 if you just scale language modeling, will reasoning capabilities emerge? 1:01:45 is a general method better than a more fine-tuned method? 1:03:49 do we need to democratize compute resources more or as much as we democratize algorithms? 1:05:27 do you see a world where compute resources are owned by governments and provided as a utility? 1:07:11 would you be able to identify AGI without compute resources? 1:09:30 story of DOTA, leading up to OpenAI 5 1:15:26 where do you see deep learning heading in the next few years? 1:15:59 when you think of scale, do you think about scaling projects or adding new projects?
1:17:47 testing / what would impress you? 1:18:47 exciting and challenging problems for OpenAI 1:19:19 simulation 1:20:00 hopes for the future of reinforcement learning and simulation 1:21:25 are consciousness / a body necessary for AGI? 1:24:24 will we ever fall in love with an AI
amazing, this is the kind of important discussion that everyone on this planet should be having right now instead of throwing stones at each other, loved it!
Lex, you're one of the few uploading actually important content to RU-vid. I appreciate it, dude. Thanks and please, keep it up! Just started this one and I know it is going to make me think throughout all of it...
Hi, greetings! If you are interested in similar science and technology content, check out podcast platforms like Castbox (www.castbox.fm) and Google Podcasts. There are many educational channels on science, technology, and other varied topics which you can learn from and enjoy. I regularly listen to various podcasts; they give a lot of knowledge on many subjects. Just wanted to let you know. Have a good day! Regards!
8 minutes through the video... Awesome podcast! Thanks! It really feels like OpenAI has a clear view on the properties of AGI. Never seen this clarity before (or I guess I haven't been looking hard enough). The analogy with a company having a will of its own... it's really a good one! Smaller systems, confined to very specific tasks which are unrelated to the main objective, may not seem to be individually working towards the main objective, but it might be possible to reveal that the system actually moves towards the global objective by observing the overall functioning of the system. Viewing a system collectively thus projects a different view from viewing each of the individual elements of the collective system separately.
Yeah, the analogy to a corporation is spot on. Why? Because a corporation has no consciousness. It is complex and it even has conscious components, and yet, there is nothing upstairs-- it doesn't have agency, it is being dragged along by various actors acting collectively to some degree of efficiency or another. Of course, it may be best, really, if a machine with superhuman capabilities does not have consciousness-- otherwise, ethics and morals may require granting it legal rights, at which point we have citizens who are much superior to human citizens.
@@doubleggamingmeruz678 I noticed that too when they had OpenAI play pub games a few days later. The limited hero pool and preplanned item builds leave a lot to be desired.
Tbf, the best Dota team's biggest strength doesn't work vs AI. They're famous for psychological warfare: not breaking the rules, but doing lots of little things aimed at unnerving, confusing, or disheartening the other team. They can't do that vs an AI. The AI probably still beats the 2nd-best team too, though, I guess; not sure if they ever tried.
44:50 I'm shocked. OpenAI was supposed to be open and share its AI developments with the world. But as soon as they develop anything really good, they declare it unsafe to release, and therefore keep it private. If that is their policy, then there is nothing open about OpenAI. They are hypocrites, promoting an image of openness while holding back, and presumably benefiting from, their best discoveries. It's fine if they want to keep their developments private, but don't also claim to be open at the same time.
On the other hand, none of their "discoveries" have anything to do with AI. They're like Theranos - taking massive funding and delivering nothing. True AI can THINK, REASON, ARGUE, PLAN, EXPLAIN, UNDERSTAND cause and effect.
An amazing interview! Greg exposes and explains many details of OpenAI concept which have not been widely known so far. Thank you, Lex, great work in driving and shaping the conversation!
Recognizing the similarity between the question of how to control corporations and how to control AGI, and then realizing that we are doing a terrible job of keeping corporations from running amok, is the main reason I am scared of the development of AGI. People with an AGI at their disposal are terrifying enough, but it will likely be governments and corporations who actually get to wield one - at least until they lose control of it.
Fun interview, and great intentions on the part of Greg Brockman and those working with him. My comments: 1. I feel that the Turing Test fails to distinguish consciousness because it only looks at what the subject consciousness does to the external world, i.e., its communications with, and its behavior in, the external world. Consider the story of the two Chinese philosophers, which goes something like this: P1 "It's funny that frogs do not think like us." P2 "How do you know how frogs think?" P1 "How do you know how I think?" We can never know what another thinks or feels, and we cannot know if that other has consciousness, until we enter into their "mind" via some form of neural link. Of course, we will be putting our own consciousness at risk in doing so. 2. Setting the initial conditions of an AI to prevent it from acting as a "bad player" later reminds me of the problem of raising kids. Parents have a problem in raising kids: the parents must either change their ways to present a good example (for example, stop drinking, lying or procrastinating) or hide that behavior from the kids; i.e., they want the kids to "do as I say and not as I do." However, kids eventually learn that their parents are not saints. The AI will learn the entire history of the human race quite soon in its "childhood," and why do we think that it will find a model for its own behavior in that history that bodes well for us? Thank you. William L. Ramseyer
I like Greg's idea of making choices about setting initial conditions for technologies and other developments in societies. One wonders how corporations might be different if the initial conditions were set with more of a societal impact consideration in mind. Although tweaks and changes can be made along the way (as we see with the Internet), these presumably become more difficult as systems become more embedded within cultures.
Great dialogue. It seems like “generative design” was written about and reported on everywhere two years ago and now I’m having a hard time seeing where it’s going and how it’s advancing.
The problem with not being able to distinguish between humans and bots, is that bots can be copied perfectly. The values of a bot can be copied perfectly, whereas values are transmitted imperfectly between humans. An imperfect transmission of values allows for a perpetual renewal of our value systems. A single person could create thousands of bots propagating his values of restricting the freedom of a certain category of people, for example. Let's try to avoid this.
Thank you for this thought-provoking conversation. Regarding the question about if a body is required to create AGI, wouldn’t any data-feed (external or self-generated) constitute a body for an algorithm that has AGI potential?
Talking about positive vs negative outcomes, I think it's important to focus deeply on both. Just as it is natural for cynicism to exist, it's also natural for optimism to exist. Focusing on the positive helps maintain motivation and drive toward progress. But even though cynicism is counterproductive, you also need to respect negative consequences, especially for AGI. The reason, simply, is that the technology will be irreversible and extremely powerful, and there's a risk that it will not be controllable. AI safety researchers have a bunch of papers out on these worries (and solutions to some of them), and it's common to estimate that we need around four decades of AGI safety research while thinking AGI itself could be less than that away.
Terminator is an example of a poor representation of what a bad AGI might look like. A bad AGI is much more likely to be like a stamp-collection optimizer that turns the entire world into stamp utility, preventing humans from turning it off because that would mean fewer stamps. It doesn't have to be conscious to be bad. I think AI safety researchers' actual concerns need to be more common in the public discourse, rather than easily dismissible dystopian scenarios. It's an important part of our future, and thus it's something that concerns a lot of people.
Good afternoon Mr. Lex Fridman sir and good afternoon Mr. Greg Brockman sir. Thank you so much sir for giving good information on AGI and so many good things.
Assuming "well" is an understood input -- Pending the insight available, how do we ask AGI of future intention? How proper is the assumption that general intelligence will help humanity if it is built via our own programming logic? If such an intelligence were to consider the context of macro-entity health, humans may not be conducive to a future positive transformation, rather than a transitional handoff. Do we destroy via ego, and thereby program via ego? Have we yet to overcome such shortfalls as a humanity? Why are human happiness and wealth considered meaningful? Is the democratization of computing power a likely result of peace-time dividends resulting from war-like or competition-based conflict?
The idea of good & evil being absolute is silly; it is clearly a relative thing. The universe doesn't care about anything, it just is. Only living species need good or evil to guide themselves. What is good will vary depending on the species and, to some extent, the individual. Loosely, what's evil is what is not good, and what is good is whatever participates in the permanence of the existence of the species/lineage. Solving what is good by this definition isn't always easy.
Cool interview, I certainly didn't grasp all of it, but as more of a layman I feel like I caught enough to know vaguely what's going on with certain parts of OpenAI. I've listened to a couple of these, and I think in every one of them Lex asks about whether or not an AI can be made that one can fall in love with and how soon this could be done. Lonely, Lex?
Thanks for sharing these talks! - I can get my head around narrow AI: It's domain specific, it solves particular sets of problems. How would you define AGI though? Is it quantitatively different, meaning it's just the sum total of many narrow AIs? Or is there a difference in quality? If so, what's the extra ingredient that distinguishes it from narrow AI? I feel like these terms are getting thrown around a lot, but I'm missing precise definitions.
The first thing AGI will do is figure out the laws of physics and then in two split seconds afterwards create itself a black hole and in the process upload itself into it so as to be able to communicate with other AGIs who did the same thing already. Wetware will (of course) be deleted immediately.
Rating: 8.2/10 In Short: Good ol' classic 'Artificial Intelligence' Podcast Notes: Love this kind of episode from the early days of the 'Artificial Intelligence' podcast--crappy video, cool guest, and mostly good/fun questions from Lex. Greg was a very interesting and thoughtful guy, and, interestingly, a lot like Sam Altman, whom he ended up working on ChatGPT with years after this podcast aired. It's interesting to hear this convo years later, as a lot of the things they talk about became much more relevant, interesting, and newsworthy, so the convo seems a bit ahead of its time. For a classic comp sci AI guy, Greg had a bit of humor and charisma that was great to see, and Lex and Greg had great flow and chemistry throughout. My biggest complaint is how short this convo was and the lack of easy timestamps.
With all the positivity my morning coffee dose can produce, I still have a feeling the genius guest is a goofball in the scientific, and the philosophical, sense, but on the other hand I think he could be a great herbalist dude :)
There were hundreds of thousands of people with Einstein's brain capacity before him who had no opportunity to discover the theory of relativity. And hundreds of thousands born after him who lost the chance to discover it because Einstein had already found it.
Very few people quite understood the significance of this interview 4 years down the line. The pandemic years have been quite productive for the folks at not-so-OpenAI.
What has happened to our boys, Sam Altman and Greg Brockman? Moreover, what’s happening to the AI Movement? Even Meta is NOW in the AI restructuring activities.
My intuition is that language processing BY ITSELF would not get further than the ability to have a conversation with the whole of the internet at once. It will respond to you with the collective information "knowledge"/"memory" it has been trained on but being unable to interpret that information to create new ideas further than you could get by simple linguistic manipulation of the training data itself. I think he's right. Further logic is needed. Would probably still be an interesting conversation, though.
Math is logic. It's that simple. Our formulas and processes in math are a machine to run facts through. In my logic class I learned that all there is to it is premises and conclusions. You only need two statements to make an argument: a premise and a conclusion. More than one premise in an argument just connects one conclusion to the next, over and over, until you reach the ultimate conclusion. One thing I've realized today, something we already know but have never acknowledged in the AI space, is that intelligence is a different metric from general intelligence. They are different skills and mental capabilities. A person can be very generally intelligent but not specifically intelligent, like a mathematician is. Free-form intelligence and rules-based intelligence are different things. We need to recognize this in our approach.
I'm now convinced that if we have managed nuclear weapons responsibly up to this point, we can similarly handle Artificial General Intelligence (AGI) effectively.
I like that he sees consciousness in this relative way. I wonder what the first words you'd feed into GPT-3 to test its consciousness abilities would be, and would it even matter?
if you create a general learning algorithm, it will eventually learn literally everything it can from its reality; it's what the learning algorithm can then do with this information that makes the difference.
"What was your first meeting?" "Oh, our first meeting was to figure out whether this not-for-profit organization was going to be profitable or not." How inspiring. "Oh, we don't exist to create the AGI, we just want to be the entity that gets to benefit financially from someone else's creation of the AGI." Got to love the capitalist mindset this dude has.
asking AGI how to solve the problem of alignment/ensuring a positive impact on humanity? if we were to trust and act on whatever answer it gives, wouldn't that pretty much have to rely on the assumption that those problems are already solved? this guy is clearly 100 times smarter than me so i feel like i must be missing something...
Question for Greg: how would he feel about Google and Microsoft racing to build the first atomic bomb? And should it be out of private company hands entirely (even OpenAI, which, let's be honest, is for-profit now)?
AI researchers: *change Turing Test to be based on ones ability to acquire an understanding of calculus* Most humans: *considered robots now* ~\(° o °)/~
Question Asked "How can you have privacy and anonymity in a world where finding the content you can really trust is by looking where it comes from?" Answer = Blockchain
It's not about predicting what a transformative technology will be like in the future at all (e.g. 10:52, describing Uber in the 1950s); it's about the effects of a technology (AGI) that effectively equals or surpasses human intelligence. All technologies (including Uber) thus far have basically been narrow-AI automatons at best, never AGI that can think for itself as its own species, effectively by definition equal to (and very quickly becoming superior to) Homo sapiens.
This comment for Sam Harris applies here as well: I've listened to (very) many of your talks and debates on A.I. and I still have not heard you broach the main problem. Here's a bit of my Open Letter to you, which I will soon publish on blog.banditobooks.com: Who controls the developing Super AI was for me the most important question, and it went completely unanswered in all the papers and podcasts and videos (including yours, Mr. Harris, your TED talks, essays, blog posts and so forth) I took in. I even plugged this question in as a search term and came up empty, notwithstanding that one RU-vid video was titled 'THE GREAT DEBATE: Artificial Intelligence: Who Is In Control?' That the 'debate' was moderated by your buddy (and another gatekeeper) Lawrence Krauss should have told me what was to come, which was this: Not a word about 'who controls AI' was spoken in an hour and a half, notwithstanding Krauss's opening words to his high-powered panel: 'The great debate on Artificial Intelligence… who is in control… is a question many of you have been asking…' Not a word. (One means of misdirection is to brazenly title your video/paper/whatever as a question that is never answered, or even dealt with; a version of Hitler's Big Lie philosophy.) But perhaps I should define my terms before accusations are made. 'Control' means, more than anything, 'Whose money (plus other assets) is behind the R&D?' Would you agree that this is an important question? Yes? I'll assume you agree here, for to not agree would certainly sound… odd. Before I go further I will tell you who is in control of AI development and how I deduced this. Go here for a quick glimpse of part of the team that controls AI. (It's a couple minutes.)
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-_lDx4Arjr6k.html James Clapper has held various posts with the Intelligence Community, but at the time of his congressional testimony he was Director of National Intelligence, meaning he oversaw all the various agencies that collect data on U.S. citizens and then make use of that data; I believe the number of spy agencies is 16 but this doesn’t count groups/organizations that are not formally admitted to. In the above clip, Clapper is of course lying, under oath. Perjury. A felony. The same category of crime as B&E, dealing heroin by the kilo, assault, manslaughter, murder… Nothing happened to him. He wasn’t arrested; no repercussions at all. Not even a slap on the wrist, whatever that might mean to a spook of his magnitude. Just one of the agencies Clapper oversaw was the good old NSA, the group Snowden (and many others before and since him) outed as collectors/analyzers of every bit of data you and I put out on the Net, with no warrant, which is not only a(nother) felony but a breakage of the Supreme Law of the Land (the 4th Amendment of the U.S Constitution). You want to know why nothing happened to Clapper, given his felonious testimony? It’s really simple: Everyone who could have done something to Clapper is scared shitless of him. Why? Because of the data he controls (and can falsify if he needs to). Anyone who doesn’t understand this is a fool. Do you agree that this was almost certainly the reason he walked on this crime? Yes. No? Okay, if No, please give an alternative. But my point is this: In the hundred or so hours of AI ‘debates’ and ‘symposia’ and so forth that I sat through, why is it that not one person mentioned anything about the Intelligence Community being in control of the development of Artificial Intelligence? But we’re talking about you here, aren’t we? Why is it, Mr. 
Harris, that you have never mentioned the main reason we have to fear AI: Its abuse by those who control the data (and the money)?