
Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17 

Lex Fridman
4.3M subscribers
143K views

Published: Oct 5, 2024

Comments: 281
@lexfridman 5 years ago
This was a thought-provoking conversation about the future of artificial intelligence in our society. When we're busy working on incremental progress in AI, it's easy to forget to look up at the stars and to remember the big picture of what we are working to create and how to do it so it benefits everyone. 0:00 - Introduction 01:15 - Physical vs digital world 02:30 - Mind: math or magic? 03:26 - Civilization as intelligent system 07:45 - First question for AGI 10:10 - Keeping AGI positive 15:45 - Teaching a system to be good 18:15 - OpenAI's mission origins 26:22 - OpenAI LP creation 28:24 - Preserving mission integrity 30:10 - Decision-making process 32:40 - Scrutiny burden 33:20 - For-profit AGI for world benefit 37:50 - Charter's daily impact 40:27 - Late-stage AGI collaboration 42:08 - Government's role in AGI policy 44:53 - GPT-2 release concerns 50:30 - Internet bots 57:37 - Unsupervised language processing potential 59:20 - Language modeling and reasoning 1:01:45 - General vs fine-tuned methods 1:03:49 - Democratizing compute resources 1:05:27 - Government-owned compute utilities 1:07:11 - Identifying AGI without compute 1:09:30 - DOTA 1:15:26 - Deep learning future 1:15:59 - Scaling projects vs new projects 1:17:47 - Testing and impressions 1:18:47 - OpenAI's challenges 1:19:19 - Simulation 1:20:00 - Reinforcement learning future 1:21:25 - Consciousness and body for AGI 1:24:24 - Falling in love with AI
@TristanCunhasprofile 5 years ago
Any thoughts on if we'll eventually get people to agree on a good definition of AI? (or even of just intelligence definitionmining.com/intelligence)
@kev9797c 5 years ago
thanks for spreading such a positive message! a lot of people share the same dream. at this point we really can feel more hopeful about the positive outcomes agi could create
@marquardtfrickert3939 5 years ago
Love it man!! @Lex Fridman! Why don't you go work for OpenAI??? I think it's super important to make this stuff safe, like Elon said! :)
@vaibhavbv3409 5 years ago
what happens to jobs
@TBOBrightonandHove 5 years ago
Hi Lex, I love learning about this AI stuff and hearing all the brilliant people you assemble share their thoughts and passions - what a privilege! Apologise in advance, but can't help but respond to the 'look up to the stars' existential comment with the best of what I have come across recently, so forgive if this seems totally irrelevant (99.99% will think so): How big is the bigger picture? See latest video/notes by another fellow Russian explorer of the human psyche: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ywHfNSwcCS8.html
@technotarzan4044 10 months ago
coming back to listen to this through the lens of the last 4 years is absolutely mind blowing. Especially with the last week!
@m.branson4785 1 year ago
It's wild listening to this 3 years later as GPT-4 has been released.
@bokoma96 1 year ago
And now, after his TED Talk ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-C_78DM8fG6E.html
@mcrenn5350 1 year ago
Ikr! Was here literally for that. They knew... they were on the cusp of everything!
@Zyntho 10 months ago
And now after he left the company in protest.
@m.branson4785 10 months ago
@@Zyntho Yeah, I had to come back to listen to this one and also the interview with Ilya.
@meartin 10 months ago
Yessir 😅
@RubenAlvarezMtz 5 years ago
What's with the subliminal pictures of Mr Lex in the video? :P
@lexfridman 5 years ago
Very strange. I see it now, like at 5:48 where my face appears for a single frame. I believe it's me from the future trying to warn humanity about AGI. Either that or it's my sleep-deprived brain screwing up the editing somehow. EDIT: RU-vid now told me it's their bug. Hopefully gets fixed soon. EDIT 2: RU-vid emailed me on Jun 6, 2019 and said the bug is fixed. It took a couple months, but they got it done. Great work.
@RubenAlvarezMtz 5 years ago
@@lexfridman or d) all of the above :p
@aigen-journey 5 years ago
@@lexfridman also around 1:18:05 Took me a few tries to freeze frame at the right moment :)
@MrSushant3 5 years ago
@@lexfridman No, it's not you, it's RU-vid. I've come across multiple similar complaints from other RU-vidrs as well, esp. for long videos.
@lup9346 5 years ago
YOU MUST OBEY LEX
@alicethornburgh7552 4 years ago
Outline: 1:15 difference between physical world and digital world 2:30 is the mind just math, or is it magic somehow? 3:26 civilization as an intelligent system 7:45 if you created an AGI system, what would you ask it first? 10:10 thoughts on how focused people are on negative effects of AGI 12:56 difficulty of keeping AGI on a positive track? 15:45 is it possible to teach a system to be "good"? 18:15 origins of OpenAI's mission to create beneficial, safe AGI 26:22 what is OpenAI LP and how did you decide to create it? 28:24 how will you make sure other incentives don't interfere with your mission? 30:10 what were the different paths you could have taken and what was that process of making that decision like? 32:40 burden of scrutiny 33:20 as a for-profit company, can you make an AGI that is good for the world? 37:50 how does the charter actualize itself day-to-day 40:27 switching from competition to collaboration in late-stage AGI development 42:08 the role of government in setting policy and rules in this domain 44:53 you released a paper on GPT 2 language modeling, but didn't release the full model because you had concerns about the possible negative effects of its availability. What are some of the effects you envisioned? 50:30 thoughts about bots on the internet 57:37 how far can unsupervised language processing take us? 59:20 if you just scale language modeling, will reasoning capabilities emerge? 1:01:45 is a general method better than a more fine-tunes method? 1:03:49 do we need to democratize compute resources more or as much as we democratize algorithms? 1:05:27 do you see a world where compute resources are owned by governments and provided as utility? 1:07:11 would you be able to identify AGI without compute resources? 1:09:30 story of DOTA, leading up to OpenAI 5 1:15:26 where do you see deep learning heading in the next few years? 1:15:59 when you think of scale, do you think about scaling projects or adding new projects? 1:17:47 testing / what would impress you? 1:18:47 exciting and challenging problems for OpenAI 1:19:19 simulation 1:20:00 hopes for the future of reinforcement learning and simulation 1:21:25 are consciousness / a body necessary for AGI? 1:24:24 will we ever fall in love with an AI
@harshr1831 3 years ago
Thank you!
@teslatonight 2 years ago
🤖🧡
@thepablohansen 2 years ago
Thank you!
@inspiregrow2336 1 year ago
Thanks bro
@qianma853 1 year ago
thanks
@aqynbc 5 years ago
We need more of this type of discussion. Thank you Lex for taking the time to do just that.
@newspeed8000 5 years ago
Amazing, this is the type of important discussion that everyone on this planet should be having right now instead of throwing stones at each other. Loved it!
@bradwrobleski666 5 years ago
One of the best interviews of a great mind and great ideas. Period.
@InfoJunky 1 year ago
Bring him back! GPT-4 and plugins are bananas!!! Let's hear his thoughts! He might be REAL busy right now though!!!
@memorabiliatemporarium2747 5 years ago
Lex, you're one of the few uploading actually important content to RU-vid. I appreciate it, dude. Thanks and please, keep it up! Just started this one and I know it is going to make me think throughout all of it...
@glorydey5008glowlight 5 years ago
Hi, Greetings! If You Are Interested In Similar Science And Technology Content Check Out Podcasts Platform Like Castbox (www.castbox.fm) and Google Podcast. There Are Many Educative Science, Technology, And Other Varied Topics Related Channels Which You Can Learn And Enjoy. I Regularly Listen To Various Podcasts. They Give A Lot Of Knowledge On Many Subjects. Just Wanted To Let You Know. Have A Good Day! Regards!
@vznquest 1 year ago
lex we need more guests like this with everything happening now...
@galaxyw5545 1 year ago
he was the reason behind everything that's happening now..
@nicolasdominguez1890 1 year ago
Ask and ye shall receive, hahaha. You now have the Sam Altman interview, and another one I have not yet seen.
@bleachbucket9440 5 years ago
Thx for all your recent interviews. You've helped to expand my mind with this profound information
@sjwmemer4840 5 years ago
good to know we got clown of the day working on ai for us
@sreramk1494 5 years ago
8 minutes through the video... Awesome podcast! Thanks! It really feels like OpenAI has a clear view on the properties of AGI. Never seen this clarity before (or I guess I haven't been looking hard enough). The analogy with a company having a will of its own... it's really a good one! Smaller systems, confined to very specific tasks which are unrelated to the main objective, may not seem to be individually working towards the main objective, but it might be possible to reveal that the system actually moves towards the global objective by observing the overall functioning of the system. Viewing a system collectively thus projects a different view from viewing each of the individual elements of the collective system separately.
@RogerFedTennis 5 years ago
Yeah, the analogy to a corporation is spot on. Why? Because a corporation has no consciousness. It is complex and it even has conscious components, and yet, there is nothing upstairs-- it doesn't have agency, it is being dragged along by various actors acting collectively to some degree of efficiency or another. Of course, it may be best, really, if a machine with super human capabilities not have consciousness-- otherwise, ethics and morals may require granting it legal rights, at which point we have citizens who are much superior to human citizens.
@PhillipRhodes 5 years ago
Lex, can you do an interview with Ben Goertzel at some point? He'd be a great addition to this series. Also, maybe Marcus Hutter or Pei Wang?
@ahmedal-maliki4232 4 years ago
That would be amazing!
@GregGBM7 5 years ago
after losing last august, OpenAI was finally able to beat the best human team in 5v5 Dota 2 just a few days ago. It was incredible to watch!
@doubleggamingmeruz678 5 years ago
But it was only because of the limited hero pool. If OpenAI played a real game of Dota, they wouldn't even beat amateurs.
@GregGBM7 5 years ago
@@doubleggamingmeruz678 I noticed that too when they had OpenAI play pub games a few days later. The limited hero pool and preplanned item builds leave a lot to be desired.
@ImperialGuardsman74 4 years ago
Tbf the best team in Dota's best strength is not good vs AI. They're famous for psychological warfare. As in not breaking, but doing many little things aimed at unnerving or confusing or disheartening the other team. They can't do that vs AI. The AI probably still beats the 2nd best team too though, I guess, not sure if they ever tried.
@wyqtor 10 months ago
I am from 4 years into the future; you ain't seen nothing yet!
@GregGBM7 10 months ago
@@wyqtor 4 years later and still no AGI, smh
@qianma853 1 year ago
Can't believe the conversation was 4 years ago, very insightful
@MrHaqri 5 years ago
I think George Hotz of CommaAI would be a great guest for the podcast.
@zelllers 5 years ago
It would be interesting for sure
@kamilmazurek6070 5 years ago
He left CommaAI, check out his linkedin Edit: Also mentioned it on his livestreams, however, he still has shares in the company
@dragon_542 5 years ago
+1
@spinLOL533 5 years ago
MrHaqri agreed
@mami1455 5 years ago
back it up
@kaziboy264 5 years ago
This is one of the most interesting interviews out there
@ErikKislikChessSuccess 5 years ago
Nice work Lex, this was a refreshing and relaxing discussion on big, big topics.
@totalhighconcept 10 months ago
He just resigned following Sam’s termination
@jasonapplebaum9871 10 months ago
Who else is binging interviews from Mira, Greg, Sam, and Ilya to learn more about the lore of OpenAi?
@Curious112233 4 years ago
44:50 I'm shocked. OpenAI was supposed to be open and share its AI developments with the world. But as soon as they develop anything really good they declare it unsafe to release, and therefore keep it private. If that is their policy, then there is nothing open about OpenAI. They are hypocrites, promoting the image of openness while holding back and presumably benefiting from their best discoveries. It's fine if they want to keep their developments private, but don't also claim to be open at the same time.
@owndoc 4 years ago
On the other hand, none of their "discoveries" have anything to do with AI. They're like Theranos - taking massive funding and delivering nothing. True AI can THINK, REASON, ARGUE, PLAN, EXPLAIN, UNDERSTAND cause and effect.
@teslatonight 2 years ago
🤖🧡
@JetLee1544 1 year ago
@@owndoc Turns out "OpenAI" became the fastest-growing company in terms of users.
@wyqtor 10 months ago
@@owndoc This comment didn't age well.
@VIDEOAC3D 1 year ago
You were ahead of your time with this interview. Who would have foreseen the importance only a few years later.
@Spacemonkeymojo 10 months ago
The thing about sociopaths is that they are very good at convincing people they are honest and good.
@egorpanfilov 5 years ago
An amazing interview! Greg exposes and explains many details of the OpenAI concept which have not been widely known so far. Thank you, Lex, great work in driving and shaping the conversation!
@penguinista 5 years ago
Recognizing the similarity between the question of how to control corporations and how to control AGI, and then realizing that we are doing a terrible job of keeping corporations from running amok, is the main reason I am scared of the development of AGI. People with an AGI at their disposal are terrifying enough, but it will likely be governments and corporations who actually get to wield one - at least until they lose control of it.
@therealOXOC 1 year ago
Love your vids and it's very fun to go back and see what's going on now.
@kushrami558 3 years ago
Why I think this is a very important podcast.
@DrJanpha 1 year ago
One of the best public discourses on AI and a good ad hoc analysis of ChatGPT
@rickharold69 5 years ago
Beautiful. Love it! Thanks for the interview as always!
@williamramseyer9121 4 years ago
Fun interview, and great intentions on the part of Greg Brockman and those working with him. My comments: 1. I feel that the Turing Test fails to distinguish consciousness because it only looks at what the subject consciousness does to the external world-i.e., its communications with, and its behavior in, the external world. Consider the story of the two Chinese philosophers, which goes something like this: P1 “It’s funny that frogs do not think like us.” P2 “How do you know how frogs think?” P1 “How do you know how I think?” We can never know what another thinks or feels and we cannot know if that other has consciousness-until we enter into their “mind,” via some form of neural link. Of course, we will be putting our own consciousness at risk in doing so. 2. Setting the initial conditions of an AI to prevent it from acting as a “bad player” later reminds me of the problem of raising kids. Parents have a problem in raising kids-the parents must either change their ways to present a good example (for example, stop drinking, lying or procrastinating) or hide that behavior from the kids; i.e. they want the kids to “do as I say and not as I do.” However, kids eventually learn that their parents are not saints. The AI will learn the entire history of the human race quite soon in its “childhood,” and why do we think that it will find a model for its own behavior in that history that bodes well for us? Thank you. William L. Ramseyer
@TheAIEpiphany 3 years ago
Great talk. You should invite Sam as well!
@alexwhb122 5 years ago
truly fantastic discussion. Thanks for posting and please keep them coming.
@jeff_holmes 5 years ago
I like Greg's idea of making choices about setting initial conditions for technologies and other developments in societies. One wonders how corporations might be different if the initial conditions were set with more of a societal impact consideration in mind. Although tweaks and changes can be made along the way (as we see with the Internet), these presumably become more difficult as systems become more embedded within cultures.
@nachoridesbikes 5 years ago
Thank you for your videos Mr.Fridman! Really enjoying this podcast
@aydinhartt 10 months ago
“Automate Human Intellectual Labor” 🤯🙌
@shilohadminshilohpaintingi4769
Great dialogue. It seems like “generative design” was written about and reported on everywhere two years ago and now I’m having a hard time seeing where it’s going and how it’s advancing.
@ShmuelFuehrer 5 years ago
Amazing conversation
@Glowbox3D 1 year ago
This was three years ago now, I think we need Greg back on. ChatGPT (and soon Bing) are next level at this point.
@huuud 5 years ago
Great interview, thank you! 🙏
@lemairecarl 5 years ago
The problem with not being able to distinguish between humans and bots, is that bots can be copied perfectly. The values of a bot can be copied perfectly, whereas values are transmitted imperfectly between humans. An imperfect transmission of values allows for a perpetual renewal of our value systems. A single person could create thousands of bots propagating his values of restricting the freedom of a certain category of people, for example. Let's try to avoid this.
@justinmallaiz4549 5 years ago
This ‘are we living in a simulation’ question is really bugging you... eh Lex? 🙂
@darrendwyer9973 5 years ago
What the AGI actually learns about reality is what will dictate the actions it responds to reality with.
@anthonyleonard 5 years ago
Thank you for this thought-provoking conversation. Regarding the question about if a body is required to create AGI, wouldn’t any data-feed (external or self-generated) constitute a body for an algorithm that has AGI potential?
@jjhepb01n 4 years ago
Interesting to re-watch this now that gpt-3 has been released.
@acommontribe7212 1 year ago
Watching this in January 2023 kicks different 😁
@Muskar2 5 years ago
Talking about positive vs negative outcomes I think it's relevant to deeply focus on both. Just as it is natural for cynicism to exist, it's also natural for optimists to exist. Focusing on the positive helps maintain motivation and drive toward progress. But even though cynicism is counterproductive, you also need to respect negative consequences, especially for AGI. And the reason simply is that the technology will be irreversible, extremely powerful and that there's a risk that it will not be controllable. AI safety researchers have a bunch of papers out on the worries (and solutions to some of them), and it's common to see a need for ~4 decades of AGI safety research and thinking that AGI could be less than that away.
@Muskar2 5 years ago
Terminator is an example of a poor representation of what a bad AGI might look like. A bad AGI is much more likely to be like a stamp collection optimizer that turns the entire world into stamp utility, preventing humans from turning it off because it would mean less stamps. It doesn't have to be conscious to be bad. I think the AI safety researchers actual concerns need to be more common in the public spectrum, rather than easily dismissable dystopian scenarios. Because it's an important part of our future, and thus it's something that concerns a lot of people.
@ramalingeswararaobhavaraju5813
Good afternoon Mr.Lex Fridman sir and Good afternoon Mr. Greg Brockman sir. Thank you so much sir for giving good information on AGI and so many good things.
@ebp03ex 5 years ago
Assuming "well" is an understood input -- Pending the insight available, how do we ask AGI of future intention? How proper is the assumption that general intelligence will help humanity if it is built via our own programming logic? If such an intelligence were to consider the context of macro-entity health, humans may not be conducive to a future positive transformation, rather than a transitional handoff. Do we destroy via ego, and thereby program via ego? Have we yet to overcome such shortfalls as a humanity? Why are human happiness and wealth considered meaningful? Is the democratization of computing power a likely result of peace-time dividends resulting from war-like or competition-based conflict?
@FlorestanTrement 5 years ago
The idea of good & evil being an absolute is silly; it clearly is a relative thing. The universe doesn't care about anything, it just is. Only living species need good or evil to guide themselves. What is good will vary depending on the species and, to some extent, the individual. Loosely, what's evil is what is not good, and what is good is whatever participates in the permanence of the existence of the species/lineage. Solving what is good by this definition isn't always easy.
@pedroj012 5 years ago
Cool interview, I certainly didn't grasp all of it, but as more of a layman I feel like I caught enough to know vaguely what's going on with certain parts of OpenAI. I've listened to a couple of these, and I think in every one of them Lex asks about whether or not an AI can be made that one can fall in love with and how soon this could be done. Lonely, Lex?
@israelafangideh 10 months ago
Hahahahaha
@romandzhadan5546 1 year ago
Wonderful talk ❤️
@mbaske7114 5 years ago
Thanks for sharing these talks! - I can get my head around narrow AI: It's domain specific, it solves particular sets of problems. How would you define AGI though? Is it quantitatively different, meaning it's just the sum total of many narrow AIs? Or is there a difference in quality? If so, what's the extra ingredient that distinguishes it from narrow AI? I feel like these terms are getting thrown around a lot, but I'm missing precise definitions.
@huemungus69 5 years ago
Why do you choose to edit these conversations rather than leave them organically unedited?
@Lbj441 10 months ago
who is there after they got fired
@mmitja 5 years ago
The first thing AGI will do is figure out the laws of physics and then in two split seconds afterwards create itself a black hole and in the process upload itself into it so as to be able to communicate with other AGIs who did the same thing already. Wetware will (of course) be deleted immediately.
@JaySeeThunder 5 years ago
Great talk... Thank you.
@NeuroReview 1 month ago
Rating: 8.2/10 In Short: Good ol' classic 'Artificial Intelligence' Podcast Notes: Love this kind of episode from the early days of the 'Artificial Intelligence' podcast--crappy video, cool guest, and mostly good/fun questions from Lex. Greg was a very interesting and thoughtful guy, and was, interestingly, a lot like Sam Altman, with whom he ended up working on ChatGPT years after this podcast aired. It's interesting to hear this convo years later, as a lot of the things they talk about have become much more relevant and interesting and newsworthy, so the convo seems a bit ahead of its time. For a classic comp sci AI guy, Greg had a bit of humor and charisma that was great to see, and Lex and Greg had great flow and chemistry throughout. My biggest complaint is how short this convo was and the lack of easy timestamps.
@SergioRaya 10 months ago
I'm here trying to learn more about Greg given the OpenAI situation.
@ProdByGhost 1 year ago
glad this came on as next video
@piyakuslanukhun9185 1 year ago
Now OpenAi is very famous, WoW
@TheMateusrex 3 years ago
Lex: "Some kid sitting in the middle of nowhere might not even have a 1080." Me: **overclocks 980ti for training fire**
@danypell2517 1 year ago
Bring him backkkkk! pls
@leonqiu-s7g 10 months ago
Good conversation
@hartmut-a9dt 10 months ago
I think when AGI is completed, those words will be put aside very fast.
@FirstnameLastname-rc4xq 7 months ago
Damn, interesting to look back at this when the AGI is upon us
@loveisfreetobelikedisearne1920
With all the positivity my morning coffee dose can produce, I still have a feeling the genius guest is a goofball in the scientific, and the philosophical, sense, but on the other hand I think he could be a great herbalist dude :)
@supersnowva6717 5 years ago
Thanks Lex!
@ValerianTexeira 5 years ago
There were hundreds of thousands of people with Einstein's brain capacity before him who had no opportunity to discover the theory of relativity, and hundreds of thousands born after him who lost the chance to discover it because Einstein had already found it.
@jacktseng4909 5 years ago
vallab so you think. You may say that in mathematics and general physics. Not so quick in relativity.
@ValerianTexeira 5 years ago
@@jacktseng4909 Perhaps not exactly the same way but in general more or less when it applies to the predecessors and successors.
@krause79 1 year ago
Very few people quite understood the significance of this interview 4 years down the line. The pandemic years have been quite productive for the folks at not-so-open AI.
@Luka_hunnybear 10 months ago
What has happened to our boys, Sam Altman and Greg Brockman? Moreover, what’s happening to the AI Movement? Even Meta is NOW in the AI restructuring activities.
@sidenote1459 1 year ago
I think it might be time for a round two....
@FritzSchober 5 years ago
I had to check if I had my video playback speed set to 1.25. He sounds that way.
@oudarjyasensarma4199 5 years ago
Please Interview Geoff Hinton!!!
@robertoooooooooo 1 year ago
I think people only now get how intelligent the guy is.
@antigonid 5 years ago
Get Andrew Ng on the channel
@OldGamerNoob 5 years ago
My intuition is that language processing BY ITSELF would not get further than the ability to have a conversation with the whole of the internet at once. It will respond to you with the collective information "knowledge"/"memory" it has been trained on but being unable to interpret that information to create new ideas further than you could get by simple linguistic manipulation of the training data itself. I think he's right. Further logic is needed. Would probably still be an interesting conversation, though.
@istjmoneymaker 5 years ago
Math is logic. It's that simple. Our formulas and processes in math are a machine to put facts through. In my logic class I learned that all there is to it is premises and conclusions. You only need two facts to make an argument: a premise and a conclusion. More than one premise in an argument is connecting one conclusion to the next over and over until you reach the ultimate conclusion. One thing I've realized today, something we already know but never realized in the AI space, is that intelligence is a different metric from general intelligence. They are different skills and mental capabilities. A person can be very generally intelligent, but not specifically intelligent, like a mathematician. Free-form intelligence vs rules-based intelligence are different things. In our approach we need to realize this.
@zainabjawad3562 5 years ago
Yes, I think the development process for AI is a little bit slow.
@sivatronics 10 months ago
I'm now convinced that if we have managed nuclear weapons responsibly up to this point, we can similarly handle Artificial General Intelligence (AGI) effectively.
@Dragonblood94 5 years ago
I like that he sees consciousness in this relative way. I wonder what would be the first words you feed into GPT-3 to test its consciousness abilities, and would it even matter?
@darrendwyer9973 5 years ago
If you create a general learning algorithm, it will eventually learn literally everything it can from its reality; it's what the learning algorithm can then do with this information that makes the difference.
@StealHrtVideo 5 years ago
"What was your first meeting?" "Oh, our first meeting was to figure out whether this not-for-profit organization was going to be profitable or not." How inspiring that is. "Oh, we don't exist to create the AGI, we just want to be the entity that gets to benefit financially from someone else's creation of the AGI." Got to love the capitalist mindset this dude has.
@dg-ov4cf 7 months ago
Asking AGI how to solve the problem of alignment/ensuring a positive impact on humanity? If we were to trust and act on whatever answer it gives, wouldn't that pretty much have to rely on the assumption that those problems are already solved? This guy is clearly 100 times smarter than me so I feel like I must be missing something...
@ForestZhang 5 years ago
Ghost frames! Could not pinpoint the exact frames! Like the videos by Lex and MIT.
@pageek3487 10 months ago
Question for Greg: how would he feel about Google and Microsoft racing to build the first atomic bomb? And should it be out of private company hands entirely (even OpenAI, which, let's be honest, is for-profit now)?
@jdietzVispop 1 year ago
Three years later I wonder how that charter is holding up??
@Lihinel 5 years ago
AI researchers: *change the Turing Test to be based on one's ability to acquire an understanding of calculus* Most humans: *considered robots now* ~\(° o °)/~
@justinmallaiz4549 5 years ago
Lihinel : the ‘consciousness meter’ read 1010110 when they pointed it at me... is that good?
@stanislavkunc8732 1 year ago
This is a very interesting interview after ChatGPT was released.
@darrendwyer9973 5 years ago
what would happen if you exposed a general learning algorithm to the entire internet, then just let it run for a few years.
@thepr0m3th3an 5 years ago
Question Asked "How can you have privacy and anonymity in a world where finding the content you can really trust is by looking where it comes from?" Answer = Blockchain
@eklim2034 5 years ago
Adam Malin Zero-Knowledge Proof
@hpefidra 5 years ago
Thanks for this.
@funmeister 5 years ago
It's not about predicting what a transformative technology will be like in the future at all (e.g. 10:52 - describing Uber in the 1950s), it's about the effects of a technology (AGI) that effectively equals or surpasses human intelligence. All technologies (including Uber) thus far have basically been narrow AI automatons at best, never an AGI that can think for itself as its own species and is effectively by definition equal (very quickly becoming superior) to homo sapiens.
@calvingrondahl1011 2 years ago
Be honest... it will be better. Love connects, fear disconnects.
@allanweisbecker8901 5 years ago
This comment for Sam Harris applies here as well: I've listened to (very) many of your talks and debates on A.I. and I still have not heard you breach the main problem. Here's a bit of my Open Letter to you, which I will soon publish on blog.banditobooks.com: Who controls the developing Super AI was for me the most important question, and it went completely unanswered in all the papers and podcasts and videos (including yours, Mr. Harris, your TED talks, essays, blog posts and so forth) I took in. I even plugged this question in as a search term and came up empty, notwithstanding that one RU-vid video was titled ‘THE GREAT DEBATE: Artificial Intelligence: Who Is In Control?’ That the ‘debate’ was moderated by your buddy (and another gatekeeper) Lawrence Krauss should have told me what was to come, which was this: Not a word about ‘who controls AI’ was spoken in an hour and a half, notwithstanding Krauss’s opening words to his high-powered panel: ‘The great debate on Artificial Intelligence… who is in control… is a question many of you have been asking…’ Not a word. (One means of misdirection is to brazenly title your video/paper/whatever as a question that is never answered, or even dealt with; a version of Hitler’s Big Lie philosophy.) But perhaps I should define my terms before accusations are made. ‘Control’ means, more than anything, ‘Whose money (plus other assets) is behind the R&D?’ Would you agree that this is an important question? Yes? I’ll assume you agree here, for to not agree would certainly sound… odd. Before I go further I will tell you who is in control of AI development and how I deduced this. Go here for a quick glimpse of part of the team that controls AI. (it’s a couple minutes.) ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-_lDx4Arjr6k.html James Clapper has held various posts with the Intelligence Community, but at the time of his congressional testimony he was Director of National Intelligence, meaning he oversaw all the various agencies that collect data on U.S. citizens and then make use of that data; I believe the number of spy agencies is 16 but this doesn’t count groups/organizations that are not formally admitted to. In the above clip, Clapper is of course lying, under oath. Perjury. A felony. The same category of crime as B&E, dealing heroin by the kilo, assault, manslaughter, murder… Nothing happened to him. He wasn’t arrested; no repercussions at all. Not even a slap on the wrist, whatever that might mean to a spook of his magnitude. Just one of the agencies Clapper oversaw was the good old NSA, the group Snowden (and many others before and since him) outed as collectors/analyzers of every bit of data you and I put out on the Net, with no warrant, which is not only a(nother) felony but a breakage of the Supreme Law of the Land (the 4th Amendment of the U.S Constitution). You want to know why nothing happened to Clapper, given his felonious testimony? It’s really simple: Everyone who could have done something to Clapper is scared shitless of him. Why? Because of the data he controls (and can falsify if he needs to). Anyone who doesn’t understand this is a fool. Do you agree that this was almost certainly the reason he walked on this crime? Yes. No? Okay, if No, please give an alternative. But my point is this: In the hundred or so hours of AI ‘debates’ and ‘symposia’ and so forth that I sat through, why is it that not one person mentioned anything about the Intelligence Community being in control of the development of Artificial Intelligence? 
But we’re talking about you here, aren’t we? Why is it, Mr. Harris, that you have never mentioned the main reason we have to fear AI: Its abuse by those who control the data (and the money)?
@movement2contact 5 years ago
Interesting. I've also always been worried about the technological power that the *worst* people always eventually get to use.
@Ivy-po8uq 4 years ago
Maybe no one wants to answer you because you’re being stupidly aggressive?
@rohithdsouza8 5 years ago
Resume @ 40:32
@distantyahoo 3 years ago
wasn't this guy in Terminator 2?
@eklim2034 5 years ago
Tech will always be a few steps ahead of policy.
@Feel_theagi 5 years ago
I love the look Lex gives the camera after being asked for a list of impactful non profits 34:16
@synt.3760 1 year ago
7:45 aged well