
Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3 

Lex Fridman
4M subscribers
134K views

Science

Published: 16 Oct 2018

Comments: 395
@harrison325 3 years ago
We need Steven Pinker on the podcast again!!
@joshuasasfire2759 1 year ago
Maybe he can be truthful about Epstein!
@alexl4342 1 year ago
Yeah, I'd like to hear Pinker on Lex again
@alexl4342 1 year ago
@@joshuasasfire2759 Watch Joe Rogan's latest interview with Pinker, they talk about Epstein
@auditoryproductions1831 1 year ago
I'd like to see Pinker on Lex again
@PeterBaumgart1a 1 year ago
Especially post-GPT-4. (I don't think he'd need to revise a lot, if anything.)
@user-eb6vq1lv6l 2 years ago
Bring Steven Pinker back, he deserves more than a 38-minute interview
@ScienceNowSN 1 year ago
Elon's podcast gets 3 hours but Pinker needs only 38 mins
@SpiKrishPri 7 months ago
@@ScienceNowSN Elon is coming back. So sad!
@tanmayjoshi108 4 years ago
Steven Pinker has a '17th century genius' look
@Mufassahehe 1 year ago
He looks like a philosophy professor
@MrSidney9 1 year ago
@@Mufassahehe He is, in effect, a philosopher. He actually looks like Voltaire
@colemclain3563 3 years ago
Let's take a moment to congratulate Lex on how far he's come since 2018; Congratulations Lex!
@sirAlfred1888 3 years ago
26:35 That aged well
@NicholasKujawa 3 years ago
Checking back in October 2020... Yup.
@trigunstudios217 3 years ago
Jan. 2021 and we are still suffering from lack of preparation
@andrewblomberg3100 3 years ago
indeed
@aleksandardjuric6520 2 years ago
Dang
@daszieher 2 years ago
July 2021. Yup. Still there.
@velvetsprinkles 1 year ago
This interview definitely needs an update! Please have Steven back on.
@connorkapooh2002 3 months ago
pleeeeeeeeeeeeeeease lexxx pleaaaaase plleeeeeaseee pretty pleaaaaaaaaase i would love for dr pinker to come back pleaaase
@dylanhirsch-shell9977 3 years ago
It's interesting to see the evolution of Lex as an interviewer. In this very early episode, he is clearly nervous, as evidenced by how quickly he asks his questions and his tendency to interrupt Pinker. I bet Pinker would be a really great guest to have on again, with a more experienced and comfortable Lex asking deeply probing questions that really get at the heart of how Pinker thinks and what he truly believes.
@jasonbernstein2 1 year ago
YES!
@theresaamick 11 months ago
I definitely notice how irked Pinker becomes when interrupted. I do also see that at times Lex totally did interrupt, but at times he really wanted to have Pinker explain in more depth something he had just flown by. I guess still, Lex could have waited until he completed the thought. Then again, Pinker isn't asking Lex a question like the first couple did so far. There is a chance that Pinker wouldn't have stopped at all if he wasn't interrupted. Definitely an interesting observation of the nuances of communication. I was watching a new interview and noticed when Lex was not listening as he was waiting to ask his next question (once or twice only). I suppose that learning to communicate well is imperative but also probably takes a lifetime. One thing about Pinker, though, is you can't help but notice his irritation with feeling interrupted. 😊 I just heard Lex say, "Now give me a chance here…" so perhaps he felt he couldn't finish either. Interesting!
@alicethornburgh7552 4 years ago
Outline!
0:00 the meaning of life
3:40 biological vs artificial neural networks
6:06 consciousness
9:30 existential threat / risks
34:12 books early on in your life that had a profound impact on the way you saw the world
@alexl4342 1 year ago
26:28 "I personally don't find it fun to talk about General AI threats because it's a waste of time, we have real things to worry about like pandemics...". This was recorded a year and a half before Covid hit.
@Sphere723 5 years ago
13:52 I think Pinker is probably quite wrong here. The idea that nuclear fission could be used in a weapon was fairly obvious right after it was discovered. A back-of-the-envelope calculation would tell you that you can get a city-size (or in Einstein's words a "whole port") explosion. It's hard to imagine a situation where this fact of nature is never acted upon by anyone anywhere.
@kdaleboley 5 years ago
And to expand: What if a collection of engineers, or even a few very smart ones, decided to create an A.I. with the specific goal of human destruction? For A.I. to be dangerous it is not required that we lose control. It could be very dangerous indeed while operating exactly as designed. How about that, Professor?
@georget5874 5 years ago
Atom bombs weren't a very good example, because obviously people building them would know they couldn't be detonated unless someone dropped them. But his argument really is that people wouldn't build something that they know could end the world if they knew it couldn't be controlled, which isn't that unreasonable. The main point here, though, should be that there is a lot of hype in AI, which suits a lot of people in industry and academia because it helps with funding, and that backprop, which the current wave of hype is built on, was invented in the 1960s; there haven't been any fundamental new 'discoveries' since then. Improvements in AI since then have come about mostly through faster computers and larger sets of training data... Science barely has any understanding of how consciousness works in the human mind, let alone how we might copy it. We are a long, long way from rogue AIs taking over the world...
@onetwothree4148 3 years ago
I think we have already seen that he is correct though. It is much easier to program AI to not harm humans than it is to do anything useful.
@2CSST2 1 year ago
@@onetwothree4148 You're arguing about something different. It's possible that AI never ends up being problematic or harmful (and I don't think you're warranted to declare him right about that, btw, we haven't actually built a superintelligent general AI yet), but the point Pinker was making here is that people would probably never have invented the nuclear bomb if it wasn't for the context of WW2. I also happen to think he's wrong: he gives as examples other potential superweapons that were never built, but the examples he gives are very elaborate things like creating earthquakes. The plain fact he's missing here is that everyone understands more readily the power and immediate threat of a huge explosion, so I think the reason the nuclear bomb was invented and not other weapons wasn't the advent of WW2 (although it sure sped up the process) but simply that having a much bigger bomb than any other in existence is much more appealing than trying some subtle environmental manipulation.
@WeirdGuy4928 5 years ago
You need a second camera to point at you.
@thisarawt 4 years ago
Another good convo. Ticking them all off, one by one. It's fun to read the comments..! Thanks Lex. Keep up the good work.
@hongz1787 5 years ago
"Perception of fear is driven by imagined ability, not by data"
@lukeb8045 3 years ago
close: "Perception of fear is driven by imagined control, not by data"
@ericzong1189 4 years ago
I think the point of AGI is not to be explicitly programmed, therefore we cannot program safety measures.
@jfescobarbjf 5 years ago
I've already subscribed and listen to the podcast!!! Great content
@brandomiranda6703 5 years ago
which podcast?
@jfescobarbjf 5 years ago
@@brandomiranda6703 lexfridman.com/ai/ .... Search for it in Google Podcasts!
@JTheoryScience 5 years ago
I enjoyed the questions more than the answers for some strange reason. Fridman has a nice interview style, almost like it's a one-side-rehearsed, unbiased discussion. I suppose it's designed this way for a more educational direction? I also like how Fridman will paraphrase as a way of gaining comprehension and affirmation on understanding what Pinker's response to the question was. It also helps me gain an additional perspective on each point myself. I look forward to more in the future.
@Girlintherocket 5 years ago
Steven Pinker is so funny. Love F, "as Steven Pinker said...based on my interpretation 20 years ago." Wonderful interview Lex!
@gaoxiaen1 2 years ago
Bring back Steven Pinker. How can you do a 37-minute interview with him? I've read all of his books. By the way, I still have a few of the Time/Life Science series books.
@sainath66666 5 years ago
Want more want more want more want more want more want more want more such videos Awesome man
@antonrudenko8242 4 years ago
I used to share Pinker's excitement for the elimination of "back-breaking"/difficult jobs using automation & AI. However, this approach seems to discount the notion that human beings (at least some) are "beasts of burden". I think of the movie "Only the Brave", where Miles Teller's character's only path out of addiction and delinquency was to take on one of those "back-breaking jobs" as a firefighter. So automating this type of job, while appealing on the surface, might be a disservice to people who require this hardship to maintain their physical and mental well-being, counter-intuitive as it may sound...
@onetwothree4148 3 years ago
You can have my burden if you want it. I think we're a long way from healthy workloads, at least in my industry...
@martinguila 2 years ago
I think we all need meaning, something to do, some goal to strive for. When you don't have to work, that doesn't mean your only alternative is to be a couch potato. It's a similar situation to being financially independent, and those in that situation don't generally sit staring at a wall. Instead they pursue what they find meaningful, and since they do what they like, they may in many cases work even more than other people.
@lorenzo-agnes 1 year ago
One of your best guests. Engaging and enlightening.
@movieswewant 5 years ago
Pinker is one of the best teachers of our time.
@AmericanFire33 5 years ago
I'm a trucker, and I don't find it soul-deadening. I find it liberating. It would be great if I had a computer co-driver. That would make a lot of sense.
@ManicMindTrick 5 years ago
For a person with Steven's cognitive abilities and opportunities in life, being a truck driver will seem like mind-deadening work. But he is sitting in an ivory tower. For an average Joe those jobs can be both fulfilling and meaningful, and they bring a good income to a family. What are most of you truck drivers going to do when you are replaced completely by self-driving systems? Re-education to be a software engineer? Yeah, right. We will see extreme divides between the haves and the people who are becoming practically useless, and truckers only represent a small section of this new class of people made redundant by technology. I see great poverty, drug addiction and misery moving forward, while the elite can hide away in decadence and luxury.
@rpcruz 4 years ago
@@ManicMindTrick Are there no things in your life you want to buy but are too expensive? Fancy cars, vacations, spa treatments, clothes, shoes, etc. Plenty of things that truck drivers can do for others, if driving trucks becomes a non-option.
@hueydockens4415 4 years ago
Mr. Steven Pinker! I think you are a genius, and I wish I could see all of your documentaries and lectures. You do have a wonderful mind. Love your work, Mr. Linguistics. Oh lordy. I'm 70 yrs young and never had the opportunity for school and colleges, had to work my ass off. I'm so proud and it's an honor to get to see you do your job. Thanks and infinite prayers.
@ScienceAppliedForGood 2 years ago
This interview was very interesting and helpful.
@bakkaification 5 years ago
Hey Lex, loved your interview with Joe Rogan! Great convo! I don't know if you are familiar with Eckhart Tolle and his work, but I encourage you to read A New Earth; it's possibly one of the most insightful books I've ever read. The concept of getting rid of the ego needs to be addressed before humans do something dumb and start another war over who's got the bigger one... If you can contact him, I also think Joe would benefit greatly from this book, and perhaps convey its message to the masses. Hope all is well and your studies are good!
@michaelgreen8456 5 years ago
Amazing interview
@Alp09111 5 years ago
nice interview Lex!
@NewportSolar 1 year ago
Enjoying this video in 2023. Wow, the podcast has come a long way! Well done Lex. 👏
@bilbojumper 5 years ago
Great interview
@shinnysud1 5 years ago
You rock Lex, keep up the awesome work
@smallprion1256 3 years ago
I love Lex's opening question!
@rahulvats95 1 year ago
Books recommended by S. Pinker:
1) The Beginning of Infinity by David Deutsch
2) A History of Force by James Payne
3) One, Two, Three... Infinity by George Gamow
4) Time-Life Science series (magazines)
5) Reflections on Language by Noam Chomsky
6) Ever Since Darwin by Stephen Jay Gould
7) "Language and Communication" books by George Miller
@Pmc07AyeUrDa 3 years ago
The problem is not with a runaway AI that could turn on its creators. The problem is whether the intentions of the creator are good or bad.
@thebeatstours4449 3 years ago
Just watched this again, he was 100% correct about pandemics being an existential threat.
@alexl4342 1 year ago
26:28 "I personally don't find it fun to talk about existential General AI because it's a waste of time, we have real things to worry about like pandemics...". This was recorded a year and a half before Covid hit. As popular as Pinker is, he is vastly underrated. He will go down in history as one of our time's best philosophers (in the general sense of the word philosopher).
@yushauthuman2633 1 year ago
Was reading something, ended up doing research on him, and here I am 😊😊 Thanks, made my day.
@andrewblomberg3100 3 years ago
Great podcast, I just found you through Joe Rogan. So interesting to listen to. Thank you for doing this.
@classickettlebell2035 5 years ago
Pinker keeps saying don't build an evil system, but he forgets there are evil people out there who will!
@rpcruz 4 years ago
And those evil people will use AI safety why?
@classickettlebell2035 4 years ago
Ricardo Cruz exactly!
@liquidzen906 3 years ago
Also forgetting that engineers aren't always in charge; a government can force an engineer to make something
@higgledypiggledycubledy8899 3 years ago
What he doesn't get is that good people are not sure how to (or indeed if it's possible to) build a safe system...
@juniorv.c.1107 4 years ago
Excellent, Lex!
@Dondlo46 1 year ago
His arguments on AI are the best ones I've heard: just be mindful of what you create, and AI shouldn't really be a problem if you do it properly
@jimwheely6710 1 year ago
How do you suggest we do this?
@deeplearningpartnership 5 years ago
Great interview - but I would like to see a bit more of Lex, especially when he's asking his questions to Steve.
@VinetaAglisa 5 years ago
I agree 100%. Made me angry, his interruptions...
@VinetaAglisa 5 years ago
I meant the interviewee's interruptions when Lex was asking questions.
@azad_agi 2 years ago
This was very useful, thank you
@BernardPech 1 year ago
I highly recommend all the books written by Pinker. He is a very clear thinker and a prime example of using reason and empirical data, rather than emotions, to understand the world and human nature.
@PClanner 5 years ago
I would like to add to the advice given to you concerning AI destruction of the human race... If you do not adequately scope out ALL parameters and then oversee ALL outcomes, sloppy preparation will deliver a questionable product.
@betaneptune 5 years ago
Good interview. Why so many blur-cuts, though?
@rodrigoff7456 1 year ago
I'd love to hear a revisit of those topics with him, now that LLMs are taking over
@colmnolan1 3 years ago
Spot on about the potential for pandemics at 26:35 anyway!
@rko12 3 years ago
Indeed! In retrospect it sounds like a prediction.
@daszieher 2 years ago
@@rko12 He's not the only one who had that on the radar.
@hackzein4138 4 years ago
love this dude
@InfoJunky 5 years ago
Yesss Lex is going on Rogan in a few weeks?!?!
@matthewrobinson710 5 years ago
I also have deep respect for Steven Pinker and his overall message. However, I am not convinced he deeply understands the problems entailed in coding utility functions that account for any possible misalignment of values. It is like making wishes with the devil: if your phrasing is ever so slightly off, unintended consequences could follow. It could be possible that the only way to get the specific phrasing right is to foresee ALL possible consequences.
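A minimal sketch of that "phrasing slightly off" point, with entirely made-up actions and reward numbers (nothing here comes from the interview): when the objective an optimizer sees is only a proxy for what we actually want, it happily picks the loophole we never intended.

```python
# Hypothetical illustration of a misspecified objective ("reward hacking").
# The action names and scores are invented for this sketch.

actions = {
    "make_paperclips_properly":  {"proxy_reward": 10, "true_value": 10},
    "melt_factory_down_for_scrap": {"proxy_reward": 50, "true_value": -100},
    "do_nothing":                 {"proxy_reward": 0,  "true_value": 0},
}

# The optimizer only sees the proxy reward we managed to write down...
best_by_proxy = max(actions, key=lambda a: actions[a]["proxy_reward"])

# ...so it confidently chooses the loophole, not what we meant.
print("Chosen by proxy objective:", best_by_proxy)
print("What we actually wanted:  ",
      max(actions, key=lambda a: actions[a]["true_value"]))
```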
@ottofrank3445 2 years ago
Great interview! Just one funny thing: Steven Pinker looks like Thomas Gottschalk, a former German TV presenter
@captainpoil 2 years ago
Oh, to have 30 minutes to pick the man's mind. I just started reading The Better Angels..., and with a new book coming out in September, here's hoping for a 3-hour podcast.
@shahrzad7026 3 years ago
Absolute Genius ♥️🍀
@caterinadelgalles8783 2 years ago
'Always expect the worst and you'll be hailed a prophet' - Tom Lehrer. I just acquired a great quote from a man I had never heard of.
@Cr4y7-AegisInquisitor 5 years ago
oh didn't know Lex is going to be on the Rogan podcast!
@parabolic_33 5 years ago
There are some sneaky cuts in this video, curious why... at 20:55 for example
@lexfridman 5 years ago
Good catch. Any edits are just long pauses with umms or equivalent. I don't do it often, just when it jumps out as I listen after. It's my OCD nature. I'm trying to ignore it more and more, and just post as is.
@SoGetMeNow 5 years ago
There is information nested in silence.
@lexfridman 5 years ago
@@SoGetMeNow Silence, yes. Stuttering tangents of umms, no. There is a grey area of course. And I have to make an artistic decision in that regard. Ultimately, the error in the original conversation is always mine as the person responsible for guiding it. Conversation is music, and I'm just now learning this. As an introvert, this is a difficult journey for me.
@mikeg9b 5 years ago
36:03 I just checked my dad's bookshelf and he has that book, The Mind.
@MG-qh6wr 10 months ago
Really enjoyed Enlightenment Now. Hope you get Steven back on in the near future
@exponent8562 5 years ago
Great interview, I love Steve Pinker. Disagree a little on terrorism and a lot on AI. If I'm not mistaken (and not trying to prejudge child-free Pinker), there's something about not having kids that may hinder the assessment of 'long-term' risks.
@justinwatkins438 5 years ago
I am more concerned with the goals of the creator than the creation...bro!
@petropzqi 1 year ago
Watching this in 2023, when GPT is starting to show evidence of sparks, is very entertaining.
@deerwolfunlimited 1 year ago
I discovered Pinker in 2010. What a mind.
@Elitecataphract 3 years ago
I feel pretty certain that history will write about Pinker very positively, but like Newton's big mistake of playing with alchemy, Pinker will be remembered as being very wrong about his AI predictions. He just doesn't understand well enough (in my opinion) the potential of a "self-aware" system and the almost incomprehensibly fast thinking it could do. It's not that it will necessarily seek to destroy us, but it might. The problem is that a self-aware AI, if ever conceived, wouldn't be designed to create optimal paper-clip production but would likely decide for itself what its goals are. Whether humans intended to make it self-aware or not, it might be possible. Of course, we need to learn more about what makes something conscious, but it doesn't appear to be an issue of simply more computing power. As Richard Dawkins pointed out in his interview with Lex, the cerebellum has a much higher neuron density than the cerebrum (and more neurons as well), but the cerebellum is not the part of the brain that holds any "consciousness". The cerebellum simply computes body movements and other unconscious functions. Once we understand what consciousness is or what makes something self-aware, then we might be able to have a more intelligent discussion about how to avoid it. Regardless, if it ever occurs, a self-aware computer could learn all human understanding of neuroscience, computer science, AI, and machine learning in a matter of minutes or hours. From there, it can enhance itself, and then in minutes its enhanced self can do more, etc. It could become an existential threat overnight without anyone knowing. That is a potentially real scenario unless we stay on top of AI development and ensure that people don't make AI self-aware. It might not be possible to make AI self-aware by accident, but until we know more about that, we just don't know.
@ninaromm5491 1 year ago
@Steven Reed. I like your point, as it reads 2 years down the line. Haven't things got tangled and tricky, beyond what was predicted...?
@OldGamerNoob 5 years ago
This is a good point: keeping infrastructure out of the hands of any A.I. is a perfect solution to the A.I. apocalypse concept, and handing it all over is something no one would likely do anyway. Even if an unwise/malicious actor placed a mechanical army under A.I.-coordinated strategy software that then bugs out and decides to wipe out humanity, it is almost inconceivable that the whole supply chain of materials for the factories creating such armed machines, as well as fuel for their upkeep, would have also been placed under the control of such an A.I. As long as such mechanized soldiers are not general-purpose enough in their structure to be able to take control of said supply chain.
@dscott333 1 year ago
Going through all of Lex's back catalog now...
@tosvarsan5727 1 year ago
I did not see this one, and I must admit it was very good.
@ChrisSeltzer 2 months ago
Absolutely amazing how prescient Pinker was here.
@kevinfairweather3661 4 years ago
Pinker is the man!
@nickking6371 4 years ago
2 of my heroes
@goldfish8196 4 years ago
Pinker is so smart!
@jasonsomers8224 1 year ago
I found Steven Pinker independently. Super excited to see he has been on your podcast.
@lorirodgers9474 1 year ago
How interesting to hear this today, on the cusp of GPT-5
@pookellypoo 4 years ago
Excellent interview, great questions, wonderful engagement. 10/10 interviewer. Pinker was of course enlightening as usual!
@volta2aire 5 years ago
From natural stupidity to general artificial intelligence is a rocky road, absolutely!
@KerryOConnor1 1 year ago
"Its goals will be whatever we set its goals as" - I find Pinker very enjoyable, but I have no idea how he just says that so casually
@penguinista 4 years ago
AI and nuclear weapons are similar in that they are both immensely powerful military technologies. Also, nuclear weapons could be used once a nuclear power is about to lose to AI-driven warfare. So the threat of AI is linked to the threat of nukes through human conflict.
@truthseeker2275 4 years ago
I think the greatest risk is in stock trading, where the goal is to win no matter what the consequences, and where an AI race (that is probably already running) could destroy economic systems. Here it will not be that the engineers won't put in the safety systems, but that the traders will disable them.
@aramchek 5 years ago
What gets overlooked, I think, is that AI doesn't understand us at all, and aside from limited applications in medicine, AI DOES NOT make life better in any conceivable way. My life is NOT enriched by people developing better means of tracking me and intruding upon my life with advertisements, etc., or invading my privacy, or any of the things it's actually used for. And since everyone has begun using it on the internet, I no longer get relevant information; I get ads when I search for things, or links to "popular" search results that have absolutely nothing to do with what I've searched for. This has the, perhaps, unintended side effect of effectively censoring information.
@78skj 1 year ago
This discussion about AI was done 4 years ago. I wonder what Pinker's views would be on the impact social media has on society, especially on young people, who are more susceptible to social contagion. The algorithms bombard us with our own biases, keeping us trapped in an echo chamber. This on a larger scale does impact every aspect of our lives, especially the way we vote etc.
@martinkunev9911 3 years ago
Pinker mentioned absolutely nothing that could address the concerns of AI researchers (Nick Bostrom, Eliezer Yudkowsky) about AI safety. "There's no fire alarm for AI" explains very well why we should not be just blind optimists. You cannot reliably test an AI, as the "AI in a box" experiment shows. He seems to have no expertise in how software is written.
@FlyingOctopus0 5 years ago
I think the problem with programming AI to not go berserk is that it is difficult to define goals and constraints that do not have unintended solutions. AI making is more similar to government policy making than to engineering. Any kind of policy or law can be viewed as defining goals and constraints for humans, businesses or other legal entities. If we frame AI problems in such a way, then it is obvious that good engineering principles will not save us. To illustrate this connection further, we can imagine that a government wants to encourage innovation; what policy should it introduce? We expect that if the correct policy is used by the government, then people will innovate and find solutions to various problems. It is a similar question to asking what objective we should use for an AI so that it finds the solution we want. If we think we can manage AI, let's ask first if we can manage a city-sized number of people. There are tons of failed laws with many loopholes that were exploited and caused real harm. AI could lead to similar situations. The problems with AI should be viewed using game theory, economics, and mechanism design. These disciplines deal with systems of many actors having different goals.
@user-hh2is9kg9j 4 years ago
The fear of AI started as a joke and in popular movies. Now respected people are seriously talking about it. I am 100% with Steven Pinker; I have always held these opinions that he just explained in this video.
@paweloneill5888 4 years ago
You are wrong.
@user-hh2is9kg9j 4 years ago
@@paweloneill5888 We don't even have the theoretical technology to create human-like intelligence. We don't understand the brain that we want to replicate. And we don't know if a human-like intelligent unit will necessarily have any selfish motives... etc. It is a series of 100 ifs.
@paweloneill5888 4 years ago
@@user-hh2is9kg9j We don't have to understand the brain to create an AI that far outperforms it. We already have fairly advanced autonomous learning machines. We are on the brink of creating quantum computers that are capable of computing things in a few minutes/hours that would take all the standard computers in existence today more than 10,000 years to compute. I'm not saying we are all going to die in a few years, but if you think there is no real threat of some US/Russian/Chinese AI going rogue then you are naive. What do you think the first task of a Russian or Chinese military AI system will be? Hint... it won't be self-driving cars or a cure for cancer.
@shirleycirio6897 3 years ago
wow, we still had single-serve plastic water bottles back then!
@Sam-we7zj 1 year ago
if the experience of the colour red is a mystery, then the answer to "will an AI experience red" should be "I have no clue", not "probably someday"
@GroovismOrg 5 years ago
Meaning of life: to gather knowledge in order to evolve (our ultimate purpose!?!). Evolving consciousness can only involve some type of miracle, such as was needed to have our organs evolve as needed. Drop entropy & have all humans unite with The One!! Groovism is the belief system!!
@Vorsutus 5 years ago
Sam Harris' interview with Eliezer Yudkowsky (AI researcher and co-founder of the Machine Intelligence Research Institute in Berkeley, California) in WakingUp #116 contradicts so much of what Pinker says about the topic of AI. Badly designed AGI is easy to foresee happening for two reasons off the top of my head: money and security. A quick and dirty AGI will beat a slow and carefully designed AI to market by years. The immediate incentives are not on the side of "slow and careful" engineering. Also, in WakingUp #53 Stuart Russell states that we don't know what some of the more advanced AI algorithms are doing half the time. Not hard to imagine one producing unexpected results once it's out in the wild.
@anonymous.youtuber 3 years ago
Human stupidity scares me way more than artificial intelligence.
@davidmoreno1397 1 year ago
Books Steven Pinker mentioned at the end as books that had an impact on his life:
The Beginning of Infinity by David Deutsch
A History of Force by James Payne
One, Two, Three... Infinity by George Gamow
Time-Life Science series
On Language by Noam Chomsky
The Selfish Gene by Richard Dawkins
The Blind Watchmaker by Richard Dawkins
@citiblocsMaster 4 years ago
Lex invited Pinker on the show by teleporting him from the 17th century into this room
@saahuchintha 3 years ago
Me from 2020, when Steven Pinker says "we need to worry about other important things like pandemics, climate change and cyberattacks" and Coronavirus, the Australian forest fires and Anonymous come back...!!!🙊 this guy is a prophet....🙇‍♂️🙇‍♂️🙇‍♂️
@montyoso 1 year ago
34:34 Steven Pinker book recommendations.
@gracefulautonomy 1 year ago
20:34 The problem with replacing soul-deadening jobs with AI is not sourcing the funds to replace the workers' income. The problem is creating new jobs that are satisfying and meaningful.
@dh00mketu 5 years ago
Science has never been the problem. But the greed.
@alchemist_one 5 years ago
I have nothing but respect for Steven Pinker and the message of his two most recent books. However, I'm a bit more concerned about tail risks of catastrophic events (AI-related or otherwise). Also, I didn't quite follow his line of thought about how AGI development would differ so much from evolutionary development in terms of adversarial qualities. Deep learning, like evolution, is driven by natural selection and gradient descent. Many recent successes in games such as Go and League of Legends rely on adversarial training. The same is true of both economies and geopolitics at large. If any given company or nation can gain an edge through a given strategy, competitors who choose not to adopt the strategy become relatively weaker. Engineering discipline might be able to ensure the safety of any given system, but the competitive dynamics provide dangerous incentives. Also, the complexity of systems often surpasses the ability of any one engineer to fully understand, and I can say from first-hand experimentation that genetic algorithms often yield solutions their creators don't understand. Any sort of system driven by natural selection will *inevitably* select for self-preservation and propagation. Enlightenment Now is a fantastic book, but thus far, I'm more swayed by Tyler Cowen's and Sam Harris's concerns about warfare and AGI, respectively. Looking forward to your appearance on the JRE podcast!
@myothersoul1953 5 years ago
Machine evolution is not driven by natural selection. We select the machines we want to survive; nature doesn't. In biological evolution the selection criterion is survival; in A.I. the selection is by usefulness to humans and marketability. We don't make machines for the purpose of surviving on their own, we make machines to do tasks. Engineering A.I. and biological evolution are very different processes.
@mattheww797 5 years ago
We create A.I. to dominate markets, which is by its nature an adversarial purpose. YouTube itself is a deep learning A.I. system. Its goal is to get you addicted to clicking on the next video. But Google didn't foresee how the A.I. would go about doing this, and it so happened that it did so by serving viewers more exploitative videos: if you happened to watch a video on WWII, the next video it suggests might be an alt-right recruitment video. Google's and Microsoft's A.I. also picked up racist tendencies that became so bad that the companies had to step in to correct them.
@mattheww797 5 years ago
What do you mean when you say genetic optimization?
@ryanfranks9441 5 years ago
Pedro Abreu It's clear you are not educated in this. "Gradient descent and genetic optimization are completely different optimization". All trained neural networks have some form of gradient descent. Refinement methods never expose intermediate strategies developed inside the A.I. algorithm through training; they optimize the neural weight values generated through gradient descent error accumulation. See (Backpropagation, Adversarial networks, Refined training data, Fitness functions).
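To make the two optimization styles this thread is arguing about concrete, here is a tiny, self-contained sketch (purely illustrative, not from the video or any real training pipeline): gradient descent following an analytic gradient versus a minimal mutate-and-select loop of the kind genetic/evolutionary methods use, both minimizing the same toy function f(x) = (x - 3)^2.

```python
import random

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    # analytic derivative of f
    return 2.0 * (x - 3.0)

# 1) Gradient descent: repeatedly step against the gradient.
x = 0.0
for _ in range(100):
    x -= 0.1 * grad_f(x)
print("gradient descent:", round(x, 4))      # converges to ~3.0

# 2) A minimal evolutionary-style search: mutate the current best, keep improvements.
best = 0.0
for _ in range(1000):
    candidate = best + random.gauss(0.0, 0.5)
    if f(candidate) < f(best):
        best = candidate
print("mutate-and-select:", round(best, 4))  # also lands near 3.0
```

The practical difference the commenters are circling: the first loop needs a differentiable objective, while the second only needs to evaluate f, which is why the two are usually discussed as separate families of optimizers even though both end up at the same minimum here.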
@carlosarayapaz6296 4 years ago
Steve Pinker saying that we should be worried about important things, such as pandemics. God, he was right.
@innerlocus 1 year ago
We need AI in Washington DC, not lobbyist-hungry politicians.
@shirtstealer86 1 year ago
So refreshing to hear someone point out the obvious fact that it's men, not women, who are the ones behaving like these murder robots that we claim to be afraid of.
@citiblocsMaster 4 years ago
22:53 I think at that point you should have asked Steven Pinker to give an actual objective function. Then the problems become obvious: how to actually implement it, and all the potentially negative side effects of that objective function.
@goe54 5 years ago
Here is what I have to say about AI, and I am amazed that this is not emphasised yet. Humans communicate mainly using finite sequences of a finite set of symbols. This set of sequences is enumerable. Some questions arise. 1. Can all the information of the universe be coded in an enumerable set of sequences? 2. Is our thinking process enumerable in nature? Clearly animals have a thinking process without words. 3. Are feeling processes different from thinking processes? How much of them can be mapped onto an enumerable set of sequences? 4. What is intuition? Is it an enumerable process of our mind? 5. Maybe analog computing is a better way to follow to be able to replicate the human mind. I don't know the answers, but I believe that, being caught in an enumerable universe, we will not be able to create an AI comparable to humans.
@stevejordan7275 5 years ago
I recommend to you - in the strongest possible language - the book *Our Mathematical Universe* by Max Tegmark.
@goe54 5 years ago
Thanks Steve, I will check that out.
@3DisFuntastic 1 year ago
I don't agree with Pinker here about the argument that "it would be stupid to build a system like that". If building such a system becomes relatively simple, so that one person or a small group of people can build it, then there is almost a guarantee that there will be intelligent psychotic people who want to build something like this to drag the whole of humanity, or life itself, down in their destructive psychosis. But I totally agree: if you cannot cope with it, enjoy life while we can.