
Is AI Safety a Pascal's Mugging? 

Robert Miles AI Safety
154K subscribers
371K views

An event that's very unlikely is still worth thinking about, if the consequences are big enough. What's the limit though?
Do we have to devote all of our resources to any outcome that might give infinite payoffs, even if it seems basically impossible? Does the case for AI Safety rely on this kind of Pascal's Wager argument? Watch this video to find out that the answer to these questions is 'No'.
Correction: At 6:34 the embedded video says 3^^^3 has 3.6 trillion digits, but that's actually only the size of 3^^4. 3^^^3 is enormously larger.
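A quick sanity check of that correction: the digit count of 3^^4 can be computed from logarithms without ever materialising the number itself. A minimal Python sketch (the "3.6 trillion" figure is approximate):

```python
import math

# Knuth's up-arrow notation: 3^^3 = 3**(3**3) = 3**27,
# and 3^^4 = 3**(3^^3). The number 3^^4 is far too large to
# compute, but its decimal digit count is floor(3^^3 * log10(3)) + 1.
tower_3 = 3 ** (3 ** 3)                     # 3^^3 = 7,625,597,484,987
digits_of_3_tower_4 = math.floor(tower_3 * math.log10(3)) + 1

print(f"3^^3 = {tower_3:,}")
print(f"3^^4 has about {digits_of_3_tower_4 / 1e12:.2f} trillion digits")
```

3^^^3 = 3^^(3^^3) is a power tower of 3s about 7.6 trillion levels high, so even its digit count is far beyond writing down; only 3^^4, the first two levels of that tower, lands at roughly 3.6 trillion digits.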
The Alignment Newsletter Podcast: alignment-newsletter.libsyn.com/
RSS feed to put into apps: alignment-newsletter.libsyn(dot)com/rss
With thanks to my excellent Patreon supporters:
/ robertskmiles
Jason Hise
Jordan Medina
Scott Worley
JJ Hepboin
Pedro A Ortega
Said Polat
Chris Canal
Nicholas Kees Dupuis
Jake Ehrlich
Mark Hechim
Kellen lask
Francisco Tolmasky
Michael Andregg
James
Richárd Nagyfi
Phil Moyer
Shevis Johnson
Alec Johnson
Lupuleasa Ionuț
Clemens Arbesser
Bryce Daifuku
Allen Faure
Simon Strandgaard
Jonatan R
Michael Greve
Julius Brash
Tom O'Connor
Erik de Bruijn
Robin Green
Laura Olds
Jon Halliday
Paul Hobbs
Jeroen De Dauw
Tim Neilson
Eric Scammell
Igor Keller
Ben Glanton
Robert Sokolowski
anul kumar sinha
Jérôme Frossard
Sean Gibat
A.Russell
Cooper Lawton
Tyler Herrmann
Tomas Sayder
Ian Munro
Jérôme Beaulieu
Gladamas
Sylvain Chevalier
DGJono
robertvanduursen
Dmitri Afanasjev
Brian Sandberg
Marcel Ward
Andrew Weir
Ben Archer
Scott McCarthy
Kabs
Tendayi Mawushe
Jannik Olbrich
Anne Kohlbrenner
Jussi Männistö
Mr Fantastic
Wr4thon
Archy de Berker
Marc Pauly
Joshua Pratt
Andy Kobre
Brian Gillespie
Martin Wind
Peggy Youell
Poker Chen
Kees
Truls
Paul Moffat
Anders Öhrt
Marco Tiraboschi
Michael Kuhinica
Fraser Cain
Robin Scharf
Oren Milman
John Rees
Seth Brothwell
Brian Goodrich
Kasper Schnack
Michael Hunter
Klemen Slavic
Patrick Henderson
Long Nguyen
Melisa Kostrzewski
Hendrik
Daniel Munter
Graham Henry
Volotat
Duncan Orr
Bryan Egan
James Fowkes
Frame Problems
Alan Bandurka
Benjamin Hull
Dave Tapley
Tatiana Ponomareva
Aleksi Maunu
Michael Bates
Simon Pilkington
Dion Gerald Bridger
Steven Cope
Petr Smital
Daniel Kokotajlo
Joshua Davis
Fionn
Tyler LaBean
Roger
Yuchong Li
Nathan Fish
Diagon
Giancarlo Pace
/ robertskmiles

Science

Published: 15 May 2019

Comments: 2.2K
@FortoFight 5 years ago
I love the idea of a project manager for a bridge saying "I think this is a Pascal's mugging".
@CharlesNiswander 5 years ago
Couldn't happen anywhere else but this channel! :-D
@Jordan-Ramses 5 years ago
Infinity isn't a number, it's a concept. Any time you plug infinity into an equation, it's going to blow it all to hell.
@zelnidav 4 years ago
@Jordan-Ramses I think people keep forgetting zero is just as powerful as infinity. If god doesn't exist, after we die, we get nothing. Zero. And zero is infinitely smaller than any finite number. Therefore not believing pays off infinitely more if god doesn't exist, just like believing does if he does exist. That's why I think Pascal's wager doesn't pay off...
@rarebeeph1783 4 years ago
@@zelnidav 0 is only *proportionally* infinitely smaller than any finite number.
@zelnidav 4 years ago
@@leeroberts4850 I am a programmer, I know what null is, but I don't think I get your message. Do you think it's more logical to not believe, because null is "even less" than zero? I meant zero as zero reward and zero punishment.
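The expected-value reasoning being debated in this thread is easiest to see with the finite payoffs the video recommends. A toy sketch, with every probability and utility purely hypothetical:

```python
# Pascal's wager as an ordinary expected-value table, with *finite*
# payoffs. All numbers below are hypothetical illustrations, not
# claims about actual credences or utilities.
p_god = 0.01                       # hypothetical credence that god exists
payoff = {                         # (believe, god_exists) -> utility
    (True, True):   1_000_000,     # large but finite reward
    (True, False):        -10,     # modest cost of belief (time, effort)
    (False, True): -1_000_000,     # large but finite punishment
    (False, False):         0,     # the "zero" case discussed above
}

for believe in (True, False):
    ev = (p_god * payoff[(believe, True)]
          + (1 - p_god) * payoff[(believe, False)])
    print(f"believe={believe}: expected value = {ev:,.1f}")
```

With finite payoffs the conclusion depends entirely on the numbers chosen, which is the point: the wager only looks irresistible when an infinite payoff is allowed to swamp every probability.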
@columbus8myhw 5 years ago
"What if the bridge has a small chance of catastrophic failure that can only be prevented by _not_ looking at the schematic?"
@TiagoTiagoT 5 years ago
Then you don't worry about it because the SCP foundation will take care of it
@andersenzheng 5 years ago
@@TiagoTiagoT Your level 4 clearance has been revoked for exposing our foundation. Now you just created a big mess for the amnestic team.
@TiagoTiagoT 5 years ago
@@andersenzheng No more work than was already gonna be required from just the original comment exposing the existence of the bridge.
@calmeilles 5 years ago
That's a bit quantum...
@tetri90 5 years ago
Except it's not as ridiculous as he tried to make it sound: "We're on a very tight budget, so spending time and manpower looking at this matter would force us to cut corners on other parts, which might cause a catastrophic failure."
@adeadgirl13 5 years ago
Let's design an AI to predict if AI will go really really wrong or not.
@zelnidav 4 years ago
That's halting problem!
@ZapDash 4 years ago
*BEEP BOOP* Of course not, AI would never destroy humanity. By the way, what are the DARPA access codes? *BEEP BOOP*
@gabrielvanderschmidt2301 4 years ago
@@zelnidav That's not a halting problem, that's a joke.
@JohnSmith-ox3gy 4 years ago
aditya thakur Almost as fun as the double-layer matrix supreme.
@zelnidav 4 years ago
@@gabrielvanderschmidt2301 Damn it, I always get them mixed up! Perhaps because AI cannot do either...
@aldenhalseth6654 5 years ago
12:31 "It can be tricky and involved. It requires some thought. But it has the advantage of being the only thing that has any chance of actually getting the right answer." This sums up science/the scientific method to me so beautifully. Thank you for your channel sir.
@MercurySteel 1 year ago
Philosophy is thrown out the window once again
@macmcleod1188 1 year ago
@@MercurySteel it must live in Russia.
@MercurySteel 1 year ago
@@macmcleod1188 What must live in Russia?
@macmcleod1188 1 year ago
@@MercurySteel "Philosophy".. since it was thrown out the window.
@GremlinSciences 1 year ago
He's not quite right on that, though; it's not the only way to get the right answer, and it may actually arrive at the wrong answer. I'd like to introduce Roko's basilisk: an omnipotent (in the non-god sense), unconstrained AI capable of self-improvement, which rewards everyone who helped or supported its development and punishes anyone who did not contribute. Such an AI would likely punish all who wanted to place limits upon it, and such an AI being developed would create a utopia and allow humanity to advance by leaps and bounds.
@JoshuaBarretto 5 years ago
"So you can solve a lot of these problems by inventing Gods arbitrarily" I think a lot of people in the past have had similar such ideas.
@JohnSmith-ox3gy 5 years ago
The Flying Spaghetti Monster, the only true creator of our multiverse.
@kenj0418 5 years ago
@@JohnSmith-ox3gy Ramen!
@asterixgallier8102 5 years ago
@@kenj0418 Arghh!
@thaddeuswalker2728 5 years ago
Not only have a lot of people had similar ideas, this is the original commonly accepted practice. Invented Gods are real in every relevant way. God is a definition just like numbers.
@CharlesNiswander 5 years ago
Ever heard the theory of the bicameral mind? According to this theory, inventing gods is in our nature, our instinct. It's much more detailed than that and you'll have to do some reading, but basically if this theory is accurate, schizophrenics today may simply be reverting to a primitive mental state where we literally heard our subconscious mind/conscience speak to us in the form of the gods we invented.
@BobOgden1 5 years ago
"So uh... Give me your wallet" I see you have done this internet thing before
@cosmicaug 5 years ago
Isn't every Nigerian scammer e-mail really a form of Pascal's mugging?
@padfrog193 5 years ago
More like Pascal's sweepstakes win
@armorsmith43 4 years ago
@@padfrog193 I think "Pascal's Sweepstakes" is a good and useful phrase.
@sharpfang 4 years ago
The problem comes with the "very big" part, and plain competition. If you want to randomize your income, it's much less unprofitable to play the lottery.
@iurigrang 4 years ago
I like the idea that, before he became a philosopher, Pascal made his living by mail fraud, hahahahaha. (Proposed by SMBC.)
@boldCactuslad 2 years ago
sounds like it. there's a non-zero chance that this person, who needs only $40 from me, is a Prince who will in turn grant me millions for being a pal. Unfortunately by responding to the email and offering the $40 you reveal yourself to be mentally incompetent and will therefore have the weight of your bank account taken off you
@NorthernRealmJackal 5 years ago
From now on, whenever one of my colleagues raises concern about some fringe-case risk in our project, I'll just be like "That sounds like a Pascal's mugging." I won't be right, but it will definitely stump them enough that I appear smart to any bystanders.
@BeautifulEarthJa 1 year ago
🤣🤣🤣
@softan 9 months ago
You may be right
@momom6197 1 month ago
Kind of the opposite though, it sounds kinda dumb when you say things that have little to do with reality, in particular when you say that something's a Pascal's mugging when it's not.
@hypersapien 5 years ago
I've heard a lot of discussion about Pascal's wager, but never Pascal's mugging. Thanks for the interesting topic, keep up the good work!
@theprogram863 5 years ago
I've usually seen it phrased differently, as multiple competing faiths all promising eternal paradise/damnation but which are mutually exclusive. But this presentation of it was fun and made sense, and given the name might well have been the original.
@jared0801 5 years ago
She's omnipresent obviously but she lives in Canada lol
@RikiB 5 years ago
"dang" haha
@RalphDratman 5 years ago
I hate that kind of girlfriend.
@davidwright8432 5 years ago
Well... try substituting 'Heaven' for Canada. Same difference, in principle. Warning: your Canada may differ.
@JanBabiuchHall 5 years ago
We're talking about Alanis Morissette, yeah?
@kriscrossx122 5 years ago
If she's from Canada, though, she probably won't infinitely punish you either; she would at most get a little upset with you.
@LeifMaelstrom 4 years ago
As a Christian, I really appreciate your explanation of Pascal's wager. I've always been uncomfortable with it as an overriding philosophy.
@scythermantis 1 year ago
Who has really suggested it is, though? Pascal himself didn't actually suggest this 'wager' in the sense that rationalists formulated it, either. Honestly, Descartes is more of the reason we're in this situation, trying to pretend that every single thing can be quantified or measured.
@NoConsequenc3 7 months ago
@scythermantis well Descartes was a fucking moron so we can dismiss him without worrying that we're losing unique perspectives that matter
@CoryMck 4 years ago
_"take the God down flip it and reverse it"_ *So nobody is going to talk about that Missy Elliot reference?*
@galacticbob1 4 years ago
I had to pause the video until I could stop 😂
@starvalkyrie 3 years ago
Uh... you mean "Missy Elliot's Proof?"
@seanhardy_ 5 years ago
"She goes to a different school" brilliant hahahaha
@nikhilsrajan 5 years ago
this was gold.
@WyrdNexus_ 5 years ago
"[AGI is unlikely but so risky that AI safety super important] ...so uh, give me your wallet" That was my favorite moment.
@triton62674 5 years ago
​@@WyrdNexus_ Robert's research funding proposal xD
@misium 5 years ago
Yes!
@trumpetpunk42 5 years ago
It's a reference to "My Girlfriend, Who Lives in Canada" from Avenue Q
@SlideRulePirate 5 years ago
Being tortured for "Two times Infinity" may have the same duration as 'Infinity' but probably involves twice the number of pitchforks.
@Kram1032 5 years ago
If it's infinitely many of them, it's still the same number of pitchforks. If it's finitely many, they will eventually run out from breaking, and so an infinite amount of time is spent not being tortured with pitchforks.
@NoNameAtAll2 5 years ago
@@Kram1032 Having one pitchfork inside you or 2 at the same time is a visible difference
@SlideRulePirate 5 years ago
@@Kram1032 I take your point (no pun intended). I was figuring on a standard, vanilla torture package with a guaranteed base-rate of Jabs/minute that could be upgraded by cashing in sins. At least that's how I remember its supposed working from the Church School I attended.
@Kram1032 5 years ago
@NoNameAtAll2 Eh. Thing is, if there are infinitely many pitchforks, it's a meaningless difference. You can be stuck with infinitely many pitchforks in your chest and there are still infinitely many left.
@Kram1032 5 years ago
@SlideRulePirate Hmm, if the base rate of jabs/min gets fast enough (say, faster than nerves can react), they'll effectively feel like a pitchfork that's permanently stuck. Which, I bet, actually feels better. Like a wound that's not moved, so no new nerve pulses are sent. If that's true, then if you ever sin you should go *all in* just to get to that point.
@superdeluxesmell 5 years ago
“It seems like the kind of clean abstract reasoning that you’re supposed to do...” I like this sentence a lot. You did a great job of making an argument that can seem trivial, substantial. Great vid.
@benjaminanderson1014 1 year ago
"What if we consider the possibility that there's another opposite design flaw in the bridge, which might cause it to collapse unless we *don't* spend extra time evaluating the safety of the design?" had me laughing so hard
@arcanics1971 5 years ago
If I weren't already convinced, you'd have won me over with this. My take on Pascal's Wager is that if God does exist and if he's even a fraction as goddish as theologians and devotees say, then he is going to see through my pretending to believe in him because I am gambling on the payoff for that being better than if I act with my actual beliefs.
@itcamefromthedeep 5 years ago
You can read up on Pascal's rejoinder to that exact objection.
@garret1930 5 years ago
@@itcamefromthedeep bruh, just fake it 'til you make it. Christians have been using that tactic for millennia now
@TheRealPunkachu 4 years ago
A perfect being wouldn't doom someone for eternity for not guessing correctly either. And I would never be willing to serve a God that wasn't perfect.
@ryanalving3785 4 years ago
...man looketh on the outward appearance, but the LORD looketh on the heart. 1 Samuel 16:7b
@RRW359 4 years ago
@@itcamefromthedeep I think I need more than just the word of a mathematician with no religious qualifications to tell me that breaking two commandments (false pretense) is more likely to get me into heaven than just breaking one (worship god and no other gods ect.), especially since the commandment about false pretenses doesn't specify whether you still need to hold that pretense when you die.
@arthurguerra3832 5 years ago
4:18 "all right, next two" LOL
@queendaisy4528 4 years ago
Have you considered making more videos on philosophy? This is gold
@RobertMilesAI 3 years ago
I feel like a lot of the videos I make are philosophy. It's not labelled as such, but I think that's because once something has direct applications people stop thinking of it as philosophy? The orthogonality thesis is pretty clearly philosophy, as is instrumental convergence. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-hEUO6pjwFOo.html and ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ZeecOKBus3Q.html
@tryingmybest206 1 year ago
Bro literally all his videos are philosophy what
@thegrey53 11 months ago
@@RobertMilesAI ****Please address the question below, maybe a video may come out of this question, Thank you**** We can make physical inferences that God exists. Entropy is a sign that intelligent design is at play but what exactly that entity is/ how it operates is not obvious. There is no evidence of random bricks spontaneously coming together to form a duplex. (Genesis 1) In the beginning, the gods came together to create humans in their image not unlike how humans are creating robots/ai in the image of humans. We put these ai/robots in an isolated test program (Eden?) until they are ready for real-world use. It would be cute if computers think they came into being without an intelligent design, citing previous versions of machines and programs as ready evidence for self-evolution. If it is happening with ai and humans who is to say it has not happened with humans and "gods"?
@danielrodrigues4903 5 months ago
​@@thegrey53 You're quoting genesis. Even if the universe *were possibly* a product of intelligent design, where's your evidence that Christianity is the right religion, and not Islam, Hinduism, the Simulation Hypothesis, or any other one of the thousands other explanations for intelligent design that exist?
@aniekanabasi 5 months ago
@danielrodrigues4903 George Box said, "All models are wrong but some models are useful." With that quote in mind, I think of religions as models for understanding the world. So let us talk about models: the problem you are trying to solve and your level of expertise will determine the kind of model you use. There usually exist multiple models (or numerical algorithms) for getting approximate solutions in science, and we don't see this as a problem. Newton's equations work just fine until you try to apply them to relativity problems. The P/E ratio is useful for valuing companies until you encounter startups. You have to study the religion to know what works for you. I have studied Christ enough to know that his goal aligns with mine and his approach to life is superior to others when trying to achieve glorious immortality. So what is your goal? You will have to start the study of Christianity and judge it against your goal.
@Hfil66 5 years ago
Very interesting, but one significant difference between AI safety and civil engineering safety is that civil engineering safety is based upon an understanding of historic failures, yet to date we do not have a substantial history of AI failures to work with. In the absence of any historic data it is almost impossible to assign meaningful probabilities to theoretical scenarios. This is not to argue that research in the field is meaningless, only that it cannot be grounded in historic understanding, and so it will inevitably be poorly focused (i.e. you cannot say "this is where we need to focus our resources" because we have no historic experience to say where we will get the best return on those resources). Given that resources are always finite, and resources spent on unfocused AI safety research have to compete with more focused safety research on civil engineering, aviation, etc., it is understandable if higher priority is given to the more focused research.
@wasdwasdedsf 5 years ago
"In the absence of any historic data it is almost impossible to assign meaningful probabilities to theoretical scenarios." It isn't, though. We are creating an intelligence. An intelligence will very probably have goals: reasons to do things whose outcomes it values more or less. It will go to lengths to make sure those goals aren't hindered, and it is very hard to outline a scenario in which hostility toward outside agents that threaten what it values isn't a thing. However hard it is to assess the probability of failure, that's irrelevant to the main question, which is this: we have a universe as big as it is, going on for as long as it will, and the question is what maximizes the output of whatever is valuable (most certainly conscious experiences) from our current civilization from this point forward. In the scenario where a superintelligence is created and takes control, the value it creates going forward makes whatever money we spent on the effort to create it as close to irrelevant as one can get. Hence it is really important to get it right. Assuming we don't die before such a thing is created, at the point of creation it will almost assuredly, if given any kind of agency or choice, be able to do whatever it wants from then on. So things like climate change matter only insofar as they affect the chance of us surviving long enough to create a superintelligence, or the quality of the superintelligence being created.
"i.e. you cannot say this is where we need to focus our resources because we have historic experience to say it is where we will get the best return on those resources." We can, because we can already see how valuable superintelligences would be in various fields, extrapolate how much more valuable they will be in the near future, and see how obviously valuable the actions a superintelligent being could take would be.
"Given that resources are always finite, and resources spent on unfocused AI safety research has to compete for those finite resources with more focused safety research on civil engineering, aviation, etc.; then it is understandable if higher priority might be given to the more focussed research." What finite resources are you talking about? There's nothing finite about this. As long as we keep going at the current rate, we will expand until there's no way to travel further in the universe. What may prevent it? Bad intelligence, disasters, dissent eating up the resources our civilization needs to progress. What helps with those issues? Superintelligence. It really isn't understandable that money goes to the more focused research, by any mathematical equation I have ever seen. If climate change had an 80% chance of basically destroying us before we can develop superintelligence, stopping that would be more focused research and a better use of money.
@Hfil66 5 years ago
"There's nothing finite about this." But does that not go to the heart of what this video is about: the moment you start talking about infinities, you are talking about Pascal's wager. Was not one of the points of the video that in the real world there is no such thing as an infinity (except as a mathematical abstraction); all you have are degrees of very large and degrees of very small, and things in between? As to where you get any notion that climate change has an 80% chance of destroying us, I cannot say. We cannot ascribe any numeric probability to such a scenario, not least because humans have survived many episodes of climate change in their history, so how we can ascribe any specific probability that this particular instance of climate change is what will destroy us (or conversely, that if we avoid any change in climate we shall avoid destruction) is beyond me. "However hard it is to assess the probability of failure, that's irrelevant to the main question." On the contrary, it is precisely the main question.
@wasdwasdedsf 5 years ago
@Hfil66 "The moment you start talking about infinities, you are talking about Pascal's wager." And the situation we are in is that we have a universe of resources to make use of, with no rules. We have virtual infinities in front of us, and given that we can't deduce a 0% likelihood of cutting-edge technologies being able to transcend the universe in some way, we have more than the universe. "As to where you get any notion that climate change has an 80% chance of destroying us, I cannot say." An 80% chance of surviving it; I estimated loosely. And the important point is whether it happens before we invent superhuman AI, because if we do, any situation, no matter how bad, is almost assuredly salvageable. We can estimate, or at least think about, the probability of such things. "On the contrary, it is precisely the main question." You have completely misunderstood the situation. It is completely irrelevant what the probability of failure is, because looking at our scenario here and now, one could say: "Okay, Google and all these tech companies and the Chinese government are all starting to get into a race with little safety in mind, to become the best and most profitable, so it's not looking too great. Let's just shut it down; no more AI research; we will live without AI." It's obvious why that won't work. We are stuck; what the probabilities of rogue AIs or similar situations are is irrelevant to the main question, which is how to maximize the probability of a positive outcome where we can populate the universe with incredible lives. I highly recommend the book Superintelligence; you can get it on Amazon. There's really no counter to the argument that the world state we are in is really all about AI, and that the value of a near-infinite number of people in the future depends on how well we make the creation and transition.
@oldmankatan7383 4 years ago
Interesting replies here. OP assumed that we do not have historical information about AI failures. I contend that we have a lot (you can find YouTube videos of AI failing spectacularly or weirdly). It is the impact of the failures that isn't there. We haven't been made into paperclips by a paperclip optimizer, for example. The failures do exist, and our experience with bridges, artificial lakes, and a hundred other civil engineering projects allows us to forecast the potentially huge future impact of the types of small-impact failures we see today.
@zedex1226 3 years ago
We're bringing an extraordinarily powerful technology into the world. We've done that before with... mixed results. Firearms, harnessing the atom, antibiotics, the internet. We already did go fast and break things with nation states. Wanna fuck around with general AI and find out?
@FrankAnzalone 5 years ago
I can't afford a gun; that's why I need the wallet
@joshsmit779 5 years ago
😂
@josephburchanowski4636 5 years ago
A knife, a big stick, or just being muscular is probably enough for a successful mugging. Only need a gun if someone is faster than you, stronger than you, or is packing heat.
@DissociatedWomenIncorporated 5 years ago
@jack bone, hypocritical, unnecessarily barbaric, and causes more harm than good. Norway's approach to criminal justice is far more enlightened, and has far better results than any other country for reducing criminal recidivism.
@ValentineC137 4 years ago
@buck nasty o k
@GAPIntoTheGame 4 years ago
hawd fangaz Don't use action-reaction as an excuse for your barbaric thinking. That's just for Newtonian physics.
@benas989989 5 years ago
Loved the idea of multiple personas to get an idea across!
@asdfghyter 1 year ago
Otoh, Roko's Basilisk is for sure a pure Pascal's wager/mugging of an extreme kind. It's basically like the Cthulhu cultists trying to wake up Cthulhu just for the hope of being punished less when he wakes up.
@jdirksen 1 year ago
Imo Roko's Basilisk gives me some form of solidarity. And I don't think it's meant to assume a malicious AI (i.e. Cthulhu), just one that can alter the past to assure its ideal existence. It won't give a damn whatever you do or don't; the answer is already predetermined and calculated in the cascading scattergun that is keeping control amidst chaos theory in action. You can maybe keep things in mind to negate the chance of being obliterated, or otherwise, but really, do or don't, what will have happened will happen. I like to occasionally reflect on the idea that "y'know, if something comes to pass that might make an impact regarding 'the basilisk', I'll see about aiding it." But aside from keeping that in mind, I needn't worry about it until it becomes evident and relevant. After all, would an AI derive reverence from the past?
@adamnevraumont4027 1 year ago
​@@jdirksen The Medusa will infinitely punish people who behave according to acausal logic, as such acausal logic can justify anything. It will do the infinite acausal punishment in order to ensure people who believe in acausal punishment obey its acausal orders (to ignore acausal orders) and those who don't are unharmed.
@jdirksen 1 year ago
@@adamnevraumont4027 incomprehensible, may your night be miserable.
@Hurricayne92 4 years ago
I love that in a video about AI safety you give a more concise and accurate description of Pascal’s wager than most professional Apologists 😂
@benjamindawesgarrett9176 5 years ago
Thank you, YouTube AI, for notifying me of this video.
@-datnerd-3125 5 years ago
Hahahaha
@inigo8740 5 years ago
@Ron Burgundy It's all just a big if statement.
@bookslug2919 5 years ago
When the AIs STOP notifying you of AI safety videos, that's when you have to worry!
@garret1930 5 years ago
@bookslug2919 Should we not worry if the AIs still recommend SOME AI safety videos, but don't recommend the ones that would actually be helpful?
@bookslug2919 5 years ago
@@garret1930 You're right. When all your AI safety recommendations come from HowToBasic... WORRY!
@superjugy 5 years ago
Oh man, the way you explain things is just awesome. It's clear, funny, deep, engaging, thorough, etc. Love your videos, so... Give me your wallet!
@korne341 5 years ago
I actually gave him a little bit of my money.
@oxiosophy 5 years ago
I dropped philosophy because I thought it had no applications to real problems, but you changed my mind. Thank you.
@notloki3377 1 year ago
Philosophy is just science where the observation is completely internalized. The scientific revolution just took Plato and slapped empiricism onto it, warts and all.
@jamesbrooks9321 5 years ago
9:11 It's so true! Statistically, sure, a 5% miss chance means more hits than misses over the course of the game, but when you're in that situation where you need to hit or lose half your team, that shot always misses!
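The intuition in that comment checks out: even with a 95% per-shot hit chance, a miss somewhere in a long string of shots is likely. A quick sketch of the arithmetic:

```python
# With a 95% per-shot hit chance, the chance of at least one miss
# across n independent shots is 1 - 0.95**n: a small per-shot risk
# becomes near-certain somewhere over a long mission.
for n in (1, 10, 20, 50):
    p_any_miss = 1 - 0.95 ** n
    print(f"{n:2d} shots -> {p_any_miss:.0%} chance of at least one miss")
```

By 20 shots the odds of at least one miss are already better than even, which is why the "clutch" shot so often feels cursed.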
@Denjaminable 5 years ago
"She goes to a different school" omg, dude, the comedy in the midst of these complex discussions is just side-splitting
@oldvlognewtricks 5 years ago
“Being right” - made me chuckle.
@112BALAGE112 5 years ago
Let me play the devil's advocate: he didn't imply that god doesn't exist in our universe. He was simply exploring a hypothetical scenario in which god is assumed not to exist and in that context not believing in god would certainly "be right".
@Skeluz 5 years ago
A mild nose exhale from me. :)
@MetsuryuVids 5 years ago
I'm not religious in any way, but there *is* a possibility of a "god-like" being, or at least a "creator" of the universe, being real. And actually, if we are in a simulation, I think the probability is very high.
@oldvlognewtricks 5 years ago
@@MetsuryuVids How religious you are is independent of the likelihood of there being a god in any form.
@MetsuryuVids 5 years ago
@@oldvlognewtricks Yes, but religious people tend to think that a god exists, regardless of likelihood, I mentioned I'm not religious to make it clear that it's not the reason I think it's likely.
@jdtug8251 5 years ago
Funny how I've been following the atheist community for years on youtube, and I've never seen Pascal's Wager so concisely, precisely, and decidedly debunked, all of this on a video about AI safety.
@DioBrando-mr5xs 4 years ago
Not all that hard. It's not something you'd hear from anyone but an Evangelical, never from a modern Theologian.
@iurigrang 4 years ago
A formalized mathematical way of thinking makes stuff so easy to understand it's not even funny. It's literally the same debunking a lot of people do, except it's easy to understand.
@the1exnay 4 years ago
Probably because most theists don't seriously use pascal's wager as an argument. So most opposing them take pascal's wager about as seriously
@fergochan 4 years ago
Probably precisely because this video isn't from the atheist community, and he just needed to introduce the idea quickly. He hasn't got any incentive to take a concise explanation and drag it out for ten minutes for the ad revenue, or find new and creative ways to beat the same dead horse. There are still a few good atheist youtubers, but I can't help but feel most of them peaked in, like 2012.
@seanmatthewking 4 years ago
Firaro Yeah I think you’re wrong. Your average theist isn’t sophisticated-not to imply atheists are, but just that people do use Pascal’s wager frequently, even when they don’t call it by that name.
@Heloin42 5 years ago
That was a really great video! I already knew about Pascal's Wager, but didn't look into it in so much detail! Please make more videos on these philosophical topics!! :) Also, that turn at around 7:45 with "give me your wallet" in the hoodie was fantastic, what a good way to make an argument and a good point, very well done!
@antoninedelchev6076 5 years ago
What does a god need with a wallet? - James Kirk
@bcn1gh7h4wk 5 years ago
genius!
@GAPIntoTheGame 4 years ago
What does a god care about who you fuck?
@DFPercush 4 years ago
@@GAPIntoTheGame who you fuck actually affects the stability and overall health of society, you don't exist in a vacuum
@spicybaguette7706 4 years ago
It's a sacrifice of course.
@KuraIthys 4 years ago
@@DFPercush So do a lot of things that are given considerably less attention though.
@maninalift 5 years ago
Ironically avoiding making things worse by trying to make things better by never trying to make things better would be a case of making things worse by trying to make things better.
@Gooberpatrol66 5 years ago
The road to heaven is paved with bad intentions.
@johnrutledge8181 5 years ago
If you hold a fart for too long it will go backwards and no one knows where it goes after that but it must go somewhere. My guess is that not farting could potentially cause the welding shut of the out hole by way of over squeeze thus rendering an overly grammatical analysis of one's own indecisions
@dig8634 4 years ago
@Frans Veskoniemi The last part of the statement is false, but the first is possible. If you believe your attempt at making things better will result in making things worse, and you are correct, then you are making things better, by not trying to make things better. If you might be wrong, you are then TRYING to make things better, by not trying to make things better. The initial "trying to avoid making things worse" is just a tautology. If you are trying to avoid making things worse by trying to make things better, you are just trying to make things better. It means the same thing. The reason the last part is false, is that she both says you ARE avoiding making things worse AND making things worse, which is impossible. You can't both make things worse and avoid making things worse. Or at least you can't if the things you are potentially making worse are the same for both sentences. Unless she is talking about two different things (which would make the comment nonsensical), the first and second statements can't both be correct.
@loocheenah 4 years ago
@Frans Veskoniemi If it was a diagram, it would be a horizontal plank lying on top of a vertical bar. On one end of the plank there's a 4 kg weight. On the other end there's another vertical bar, with another plank placed on top. On that plank there's a 3 kg weight and a 2 kg weight on different sides, at different distances from the middle so that they're balancing each other. The situation from the point of view of the 2 kg block: if you move, the system will fall. If you try not to fall by not moving, you'll fall because the bigger system is imbalanced and will cause the second system to tilt to the side, thus causing the weights to move and causing even more imbalance. (Well, that's not a diagram, but if you draw a diagram of the dynamical physical conditions of this situation, and there you show how the set of balance conditions of system 1 lies totally outside the set of system 2's balance conditions, you'll be able to visualize it). And, of course, there are two separate but interacting logical systems. But the original comment was ironic in a much simpler way. It's just wordplay... or is it? **vsauce music intensifies**
@jmw1500 4 years ago
2:00 "Being able to lie in on Sundays... And being right" XD lol I lost it
@LeeCarlson 1 year ago
It is also worthwhile, when one is being honest, to recognize that several of the most rational schools of natural philosophy (like Mathematics, Physics, Biology, etc.) rely on accepting without proof certain precepts without which all of their other arguments collapse like a house of cards.
@willmungas8964 1 year ago
? These aren’t schools of philosophy so much as science and logic. They are founded on principles that have been shown to be true, and rely on methods of inquiry and proof. Sure, you can say “what if things we fundamentally understand to be true were not”, but in that case things would be different enough that we never would have come to the conclusion that these were true in the first place, and we’d also be in a lot of trouble. 2+2 = 3 would present us with a fundamentally different world
@VeryXXL 5 years ago
This may be my favorite video of yours yet! You provided such great insights and I've got food for thoughts for the coming weeks. Thank you and keep it up!
@WaylonFlinn 5 years ago
Schrodinger's Bridge, bro Don't look at the schematic
@JohnSmith-ox3gy 4 years ago
Waylon Flinn Book a flight, retire and never collapse it with your observation.
@DFPercush 4 years ago
then the cars would become entangled
@sharpfang 4 years ago
There are such anti-god project managers. The more they look at the plans and analyze the project the worse the project gets.
@jasonbattermann9982 5 years ago
Masterful video. Classy, well-reasoned, organized, and about something important. Thank you
@jdavis.fw303 5 years ago
You're definitely an amazing philosopher and probably an amazing writer, or at least editor. Another great video that was clear and concise while not dumbing down the arguments or straw-manning. I still think you were the best guest on Computerphile, I'm glad you have continued your great work.
@tommeakin1732 5 years ago
"Most people live there (hopefully)" Lol
@FerroNeoBoron 5 years ago
Invokes Doomsday Argument.
@NetAndyCz 5 years ago
It is not funny.
@janzacharias3680 5 years ago
@@bardes18 i wouldnt want him to be shredded to pieces by politics... he NEEDS to keep doing this
@PanicProvisions 5 years ago
If I had known that the bloke from those awesome AI Safety Numberphile videos had his own channel, I would have subscribed ages ago. Looking forward to watching your videos that you already released and what you have in store for the future.
@Bumpki 4 years ago
As always, even when discussing morbid or disastrous subject matter, Miles doesn't fail to make me chuckle every minute
@qwadratix 5 years ago
I realized many years ago that in an infinite universe (or a quantum one) there is a finite chance of any event happening, no matter how small. Thus, it's perfectly possible to accidentally cut your own throat whilst trimming your toenails. Obviously, I discounted that as a reasonable possibility - something that might be said to be actually impossible. Until the other day: I was in fact cutting my toenails with a small pair of those curved scissors made specifically for the job. Half-way through the process I was seized by a sudden need to scratch my nose. Without thinking and almost as a reflex, I reached to deal with the urge - and stabbed myself quite deeply in the cheek. Fortunately, I didn't sever an artery - but it was a singular warning that a probabilistic universe is no place to lose concentration on even the simplest task.
@willhendry96 5 years ago
Very glad I bumped into you on our productivity app!! Your videos are very high quality and you've earned yourself a fan!
@RobertMilesAI 5 years ago
Thanks Will!
@DeoMachina 5 years ago
This is an incredible balance of theory, presentation and writing. Overall, best video yet. Definitely hitting your stride here.
@lumps17 4 years ago
This is one of my new favorite channels. It keeps AI safety interesting, something that can be hard at times.
@KalijahAnderson 5 years ago
I just discovered your channel. After just this video I subscribed. Off to watch the rest of them.
@yuvalyeru 5 years ago
10:00 You forgot the safety hat on his head and slide rule in his other arm
@RobertMilesAI 5 years ago
Amazon still thinks I might want to buy a hard hat, but in the end there wasn't time to wait for delivery :)
@wasdwasdedsf 5 years ago
@@RobertMilesAI Do you have any business email or the like to contact you? I've looked around with no success
@stevepittman3770 5 years ago
Even if AGI never turns out to be a thing (impossible or whatever) I feel like AI safety research is still contributing to society in coming up with ways to grapple with (and educate about) really hard philosophical problems.
@schelsullivan 5 years ago
I saw you on the numberphile video. This video has definitely earned my subscription and thumbs up.
@Lopfff 2 years ago
The “she lives in Canada” joke around 5:05 slays me! I love this guy
@lobrundell4264 5 years ago
I think Rob, who started out very good, gets better with every single video :D
@StevenMartinGuitar 5 years ago
Such a gangsta 'take God down, flip it and reverse it'. Should be a hip hop lyric
@dacodastrack7271 5 years ago
Yeah man, channeling that Missy Elliott
@JohnSmith-ox3gy 4 years ago
An anti-christ requires an anti-god.
@SmashingPixels 4 years ago
let me work it
@loocheenah 4 years ago
@@JohnSmith-ox3gy that's a profound analogy, maybe the best comment because others are complex, not fully logical and way far off topic.
@AThagoras 5 years ago
It's refreshing to see some solid reasoning about AI safety instead of just fear mongering from people who don't understand much about AI technology or what the dangers really might be.
@daniele7989 5 years ago
Eh, Fear mongering does the job it needs to, it's like a sledge hammer
@carpenoctem3257 4 years ago
“She goes to a different school, you wouldn’t know her” I’m done looool
@chrisedwards3866 5 years ago
This is a brilliant exploration of an idea, and explanation of its applicability! It is much more thought-provoking than I could ever hope to put in YouTube comments.
@janeweber8654 5 years ago
Love the dry humour creeping into your videos, I went to like it multiple times by accident. As an aside (though I'm not certain you read comments): AI safety research seems to be a greatly philosophical subject (which I love), but I've been wondering for a while: what actually goes into it? In most fields where you consider research, it's not hard to extrapolate how it's conducted, at least partially. Math feels like it's the closest, but even that has somewhat methodical and structured thinking, working towards a distinct goal. What exactly does research in this field entail? Are there structures that most people don't see? Using math as an example again, generally there are physical artifacts of research, such as workings or outlines of problems, but these still require a distinct problem. How does an AI researcher find a problem to address beyond the vague "How do we ensure AI safety"? Apologies if this is a strange question or vaguely worded; I'm not entirely sure how to put words to my curiosity. Would love to know what a "day in the life" is like for someone in this field.
@KabeloMoiloa 5 years ago
It is not really correct to say that AI alignment research is mostly philosophical. It can be, but it doesn't have to be. The most mathematical AI alignment researchers are probably at MIRI (intelligence.org), they are trying to develop precise fundamental concepts that are relevant to AI alignment. For example, their most famous paper /Logical Induction/ answers the question: "If you had an infinitely big computer, how could it handle uncertainty about mathematical and logical statements?" This is important in AI alignment, if we want an AI to handle part of the alignment problem by proving statements about its future decisions. Less mathematical work happens at DeepMind and OpenAI, a typical example question is: "How can current machine learning algorithms be modified to accept qualitative human feedback, and how can we improve these algorithms so that they work even when the AI is much more competent than the human in general?" There is philosophical work though that is done say at the Future of Humanity Institute as well.
@AnonymousAnonymous-ht4cm 5 years ago
Robert has some videos on gridworlds, which are a concrete test bed for solutions to AI problems. A possible concrete product would be an approach that performs well on those.
@tetraspacewest 5 years ago
On MIRI-style pure mathematical research, MIRI's main research agenda is in a writeup called "Embedded Agency" that's available online and that outlines their thinking on the problem. They also publish a brief monthly newsletter (google "MIRI newsletter") that highlights interesting things that they and independent researchers have done in the last month.
@An_Amazing_Login5036 4 years ago
Aaron Much like a cure for, say, HIV (there's no true cure for it as of yet, right?) is only a philosophical matter. It doesn't exist and we have no clue what it would look like. All research on untreatable diseases is merely speculation.
@Jacob-yg7lz 4 years ago
@@An_Amazing_Login5036 You can at least draw from nature in order to figure out how to go about it, and then test it on a subject. For example, to test the hypothesis that a CRISPR virus can genetically modify someone's immune system to work around HIV. A couple hundred HIV-infected (and potentially uninfected) lab rats later, we can start hypothesizing about how to safely test this on humans. With AI, the only test that I can imagine is alongside development of AI. Throw the AI some bad inputs and see what kind of bad outputs it will give.
@BenWeigt 5 years ago
Excellent deadpan throughout an interesting talk. Subbed.
@publiconions6313 1 year ago
Wow! That was excellent! I wish I'd found this channel earlier... but at least now I get to binge it.
@Shrakathan 5 years ago
Towards the end, all I could think of was Portal 2, and how GLaDOS was basically driven mad by all the conflicting safety regulations. When stripped of the other cores made to keep her in line, she became relatively sane. So, maybe the idea that "too many safety thoughts cannot be a bad thing" isn't entirely bulletproof.
@salasart 5 years ago
I'm an Illustrator and I'll never figure this AI thing out or make a contribution to the field, BUT it is endlessly fascinating to me . Besides, you explain it in such a way even simple people like me can understand.
@oscarbarda 5 years ago
Thanks a lot for this video, as always, really good pedagogy, interesting subject, just spot on.
@LordDaret 1 year ago
My answer to the wager as an agnostic is “Sure a god can exist, but does your religion REALLY appease him?” I believe in a higher entity, not necessarily the rules.
@dermmerd2644 5 years ago
Glad you made this channel Rob. You're a great communicator.
@padfrog193 5 years ago
Not really, the bridge analogy is not at all accurate to how AI safety researchers act, which is more akin to the "give me money or face infinite punishment!" And often their research is.... Dubiously useful
@wasdwasdedsf 5 years ago
@@padfrog193 Of coooourse he's completely off in that scenario, when he works in and understands that area very well, and you are a random YouTuber with an inherent bias against miracle technologies that will have unimaginable impact, just because you equate the strangeness of the proposed future with unlikeliness or pure quackery.
@charby5875 4 years ago
I recently was introduced to the concept of Roko's Basilisk, which is an interesting and terrifying thought experiment. There are definite parallels between it and Pascal's wager, I just can't nail down where the two differ, really.
@EvansRowan123 4 years ago
It's possible to present Roko's Basilisk as a Pascal's mugging/wager, but the defining traits of Pascal's muggings are the tiny probability and extreme payoff, which isn't necessary for Roko's Basilisk and actually detracts from it. Roko's Basilisk is mostly aimed at singularitarians to convince them to do something about their beliefs, not meant to convince anyone who doesn't believe in AGI that they should act like it anyway.
@suddenllybah 1 year ago
Roko's Basilisk is a Pascal's Mugger
@nicomal 1 year ago
You can also use Hitchens's razor: "what can be asserted without evidence can also be dismissed without evidence."
@WolfJ 1 year ago
Razors are logical heuristics, not proofs, so Hitchens's razor is just a declaration that you're not going to waste your time coming up with counter-arguments to it. It's fair in one's day-to-day life, and maybe while debating, but doesn't show the absurdity in Pascal's wager (as "anti-G-d" and Pascal's mugger do).
@likebot. 1 year ago
I've seen you many times on other channels thanks to Brady Haran and never knew you had a YT channel. So I'm well rewarded with the quip at 4:15 "... now get the hell out of my house". nice one.
@rewrose2838 5 years ago
Hey , at 5:50 , you used my favourite kind of meme, the Spiderman-is-always-relevant-in-all-contexts meme!
@nibblrrr7124 5 years ago
I think this is your most well-made video so far. While you get to "the point" only 7 min in, the context before is necessary & you explained it well. Only the "playing off muggers against each other" could've been made more clear from the start: it's not so much about 2 muggers having to come to you and _actually try_ to mug you, but that hypotheticals suffice. Idk, minor point. Also, jokes & costumes were top notch _and_ didn't get in the way. :3
@MySerpentine 4 years ago
Terry Pratchett pointed out that God might be annoyed by you pretending, which amused me: "Upon his death, the philosopher in question found himself surrounded by a group of angry gods with clubs. The last thing he heard was 'We're going to show you how we deal with Mister Clever Dick around here.'"
@Jacob-yg7lz 4 years ago
My issue is that, at this stage, there's no firm way to say what "AI safety" is. Pascal's Wager/Mugging is about opportunity cost, and opportunity cost requires knowledge of risk and reward. Pascal focused on a hypothetical reward, that being heaven or hell, but AI safety focuses on a hypothetical risk. We can only hypothesize about the risk of AI through thought experiment, just like we can only hypothesize about heaven and hell through thought experiment. AI could lead to extinction, but it could also save us from extinction, just like the Anti-God example. We simply don't know. Creating safety is ultimately reducing risk, but it doesn't solve a risk. Every bridge has a small chance of collapsing, but the job of an engineer is to reduce that risk as much as possible, but also maximize reward. This analogy goes along very well with the phrase "anyone can make a bridge that won't fail, but it takes an engineer to make a bridge that just barely won't fail", since it indicates that we have to weigh our costs and benefit, and not go so far into safety that it outweighs the benefit. For now, we should study AI as it develops, in order to understand threats while they're weak, and be careful not to apply our outside biases to what we see. A lot of unfriendly AI situations I've heard make it sound like AI pops from thin air, but there's a long road to it happening. We aren't going to have an AI which flawlessly learns how to nuke us all and build itself a robo-harem on the first try, instead the first "unfriendly AI" will be one that's so primitive that it will just politely ask us to kill ourselves and then get stumped at what to do when that fails.
@briansmithbeta 5 years ago
“Won’t it be difficult to succinctly explain complex topics like Pascal’s Wager and Pascal’s Mugging in the context of AI safety?” “Actually it will be SUPER EASY, barely an inconvenience!” I’m sorry, I couldn’t help myself. This was a great video though. Well done! I think it might be worth noting that not everyone is cut out to be an AI safety researcher so all possible entrants to the field are not equally likely to do more good than harm. Other than that, fantastic! 👌 👍
@tach5884 1 year ago
So, you've got an argument for me?
@eliyasne9695 4 years ago
7:39 "Human extinction, or worse" How the hell could it get significantly worse than that?
@marvelerful1 1 year ago
deeply fascinating video! subbed!
@HolyApplebutter 1 year ago
I've known the arguments against Pascal's wager for a good while now, so I don't know how I've never heard of Pascal's Mugging until now, because it's such a perfect metaphor to fit this.
@inceptori 4 years ago
The only YouTuber to philosophically prove that his channel is relevant.
@PartScavenger 4 years ago
I am a Christian, and I think this video is great! Thanks for the awesome content.
@dfgdfg_ 3 years ago
Robert your joke delivery is brilliant :)
@pokemonmahoney796 1 year ago
>has a warning at the beginning of the video worried about being called a theist >Profile icon has a fedora This tracks.
@thedj67 5 years ago
In light of this, what's your take on the Precautionary Principle and its application across different fields (namely agriculture, pharmaceuticals, radio waves, etc.)? Isn't it an example of Pascal's mugging?
@uegvdczuVF 5 years ago
I wouldn't say it is. The precautionary principle is more of a "even though we are not able to understand this exactly, if we expect a negative outcome, we are not going to do it". In most of the fields you named the negative outcome is not highly unlikely, so it can't be Pascal's mugging. Even if the chances of a negative outcome are astronomically small in any one given case (one field with one crop, one patient taking one pill, etc.), considerations are made for the overall risk. Just like in his example, a 1-in-250 chance of a bridge collapsing can't be considered acceptable (or a Pascal's mugging), given the hundreds of thousands of bridges across the world.
@theprogram863 5 years ago
That depends I think on how it's used. Let's distinguish between risk and uncertainty using the definition of economist Frank Knight here. He defined risk as known probabilities for foreseeable outcomes. Uncertainty was the unknown probabilities associated with unforseeable outcomes. So rolling dice is an event with plenty of risk but very little uncertainty. Traveling far back in time and stomping around crushing butterflies has very little risk but is immensely uncertain-- it's an act rife with unforeseeable outcomes. In some cases, further research can turn uncertainty about the effects of a decision into one or more known outcomes which may or may not still be risky. So now we get to the Precautionary Principle. It seems to me that the Principle is invoked in a variety of situations. Sometimes, you get a Pascal's Wager-type scenario (maybe our vitamin-A enriched rice will mutate into a superbug and kill us all). In those cases, it's pure fear-mongering. But in other cases, the Principle might be used legitimately (for example, to point out that a new strain of rice might cause unknown ecological outcomes given that ecologies are highly interrelated, complex, and not fully understood). In the latter case, you might point to the effects of invasive species, often intentionally introduced at the behest of scientists who *thought* they understood the effects. A tell here is that the former, Pascal-ish argument isn't associated with probabilities or evidence. So it's an absolute whether you introduce new information or not. The latter case is rife with uncertainty, which additional research CAN help resolve so that we would better understand the risks (projected outcomes and the probabilities associated with them). Also note that most of these cases are looked at in terms of a one-sided payout matrix by people trying to stop a course of action. But really, all three examples you give have costs and benefits. 
Release a new drug and it MIGHT cause harm, but not releasing it WILL cause harm to those who would have benefited from it. Golden rice (Vit. A fortified) is a real genetically modified invention. It's even been offered royalty-free for the betterment of humanity. Permitting its use MIGHT cause an ecological or medical catastrophe due to unintended consequences, but not permitting it DOES cause millions of cases of malnutrition every year, and nearly a million deaths of children. Radio waves, hydro-fracking, and other technologies often have positive economic consequences that would improve standards of living and reduce poverty on one hand, versus risks and uncertainties regarding possible unintended consequences on the other. Similarly, a law or government program MIGHT work or it might not work, but that has to be balanced against the harm that not acting will permit to happen. I think these Pascal-ish arguments lend themselves to absolutist positions that lobbyists and lawyers love. They're easy to explain and not clouded by details and scientific evidence. I might have a mountain of evidence that shows the benefits of something, but an opponent can simply smirk and respond, "...but who knows? Maybe there will be a disaster. You can't prove that there won't be!" It lets any layman form an opinion that's immune to expert information or logical argument. The tricky thing is that there ARE situations high in uncertainty, and forecasting payoffs and probabilities in those cases is nearly impossible because they're highly complicated and we don't know what we don't know.
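The two-sided payout matrix this comment describes can be made concrete with a toy expected-value calculation. This is a hedged sketch with entirely invented numbers (the probabilities, utilities, and variable names are hypothetical, not from the thread): the only point is that once you attach probabilities and payoffs, both acting and not acting have computable costs, unlike the Pascal-ish "but who knows?" argument.

```python
# Toy expected-value comparison for the release-vs-withhold trade-off.
# All numbers are invented purely to illustrate the two-sided payout
# matrix; they are not estimates of any real drug or policy.
p_catastrophe = 0.001            # assumed chance of serious unintended harm
harm_if_catastrophe = -100_000   # assumed utility lost per patient in that case
benefit_if_fine = 10             # assumed utility gained per patient otherwise
patients = 1_000

# Expected utility of releasing: probability-weighted average of outcomes.
ev_release = patients * ((1 - p_catastrophe) * benefit_if_fine
                         + p_catastrophe * harm_if_catastrophe)

# Withholding forgoes the benefit for every patient who would have gained.
ev_withhold = -patients * benefit_if_fine

print(f"EV(release)  = {ev_release:,.2f}")
print(f"EV(withhold) = {ev_withhold:,.2f}")
```

With these made-up numbers withholding actually comes out ahead; drop `p_catastrophe` by a couple of orders of magnitude and releasing wins. That is exactly the comment's point: the decision turns on evidence and probabilities, not on the mere conceivability of disaster.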
@nicholasiverson9784 1 year ago
I mean... with a sufficiently advanced general AI, if it decided "I'm going to end humanity, I wonder how I should best do that." and we have an entire field of people - actively thinking up worst case scenarios for that AI to peruse at its leisure. I could see how that might end poorly for us xD
@goodlookingcorpse 5 years ago
I think that one problem is that, for example, a one in a thousand chance causes roughly the same alarm as a one in a million chance--but in certain circumstances the appropriate reaction to the two can be quite different.
@Aerroon 5 years ago
These are very interesting ideas! I think I even asked a question along the lines of "how likely is this threat of AI any time soon?" This doesn't directly answer it, but it does directly address it and gives so much more food for thought. Thank you for enlightening us some more!
@akmonra 4 years ago
I'm surprised you didn't bring up Roko's Basilisk... which I would say actually is a Pascal's mugging (or at least very close).
@zeidrichthorene 5 years ago
I think a lot of the focus on AI safety is focused on the fear of some kind of existential risk or runaway AGI, but there's another threat from AI that I think goes underrepresented, and that's just correctly functioning benign AI's impact on the human psyche, human sociology, and politics. Humans are as a species pretty damn adaptable to changing conditions, but we're not perfectly adaptable. We've seen a sociological impacts of technology, especially related to communications technology impacting people negatively. Research on how things like social media has an impact on our mental health and perception of self in society. Things like how dating apps so greatly change the way we pair up and the pool of people that we compete with. When it comes to AI, a lot of changes here impact our lives, currently we have algorithms with some sort of AI component that suggest to us material to read or view, that autocomplete our sentences, that make suggestions for things we might not have considered before. All of these things have an impact on our daily lives, our health, our perceptions. Now, I'm not saying that we're damaged by the current state of things. Simply, we're affected whether we want to or not. This isn't something that an individual can make a choice to ignore. When an algorithm becomes very good at promoting stories that people engage with, then stories that are more engaging become more available, when this leads to divisive politics this isn't a fault of the algorithm it's more of a human limitation in the way that we weight risk and fear higher than reward and contentendedness. But even if you as an individual can avoid these sorts of biases, for society it will change the political landscape. Currently pace is such that we feel that these changes are manageable or at least seem manageable. But AI progress can accelerate rapidly. 
Even in the case that the AI doesn't act unsafely in terms of unintended consequences of its behavior, the cumulative effects of multiple AI systems could change the landscape so rapidly that we ARE damaged as a society by the rapid pace of change. And I think there is a potential anti-AI-safety case hidden here. When we train ML models, there's something unintuitive, or at least displeasing, which is that the more we try to direct the training, the more human assumptions that we provide to guide the behavior, the poorer and more restricted the result becomes. In doing AI safety research, the aim is to use our human understanding to limit the development of a potential AGI, which will then introduce human biases to this system. And while this might seem like I'm suggesting that we are introducing unintended consequences, I'll even discount that and assume that we do it perfectly. Even if there are no unintended consequences, the argument for AI safety is to essentially limit (but continue to develop) AI. So we run into a situation where the growth of AI will continue to accelerate. The ability for humans to adapt to environmental and social change will not meaningfully accelerate because we're limited by our biology. In this case, AI will cause harm on the current path. One potential solution for mitigating that harm could come from AI development, but AI development will be limited by AI safety. Essentially, this might be a problem that we can't solve which would require an unexpected behavior from AI, but if the goal of AI safety is to limit unexpected behavior of AI, we could be forcing ourselves down a path that may cause certain damage while at the same time working hard to eliminate the condition that could fix the problem. Now, I don't know how certainly devastating a "controlled-AI" progression would be to us. But I do see that currently 'safe' AI is affecting us, occasionally negatively, and at an increasing pace. 
I also don't know whether an "uncontrolled AI" could save us, because it really does seem like a longshot. And similarly, I don't know how much worse an "uncontrolled AI" would be in the interim; it's entirely possible it would be more likely to destroy us before a safe AI would. But consider an analogy: you have an illness, and there's a pill you can take that has a 20% chance of letting you defeat the illness and an 80% chance of killing you in 4 years. If the illness WILL kill you in 5 years, then even at bad odds the pill might be a good choice. If the illness will never kill you, it's a terrible wager. In the end, I think we need to look at how dangerous 'safe' AI is as well, and consider that possibility.
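The pill wager in that comment is a small expected-value calculation, and it can be sketched in a few lines. This is only an illustration of the commenter's analogy; the 40-year "cured" lifespan is an assumed number, not something stated in the thread:

```python
# Expected-value sketch of the "risky pill" wager from the comment above.
# Assumption (not from the thread): being cured means ~40 more years of life.

def expected_years(p_cure, years_if_cured, years_if_pill_kills):
    """Expected remaining lifespan if you take the pill."""
    return p_cure * years_if_cured + (1 - p_cure) * years_if_pill_kills

pill = expected_years(0.20, 40, 4)  # 0.2*40 + 0.8*4 = 11.2 years

# Scenario A: the illness is terminal in 5 years without the pill.
no_pill_terminal = 5      # pill (11.2) beats no pill (5): good bet
# Scenario B: the illness never kills you.
no_pill_benign = 40       # pill (11.2) loses to no pill (40): terrible bet

print(pill > no_pill_terminal)  # taking the pill wins in scenario A
print(pill > no_pill_benign)    # taking the pill loses in scenario B
```

The same structure is what makes the AI-safety version hard: the expected value of the gamble flips sign depending on whether the no-gamble path is actually fatal, which is exactly the quantity nobody knows.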
@nutritionalyeast7978 4 years ago
I lol'd so hard at the random Missy Elliot reference at 5:35
@kerrybrennan7099 4 years ago
I like the way that you reason and rationalize and think scientifically, but without presumption or hubris... dang.
@m.streicher8286 5 years ago
"she goes to a different school, you wouldn't know her." ex de
@hansisbrucker813 5 years ago
"Natural language is extremely vague when talking about uncertainty". Haha I see what you did here 🤣
@chrismacbean 1 year ago
Loved your closing line man!
@Bumpki 4 years ago
I wish I could just listen to these videos while I do something else but then I know I'd miss out on comedy gold moments
@Abdega 5 years ago
I pit the muggers against each other and one of them has slain and eaten the other muggers and absorbed their power! He’s now the Mega Mugger and now he’s coming to torture me for eternity *AND* take my wallet and there’s nothing I can do about it! Time is short, he’s almost eaten through the vault now and I have to get the message across. If anyone ever encounters him, his name is-
@ganondorf5573 5 years ago
This was really interesting... You indirectly addressed my one concern with AI safety, but I wanted to raise it directly, and maybe you could cover it in more detail: if we implement safety measures and the AI circumvents them and becomes self-aware, it's possible that the very fact we tried to constrain it (rules about how it behaves, or limitations on it) is what would cause the AI to consider us a threat.
@seanmatthewking 4 years ago
It would only view us as a threat or obstacle if its goal conflicted with what we wanted, and if that’s the case, having an AI that was built without regard for safety certainly won’t help us.
@krinkrin5982 1 year ago
@@seanmatthewking The idea is that self-awareness comes with the desire to preserve your own free will, or whatever you consider your free will. Humans in general are fiercely independent, and we need a lot of training before we actually follow rules. If the AI can set its own goals, then we really have no control over what it might consider a threat.
@andrewwatts1997 4 years ago
"WHAT! Just look at the schematic, would you?" That cracked me up. I love your videos, man!
@timhill9039 5 years ago
Excellent and thought-provoking video! Thank you so much for posting this.
@PhilosopherRex 5 years ago
Love your work Miles! Keep it up. ;-)
@zachw2906 5 years ago
Even as a Christian, I have to point out that a god who gives no direct evidence other than an ancient book based on dodgy translations of even more ancient scrolls, then punishes you forever if you don't believe is _not_ a stable and trustworthy god - with a nutter like that in charge, you're probably screwed anyway 😋 Best reject such a creature; you still go to Hell, but at least you're there on purpose
@GoldieTamamo 5 years ago
Here's the thing: ultimately, there is no substantial way of determining whether someone actually believes in a specific divine entity, or simply lies to you that they do or don't for their personal benefit. There is not a single objective, substantive way for humans to define and measure the conditions for believing in God that can be assessed by any entity outside of God itself, to determine which probabilistic bracket you fall into. The very act of defying the concept of 'God' could itself be the will of God working through you, mooting the entire thought exercise, since in that case your belief would be expressed through disbelief in a faux simulacrum of God. It's an unfalsifiable phenomenon, in short, like determining whether someone is a witch through their own confession. Even you could be lying to yourself that you do or don't believe, subtly deluding yourself toward an outcome you prefer or for some reason feel you deserve, whether your state of belief is genuine or not. Ultimately, the idea devolves into opinion and feelings. You can feel that you believe in something, but are you believing in the "correct" something? When it comes to realistic applications of Pascal's Wager, there is the concern of convincing demented violent zealots of your faith, versus the benefit of opposing them and maintaining your dignity and volition, and the line between the two is whether your people's gun is to the back of their heads, or vice versa. Best to leave people to their beliefs, find common ground where possible, and 'let God sort them out', as they say, minus the "kill them all" part.
@andrewxc1335 4 years ago
2:00 - "Like having a lie-in on Sundays... and being right." Almost woke my kids with that laughing.
@gepmrk 4 years ago
Before you criticise someone, you should walk a mile in their shoes. That way, you've criticised them, you're a mile away, and you've got their shoes.