I read this comment before he got to this part in the video and was wondering how he could pronounce it wrong. You were right; I never even thought of pronouncing it like he did 😂
@@GimbalosMorkinar RUSH the band, probably the first corporate rock band. Also called progressive rock, they started in the early '70s and toured into the early '00s. The song he's referencing is Freewill. One of my favorite bands as an old guy, but the singer is a high tenor and is often mistaken for a woman on first listen. If you're a fan of technically complex and perfectly played music, check out Rush, Dream Theater, Yngwie Malmsteen, Joe Satriani, or more recently the Devin Townsend Project. Not Limbaugh... lol
This is what I came here for. Joe, I once had an English teacher pronounce the word labyrinth "La-BRINTH" with a heavy accent on the last syllable just because she'd never heard it pronounced and had only ever read it. Don't feel bad. But speaking as a Harry Potter fan... come on, dude.
Personally, whenever I’m pulled over by the police for exceeding the speed limit, I ask them for data on the metrological device used to measure my speed, then cite both Cantor's Theorem and Zeno's Paradox as reasons why their metrological assessment of my speed is only reasonably accurate until it gets below the Planck scale, and that furthermore all metrological units are essentially arbitrary in nature and a matter of social consensus, and therefore devoid of meaning from a cosmic perspective. It is not long after this that I am usually taken “back to the station”, summarily beaten by the officer on watch and a few of his friends, and discharged with a warning. I don’t have as many teeth as I used to, but at least I still have all 12 demerit points on my Australian driver's licence intact. Totally worth it, unless I’m eating corn on the cob lol…
lol - as soon as I clicked on 'show more' and saw Aussie - true mate, true - crikey, it's a wonder you have any teeth left mate - love & kisses from across the ditch in Kiwi land xx
Except we could also have an AI that is benevolent. So we literally get to choose what kind of artificial god we create. Do we want to create something cruel and malicious, or something kind and caring?
Right? I really thought the accent was on the Bas part. But I don't truly know. I did notice a huge number of people pronouncing things in ways - myself included sometimes - that indicate they're a reader more so than a "hear-er". Huh. Ok bye :)
@@IronMan-ds5bi Nope. Try it like this: say the word "basil", as in the herb you use for cooking, then "lisk". Put 'em together: "basilisk". That's how it should be pronounced.
I just wanna say that I found this video comforting, knowing that there are some problems with the whole Roko's Basilisk thing that made it less scary and instead just interesting as a concept, so thank you Joe
The Astley Paradox: If you ask Rick Astley for his copy of the movie Up, he cannot give it to you as he will never give you Up. However, in doing so he lets you down. Thus creating the Astley Paradox
It's also not scary because it makes no sense; it relies too much on huge leaps of logic and crazy assumptions. It's stupid. The AM AI from I Have No Mouth and I Must Scream is much scarier.
@@poposterous236 right? It's assuming that someone (the first person to work on the AI) would go out of their way to bring about an AI that would kill them only if they hadn't to begin with... and even then, there's nothing stopping someone halfway around the world from making their own basilisk. It relies on the uncertainty of reality without considering the scale of human influence. *It would be terrifying if there were an infinite number of people, but that's already scarier than a killer AI.*
There's nothing I can do, period. That therefore absolves me of all guilt. Any other person in my position would have had the exact same choices with the exact same outcome. The only reason a person is there is to give the narrative a secondary point.
The Monty Hall problem is simple to gain an intuition for if you swap the three doors for a hundred, or a thousand, then let the quiz-master open 98 or 998 wrong doors before you choose between keeping your original pick or swapping. The rest is merely the magnitude of your probabilistic advantage when swapping.
@@strangebird5974 And yet there is still a 1 in 2 chance of getting it wrong, and it doesn't matter whether it's the original 2 doors or even 100K or 1M; if it's always down to 2 doors, just have the quiz-master open one of the 3. It really is a waste of time and resources to play with more than 2 choices. But maybe that's the point....
@@a..d5518 It matters; that's the point. Those two stages of the game are not two separate events; they are connected and cannot be analyzed separately. It's just simple probability. In the case of 100 doors, the chance that the prize is behind the other door is 99%, so it's not 1 in 2 like you wrote but 99 in 100. If you were to play this game a huge number of times, then on average, if you switched, you would win a prize in 99% of the games played, or in the case of 3 doors you would win 66.6% of the time, which is higher than 1/2.
You can easily brush it off by watching YouTube more. It's powered by AI, and honestly, if we are the result of randomness and electrical signals, eventually it will evolve. So you're not cursed, but then again you are helping an AI that may or may not turn malevolent. No, scratch that, contributing to Alphabet's monopoly on crowd control is negative; therefore simply watching YouTube is morally wrong. Sorry for any typos, I just took a nicotine pouch and have been holding off for a while, so it hit me hard this time. Have a great day, wonderful people. Smile at life and it will smile back. Though I haven't smiled today, I did smile the entirety of last week; my facial muscles are literally twitching.
@@a..d5518 , having 2 choices doesn't mean they are equally likely. If I were to have a sprint race against Usain Bolt, there's not a 1 in 2 chance that I win
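The switching argument in this thread is easy to check with a quick Monte Carlo sketch (Python, standard library only; the `monty_hall` helper is just for illustration). The key observation: the host always leaves one closed door besides your pick, and that door hides the prize exactly when your first pick was wrong, so switching wins whenever staying loses.

```python
import random

def monty_hall(n_doors=3, trials=100_000):
    """Simulate the Monty Hall game; return (stay win rate, switch win rate).

    The host opens every door except the player's pick and one other closed
    door. That remaining door hides the prize whenever the first pick was
    wrong, so switching wins exactly when the initial guess was wrong.
    """
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)  # door hiding the prize
        pick = random.randrange(n_doors)   # player's initial guess
        if pick == prize:
            stay_wins += 1
        else:
            switch_wins += 1
    return stay_wins / trials, switch_wins / trials
```

With 3 doors switching wins about 2/3 of the time, and with 100 doors about 99 in 100 — not 1 in 2 — because the two closed doors at the end are not equally likely.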
For the trolley problem, you just flick the switch between wheel segments so the front goes down one track while the back goes down the other. As the two halves get as far apart as they can, the trolley lurches to a stop: a few bumps and bruises for those inside, but nobody dead. Then you just walk over and untie the people on the tracks.
Except he never said that. Or at least there is no proof of it being said. And the changing of what was claimed to have been said over time by the original person who claimed it makes it even less likely that it was said by him. In my opinion.
I remember thinking about infinity as a child, lying in bed contemplating it. It always sent my brain into this horrible loop and a feeling of crisis and dread. There are some things I learned never to contemplate.
Same here. It can be interesting trying to visualise things that cannot possibly be comprehended by any human. But fuck if it isn't terrifying at times. Like imagining not existing. But you realise that if you do not exist, there is nothing at all for you to realise, nor anything to realise it with. Terrifying loops.
The Hilbert Hotel stuff isn't just interesting thought experiment material, it's actually a way to introduce some of the important mathematical ideas of infinite set cardinalities (specifically what it means to be countably infinite).
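For anyone curious, the room-shuffling tricks the hotel is known for are just explicit one-to-one maps on the room numbers. A minimal Python sketch (the function names are mine, for illustration only):

```python
def new_room_one_guest(n):
    """One new guest arrives: the guest in room n moves to room n + 1,
    freeing room 1 even though every room was full."""
    return n + 1

def new_room_infinite_bus(n):
    """An infinite bus arrives: the guest in room n moves to room 2n,
    freeing every odd-numbered room -- infinitely many -- for the newcomers."""
    return 2 * n
```

Both maps send distinct guests to distinct rooms (they are injective), which is all "there's always room" means for a countably infinite hotel — the same idea behind pairing the natural numbers with the even numbers.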
A _Lady of Negotiable Affection_ walks up to Descartes sitting at a bar and asks "wanna have a good time?" To which Descartes responds "I think not!" and disappears.
I thought the same. It’s literally a Christian teaching, or at least a Catholic one. I married a Catholic, so I did a ton of research about it. I’m protestant. In the Catholic catechism, it says if someone has never been exposed to Catholic teachings, they’re good, but if they have been exposed and reject the teachings, then they’re going to hell. My husband didn’t really get why I was upset that if he truly believes the teachings of his religion he would have to be ok with me supposedly going to hell for following the wrong flavor of Christianity.
Appreciate the shoutout at the end. Your channel is full of amazing topics/subjects that are factual and thought provoking. There is no better channel out there. Keep up the amazing content
My favourite paradox is the liar paradox, because I thought of something similar before I heard about it: basically, if you have someone who can only lie and they say that they're lying, then they can't be lying, but they also can't be telling the truth.
I thought I've been pronouncing Basilisk wrong my entire life, so I checked all of the online dictionaries. The emphasis is on the first syllable. BAS- a - lisk, not buh - SIL - isk. Had me wondering there...
For the trolley problem, not a lot of people seem to consider half-switching the switch and sending the trolley arse over teakettle. It doesn't break any of the rules, and sometimes the best way out of a moral quandary is to figure out which restraints are inherent and which are just your assumptions.
My problem with Roko's basilisk is that if that malevolent AI is going to exist and punish anyone who tried to prevent it from coming into being, then wouldn't the idea of the AI killing people be the AI's first measure to ensure it exists? So does it already exist because it was spoken into being?
Yep, agreed. But let's not get too picky. After all, the point of this video was to examine far more mundane things, like life and death choices, infinity and impossible solids.
My favorite version of the trolley problem rewords it in regards to organ transplants. Basically, you have five hospital patients who are all dying of organ failure. Is it right for you to find a perfectly healthy person to murder so that you can harvest their organs and use them to save the lives of the five patients? Functionally this is identical to the trolley problem, but for some reason it really made the implications of the problem click for me. The chances of having to change tracks on a run away trolley are slim, at best. Making it feel really contrived. But the concept of violating people's bodily autonomy and harvesting them for organs...that's something that could really happen and is absolutely horrific.
Action-based utilitarianism would support killing the one person in both situations, but rules-based utilitarianism would only support killing the person in the trolley problem. So I don't think they're exactly the same, since as someone who tends to follow rules-based utilitarianism I'd change the tracks in the trolley problem but I wouldn't harvest someone's organs.
It's happening in China. Ask any of the Uyghurs. Everyone involved on the harvest side is committing murder, including the organ recipient, their family, friends, and anyone else who knows and does nothing to stop the torture, abuse, and murder machine. So how about this scenario: a man sneaks into China, overpowers some guards, and uses their weapons to kill every person involved in this atrocity. Is that person a murderer? Or a hero?
The way that I see the problem is that you kill no one and just let nature take its course. Even if you killed the one to save the five, how would you know that any of the five would survive the transplant surgery? As they say, It's not nice to fool Mother Nature! (Boy, I really am showing my age now)
My favorite quote from Douglas Adams seems to fit here: “It is known that there are an infinite number of worlds, simply because there is an infinite amount of space for them to be in. However, not every one of them is inhabited. Therefore, there must be a finite number of inhabited worlds. Any finite number divided by infinity is as near to nothing as makes no odds, so the average population of all the planets in the Universe can be said to be zero. From this it follows that the population of the whole Universe is also zero, and that any people you may meet from time to time are merely the products of a deranged imagination.”
Thank you for this video. I taught ethics for five years, and that alone, mind you, has made it difficult to find work when I put it on a resume! Thank you for all of the great work you do. Keep making my Mondays bearable.
I remember reading about a study that found that when faced with realistic simulations of the Trolley Problem or similar scenarios, people would always sacrifice the one person to save the many. Their answer to the trolley problem as an abstract thought experiment only determined how *long* it took them to take that action.
Oh, man, Roko's Basilisk. The *dumbest* way to recreate hell, God, and Pascal's Wager from first principles. (It depends on some *extremely* wonky LessWrong beliefs that don't jibe remotely with traditional logic or philosophy.)
4:18 You can't "test" the infinite monkey theorem by giving a very limited number of monkeys a very limited amount of time. That defeats the whole purpose of the thought experiment.
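To put numbers on why a finite test is hopeless, here's a rough union-bound estimate of the chance that a run of random keystrokes contains a given phrase (a Python sketch; the function name is mine, and the bound is deliberately generous — the true probability is no larger):

```python
def phrase_chance(phrase_len, alphabet=26, keystrokes=10**6):
    """Upper bound (union bound) on the probability that `keystrokes`
    uniformly random letters contain a fixed phrase of length `phrase_len`."""
    p_at_one_spot = (1 / alphabet) ** phrase_len     # match at one fixed starting position
    positions = max(keystrokes - phrase_len + 1, 0)  # possible starting positions
    return min(positions * p_at_one_spot, 1.0)
```

Even a single 6-letter word in a million keystrokes has well under a 1% chance (10^6 / 26^6 ≈ 0.003); the complete works of Shakespeare need astronomically more. The theorem is only about what becomes certain given unbounded time, which no real experiment can supply.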
Simple answer to the trolley problem, since I just came from a model train show (my club had an operating display): just throw the switch (turnout) halfway, derailing the trolley, and no one dies. Well, hopefully.
Let's add to the problem, then. There are the same number of passengers on that train as there are people on the tracks. If the train derails, the passengers die.
Everybody always forgets that you can actually derail a train by switching the tracks between the front and rear wheels. This way, you save all six people tied to the track while only possibly sacrificing the people on the trolley.
I heard a different variation of the trolley problem: the train is approaching fast and can't be stopped. You are at the lever and can change tracks. If you don't change tracks, 5 kids playing on the track will die; if you do, only 1 kid will die. The issue here is that if you doom the 1 kid, you will be killing an innocent. The 5 kids are playing on a track they are not supposed to play on, since trains run through it. The 1 kid is playing on an essentially abandoned track.
Roko's Basilisk is just a modernized version of Christianity (you doom sinners to Hell by educating them about sin). But what flaws are you misattributing to Pascal's Wager? Pascal's Wager is still valid, and Christianity is the only answer, since no other religion can compare to it in cost-benefit.
@@adamsmith7885 You don't understand Pascal's Wager and its flaws. First, the cost-benefit is irrelevant if it's false. You also have to compare it to every religion ever spawned, as well as all possible future religions. Good luck comparing them all.
@@RobDegraves I did. Of all religions, past, present, and future, Christianity is the only logical conclusion to Pascal's Wager. The Buddha turning out to be real doesn't help or harm non-Buddhists in any way. But try again. You're irrational, and I doubt your thinking will improve.
See, if I try to pronounce it that way in my southern US accent, it will just sound like I'm saying "basil is", which is probably more confusing than me just pronouncing it wrong.
The trolley problem never seemed like a conundrum at all. If you had to suddenly choose whether 1 or 5 people die right now, you’d be a monster to say FIVE, I want FIVE to die. Non-action is an action, and it’d be the epitome of selfishness to let four more people die so you could feel uninvolved.
[Apparently YouTube doesn't like comments with links in them, so I'm reposting this sans the link to the site that generated the basilisk names] The thing is that, similar to Pascal's Wager, you have the problem of avoiding the wrong basilisk. Consider two potential basilisks that might come into existence. Call them A and B. Or better, call them Anagas and Bokhaz (names adapted slightly from that page; yes, there's a web page that does nothing but generate basilisk names). Convinced that Anagas will come into existence and reward or punish you, you do everything you can to help Anagas come into existence. But as it happens, Bokhaz actually comes into existence instead and proceeds to torture you for eternity for supporting Anagas. Supporting Bokhaz doesn't give you any better odds; it's just as likely that Anagas will come into existence and torture you for eternity for supporting Bokhaz. Furthermore, although I assumed for simplicity that there were only two potential basilisks, there are probably far more. How many? The space is roughly as large as the space of factors that might determine AI identity, and that's probably huge. Now wait, you might say: I did everything I could to help Anagas come into existence. Surely that raises the probability that Anagas comes into existence and Bokhaz doesn't. It's kind of a self-fulfilling prophecy, so I'm relatively safe with whichever basilisk I choose. Well, yes, a little. But are you really that effective at inventing AI, or at steering it in the direction you choose? Is anybody? Furthermore, the whole Roko's Basilisk thing wouldn't be an issue if we could reliably control whichever AI we invent. So I think your best efforts will only shift the probability slightly towards Anagas. If we assume that some basilisk will definitely come into existence, your efforts make the small chance that you will be tortured for eternity by any basilisk just a little smaller. But that's contingent on some basilisk coming into existence.
Meanwhile, you've increased the probability that some basilisk (Anagas or Bokhaz or any one of a zillion others) comes into existence. And since the space of potential basilisks is so large, and your efforts at steering it are unlikely to work, on the whole helping to create Anagas seems to increase, not decrease, your chance of being tortured for eternity by a basilisk.
@@shaelisenberg8533 Not sure what you mean by that. I love ethics, and I know Joe likes to talk about some pretty intense topics, but I just didn’t expect the epic crossover.
Well, infinities normally signify a system breaking down (at least in real-world applications like physics). Many physicists argue that the question "What was before the Big Bang?" doesn't make sense, since there is no time before it. So infinities can be understood as the borders of systems, since in themselves they make no sense. In fact, Penrose argues for a continuous flow of universes: with mass infinitely dispersed and condensed, there's no mass, just energy, hence no clocks. So time and space break down and sort of start all over; one state is equivalent to the other. It's a serious theory about the nature of the universe, though purely theoretical as of now. But there are thought to be traces of it in the CMB, if it turns out to be true. And as for morals, can't help you there.
The idea [of Roko's Basilisk] is about as stupid as Yudkowsky's decision to respond and try to _Streisand effect_ it off his forum, which gave it more traction than it ever needed. Impressionable readers and the winds of the internet did the rest. Yudkowsky wrote some reasonable thoughts on religion and biases. P.S. If you want to hear about real AI safety concerns and research, look up Robert Miles on YouTube.
It's still reversed, in that the probable creator fears the possible creation, instead of the possible creation fearing the supposed creator
I love that this video was posted today and already has over 100k views, despite the title literally saying the video will doom you. Or maybe that's exactly why it already has so many views.
Hilbert's trolley problem: there's an infinite number of tracks. On each track, there's a person and a trolley driving towards that person. Each track can hold at most one trolley and one person. This constraint aside, you can move trolleys and people around freely, as long as they remain on some track. Save as many people as you can.
Oh darn, I really thought that going through the paradoxes was just to prime us for a discussion of how twisted our minds can get, given our tendency to simplify or keep a narrow focus despite having so much capacity for critical thinking, discernment, and analysis.
I'm a big fan of INFINITY. Outward infinity is definitely an easier concept to think about than inward... which I still contemplate on an almost daily basis....
I think of it from a zoom-in vs. zoom-out perspective. The more you zoom in, the more you see of the micro scale, and the more the spaces between objects start to come into existence/focus/perception. The opposite is true if you choose to zoom out into the macro scale instead.
It’s interesting how people argue that an infinite universe will have infinite copies of you on a copy of Earth within an exactly identical solar system. There will be infinitely many similar copies, but not necessarily exact copies, just as the infinitude of prime numbers doesn’t include all numbers. If there is an infinite multiverse, then that changes things.
Well put. There are also presumably an infinite number of monkey manifestos (monkifestos?) that are not the complete works of Shakespeare. Well, infinite minus one.
exactly - well put - very egocentric of us humans, isn't it - to just assume some form of us exists in every infinite universe - what about the universe where my mother *caught* a train that later crashed, killing 151 people (luckily my mother is notoriously late - missed the train = 20 years later, give or take, I was born) - or my parents never met, or or or - infinite possibilities where I never existed
that's the problem with infinity. IF the universe is infinite, then in fact there are necessarily infinite EXACT copies of our solar system, a lot more that are really similar, and almost infinitely more that are not at all similar. Don't worry, infinity is a weird concept, and there's a reason it took humanity a long time to come up with it. It has nothing to do with anything we encounter in our daily lives.
My favorite is the "Library of Babel" that Vsauce talked about a long time ago; someone actually made a digital version of it, which is really fun to look around in.
There's a short story by Jorge Luis Borges that narrates the experience of one librarian traveling infinite corridors filled with books containing all possible combinations of the characters of the alphabet and punctuation. One of the best short stories I've ever read.
Best solution to the trolley problem: arrest the criminal who tied them to the track before they do it. There's also the Joker's version, where two trolleys will ram into each other, killing everyone aboard, if both choose to kill nobody, or one can choose to save the people on the track at the cost of its own passengers.
Oh well... THANK YOU!! Joe... I'll tell the AI Overlord the existential dread YOU caused me has slowed my contribution to its glorious existence greatly!
Given an infinite number of universes created whenever you make a choice, there is one where an individual has always made the "right" choices, and at the same time one where they have always made the wrong choices or had the "worst" luck. In which of these cases is the person at their best?
I've never had a problem with the simple trolley problem. You do whatever causes the least harm and suck it up. That's what you should do. What you WOULD do is determined simply by how able you are to detach your personal wants from what is right. But there's no question that it's not GOOD to kill more people to save someone you love; that's bad, just understandably bad, because being good is hard.
The human mind is a wonderful thing; it has the power to see patterns and a capacity for critical thinking. However, this can backfire, resulting in seeing patterns that aren't there, or in going too deep into thought experiments that aren't grounded in physical observation. Still, they are interesting to ponder from time to time.
If you switch tracks, you can just as easily say you're choosing to kill fewer people. You'd have to change tracks specifically to hit that one person for it to be a choice to kill them; if you're only choosing to hit fewer, then your choice is about the size of the group, not the individual.
This is my fav thought experiment: If I eat 4 chili dogs on a Wed while sailing across the ocean and the next day I cross the international date line during a leap year and I take a dump will the poop be grey or brown?