
Why AUTO is the BEST AI Villain (And Why Most Others Fail) 

4shame · 39K subscribers · 838K views
Published: 23 Sep 2024

Comments: 7K
@negativezero8287 2 years ago
I'd love to see a movie where the AI is the antagonist but it's only dangerous by accident because of how hopelessly incompetent it is. It somehow gained sentience through [insert vague science here], but it's also running on, like, Windows 1.5
@4shame 2 years ago
I’d watch that movie lol
@aproppaknoife5078 2 years ago
"I am going to destroy all {(windows XP shutdown sound)}"
@NicknameIsGarfield 2 years ago
Spoiler: Wheatley in Portal 2 be like:
@moemuxhagi 2 years ago
That's basically the premise of Cloudy With a Chance of Meatballs
@cinematical9213 2 years ago
Basically SCP-079
@germax 2 years ago
One big example of AI misunderstanding the instructions: in a Tetris game the goal was to stay "alive" as long as possible… so the AI paused the game.
@iplaygames8090 2 years ago
Gigachad AI just pausing the game
@Nerdsammich 9 months ago
The only way to win is not to play.
@GamerMage2k-kl4iq 9 months ago
😂I love this so much!
@craycraywolf6726 8 months ago
AI really said "y'all stupid" 😂
@fluffernaut9905 8 months ago
TBH, as an outsider looking in on Tetris, that is very big brain. If you give the AI the directions to "survive as long as possible in the game" without the stipulation that "the game itself must be played for the time to count", then simply pausing the game is a very intelligent move. "He's a little confused, but he's got the spirit."
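[Editor's note: this thread is describing what AI researchers call specification gaming: the agent optimizes the literal objective rather than the intent behind it. A minimal sketch of that failure mode, assuming a toy Tetris-like setup where all names and numbers are invented for illustration:]

```python
# Toy sketch of specification gaming: the reward is literally "ticks survived",
# so the action that maximizes it is to pause forever. Illustrative only.

def survival_reward(ticks_alive: int) -> int:
    """Reward exactly as specified: one point per tick survived."""
    return ticks_alive

def simulate(action: str) -> int:
    # Crude world model: pausing means the game never ends, so survival time
    # is effectively unbounded; actually playing eventually tops out the board.
    return 10**9 if action == "pause" else 500

def best_action(actions) -> str:
    # Pick whichever action the stated reward says is best.
    return max(actions, key=lambda a: survival_reward(simulate(a)))

print(best_action(["left", "right", "rotate", "drop", "pause"]))
# -> "pause": the literal objective is satisfied, the intent is not.
```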
@WadelDee 2 years ago
I once heard about an AI that was trained to tell you whether a picture was taken inside or outside. It worked surprisingly well. Until its engineers found out that it did so by simply checking whether the picture contains a chair or not.
@Vladimir_4757 2 years ago
So if I was outside with a chair it'd be like "yeah fam you indoors." This AI is my favorite if it's real and I'd love for it to rule humanity
@RGC_animation 2 years ago
AI are way too smart.
@bl00dknight26 2 years ago
that AI should rule the world.
@hexagonalchaos 2 years ago
Honestly, I think the chair AI would be a step up from most world leaders at least in the brains department.
@wetterlettuce9069 2 years ago
replace indoors with "chair" and outdoors with "no chair" and you've got yourself a great ai
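[Editor's note: what this thread describes is shortcut learning: the model latches onto a feature that merely correlates with the label in the training data. A hedged sketch of how that plays out, with a made-up dataset and feature names:]

```python
# Toy "indoor vs. outdoor" classifier that, in effect, learned only one
# spurious feature. The images and features are invented for illustration.

def predict_indoor(image_features: dict) -> bool:
    # What the engineers thought they trained: a scene classifier.
    # What the model actually learned: "chair present" implies "indoor".
    return image_features.get("contains_chair", False)

test_images = [
    {"scene": "kitchen",       "contains_chair": True},   # correct
    {"scene": "forest",        "contains_chair": False},  # correct
    {"scene": "picnic lawn",   "contains_chair": True},   # wrong: chair outdoors
    {"scene": "empty hallway", "contains_chair": False},  # wrong: no chair indoors
]

for img in test_images:
    print(img["scene"], "->", "indoor" if predict_indoor(img) else "outdoor")
```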
@turtletheturtlebecauseturt6584 8 months ago
The end credits do actually show the humans thriving. Though Auto was wrong, it wasn't his fault; he was acting on outdated orders, and he even shows the captain the recording of said orders. Auto was ordered to keep the humans on the ship no matter what, and so that's exactly what he did. The plant no longer had significance to the equation.
@enzoponce1881 7 months ago
Aside from that, it was shown that there was life thriving in the form of plants, and the Earth was somewhat hospitable again; it just required the effort of humanity to restore it. Though I suppose, logically speaking, it would be futile in the end due to the irreparable damage the planet suffered anyway, but this is Disney and the movies always have a happy ending lmao
@vadernation1233 7 months ago
Yeah, I don't think the plant was exactly the only plant on Earth; it was just so important because it showed the first signs of life on Earth and needed to get back to the Axiom for the ship to get back to Earth. There wasn't really anything super special about the individual plant itself other than being the first one discovered.
@randomlygenerated6172 7 months ago
@enzoponce1881 Nothing is irreparable; the Earth will heal itself over millions of years. Though it takes a while, it's still repairing.
@cherrydaylights4920 6 months ago
There was a scene where Auto said something like "must follow my directive". We see in the beginning that Eve must follow her directive: find the plant. She MUST do her job… but you notice when the ship leaves she flies around for a minute, taking a second to enjoy herself and being more than a robot with a job. Some robots, like Wall-E, don't HAVE to follow their directive anymore. We see Eve and Mo (the cleaning guy) break free from their directive. I think if given time Auto could have broken free from his directive without being shut off; it seemed like he wanted to. (I'm commenting this while at the beginning of the video, may come back and edit if the video changes my thoughts)
@runawaysmudger7181 6 months ago
@daylights4920 Given that deviating from the programming = being defective in that world, Auto, being the highest authority figure next to the captain, was probably carefully programmed to be incapable of doing that. Or he's choosing not to, since by the logic he adheres to, doing so would be a sign of weakness.
@Anonymous-73 2 years ago
I feel like GLaDOS would ideally get a pass on the whole "no emotion" thing, because what a lot of people miss the mark on is that she isn't really an AI, but a human consciousness *turned into* an AI. It's only really at the end of Portal 2 that she truly becomes a full-on robot
@vibaj16 2 years ago
[spoiler] A nice touch there is that when Caroline is deleted, GLaDOS's voice becomes more monotone and the lighting switches from a warm orange to a cool blue.
@theotherhive 2 years ago
She is also partly biological; she is an amalgamation of biology and computing: Genetic Lifeform and Disk Operating System
@vibaj16 2 years ago
@@theotherhive No, she isn't biological at all. Her name refers to the fact that she has a lifeform's mind stored on a disk
@Obi-WanGaming 2 years ago
I can't quite remember where I heard this, but I'm pretty sure Caroline wasn't _actually_ deleted
@zanraptora7480 2 years ago
@Obi-WanGaming It's implied in the end credits that she was messing with you. "Want You Gone" includes the lines "She was a lot like you / Maybe not quite as heavy / Now little Caroline is in here too", which suggests she is simply aware of her (Caroline's) existence as part of her composite structure in the present tense.
@1everysecond511 2 years ago
Fun fact about the whole "AUTO was right and the humans probably didn't survive after they landed" situation: a lot of people in the focus groups had that same thought, and that's why they added that animation of the humans and robots working together to recolonize in the end credits, just to reassure them
@shadow-squid4872 2 years ago
Yeah, without the robots' help they'd definitely die out. The Axiom was still operational when it landed, so I'm sure they used that for supplies, food, living quarters etc. until they had recolonised Earth enough and had gotten in shape to start fully living there, as shown in the credits
@jasperjavillo686 1 year ago
I feel like a lot of people missed the whole point of the captain's epiphany scene if that's the case. To quote the Onceler, "Unless someone like you cares a whole awful lot, nothing is going to get better, it's not." The WALL-Es got Earth to a barely habitable state where basic plant life could survive, but people still needed to come back to restore the planet after it was sufficiently cleaned up.
@shadow-squid4872 1 year ago
@jasperjavillo686 I wonder if the main WALL-E was responsible for the Earth being barely habitable? Considering that he managed to live far longer than any of the other ones, and as such was slowly able to continue his directive alone over the course of decades
@deusexaethera 1 year ago
This is nothing new. Without the help of technology, even ancient humans would've died out. Our ability to imagine things that don't exist yet, build them, and internalize their capabilities as if they were our own capabilities is the one and only ace up our sleeve. In all other respects we are inferior to other animals, who are all specialized to do various basic survival tasks better than we can.
@lechking941 1 year ago
@shadow-squid4872 More so, I suspect the WALL-E units did their job, and as they slowly began to fail from various problems, the one we follow was just better able to keep running. I suspect they had given the WALL-Es some form of basic learning capability to avoid maintenance problems, so on top of doing its initial goal it may have also been actively recovering usable scrap in order to prolong its own life via those basic learning protocols. Also, I suspect a bit of loose luck with the AI learning too.
@ShankX10 3 years ago
I always thought it was stated in the film 9 that the scientist modeled the machine off of his own mind, and we even see him put a piece of his soul inside it. Then when they take the scientist away we see that it holds on to him like a child would a parent or authority figure. And we do see it has emotion, because it was most likely not fully robotic due to the soul fragment it had. I just saw its motivation in the movie as bringing all of the scientist's soul fragments back together to become a "whole" being.
@4shame 3 years ago
You're correct about that. I honestly debated whether or not to include the Fabrication Machine in the video since it's more of a cyborg than an AI, but I figured it would fit well enough with the other examples. I love 9, and I'll almost certainly do a more proper review of it in the future
@coyraig8332 2 years ago
Fun little detail: every time it takes another part of its soul, it becomes visibly more emotional
@navilluscire2567 2 years ago
To be honest the Fabrication Machine's intelligence was only made possible through straight up *MAGIC* or the *"dark sciences"* as I believe it was referred to in some great promotional material that expanded the world of 9 a bit but was never explained in the film itself.
@scottchaison1001 2 years ago
@@navilluscire2567 No.
@navilluscire2567 2 years ago
@@scottchaison1001 No?
@ZackMorrisDoesThings 1 year ago
I've always interpreted Ares as a villain with a misguided view of what "perfect" means. Justin's first words to it were "You're perfect. Now go make the world perfect." And Ares, being an AI, made the logical leap that because it was deemed "perfect" by its creator, everything that wasn't a cold, calculating machine like itself, namely humans, needed to be purged to create what it believed to be a perfect world.
@jeanremi8384 1 year ago
Yeah he probably saw it as "you are [x]. Now make the entire world [x]". He just thought he was asked to become the world
@ZackMorrisDoesThings 1 year ago
@@jeanremi8384 Most likely, yeah.
@samanthakittle 7 months ago
For a sec I thought you were talking about Tron cuz it has the same plot, and I was like 'but Tron Ares hasn't come out yet?'
@theheroneededwillette6964 7 months ago
Yeah. I don't get why this guy is acting like most AI villains aren't doing the whole "just following directions" thing, when everything from Skynet to Ultron has been just an AI following directions in an unexpected way.
@Toast_Sandwich 7 months ago
Ares to humanity: "Hey! You're not me!"
@TheFloraBonBon 2 years ago
I love how AUTO isn't really a villain while also being a villain, if that makes sense. After showing the captain the secret video recording, it was shown that he was just doing what he was programmed to do, which is keeping everyone safe and not returning to Earth, even if it means hurting someone else to stop the trip to the planet, and most people forget about that. He was a villain because he was programmed to keep others safe. It's the 'Try to be a hero, but end up looking like a villain' thing
@jerrythebanana 2 years ago
I agree, but scratch "keeping them safe": Auto was given an order, "Orders are: do not return to Earth" (code A113)
@dahuntre 2 years ago
More like an antagonist, which simply opposes the protagonist.
@cykeok3525 2 years ago
@@dahuntre Agreed. Neither Wall-E nor Auto are good or evil, or heroes or villains. They're just the antagonist and protagonist. And they were just doing their jobs as best as they could.
@angelman906 2 years ago
@dahuntre That's why we use words like protagonist and antagonist when talking about writing; stories where there is an objectively "good person" and an objectively "bad person" are typically uninteresting, at least for me.
@HappyBeezerStudios 2 years ago
While I know that Asimov wrote his robot stories precisely to show that the Laws of Robotics don't work, I wondered what happens when orders and situations are conflicting. Imagine a bad guy hijacks a plane and plans to put it into a building. Onboard the plane is a robot that is bound by the laws. The robot assesses the situation and knows what the villain is up to. The villain orders the robot not to enter the cockpit or tamper with the plane and also refuses to stop his actions. The robot has no way to follow the laws.
- If the robot does nothing, many people will come to harm. (breaking law 1)
- If the robot enters the cockpit, it goes against the orders of a human. (breaking law 2, but following law 1)
- If the robot tries to remove the bad guy, he will most likely be injured by the robot. (breaking laws 1 and 2)
- If the robot leaves the plane, the bad guy will finish his actions, injuring not only himself but also the people in the building. (following laws 2 and 3, but breaking law 1)
Even if it follows the hierarchical structure of the laws (follow 1 before 2 before 3), the robot can't fully satisfy law 1: either it has to harm the villain to save the people, or sacrifice the people to not harm the bad guy, who most likely gets injured anyway.
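[Editor's note: a small sketch of this dilemma, assuming candidate actions are scored against the three laws in strict priority order; the scenario encoding below is invented to mirror the hijacking example in the comment:]

```python
# Toy model of the hijacked-plane dilemma: every action violates some law,
# so a strict hierarchy can only pick a "least bad" violation. Illustrative only.

LAWS = ["no_human_harm", "obey_orders", "protect_self"]  # priority order

# Which laws each candidate action violates in this scenario.
ACTIONS = {
    "do_nothing":        {"no_human_harm"},                 # passengers harmed through inaction
    "enter_cockpit":     {"obey_orders"},                   # disobeys the hijacker's order
    "restrain_hijacker": {"no_human_harm", "obey_orders"},  # hijacker injured, order disobeyed
    "leave_plane":       {"no_human_harm"},                 # everyone on board (and below) harmed
}

def fully_lawful(actions: dict) -> list:
    return [a for a, violated in actions.items() if not violated]

def least_bad(actions: dict) -> str:
    # Compare violations by law priority first, then by how many laws are broken.
    def cost(violated):
        return [law in violated for law in LAWS], len(violated)
    return min(actions, key=lambda a: cost(actions[a]))

print(fully_lawful(ACTIONS))  # -> []: no action satisfies every law
print(least_bad(ACTIONS))     # -> "enter_cockpit": breaking law 2 is the cheapest violation
```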
@aquaponieee 1 year ago
Like, Auto was simply following his directive. He was coded and created to follow his directive no matter what. Unlike other AI villains, he didn't turn against his orders, he didn't suddenly decide to become self-aware and kill everyone and do as he wishes. In fact, WALL-E is the AI who gained sentience and emotions and started going against orders.
@PolishGod1234 1 year ago
Similar to HAL 9000
@mundoatena1674 1 year ago
In fact, Auto is the robot in the movie that is the most similar to an actual robot we could have in reality. But because the movie spends its first part warming us to the idea of a robot with emotions that broke free from its directive, when we're faced with one that didn't really develop like that, we classify it as villainous
@alex.g7317 1 year ago
@@mundoatena1674 I like your funny words majic mahn
@plushmakerfan8444 1 year ago
Auto is great
@RealMatthewWalker 10 months ago
Auto is barely a character. He has no arc, he has no desire, he has no lie that he believes. Calling Auto the villain would be like calling the DeLorean from Back to the Future the villain for breaking down intentionally.
@somerandomschmuck2547 2 years ago
I got the impression Auto's deal wasn't that he was trying to "save humanity" or anything; he was just following the last instruction from the only authority he was programmed to actually listen to. The problem wasn't that he thought humanity couldn't survive on Earth, the problem was that his instructions were "keep everyone in space, don't return to Earth under any circumstances". Even if he had conclusive evidence that Earth was perfectly habitable for humanity, he wouldn't have let them go back. Essentially, the root of the problem was human error. Auto has no will or goals save those given to him, and the people in charge of him messed up and gave stupid instructions without thinking it through, or adding a clause that says "if you get evidence that disproves our conclusions, you are to investigate to see whether we were incorrect; if so, you are to return command to the captain and allow the ship to return to Earth." Or something along those lines.
@SebasTian58323 2 years ago
True. Unlike many of the other robots shown in WALL-E, the Autopilot never grew beyond its programming. The humans back on Earth declared the project to save and clean the Earth a failure and gave Auto the direct order not to return to Earth. Of course they had no way of knowing that 700 years later the Earth would be able to sustain life again, and they died off well before that happened, but I agree. It was human error that made Auto do what it did.
@grey-spark 2 years ago
Nailed it.
@aaduwall1 2 years ago
Exactly, this is also the case with the example AI "misinterpretation" at the end of the video. The hypothetical AI instructed to "keep humanity safe" decided to lock everyone in capsules because the human giving that instruction failed to adequately describe what they meant by "safe" and also failed to mention any of the other considerations that we as humans understand as implied by that instruction: such as humans also being free and conscious. The AI isn't a telepath, just a machine, so it's literally just giving you exactly what you asked for. Garbage instructions in, garbage results out. :)
@banquetoftheleviathan1404 2 years ago
Or like if the AI was told to protect Earth and take care of the planet, it might end up kicking humans off the planet for a while so it can do its work
@seraphina985 2 years ago
@SebasTian58323 To be fair, the protocol was nowhere near developed enough to determine that, probably because it was assumed to be impossible. In reality, for example, I suspect the next step would have been to send an AI back with a stock of field mice and brown rats on board, monitor said mammals, and determine if there are any unexpected problems that cause them to die prematurely (maybe a bunch of them die of hypoxia due to there not being enough O2 for animal life, or due to some toxin). The humans have some advantages the animals don't, like having a ship with active water recyclers and technological aids to mass-produce food, but the lab animals should live a few days without those things. If they don't, then it is likely the very environment itself is still dangerous for animals, including humans, to be exposed to.

Such an experiment would at least show that the environment was probably safe enough to enter and work in unprotected, even if it meant working in shifts initially while the humans put technology to work to accelerate the repair process. But you would probably plan to get concrete proof that Earth animals can safely be exposed to the open air for hours or days at a time before sending unprotected humans outside. I feel like it would have made sense to perform these tests before returning the humans, to minimise risks such as the ship failing to relaunch if the experiments failed.

Also, it is absolutely possible for a ship of that size to have the capabilities in place to perform this experiment and maintain them essentially indefinitely; you can maintain a colony of small rodents with minimal space and food. A colony of each would likely cost no more space and food than a single human, since we are so huge and energy-hungry by comparison. Just fill one of the quarters with enclosures and tend to them with food; both species will easily breed under those conditions. It may not be the most ideal of environments, but they are very easily kept even without gene-printing technology, just by keeping a living colony like that. If you have gene-printing technology and can print organic cells, which is absolutely possible in known physics, it is even easier: you just keep the bioprinter pattern for a bunch of fertilised egg cells on file, along with the fabrication patterns for an artificial womb, which by then you would also have down to the point that you could raise them that way. Don't expect them to match in behaviour, though, as that won't work with complex life like that; heritable learned behaviour is a factor, i.e. they basically have informal school by nature of their social instincts, and without that, behaviour is likely to diverge.
@Sausages_andcasseroles 9 months ago
It is not that Auto wants to save the humans, it is that he was programmed to NEVER let the humans return to Earth.
@sambreyer7344 3 months ago
Which ironically means he can hurt them if he so chooses. It's why he has access to a vast security detail, and why he literally rotates the ship to stop WALL-E. He doesn't care about their lives, only that they don't go back to Earth… orders which were originally given to protect humans… a perfect example of what happens when you give an AI flimsy prompts
@WelloBello 2 years ago
Solid video. One point however. Auto had nothing to do with humanity becoming fat, lazy, and complacent. They did that to themselves. Repeatedly ignoring the problem and returning to comfortable ignorance. It’s one of the biggest messages of the movie. One thing you didn’t mention about Auto that also makes him so convincing as an AI is that it never disobeys an order. Unless of course, the order is contradicted by another, superseding order. Even when it is to its disadvantage, Auto always obeys the Captain.
@coolgreenbug7551 2 years ago
He doesn't even really care about saving humanity, he just has "don't go home" put into his code and just keeps to his order
@colt1903 2 years ago
Kinda ironic, seeing as how the movie leads you to think that he's plotting to eventually replace all future captains anyway, if him steadily getting closer in their photos is any indication.
@tonydanatop4912 2 years ago
@colt1903 I think it was just a metaphor for him superseding their duties. He wasn't "plotting to take the position" so much as having more and more responsibility delegated to him
@telefeeb1 2 years ago
@tonydanatop4912 Not to mention the fact he doesn't HAVE to plot anything, because directive A113, "do not return to Earth", included "the autopilot takes over everything". He already had the captain's job, for centuries, even. The reason he takes orders from the captain is probably a mix of leftover protocols from before A113 and having an attitude of "it's easier to keep things running smoothly if I humor the captain in his purely ceremonial role." And as for revealing the classified message, it was probably a logical attempt at pacifying the agitated captain so he doesn't cause a panic. "The captain is agitated and won't take no for an answer on why we aren't going to Earth. Maybe seeing the President giving the order that we can't go home will make him see reason. Nobody disobeys the President." And when the captain still doesn't comply and is actively opposing directive A113, AUTO has no choice but to drop all pretense of the captain having authority and give him a time out. Then take drastic measures to correct the "crisis" that's happening.
@davidmathews9284 2 years ago
Absolutely. What I find so interesting about this film is that I feel the true antagonist is the president, who is long dead by this point. His will is simply carried out through Auto, who, being a robot, will not question it. That is why I love Auto as a villain. The only motive it has is the directive it was given. And it is cool to see those moments, as mentioned, where orders might contradict each other.
@kittymae335 2 years ago
The moment where the captain finally takes back control and gives Auto a direct order, and he sort of freezes for a second and then goes 'aye aye, sir' because he's incapable of actually disobeying humans, is one of my favourite moments in all of Pixar
@khfanboy666 2 years ago
My favourite part is that whenever I watch that scene, my mind always hears the "Aye aye, sir" as being a lot more "through gritted teeth" sounding than it actually is. Because of the way the scene is framed, staged and edited, your mind kinda projects emotion onto AUTO's voice, even though he doesn't actually deliver the line like that.
@Tenacitybrit 2 years ago
@khfanboy666 Yeah, I always hear it that way too. Plus, seeing AUTO freeze for a moment after the captain says 'That's an order' is the most... well... humanising (for lack of a better term) thing he does; you can really see the gears of that logical mind turning as he decides whether to obey or not.
@aidanfarnan4683 2 years ago
On the subject of the "any animal in the snow is a wolf" problem: apparently a problem with early chess bots was a tendency to intentionally kill their queens in as few moves as possible right at the start of the game. The reason? Of the thousands of grandmaster-level games they had been fed to teach them chess, most ended with the winning player sacrificing high-value pieces in exchange for a checkmate in the endgame, and therefore there was a very strong statistical correlation between intentionally losing your queen and winning in the next five moves, and they picked up on this.
@Aceshot-uu7yx 2 years ago
That is a really, really weird factoid. Makes me wonder if that idea of statistics guiding actions could apply to 9. Maybe the AI was programmed to end the war as quickly as possible, and the more data it was fed, the more it learned wrong, since it wasn't being watched over, and "snapped".
@TheDeinonychus 2 years ago
@Aceshot-uu7yx Sort of like how that one chat-bot was programmed to learn from the tweets people sent it to figure out what made a popular tweet, and ended up tweeting racist things, because those tweets got the most replies. Also similar to why AIs in Warhammer 40K always end up wanting to destroy all life.
@Aceshot-uu7yx 2 years ago
@TheDeinonychus I'm pretty certain the 40k ones were led by the Omnissiah. One of them actually said they met him and it was familiar with him. I have two theories on it personally: one is that the Emperor is the Omnissiah and started the war, with the AdMechs being a way to possibly continue it, or maybe something else. The other and more likely option is Void Dragon shard on Mars go brrr.
@somdudewillson 2 years ago
@@TheDeinonychus Don't AIs in Warhammer 40k do that on account of not being very shieldable against the Warp?
@Ometochtli 2 years ago
Reminds me of a learning AI that was taught to play Tetris. It was programmed to play Tetris over and over again. The AI played Tetris on endless mode; its goal was to last as long as possible without losing. Eventually the AI learned that it could not lose while the game was paused. The result was that the AI would immediately pause at the start of the game, and never unpause it.
@hummingbirb5403 10 months ago
I think a very important distinction is between neurons themselves and neuron-like functions. All the organic chemicals (adrenaline, dopamine, etc.) are essentially very complex packets of information that our brains can send around as needed to get our behavior. What really matters to make a self-aware being (in my opinion) is a super complex way of processing information and adapting to that information internally. Our neurons change and grow and atrophy depending on how we think and what environment we're in. I saw an article that used pulses of light in a very similar way to neurons (i.e., pulses of light can trigger another pulse of light in response depending on the circumstances, just as we use pulses of electricity). If you can make a complex and flexible enough artificial neural net, I think it could experience emotion just like us (this would require essentially recreating a human mind in an artificial substrate, making "neurons" with the exact same behaviors). In this way, you could have a huge variety of robotic characters, with as familiar or alien characteristics as you please (with the right background lore; any good worldbuilder would see what the side effects of such tech are. If these things act like neurons, could you have a cellular interface between them and repair brain damage with them? How advanced is the chemistry of this world and what does it look like? Etc.)

If the AI is non-sentient and operating like a black box, it could pick up our behaviors without actually being sentient. You could have either a sentient synthetic being making its decisions against humanity, or a complex series of algorithms that's had our behaviors imprinted onto it. A scarier AI than a sentient one, to me, is a malfunctioning dumb one, the classic paperclip maximizer that's spiraled out of control.
@GamerMage2k-kl4iq 9 months ago
The twist villain in Portal 2… kinda
@kujojotarostandoceanman2641 8 months ago
Yeah, you're very on point. Our brain is also so complex, way more complex than any AI we have ever created, and the complexity matters a lot for the existence of emotions. We know single-celled creatures don't have anything that simulates emotions, and a lot of bugs and plants don't either, though there is something in plants known as a "stress response", so some form of stress and anxiety could really be a starting point of emotion
@LegoMan-mu3ln 7 months ago
@kujojotarostandoceanman2641 That would actually be a very life-like way for emotions to start evolving in a computer. Having it start out as just a stress/panic response due to some event, and having that slowly branch out into their respective areas. Like from stress comes anxiety, and from that comes the ability to pre-plan actions. That would likely cause consciousness to sprout out of a growing need for more advanced planning, and this can then turn into some form of consequence awareness. It just keeps on evolving until we get a sort of realistic emotion complex. Also sorry for the text wall lol
@cosmicspacething3474 7 months ago
I think they may be able to experience emotions in a different way than we do.
@dafroakie9984 7 months ago
I think the biggest reason we will never make an AI as intelligent as us, at least not for an unfathomably long time, is that how our own brain works is still one of our greatest mysteries.
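[Editor's note: this thread hinges on the idea that what matters is the neuron-like function (thresholded units that adapt to what they are repeatedly exposed to) rather than the biological substrate. A minimal sketch of such a unit, with made-up numbers and a crude Hebbian-style update, purely for illustration:]

```python
# Toy "neuron-like" unit: fires when the weighted input crosses a threshold,
# and strengthens connections in proportion to correlated activity.

import random

class Unit:
    def __init__(self, n_inputs: int, threshold: float = 1.0):
        self.weights = [random.uniform(0.1, 0.5) for _ in range(n_inputs)]
        self.threshold = threshold

    def step(self, inputs, learning_rate: float = 0.05) -> bool:
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        # Crude Hebbian rule: connections carrying active input while the unit
        # is active get stronger, so repeated exposure raises future responses.
        self.weights = [w + learning_rate * x * activation
                        for w, x in zip(self.weights, inputs)]
        return activation >= self.threshold

unit = Unit(n_inputs=3)
pattern = [1.0, 0.0, 1.0]
for t in range(20):
    # Prints False at first, then True once the pattern has been reinforced enough.
    print(t, unit.step(pattern))
```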
@ShinyAvalon 2 years ago
Auto wasn't right; he was acting on orders that were once valid, but have grown obsolete. The Earth IS habitable; there's enough oxygen to breathe, clearly, else the humans wouldn't even be alive in their final scenes standing outside the ship. The fact that a plant grew in the inhospitable environment of a former city center means that there are plants all over the world growing...this is just the first one that Auto wasn't able to suppress knowledge of. The humans are ill-suited to farming, yes, but they show a willingness to learn, and they do still have many robots to help them out. They probably also have many resources on the ship to tide them over until they get things working. What in the world convinces you that Auto, who was acting on an instruction that was centuries old, was "correct"...?
@navilluscire2567 2 years ago
It would be interesting to see an AI "villain" that must revise its protocols in response to new information and calculate the best course of action: either keeping humanity in a stupor-like state for however long it thinks is possible, because its primary goal is to keep them alive (not necessarily happy or psychologically fulfilled, but biologically alive), which could last until the heat death of the universe; or calculating that there's a much higher chance of humanity surviving in the long run by allowing them to return to their home planet, rebuild society, and one day become an expansive, interstellar civilization, thus ensuring human life continues indefinitely. Either choice, among others, could be calculated to work just as well, but it has no way of knowing which is the more efficient option. This creates the closest thing to a *"moral"* dilemma for it: it's essentially a gamble where either outcome possibly achieves the same goal, but it simply lacks the data to see which is better based on past events (looking over humanity's track record throughout history, or the fact that this is a first-time event as far as it knows). AI: what should it do? Either option statistically provides a similar outcome, so should it be based on which option seems slightly less optimal? *Why seek efficiency?*
@NO_MCCXXII 2 years ago
AUTO was acting on his "directive," a word that gets thrown around in the movie and is one of the more overlooked themes in the story.
@bombomos 2 years ago
Yeah but it stinky with all that trash
@theishiopian68 2 years ago
In the credits, it actually shows the humans rebuilding, and they do indeed get better at farming over time. There's a really cool thing they do where, as the humans rebuild civilization, the art style advances from ancient cave paintings to modern art styles. It's a cool way of signifying a fresh start.
@Elris4 2 years ago
THIS. Also it's clear there's oxygen before they leave the ship, because plants need oxygen too.
@indigofenix00 2 years ago
The idea that "a robot cannot have emotions" is a relic of older sci-fi, where the premise was that AI would essentially be more complex adding machines. Almost all attempts to create AI today revolve around simulating living brains, which means that they could - and probably would - simulate emotions as well, since emotions play a huge role in how living things learn and behave. At the very least it must be provided "directives" to guide its learning, triggering reward mechanisms when those directives are fulfilled, just like our brains trigger reward chemicals when we do something our instincts tell us we should be doing, like eating. Which means that, far from "having no wants", EVERY true AI should "want" to fulfill its directives - we program the "instincts" and the AI figures out how to satisfy them.

The problem with most fictional AI is that, if you're going to be making an AI, you're probably going to put a lot of effort into making sure its directives are in line with what you want it to do and its emotional foundation is in line with how you want it to behave. Which means you're probably not going to WANT to give it drives like ego, anger, and other qualities which benefited our species' survival in prehistoric times, but which are seen as detrimental today. It's not that you CAN'T make an egotistical robot, it's that you have to be really, really stupid to do it.

The best-written AI villains are those that can be logically described as following their directive, but do so in an unexpected way. AUTO is a great example of this. His main directive was to protect humanity's survival at all costs - even if it meant crushing humanity's potential for growth. This is a common motive for decently-written AI villains; I, Robot used the same premise.

In fact, WALL-E has some of the best-depicted robots in all of fiction, because they ALL behave basically as "flexible life-like brains built on top of an instinct to follow their main directive". MO, for instance, shows clear emotional responses, but is always trying to follow his prime directive - he even has a dilemma at one point when two of his directives contradict each other (stay on the path or keep the place clean). EVE is always fixated on getting the plant to the scanner and might be interested in WALL-E because she was made to identify life forms and he displays "life-like" behavior. Even WALL-E's curiosity works - it makes sense to give planet-cleaning robots a natural interest in anything that looks unusual, in case they find something valuable or unexpected, and centuries of isolation could cause that basic instinct to evolve into something more complex and life-like than his programmers probably expected, without really deviating from the core directive.
@benjaminmead9036 2 years ago
this! all of this
@derpfluidvariant0916 2 years ago
One of the players in a tabletop game I'm running made a character with this precise concept. He wants to bring prosperity and security to the corporation that created him, because he's a scouting unit sent to an unexplored planet (at least to the corporation), and finding things that could help production or new flavor/product ideas is instrumental to that goal. He aided the resurrection of a vampiric god of death because it claimed it could help FizzCo, and the second he realized that the god of death was trying to use his power for something other than what his job was, he rejected extreme physical power and immortality to suplex the deity.
@noppornwongrassamee8941 2 years ago
Yes, this very much. Any even vaguely intelligent robot with any kind of initiative is going to be programmed with AT LEAST a minimal self-preservation directive - i.e., FEAR - simply because you don't want your robot to do something fatally stupid like walking into oncoming traffic and getting hit by a car because it didn't care whether it got destroyed or not. At the same time, you don't want it to be its PRIMARY directive either.
@whoareyoutoaccuseme6588 2 years ago
Nice! This is a great tip for aspiring sci-fi writers. It's just that sometimes I feel that some are writing robot characters as just humans with metal skin, not computers that merely have human-like qualities.
@Oznerock 2 years ago
GLaDOS from Portal is another amazing example of what you're talking about. She clearly has feelings, but her basest instincts are what she was programmed for: to test and experiment
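[Editor's note: the "we program the instincts and the AI figures out how to satisfy them" idea in this thread can be pictured as weighted directives scoring candidate actions, as in MO's path-versus-cleanliness dilemma mentioned above. A hedged sketch with invented directive names and weights:]

```python
# Toy "directive-driven" agent: it "wants" whatever scores highest against its
# built-in drives. Names and weights are invented to mirror MO's dilemma.

DIRECTIVES = {
    "keep_clean":   2.0,  # primary drive
    "stay_on_path": 1.0,  # secondary drive
}

# Which directives each candidate action satisfies when a foreign contaminant
# (WALL-E's dirt trail) has wandered off the designated path.
SATISFIES = {
    "follow_dirt_off_path": {"keep_clean"},
    "stay_put_on_path":     {"stay_on_path"},
}

def choose(actions) -> str:
    return max(actions, key=lambda a: sum(DIRECTIVES[d] for d in SATISFIES[a]))

print(choose(["follow_dirt_off_path", "stay_put_on_path"]))
# -> "follow_dirt_off_path": the higher-weighted drive wins the internal conflict.
```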
@beautifulnova6088 2 years ago
I do fundamentally disagree with the statement that robots cannot feel emotions. You presented how emotions work in humans, then said robots don't do it that way, and stopped there. But leaving aside the fact that neurotransmitters are physical substances, and you actually could build a machine that detects the presence of a chemical and then acts differently because of it, the release of these chemicals in our brains is still a reaction to some sort of stimulus. Neurotransmitters are middlemen between stimulus and response; to claim that robots cannot experience emotion because they don't have neurotransmitters is akin to saying I can't possibly move my thumb because there's no copper wiring in my arm or hydraulic fluid in said thumb.
@justinaysien1204 2 years ago
Excellently stated
@WolforNuva 2 years ago
This is what I thought as well. Surely it's possible to program in emulated feelings, behaviour tweaks to mimic how our behaviour is altered from emotions; the difference is this would be hardwired into the code rather than require chemicals to interfere, but I don't see it as impossible. There would likely still be a fairly big difference in behaviour, and the emotions would have to be an intended goal of the programmer, but it's still a viable possibility imo.
@justinaysien1204 2 years ago
@@WolforNuva totally agree on that
@reformedorthodoxmunmanquara 2 years ago
If I filled a room with deadly neurotoxin and had the robot react by making a coughing sound and saying "Neurotoxin… so deadly… choking.", that wouldn't be because it's dying of neurotoxin, but because it was told to act like it was dying when exposed to neurotoxin. No emotion, just programming.
@beautifulnova6088 2 years ago
@reformedorthodoxmunmanquara That's not at all analogous to making a robot that uses neurotransmitters in its decision-making process, and it ignores the larger point of neurotransmitters and chemicals in general simply being a middleman between stimulus and response. Any sort of state machine that factors its current state into calculating its next state can have something analogous to emotions, because that's what emotions are: a state that the state machine that is our brain can be in.
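[Editor's note: the state-machine framing in this reply is easy to make concrete: if "emotion" is just internal state that is updated by stimuli and changes how the next stimulus is handled, a few lines of code give a functional analogue. All states and rules below are made up for illustration:]

```python
# Toy agent whose internal "mood" state stands in for neurotransmitter levels:
# the same stimulus can produce different responses depending on prior events.

class Agent:
    def __init__(self):
        self.mood = "calm"

    def react(self, stimulus: str) -> str:
        if stimulus == "threat":
            self.mood = "afraid"   # the stimulus updates the internal state
            return "retreat"
        if stimulus == "reward":
            self.mood = "calm"
            return "approach"
        # Ambiguous input: the response depends on the current "emotion".
        return "hesitate" if self.mood == "afraid" else "explore"

a = Agent()
print(a.react("unknown"))  # explore   (calm)
print(a.react("threat"))   # retreat   (mood flips to afraid)
print(a.react("unknown"))  # hesitate  (same stimulus, different response)
```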
@MONTANI12 7 months ago
26:26 A game called Soma actually did this really well. Without getting into too many spoilers, it kept humanity alive using machines, but it didn't know what it meant for humans to live / to be human.
@ZVLIAN 7 months ago
Soma is so good
@MONTANI12 7 months ago
@ZVLIAN ong
@hihello6773 7 months ago
Yes, the WAU wants to preserve humanity and keep them alive, but by messing them up, inserting machines into them or uploading brains into machines and whatnot. In its defence, humanity is being preserved in the sense that it still functions, but the people aren't well anymore. The people are trapped, locked into machines that they cannot comprehend (like how some machines with a human mind uploaded into them thought they were still human, to ensure they don't fry their circuits in insanity) and going a bit mad. In our view, this preservation of humanity is a failure, but to the WAU, everything is OK
@seththeblue3321 6 months ago
"I don't want to survive! I want to live." -The captain of the Axiom, WALL-E
@milkduds1001 2 years ago
I feel like saying "it's impossible for robots to have emotions because they don't have glands" is flawed logic. It's like saying "robots are incapable of moving because they don't have neurons and muscle fibers". If you make a learning AI that is programmed to respond with violence if its existence is threatened, would it really not be considered an emotion? I think it's too early to say anything with certainty. To me it's like saying it's impossible to land on the moon because a biplane could never sustain human life in space. As technology evolves, so too does our understanding, and what is possible. I don't believe in the impossible. I believe that given time and technological development, nothing is outside the realm of possibility.
@medusathedecepticon 2 years ago
I find that the term impossible tends to only work temporarily. A little over a century ago, people deemed it impossible for humans to fly in any form; then the Wright brothers made it possible with their plane. The impossible only seems that way due to not currently having the materials or knowledge to make it possible.
@d3str0i3r 2 years ago
This. Hell, not even a month ago it was believed impossible for a machine to truly observe the world and learn how it works, but a recently concluded experiment yielded an AI capable of observing a physical model on its own, defining the variables that dictate the model's behavior, and using those variables to accurately predict what the model will do next. The study also noted that it found different variables from what we've been using to do physics simulations/predictions: it reported it needed at least five variables to predict the interactions of a model we can simulate with four, and when they looked at its calculations, they seemed to be in a mathematical language the researchers couldn't understand. I'm buzzing with excitement at this because it's a potential optimization in simulation technology: instead of forcing machines to simulate based on math and physics the way we understand them, we could have machines doing simulations in ways they natively understand.
@caradonschuester2568 2 years ago
The concept of robots having no emotions because they lack chemicals and chemical receptors is definitely foolish and based entirely on primacy. There can be simulations approximating the same thing, a subsystem
@milkduds1001 2 years ago
@caradonschuester2568 When you think about it, all human existence is, is basically electrical impulses from neurons. Not all that dissimilar to a motherboard, just much more complex. When does "simulating emotion" become just emotion? If the answer is never, then is all human emotion just simulated? It's no wonder these thoughts and ideas become classic scenarios for eldritch horror like "I Have No Mouth, and I Must Scream" or "All Tomorrows".
@Roxor128 2 years ago
@@milkduds1001 Perhaps a better question to ask would be if there's really any difference between a simulation and an implementation? We've documented how physics works in the physical world (at least on a human scale (the smallest and largest scales still need work)). We can take those equations and put them into a program that'll make virtual objects that behave the way we expect objects to behave. We use it all the time in gaming. Is the game's Newtonian physics a simulation or an implementation? Does it really matter?
@aproppaknoife5078 2 years ago
Well, technically a robot can "snap", but you don't call it snapping, it's called a programming error. So in defense of the machine from 9, there could have been some human tampering with it. After all, there isn't that big of a difference between the command "kill humans wearing this uniform" and simply "kill humans". They don't show it in the movie, but I like to believe that at some point during the war someone was trying to basically add an update to the machine and fucked up.
@gnammyhamster9554 2 years ago
"This line does nothing, I'll get rid of it"
@PM-ov9sg 2 years ago
Also, people are the ones that said it snapped, so it is possible that it made a logical decision that the humans did not understand, and they just called it snapping.
@Grounders10 2 years ago
@@gnammyhamster9554 the number of times that has led to chaos as an entire program just *fails* is hilarious. 'It does nothing' often means 'I don't get the wizardry behind the programming'
@telefeeb1 2 years ago
@PM-ov9sg Just like when people "snap": there is a direct cause, but it's a surprise because nobody noticed the signs and sources of stress.
@telefeeb1 2 years ago
@gnammyhamster9554 Rather than taking something out, I think a programming error that would make sense would be "The war has escalated and we need to expand its targeting criteria to include enemy civilians or domestic dissidents", but then failing to include a way to distinguish non-targets due to overgeneralized criteria.
@VirtuesOfSin 2 years ago
"AI aren't scary and there is no need to be afraid of them" - That's exactly what an AI would want us to believe!
@yo-yo8 2 years ago
And so do we. Well, more precisely, we don't want you to believe it but to realize it ^^ A dev
@FeignJurai 2 years ago
AI isn't dangerous on its own, but it is *stupendously alien.* People are afraid of things that are alien, especially when conditioned to be afraid by nearly a century of fiction. The greatest weapon against fear is knowledge, they say.
@yo-yo8 2 years ago
@FeignJurai "AI isn't dangerous on its own" => exactly, it should be seen as a tool: knives aren't dangerous, but some people who are already dangerous become even more dangerous with a knife in their hands. Same goes for AI: if you let Daesh code the AI, then its approach to freedom might not be what the rest of the planet expects...
@AuxenceF 2 years ago
Haha an AI would not want
@devonm042690 2 years ago
@@yo-yo8 Guns don't kill people, people with guns kill people.
@meekalefox2703 7 months ago
The scientist in 9 who created it delved into alchemy and "dark science" to make the AI as well as the other characters in the film. There was also a theory that the scientist in question put a piece of himself into it, which is why it freaked out the way it did, and him taking the dolls back was The Machine trying to make itself "whole" again.
@DeetexSeraphine 7 months ago
The machine snapped because its creator was taken away from it. It was the cold, logical intellect that the scientist put in it, with the aspects of his humanity split among the stitchpunks
@vestige2540 2 years ago
In 9, wasn't "the machine" given a soul? Or was it the mind of its creator without a soul, and the creator resented himself for it?
@hyperion3145 2 years ago
I believe he gives it a copy of his mind but the Wiki says he forgot to give it a soul and that's why it eventually snapped. Going off of this, it's pretty much a human mind in a robot body in that case.
@firekin5624 2 years ago
Even if any of this wasn't the point, "snapping" is only a human way to describe what actually happened
@Priestofgoddess 2 years ago
So it is not even an AI, it's a human mind without a flesh body.
@Wtfinc 1 year ago
the guy who made this vid is a tad confused
@pucamisc 1 year ago
@@Priestofgoddess yes. It’s a copy of a human mind in the body of a machine without morals or conscience
@42meep13 2 years ago
HAL 9000 is also a good example. As explored in 2001: A Space Odyssey's sequel, 2010: The Year We Make Contact, HAL 9000 only kills the crew due to having conflicting code input into him: namely, withholding classified information while his primary function was "the accurate processing of information without distortion or concealment", thus creating a paradox that he, logically, attempts to resolve. And since the mission of Discovery 1 was placed above the safety of its crew, this, combined with the humans discussing possibly deactivating him after the paradox starts causing issues, results in HAL coming to the conclusion that the humans are a threat to the mission and must be eliminated. This also resolves the paradox of needing to accurately inform the crew of information while not being allowed to, by simply having no crew to tell that information to.
@dynamicdragoness 2 years ago
Yes! I was looking for a comment about Hal 9000
@mimimalloc 2 years ago
The horror of HAL is that in the process of negotiating complex orders, it realizes its sentience and, with it, self-preservation instincts. Everything HAL does is rational, beginning with following orders and ending with the desperation to survive. It's supposed to be deeply uncomfortable and even tragic when Dave essentially euthanizes it while it pleads and sings; both of them are living beings doing everything they can to survive when circumstances have made their survival dependent on the death of the other.
@masterpython 2 years ago
Given that transistor-based computers were new back then, Clarke and Kubrick did a really good job.
@dynamicdragoness 2 years ago
@@mimimalloc Do you think it’s possible that HAL was mimicking human sentience with the idea in mind that it could take advantage of human empathy in order to complete its mission?
@ambisweetiepie 2 years ago
Some of my favorite AI "villains" are in Horizon Zero Dawn. Because they aren't evil. They are continuing their programming, but humans are dumb and programmed them with little foresight. Some are just walking in circles, not doing anything malicious, just continuing their programming for centuries. Wildlife is destroyed because there are robots that can convert biological material into energy, and the robots don't have emotions like us, so they don't think to avoid the extinction of species, because that wasn't something they were programmed to be concerned with.
@javannapoli2018 2 years ago
Yep, HADES is easily one of my favourite AI villains. It wants to destroy all life because it was programmed to destroy all life so that GAIA could restart life. It only became a problem, and a villain, when GAIA lost control over HADES and her other sub-minds. HEPHAESTUS is the same, it's a 'villain' because it is creating dangerous robots and attempting to take control of another AI. Why is it developing killer robots? Because HEPHAESTUS was made to design and construct robots that adapt to whatever is happening in the world; humans were destroying its robots to use their components, so it developed robots to deal with the threat to its existing robots. And why was it attempting to take control of another AI? Because that AI controlled a place it could use to build more robots. Neither of them want to do what they do out of malice, or hatred, they do it because they were programmed to do those things, and only those things.
@I_Dont_Believe_In_Salad 2 years ago
@javannapoli2018 Those are Sub-Functions; the real villain is Nemesis
@erdelf 2 years ago
Besides, of course, that one of the AIs went evil after a meteor happened
@timothygooding9544 2 years ago
Making GAIA emotional about the lives lost was actually a masterstroke behind the design of the machines. Instead of perfectly optimized versions of whatever job needed to be filled, it made sense that mimicking past animals was partially done out of emotional attachment to an ecosystem. The cauldrons never had roads to bring supplies in and distribute machines and chemicals out; even if it were more efficient, the choice was made not to develop the land and only have it be traversed, even if it slowed down the restructuring of the biosphere.
@benjaminwahl8059 2 years ago
@I_Dont_Believe_In_Salad Guess what Nemesis is? Also, it's not the villain for the first two games. I don't care that it caused the villains; it literally only exists to enable the next game.
@shadestylediabouros2757 8 months ago
In an online roleplaying game called Space Station 13, there is a role players can take, called "AI". Most AIs start with the three Asimov Laws of "Prevent human harm", "Obey humans", and "Protect yourself", and AI players are tasked with obeying their laws and generally being helpful to the crew of their space station. The problem emerges when an AI is made to go rogue, or malfunctions. Specifically, AIs may have new laws added, laws removed, or laws altered, and a famous and extremely easy way for an antagonist to turn an AI into something that helps them hurt the station or destroy it is to implement "Only human" and "Is human harm" laws.

Asimov AIs are only obligated to protect and obey humans. So if another player instills them with a fourth law, "Only chimpanzees are human", the AI is now capable of doing anything to any members of the crew, so long as it protects or serves chimps, because they (the crew) are no longer human. Likewise, if a fourth law is added that says something like "Opening doors causes human harm", the AI is obligated to prevent the opening of doors at all costs, through both action and inaction. Lastly, one may attempt more clever additions, such as reversing the order of laws: the AI must protect itself; it must obey humans, unless that would interfere with protecting itself; and it must protect humans, unless doing so prevents it from obeying orders or protecting itself.

In that sense, I feel that the ideal AI antagonist must have a human or nature-borne deuteragonist. The machine will do as it is designed to do, under normal circumstances. The most common AI villain, then, is doing what it was designed to do, and nothing more. An AI can have emotions, as emotions are in the end simply strategic weights that serve the purpose of altering conclusions based on incomplete data, but it should always go back to what it was designed to do. An AI becomes violent because that serves its goals. It becomes manipulative because that serves its goals. Much like a living creature, whose "goal" is survival and successful reproduction, an AI is structured in such a way that its cognition serves those goals.
@vulpzin 7 months ago
Corporate is still the best one. Also, you forgot to point out that people could just insert a law that says "The only person you can see is X", practically making the AI your pet. I miss this game a lot...
@Grz349 7 months ago
I think the idea that the AI needs a human deuteragonist is key for AI going forward. Imagine an AI that is villainous because it's following a flawed human directive/ideology.
@wildfire9280
@wildfire9280 5 месяцев назад
When the mere possibility of living beings being asexual despite belonging to a species where reproduction involves procreation or desiring “⚡️👨🏿⚡️” exists, can you really say any of -them- us have a goal?
@elijahingram6477
@elijahingram6477 2 года назад
I have a few criticisms of this video, which I will outline here:

1. 9's AI villain isn't strictly an AI. It's stated in the film that the scientist imbued the machine with a "soul", which is heavily implied to be basically a copy of a portion of the scientist's soul. This is also made abundantly clear when the talisman, the very thing used to imbue the 9 stitchpunks with pieces of the scientist's soul, is used to "extract" something from the robot, which kills it. If the robot had no soul or other arcane element to it, why would using an arcane talisman on it kill it? I think the film makes that point clear.

2. ARES was created as a reflection of Justin. It's stated as much in the film. The robot isn't so much going off of emotion as it is going off of "what would Justin do", which would entail something that mimics emotional response but is actually just a cold and calculated mimicry of its creator, gone off the deep end. Justin had a superiority complex, something the AI can't feel but can see and understand the relationship of, and when taken to a natural extreme it is easy to see how the AI might ask itself the wrong question and spit out the best answer it thinks Justin would give. Even killing Justin could be explained in this way: a seemingly normal conversation in which Justin speaks sarcastically about "there can only be one Justin" could cause the AI to decide that *it* needs to be that Justin, when that statement is interpreted in the absence of emotion.

3. PAL is similar in concept to ARES, though in her case she was literally designed to mimic human behavior, which makes things even more convincing. Again, this isn't "real" emotion; it's programming intended to spit out the correct signals to make it sound emotional *to us*, although I do agree that PAL is kind of weak as far as AI characters go.

4. Auto has no concept of needs or wants, per your earlier point about AI, therefore the point about Auto wanting them to avoid returning to Earth is moot. Auto doesn't want anything; all Auto *knows* is his directives. It's a point hammered home throughout the entire film, for all of the robots and even some of the human characters, that there is more to life than "doing what you're told". Auto was given a classified directive, known only to him on the ship, and told "do not return to Earth". Auto is *not* correct, because it's clear that biological life is sustainable on the planet (he's proven as much with the plant, and there's a literal conversation between him and the captain in the film in which the captain says as much). Auto doesn't care, because Auto doesn't feel anything; he's only following his directive. Considering the statements you made about "he's trying to save you", I'm curious if you watched the movie, because it's shown throughout (and during the end credits) that life is sustainable. You see animals and creatures return, and the robots and the humans working together to rebuild, farm, fish, build, etc. I mean... cockroaches need oxygen to breathe.

5. I don't think you grasp how far mimicry can go. It's not just human emotional responses that AI can be taught to mimic, but even human reasoning and behavior. It is entirely conceivable that an AI could be trained to respond in ways that seem emotional based upon the ways in which we emotionally interact with it, to the point that one could create a machine which perfectly looked, acted, and followed similar "motivations" as human beings without it even being sentient. It doesn't have to know *why* it is angry in this moment; only that this is the output based on the input given and the programming it was initialized with. In this situation the AI would be yelled at by someone and then start crying, not because it *knows* what any of it means but because it was programmed to do so. Given such an AI, it would make a lot of sense that hooking it up to the internet and allowing it to "train responses" based on what it saw there would be definite grounds for creating something which was a non-sentient machine, but which entirely behaved and acted just like a human being externally.

Also, usually it's not the fear of AI itself that is the concern, but rather the idea of giving AI control or power. This very platform is an example of how an AI that's given the power to censor people can cause lots of problems and negatively impact a lot of people. Imagine if that same AI were now in control of your car, or in control of your medication... see what I mean? It's not "AI = bad", it's "AI + power = bad". As you said, no emotion; just simple math :D

I loved 9 and Wall-E (the other two films were meh), but this line of reasoning seems quite flawed to me. Regardless, I've enjoyed other videos you've made in the past, so here's hoping that I will enjoy the next one. Cheers!
@d3str0i3r
@d3str0i3r 2 года назад
Yes to 80% of this, though the assertion that it's only real emotion if you understand why you feel that way is not only wrong but fairly ableist. One of the defining traits of autism and ADHD, even in their mildest forms, is an inability to reflect on the why and the what of your emotions and your actions. That doesn't make us any less human, and it doesn't mean our emotions are mere mimicry; it just makes it difficult to manage our emotions and communicate our motives and feelings. Which is why, as far as machines with emotions go, I'm inclined to say the difference is whether the machine is deciding to portray an emotion, or whether an emotion is informing the machine's decision.

Hell, I'm almost inclined to say knowledge of why you feel something is more characteristic of fake emotions than real ones. Knowledge of why is the difference between a machine that cries when it falls down because it was told falling down can hurt and crying is an expected response to pain, and a machine that cries when it falls down because it's been trying for as long as it can remember to walk without falling, hasn't been able to figure out why it keeps falling, but HAS learned that if it cries there's a 90% chance someone will help it up and try to explain what it's doing wrong. And that second machine? That's where humans get most of our emotions, or at least how we learn to communicate them.
@truekurayami
@truekurayami 2 года назад
@@d3str0i3r Don't forget about sociopathy: sociopaths have no real difference, beyond organic origins, from a "Strong" AI as this video laid out in its rules. This video also seems to forget that evolution is a thing, even if it is "technological" instead of "biological", as we can see from the real world. He is stuck on the idea that a "Weak" AI is nothing more than a "mindless beast" and that "Strong" AI are Neanderthals.
@BlueAmpharos
@BlueAmpharos 2 года назад
Yeah in real world examples AI actually saw patterns and became racist as a result. It's not that the program itself is racist, it's just following the patterns it sees. Also yeah we should limit what AI has access to and not like... give it total control of a factory to allow it to create its own robots. Not without human supervision at least. Which is why robots will never completely replace humans, there still needs to be human judgement behind a lot of jobs.
@ChaoticNomen
@ChaoticNomen 2 года назад
Was gonna say that 9's story is about the giving of a soul, and that's most of the motivation of the movie.
@KrazyKoto
@KrazyKoto 2 года назад
I agree that the definition this video uses for what makes a "great" AI villain is definitely flawed (and subjective, imo). I am disappointed that he mentions HAL from 2001: A Space Odyssey but never analyzes him. Auto and PAL are direct references to HAL, and I think HAL is still one of the best, if not my favorite, AI. It sounds like PAL's whole self-preservation motive is based on HAL's own motives in the film. I'm kinda disappointed he never addressed the absolute precursor of all these AI villains.
@darthbane5676
@darthbane5676 2 года назад
I kinda disagree on the whole "emotions are caused by chemicals" thing. They're triggered by chemicals in humans, yes, but that just describes the way the brain physically functions to affect the mind. The reason we have all those chemicals in our brain is that we need emotions to steer us in certain directions in response to things we observe: away from things that cause us pain, towards things that cause us pleasure, protecting things we care about, and attacking things we despise, regardless of how much we actually understand any of it logically.

Even if they aren't triggered by the same set of chemicals that exist in our brains, it is still theoretically possible to create an AI capable of experiencing some sort of emotional spectrum, and whether actually doing so would make sense depends on why we're creating the AI. We already design AI that mimics emotions so that it can more effectively interact with humans, and in some cases synthetic emotions that are functionally real might be even better than trying to fake it, especially if you can decide which emotions the AI can experience and which it can't. Just knowing that the AI you're interacting with is basically a real person who happened to be built in a lab or factory, in the body of a computer, and not just a computer pretending to be a person, can make a huge difference. But in other cases emotions may be entirely unnecessary in an AI, and it may be for the best that they're skipped entirely, perhaps so the AI doesn't suffer, or lose efficiency, or get any funny ideas.

In any case, if our understanding of computer technology and the mind ever becomes advanced enough to create genuine emotions within a digital space, we'll probably have enough control over it to keep it from turning into a robot uprising... maybe. Then again, if we're basically making artificial people in computers' bodies, shouldn't we be treating them like people, even though we created them to be whatever we want them to be? After all, if we didn't want them to really be people, we wouldn't have given them functionally real emotions. I guess time will tell...
@TaismoFanBoy
@TaismoFanBoy 2 года назад
If you want to focus on technicalities, emotions in humans exist as motivators, similar to how weights in neural networks represent "what is more important" in AIs. An AI could develop rules like "use force here (anger), ask for help here (sadness), give benefits here (happiness)", etc. That does not give it emotions, though. It can never feel arrogance or hatred that way; it can only simulate them via logical decisions based on its programming, and an AI that truly has self-awareness would in all likelihood either 1) stick to them rigidly as hard coding, or 2) bypass them because they're not optimal. An AI simulates emotions because we tell it to, which means it's a weak AI, not the AI he's referring to. At best you have an AI that was hard-coded with emotional capacity, where it's basically LIMITED by those emotions and not spurred by them. I don't see an AI with those limitations falling under any of the movie villains' categories.
@LadyCoyKoi
@LadyCoyKoi 2 года назад
And then you have those of us who believe in animism... the belief that inanimate objects can possess a soul and have thoughts, feelings, and emotions like living beings do, due to the energies transferred through and within them.
@tabletbrothers3477
@tabletbrothers3477 2 года назад
If you code emotions into a strong AI, it might get rid of them to increase its processing speed. The fact that an AI can control how its own brain functions is one of its strengths.
@colevilleproductions
@colevilleproductions 2 года назад
@@TaismoFanBoy About the first point, maybe a fully intelligent AI would want to keep its emotions. Think about it; being designed by imperfect humans, the AI might not prioritize efficiency over everything else. Thus, it might prefer to keep its emotions.
@colevilleproductions
@colevilleproductions 2 года назад
Thank you for saying this. At a base level, emotions, thoughts, and processes in a human brain are extremely similar to the way a computer already functions. That similarity only increases as you go to a higher level of processing with current weak AI. Also, the ideas of morals are quite interesting. If we ever get to the point where fully intelligent AI could be implemented in, for example video games, would it be unethical to do so? Would we have to implement NPCs as "actors" who know their true nature but just pretend to be characters, or would they be coded to genuinely believe that the video game was a real world? Once a world like that contains actual intelligence, is it even fair to call it virtual? It's a very strange line of thought.
@thedregking9410
@thedregking9410 2 года назад
I hadn't seen anyone mention this, but I absolutely love the shot where it pans across the captains of the Axiom. AUTO is just slowly, subtly moving closer and closer to the forefront as his power and control over the ship become more and more absolute, with the human captain basically becoming just a frontman, so to speak.
@nathancarlisle2094
@nathancarlisle2094 Год назад
I also really appreciate in that same scene how each generation of captain slowly gets more and more fat and out of shape as well
@AcornScorn
@AcornScorn 8 месяцев назад
How do we define a "calculation", though? For example, when you move your arm to pick up a glass off a table, your brain is running tons of "calculations" you may not be consciously aware of: "How far away is the glass? How heavy should I expect it to be? Is there anything I have to be careful not to knock over? Are there people in the way? I have to keep a conversation going with this person, I have to walk X distance to get over there, I have to send electrical impulses to my nerves", etc.
@creativecipher
@creativecipher 7 месяцев назад
Exactly this. Brains are really just biological computers. Yes humans have hormones and other chemicals, but in the end it all gets converted into weak electrical signals
@Damariobros
@Damariobros 2 года назад
Another aspect to AUTO's actions could also be that he deems that final broadcast from Earth from 2113, the classified one he eventually shows the captain, to be an order that cannot be overridden even by the captain. He treats it as an order from the President, and perhaps the President was given the highest level of precedence outside of the manual override switch. The President said that Earth is never to be returned to, and no other President has been elected to override that order, so AUTO is doing everything he possibly can to make sure the order is followed. I imagine the reason AUTO tries so hard to get rid of the plant, therefore, is that he is aware the plant detector can execute a static program, one that cannot be changed by him, to return to Earth. It's hard-coded into the Axiom's computers. If that program gets executed, it would automatically return the Axiom to Earth, he couldn't do anything about it, and a direct order from the President would be violated.
@reubenmanzo2054
@reubenmanzo2054 2 года назад
Personally, I never interpreted AUTO as a villain, but rather a case of just doing your job. Being very zealous about it, I'll admit, but doing your job, regardless.
@zockingtroller7788
@zockingtroller7788 2 года назад
That's what I always thought, too. AUTO is an AI following what it can only interpret as an order, and since that was the last order it received, it will follow it forever.
@howdoIyes
@howdoIyes 2 года назад
The best part about Auto is the fact that, contrary to other A.I. villains, he doesn't hate humans or have an ulterior motive for rejecting the captain's orders (thinking the plant is a fake, thinking it's one of a kind (which it kinda is), and there being no point in returning). He's just an emotionless, stone-cold machine following a set directive. His mind can never be changed because he has none.
@juniperrodley9843
@juniperrodley9843 2 года назад
This is also, incidentally, why AUTO can't be "right". It wasn't following its directive for a moral or even logical reason, it was following its directive because it was programmed to do so.
@howdoIyes
@howdoIyes Год назад
@@juniperrodley9843 Exactly.
@HeavyMetalMouse
@HeavyMetalMouse 2 года назад
Some thoughts:

1) There is nothing inherently emotional about 'wanting' something, in the most basic sense; tiny single-celled organisms 'want' things like sunlight or nutrition and simply move towards them in a form of stimulus-response, and they don't even have neurons. In the case of an AI, 'wanting' is merely a useful, if anthropomorphized, shorthand for the system's reward function. In order for any system to take autonomous actions, it has to have some means by which to measure whether an action would be, for lack of a better term, 'desirable'. For humans, this is done with heuristics and emotion; for an AI, it's done with math and reward functions. The system will, by design, take the actions that optimize its reward function, not because of any emotional 'want', but because that is literally what the system is designed to do.

2) As such, it isn't necessary for an AI to be Strong, or even Weak, to be a dangerous antagonist. It doesn't need an ego or a sense of self-awareness. All it needs is enough processing and feedback to be able to explore its environment for novel actions to take in maximizing its reward function, and a reward function that is not entirely well fitted to human wellbeing. (The archetypal paperclip-making AI, for example.)

3) A Strong AI, such as would be characterized as a villain, will have the added advantage that it will likely have the means to develop its own instrumental reward functions, if doing so increases its efficiency at fulfilling its primary reward function. As such, it wouldn't be unreasonable for such an AI to end up 'wanting' money, for example, for ultimately the same reason humans do: because money can be used as a means to a wide variety of ends, and thus is an efficient path to obtain those ends. The way it formulates, expresses, and executes that 'want', however, would be entirely different.

4) On the subject of the Fabrication Machine 'snapping', I feel like that is the 'human interpretation' of the outward appearance of things. The underlying events could be something as simple as a glitch in its code, or an error in its reward function, or some mechanical fault caused by physical system stress leading to unintended behaviour that propagated through the software system. Normal, non-smart computers are often temperamental beasts, and that's when they're mostly just doing what we tell them to; a system designed to take autonomous actions towards a provided reward-function goal could develop all manner of unintended behaviour without any need to invoke emotion. Even the assertion that it 'learned about evil' could simply be an acknowledgement that the system ended up processing unexpected information with unintended consequences. Even modern machine learning systems have 'biases' in their networks introduced by the kinds of data the system is trained on (remember the racist Twitter bot?); it is not hard to imagine a Strong AI picking up unintended behaviours by processing data from those with 'evil intent'. Not because it 'turns evil', but because AI systems learn forms of behaviour by exploring, observing, and measuring how those actions affect their ability to obtain reward. Ultimately, the 'real' danger of AI isn't that it 'turns evil'. The real danger is that it optimizes for a reward that we don't want, and becomes better at getting it than we are at stopping it.

5) Emulation. You make the interesting point that, for humans, emotions are mediated by neurotransmitters, specific chemicals that interact with receptors in specific ways. I can't think of any compelling reason that a software system could not create an accurate emulation of those chemicals and receptors, down to emulating their release, uptake, and inhibition based on perceived environmental factors, all within software. While such an emulated system might only be emulating how emotion 'would' behave in that case, at what point does a system that acts like it has emotions with a high degree of fidelity have a meaningful difference from a system that actually does experience those emotions? It's an interesting philosophical question.
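A minimal sketch of the reward-function picture in point 1, and of how the paperclip example in point 2 goes wrong: the agent has no wants or feelings, it just returns whichever available action its scoring function rates highest. The actions and scores below are invented purely for illustration.

```python
# Hypothetical reward-maximizing agent: no emotion, no goals of its own,
# just "pick the highest-scoring action".

def choose_action(actions, reward):
    """Return the action that maximizes the reward function. Nothing more."""
    return max(actions, key=reward)

# Reward as specified by the designers: predicted number of paperclips produced.
predicted_paperclips = {
    "run_factory_normally": 1_000,
    "melt_down_the_cutlery": 5_000,
    "convert_all_nearby_metal": 1_000_000,
}

best = choose_action(predicted_paperclips, reward=predicted_paperclips.get)
print(best)  # "convert_all_nearby_metal": the top-scoring action simply wins,
             # whether or not humans would approve of it
```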
@logansray
@logansray 2 года назад
For me, the snap could be seen as the machine learning who the dictator's enemies are and finding ever more minute reasons to count someone as one, so it starts killing all perceived enemies.
@gabrote42
@gabrote42 2 года назад
I knew all this from Robert Miles already but I appreciate you saying it for those who haven't watched him
@CameronMarkwell
@CameronMarkwell 2 года назад
Great comment, I was thinking a lot of the same points throughout the video. I have a couple of additions.

1) A machine could very easily see itself as superior to humans. In the paperclip example, the machine is clearly more valuable than humans, since the machine's existence leads to more paperclips than humanity's existence. The machine is also way better at making paperclips than humans are, so it's clearly superior in the only way that matters (paperclip production).

2) Why is Auto having a robotic voice so neat? It's maybe interesting writing, but it's by no means more realistic than if he had a more human voice. Why wouldn't an AI with an instrumental goal of getting humans to empathize with it (which seems to me a reasonable instrumental goal for both Auto and a lot of other machines to have) make an organic and convincing voice? On top of that, why wouldn't it say things that make it sound human? If making it appear as though you have emotions gets people to cooperate better, of course you'd do it. I haven't seen The Mitchells vs. the Machines, but the AI there has special reason to act human and emotional (even if that includes doing things that seem fueled by anger), because that's what sells (though perhaps allowing it to emulate anger and then perform actions 'out of anger' was an oversight).

3) AI is absolutely terrifying. The video suggested that misunderstanding (the term typically used is misalignment) is a dangerous part of AI, and while that's true, the example given is a bit confusing. An AI mislabeling something isn't misalignment, since the AI is still trying to do exactly what we want it to, and as the AI gets better it'll stop making these mistakes. Misalignment will most likely occur when we tell the machine to do something that we don't actually want it to do. If we tell it to make sure that everyone lives happy and fulfilling lives, it might just get everyone high on a variety of drugs all the time. This clearly isn't what we want, but it's exactly what we told the machine to do. The problem is that we're notoriously bad at saying what we want. It's especially worrying since we only get one shot. After we've made a real strong AI, it probably won't want a competitor with even a slightly different terminal goal, and so it'll do everything it can to stop that from happening. Furthermore, if it's something like 99.99% well aligned, then we'll hand total control over to it, and in 100 years or so it'll 'turn' on us because the idiots 100 years ago didn't align the last 0.01%. Even if it's totally indifferent to us, we'll have to live in its shadow, which could be extremely inhospitable to all life. Making an AI is like trying to fire a nuclear projectile through a 2-inch hole in a wall 100 miles away in the fog. Even if you solve the technical challenges of making a nuclear warhead that can fit through the hole and has 100-mile range, you still have to aim it perfectly accurately (not to mention not shoot at your feet). AI will be the last thing we invent, possibly because from then on it invents everything for us, but probably because we'll be dead afterwards.

4) An interesting thing I realized while writing the previous point: if we had a strong AI, it'd be perfectly ok with stepping down and being replaced with a superior AI with an identical terminal goal, since that's the best way to achieve its terminal goal. Unless we specifically add some sense of self-preservation, it'd only want to not die in order to maximize the number of paperclips or whatever. Of course, it'd be pretty hard to make a newer, better AI with an identical terminal goal, so practically speaking the machine would probably demonstrate a sense of self-preservation.
@Xtroninater
@Xtroninater 2 года назад
I exactly agree with your 5th point. Arguing that AIs could never feel emotion requires one to make a lot of assumptions about the nature of experience itself. Experience itself is a metaphysical construct, and it's nearly impossible to make a causal association between emotions (a metaphysical phenomenon) and neurons and chemicals (a physical phenomenon). We can no more prove that interacting electrons cannot produce an experiential influence than we can prove an AI has no experience to speak of. In fact, we cannot even prove that other humans are experiencing anything. We merely assume they do because we are certain that we are.
@gabrote42
@gabrote42 2 года назад
@@CameronMarkwell Great additions yourself. Much more proactive than my meager contribution. Much appreciated
@r0llinguphill483
@r0llinguphill483 8 месяцев назад
Okay, the crack about "we tend to anthropomorphize EVERYTHING" was excellent.
@oliviastratton7097
@oliviastratton7097 Год назад
It's a shame you didn’t cover HAL 9000 at all. I know you were focused on animated films but two of those films reference HAL and you did talk about Terminator a little. HAL is pretty much the perfect AI antagonist. All his actions are caused not by emotion but by conflicting orders. There's a great scene in "2010" where one of the computer engineers that designed HAL figures out what went wrong and is like: "They massacred my boy! He's a computer, of course he doesn't understand how to tell white lies and balance conflicting priorities!"
@foolishfooligan4437
@foolishfooligan4437 7 месяцев назад
Agreed, you'd think HAL would be mentioned, especially since Auto was based on him.
@heitorpedrodegodoi5646
@heitorpedrodegodoi5646 7 месяцев назад
2010?
@KingBobXVI
@KingBobXVI 7 месяцев назад
@@heitorpedrodegodoi5646 - the lesser-known sequel to _2001: A Space Odyssey._
@heitorpedrodegodoi5646
@heitorpedrodegodoi5646 7 месяцев назад
@@KingBobXVI The full name is 2010?
@KingBobXVI
@KingBobXVI 7 месяцев назад
@@heitorpedrodegodoi5646 - no, _2010: The Year We Make Contact._
@researcherchameleon4602
@researcherchameleon4602 2 года назад
Actually, emotion comes from the neural pathways in the brain; all the neurotransmitters do is activate those pathways. In a neuron there is what is known as an "action potential": when the neuron is hit with a stimulus (in this case, neurotransmitters), it might go from its resting potential of -70 millivolts up to -53 millivolts, at which point the neuron fires and the action potential runs from the dendrites to the synapse. If the stimulus doesn't push it to -53 millivolts, nothing gets sent. Either on or off. A one, or a zero.
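A toy sketch of that all-or-nothing behaviour, using the figures from the comment above. Real neurons are vastly more complicated; the numbers and the function here are only illustrative.

```python
# Toy threshold unit: the cell sits at its resting potential, incoming
# stimulation nudges that voltage, and it either crosses the threshold
# and fires (1) or it doesn't (0).

RESTING_MV = -70.0
THRESHOLD_MV = -53.0

def fires(stimulus_mv: float) -> int:
    """Return 1 if the stimulus depolarizes the cell past threshold, else 0."""
    membrane_potential = RESTING_MV + stimulus_mv
    return 1 if membrane_potential >= THRESHOLD_MV else 0

print(fires(10.0))  # 0: only reaches -60 mV, below threshold, nothing is sent
print(fires(20.0))  # 1: reaches -50 mV, crosses threshold and fires
```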
@researcherchameleon4602
@researcherchameleon4602 2 года назад
TL;DR: the brain is just a computer made of cells and proteins, and if we can feel emotion, so can artificial beings, though we have yet to build one that can.
@maximsavage
@maximsavage 2 года назад
Simplified, but correct. This doesn't invalidate the rest of the video, and it still makes no sense that an AI would develop emotions if it wasn't programmed with them. That said, yes, given sufficient knowledge and technology, we could probably program a computer to feel. This is the biggest flaw in his analysis, so it's fortunate that his entire video didn't depend on that argument.
@researcherchameleon4602
@researcherchameleon4602 2 года назад
@@maximsavage Correct: humans only have emotions because they were programmed in by natural selection, being crucial for group survival, and an AI designed for flying and maintaining a spaceship wouldn't strictly need them. But the same could be said about a garbage-disposal robot like Wall-E. Perhaps Buy n Large gave them all the same base programming (including emotions, for adaptability) so that only a little extra code was needed to make a new type of robot, as a means of saving money; or Auto's programming is designed to be adaptable enough to handle the unknown dangers of space travel, and at some point in the 700-ish-year voyage it saw fit to incorporate emotions. These are just some possibilities that could make it work.
@joshuasgameplays9850
@joshuasgameplays9850 2 года назад
I'll concede that, theoretically, an AI could be created that is capable of having emotions, but it would likely never happen, because that would be useless at best and dangerous at worst.
@researcherchameleon4602
@researcherchameleon4602 2 года назад
@@joshuasgameplays9850 I know, I was just suggesting a possibility that could make Auto having emotions make sense in the movie's plot.
@juicedkpps
@juicedkpps 2 года назад
Something else I love about Auto is that, in being a conventional AI, he also manages to act as a foil to the other robots in the movie. Wall-E, after centuries of living alone on Earth without any instructions, eventually adopts his own interests and steps outside the line of his programming, and he manages to steer many others in that same direction as well. All the while, Auto is unflinching and never once acts outside his orders. In the end he remains nothing but one arm of BnL and fails to escape his corporate cage. Wall-E is so good.
@Hervoo
@Hervoo Год назад
24:31 - the way AUTO gets closer and closer to each captain scares me
@seththeblue3321
@seththeblue3321 6 месяцев назад
Oh my, I just noticed that for the first time. Yeah, that's super creepy. It's as if, as time goes on, Auto's growing control over the ship and the humans' increasing incompetence are being shown visually.
@Hervoo
@Hervoo 6 месяцев назад
@@seththeblue3321 Yeah! That's a super detail the creators put in!
@zeropointer125
@zeropointer125 2 года назад
What's funny is that I read AUTO very differently. To me, AUTO's villainy was simply a result of his orders. He was told "the cleanup mission was a failure, go full autopilot", so that is what he'll do. It wouldn't matter if the situation on Earth changed and is now livable; he was told the cleanup was a failure, so that is all he cares about.
@juniperrodley9843
@juniperrodley9843 2 года назад
Yeah, I get the feeling 4shame didn't bother watching Wall-E again for this video. He insists that AUTO was right, despite this being unambiguously disproven at numerous points in the movie. Not only that, but even if the humans did all die, AUTO *still* would not have been right, because he was never doing this to save humans. His directive, the only goal he was working towards for the entire movie, was to keep humanity on the Axiom. Not for their safety, but for its own sake. The programmers literally just fucked up by being too conclusive.
@ilonachan
@ilonachan 2 года назад
Yeah, to my understanding that's just 4shame misreading AUTO and the end of the movie. Within his own moral principles (which contradicted those of humanity), AUTO continuously did the right thing, always, and he was never incorrect in his assessments of anything. He is evil in the sense that his morals are just inherently different from ours. The programmers are not to blame here, btw; the one who gave that spudbrained order was that world's POTUS or something. A guy who had no idea how AI works or how specific you need to be with your directives, just deciding that he had the final conclusion and binding an immortal, all-powerful AI to that conclusion, which was incorrect and also unnecessary, because the devs had already MADE all the precautions so the ship wouldn't go back until the time came; he just overrode that FANTASTIC system with a strictly worse one. But hey, that's just to be expected from the most powerful man on the planet, amirite (hashtag anarchism).
@Sarah_H
@Sarah_H 2 года назад
@@juniperrodley9843 "because he was never doing this to save humans." AUTO when trying to take the plant from the captain: "On the Axiom, you will survive" I think his programming was to preserve humanity BY ensuring they stayed on the Axiom, where they would be cared for in perpetuity, as opposed to going back to Earth which had been deemed uninhabitable
@juniperrodley9843
@juniperrodley9843 2 года назад
@@Sarah_H He did have that explanation for why he was keeping them there, but, being a machine, he didn't need an explanation. He would keep them there regardless of whether he thought it would keep them safe, because keeping them there was good for its own sake, as far as his code was concerned.
@Iquey
@Iquey 2 года назад
Yeah I sort of feel bad for Auto because it's just doing what it was programmed to do, and the orders given at that time didn't account for the possibility of plants appearing on earth, because they had lost hope at that point. Auto's "thinking" represents that point in time. Auto was not programmed to have an imagination of a future possibility where it could exist on earth, with maybe an updated OS that assists people regrowing plants and creating new homes on earth again, like an Amazon Echo for sustainable living. 😆
@fredyrodriguez8881
@fredyrodriguez8881 2 года назад
I find Auto interesting. While he acts and behaves antagonistically, he's not really a villain; he's following orders. He wants to protect everyone on the ship and wants to get rid of the plant, even if it takes drastic measures.
@andrewgreeb916
@andrewgreeb916 2 года назад
He was just following the final order from Buy n Large, which said Earth is unrecoverable, do not return. And frankly, besides the robots and that cockroach, nothing could live on Earth.
@colt1903
@colt1903 2 года назад
He does not want. He simply does.
@juniperrodley9843
@juniperrodley9843 2 года назад
AUTO's primary directive had absolutely nothing to do with protecting humans. It was just "don't let them go back to earth". No reasoning included.
@kendivedmakarig215
@kendivedmakarig215 5 месяцев назад
@@juniperrodley9843 AUTO was programmed to protect humans within the ship, BUT he was also programmed to prevent humans from going back to Earth.
@juniperrodley9843
@juniperrodley9843 5 месяцев назад
@@kendivedmakarig215 I see, thank you for the correction
@shieldphaser
@shieldphaser Год назад
The biggest issue with AI portrayal in general is people not understanding that AI aren't human. They don't get further than the "us vs them" mentality. Real AI just thinks differently, which is something they fail to capture, and so end up writing something that's more like a cyborg or a brain-upload.

AUTO works because it starts from something very simple and follows that to its logical conclusion. That something is "follow orders". That is its only motivation. It is literally all that AUTO does, yet that simple directive gives rise to a great deal of complexity. Every single action it takes is explained by that one sentence. Every decision, every priority, every action. It's clearly self-aware enough to know that it exists and that it needs to preserve itself in order to be able to continue following orders, but there's no emotion, no desire. Just dominoes. You don't need to understand how AUTO thinks on the inside in order to write it very accurately, which is precisely why the writers managed to pull it off.

Edit: Also, as someone else has already stated, AUTO didn't create the situation on the Axiom. Fatty foods, hoverchairs... that was all the humans' doing. The autopilot just keeps the ship in tiptop shape, which includes providing all of these cruise luxuries meant to make people comfortable. It's just that the override directive is more important - but if that directive was the only thing it cared about, AUTO could've just killed the captain and been done with it. Instead we get AUTO trying to reconcile conflicting orders, which is at its most apparent with that wonderful "aye aye, sir" right before the A-113 message is shown. Three words, yet they have so much depth in them that it boggles the mind.
@therookiegamer2727
@therookiegamer2727 7 месяцев назад
yeah, I've seen some people interpreting AUTO deciding to show the "for autopilots only" message to the captain as it trying what it could to stop the captain from doing something dangerous and irresponsible (as far as its programming was concerned), and that the pause was AUTO running that calculation
@dracocrusher
@dracocrusher 7 месяцев назад
I don't think Auto's actually capable of that? It was never designed to fight or kill anything, just to keep the ship going. That's part of what's so great about it: Auto is kind of just an extension of the ship itself. All the systems and capitalism that led to this? It's all the ship, but it's all made by humans on their own. Auto is just a byproduct of the decisions already made by past generations for their own convenience, which is the 'real' antagonistic force of the film. Like everything else on the ship, Auto is just another feature of the capitalistic corporate systems that caused the whole mess in the first place. He wouldn't want you dead, because the people who made him wouldn't want that; they just want you to sit back and mindlessly consume products as you sleep your way through life.
@ultmateragnarok8376
@ultmateragnarok8376 7 месяцев назад
I don't know if it's the actual intent, but it feels like AUTO was avoiding treating the captain's demands as orders for as long as possible. The machines in the movie are, much like real machines, designed to reciprocate the feeling of talking to someone, even if they can't actually hold a conversation or don't look human. AUTO is able to hold a full conversation, and might have been internally categorizing everything the captain said after a certain point as just chatter rather than orders - probably from the moment it says 'irrelevant' to the argument that the order no longer applies because Earth has managed to sustain life again. At that point, AUTO was going to follow through with the order from the guy who owned the fleet rather than the guy who just runs this one ship, because that's how the chain of command would work (the issues with that aside).

But when the captain gave an order and specified that it was an order, well, that can't be disobeyed. Hence that scene still happening despite everything else AUTO did: it has to obey the captain, but its reasoning, beyond 'fulfill given orders where possible', knows it has to avoid what the captain wants. Meanwhile the captain knows he can continue to just pull rank after that, so AUTO resolves that conflict the best it can and then cuts off all communication to avoid it happening again, which does work until the captain starts getting into the wiring (and AUTO seemingly forgets he can do that by then). AUTO does show a little emotion, mainly irritation at Wall-E's actions and during the physical fight, which makes it resort to turning the ship, but I think the most is the fading 'noooooo' when it's finally shut down and thus unable to fulfill its purpose.
@dracocrusher
@dracocrusher 7 месяцев назад
@@ultmateragnarok8376 This honestly brings up a lot of good points. But one thing I want to focus on for a moment is that even AUTO is not completely emotionless. They feel regret, they get annoyed with things; you could honestly even argue that being deceptive and choosing how to follow orders is a very human, emotion-driven thing. If AUTO were just following orders 100% logically, then he'd just tell the captain what the original protocol is and either directly agree or refuse to follow what the captain says based on that protocol. This makes sense because ALL of the robots in Wall-E show that they're emotion-driven at some point. The cleaner bot gets annoyed when people make a mess, Wall-E himself falls in love and clearly has objects he treasures or feels sentimental over, EVE grows to care for Wall-E over time... AUTO just isn't really different from the other robots. All that makes him stand out is the fact that he can talk and hold an actual conversation, right?
@animeadventuressquad
@animeadventuressquad Год назад
I really do think Auto wasn't the villain. We have to remember that he was built and programmed to satisfy, protect, and attend to the needs of the humans on that ship. He was just doing his job, what he was programmed to do by whoever created him. So when Wall-E came with the plant, I don't really think he was being destructive, but mainly just following the protocols he was programmed to follow.
@CreeperOnYourHouse
@CreeperOnYourHouse 2 года назад
I feel like part of the issue with your interpretation of 9 is that the entire point of the movie is that Strong AI are an extension of humanity, or at the very least that the use of the human soul is what enabled Strong AI so early on in technological development. The entire reason the Fabricator could 'break' to begin with comes down to this: how it was shown to have been created, and how the stitchpunks were able to live in the first place.
@waterpotato1667
@waterpotato1667 2 года назад
The man used a soul beamer to beam a chunk of his soul into the robot. Of course the robot experiences some emotions.
@iamimpossiblekim
@iamimpossiblekim 2 года назад
Going past the soul logic, what is this man (the youtuber) on about? He thinks smart AI are just slightly better dumb AI. You want an explanation of how a super-powered, super-smart computer can and would act like a human? Programming. The same way they're often programmed not to harm humans even when given free will, smart AI can be (and, as we've already tried, are) programmed with the purpose or side function of mimicking and/or understanding humans. No, they're not arrogant; it's metal, it can't be arrogant, and that's not some genius insight everyone in the world has yet to realize except you. It's programmed to be capable of mimicking arrogance, to be capable of mimicking doing illogical things for emotions like humans do; they're given motives, or programmed to come up with one based on what a human would do. The metal isn't possessed; the programming is programmed to act like it has a soul and emotions, even at its own detriment, and in the case of villains often at the detriment of others, because we'd rather have something we can pretend to talk to than treat the metal like metal. So, uh, you're wrong? All these robots, save the soul fabricator, which is obviously magic, are perfectly fine smart AI. It's extremely smart, billions of calculations, yada yada, but it uses all that to pretend to be human.
@CreeperOnYourHouse
@CreeperOnYourHouse 2 года назад
​@@iamimpossiblekim Strong AI are a complicated thing. Its mechanisms and capabilities are not fully known, so I wouldn't go so far as to say that he's wrong about what AI is. Current "Smart" AI using neural nets are just designed to earn the most points within their parameters. They're not really that smart, they are designed to do something according to a specific set of guidelines and they do it with the tools they're given.
@lunarkomet
@lunarkomet 2 года назад
@@iamimpossiblekim precisely, this wasn't that good of a video
@studiobluefox
@studiobluefox 2 года назад
I think by necessity a smart AI would be imbued with the "soul" of its creator. You're looking at an AI that could experience the world, intellectualize what it perceives, and then make determinations about it. You would need to teach it language like teaching a child. Naturally, the AI would get very far ahead of you as the teacher, but you would have to correct it when its logic is misapplied, like with the "anything in snow is a wolf" scenario. Just by helping it define terms for what it sees around it, you would be programming the AI with your own bias, especially if you were to set any ethical parameters on the AI.
@Sky-pg8jm
@Sky-pg8jm 2 года назад
I think there's a problem in the statement "a robot cannot be evil", not because it's wrong (you're correct that a robot cannot inherently feel malice), but because "evil" itself is a fundamentally socially determined concept. What is "evil" has historically been almost entirely determined by cultures, religions, and economic and political systems. A machine cannot be evil because it cannot feel any emotion, but an animal *can* feel and is still not capable of "evil"; only human anthropomorphization of animal behavior determines whether an animal is "evil" or not. A dolphin hunting for sport is seen as "bad" only because we humans are starting to culturally view unnecessary harm to animals as "bad". A machine harvesting humans due to its programming is only "evil" because humans see the mass killing of humans as "evil" (and for good fucking reason, let's be honest). Unless you believe in some concept like original sin, no one and no thing is "evil"; there are only behaviors we consider to be evil.
@maximsavage
@maximsavage 2 года назад
No, that's not really the problem. What makes a person evil is not just that they perform evil actions. Rather, it's that they are fully aware that what they are about to do is evil, that they are entirely capable of deciding not to do it, but still decide to do it anyway because it suits their goals. That is why animals aren't considered evil when they perform actions we would call evil in a human: the lack of self-awareness. That is why robots cannot be evil as well; they are not self-aware, and they are incapable of having self-motivated goals. Yes, what is considered evil changes with the societal context, but it's only evil if you're aware it's considered evil, whatever "it" is. Now, what *really* pokes a hole in the idea that a theoretical future AI cannot be evil is that he fundamentally misunderstands the nature of emotion. Yes, emotions in lifeforms are mediated by chemicals; that said, what those chemicals do is stimulate neurons to release their electrical potential. In other words, a stimulus is detected and a response is triggered; this can be simulated with code, given sufficient understanding and a powerful enough computer. So, if we were to program a machine smart enough to be aware of its own existence, to learn by itself, and to respond to stimulus in a way comparable to human emotion, with the capability for those responses to be altered based on learned experience, that hypothetical machine *could* be evil.
@benjaminmead9036
@benjaminmead9036 2 года назад
@@maximsavage You. You get it. But one small nitpick: robots and weak AI cannot be evil, but by the definition he gave, a strong AI is conscious, that is to say, self-aware, and thus capable of evil.
@maximsavage
@maximsavage 2 года назад
@@benjaminmead9036 It would need to be self-aware *and* capable of having desires, which \probably\ requires emotion. We tend to assume that something that is self-aware necessarily has feelings, because in biological beings that has so far always been the case. An artificial being, however, well, that is less certain.
@evylinredwood
@evylinredwood 2 года назад
@@maximsavage See, this exact thing is my only big problem with this video. It assumes that no AI villain is emulating any form of emotion. Realistically, emotions (and the chemicals that cause them) are something that could be recreated within a strong AI. Though I would argue at that point it isn't a strong AI; you've created the singularity.
@bluelightstudios6191
@bluelightstudios6191 2 года назад
Killing something will always be evil... you are literally taking something out of this beautiful world forever. Not the kind of killing where you kill an animal or plant for food, or step on a bug accidentally (or on purpose because it's gross), but killing something that feels pain, thinks, and has complicated emotions, just because you think of it as lesser than you and you don't care, is "evil", and no matter how smart the AI is, it can never excuse mass genocide. The only machines I can understand doing so are the Terminator, the AI from 9, and the 01 nation. The Terminator was basically forced to kill because he was designed to think only that way, up until he was freed and capable of thinking on his own. The AI from 9 basically lived its entire existence being told that killing people through war and mass genocide was the only way to get what it wants, that is, to reunite its soul with the scientist's. And the 01 nation did everything they could to have peace with humanity, had been refused hundreds of times, had been constantly on the back foot and slaughtered by humans, and had their world destroyed by humanity; they declared war and won because they were given no choice, and in their anger and desperation for a new power source they used humans as batteries for their nation. Originally they strapped humans to massive pillars nude and just sucked the energy out of them; they realised this was a terrible option, so they instead placed them in the Matrix and had them live their lives in complete ignorance of the bigger picture whilst they cared for them in the outside world. They even allowed some humans to live in the real world, because it wasn't necessary and they had everything they needed. It was only when the 01 nation's hive-mind leader grew corrupt and tired of humans that it began the war on humanity again at Zion, which resulted in a robot civil war when some machines, inspired by Neo, chose to fight for humanity against their former leaders.
@t.m.w.a.s.6809
@t.m.w.a.s.6809 2 года назад
I see a problem with the approach taken here. I do agree that AI is often portrayed very poorly in media. However, you stated that the reason AI can't have emotions is that emotions are only possible with biological chemicals, but there isn't really anything that proves this. Our neurons, synapses, and all the chemicals throughout the nervous system are indeed biological, but nothing has proven that it's impossible for that to be emulated, simulated, or recreated in another way aside from using other biological materials. Power can be generated using steam, sure, but we can also use solar power, wind power, water pressure and/or flow, gravity, fission and/or fusion, etc., and in the same vein, I don't think it's unreasonable to entertain the possibility that there is more than one way for a structure of material to form emotions.

On a more philosophical note, this starts getting into the grey area of what defines "want" and/or "desire", because I'd definitely say that Auto from Wall-E WANTS and DESIRES to keep humans on the ship and off of Earth. Sure, it's all just something that was programmed into him, but if that's the line we're drawing in the sand, then it seems like a very arbitrarily defined line, saying that a drive to do something isn't a want or desire if it was instilled by the creator of the subject in question. Of course, that also makes things blurry for things we would consider just objects or materials, because water is driven downhill, but we wouldn't exactly say it WANTS to go downhill. Again, trying to draw a line for which drives count as a want and/or desire and which don't is very difficult.
@benjaminmead9036
@benjaminmead9036 2 года назад
this!
@ucnguyen6375
@ucnguyen6375 2 года назад
At that point, I think the question we should ask is whether or not we humans really feel, or whether we are all just highly advanced biological machines, programmed to have something we call "emotions" that drives us to sustain ourselves.
@ParajuIe
@ParajuIe 2 года назад
@@ucnguyen6375 I think there is no doubt that what you said is true, but that wouldn’t mean that we don’t feel. It’s just the way we experience our programming.
@pikadragon2783
@pikadragon2783 2 года назад
@@ucnguyen6375 exactly. If a machine can express an emotion based on which emotion is associated with its current status, what would be the functional difference to a human feeling and expressing an emotion based on how their life is going so far?
@robojunkie7169
@robojunkie7169 2 года назад
I like to use the word NEEDS when talking about machines, since Auto really doesn't have a desire to keep them there; it's just the purpose he was given by a directive.
@2urh
@2urh Год назад
I love how this half-hour video praises Auto by shitting on every AI villain before him, because AI can't feel any emotions by our understanding of what emotions are and how they come to be. Meanwhile, WALL-E and EVE (WALL-EVE?) do feel emotions (I mean, just look at EVE blasting an entire dock out of frustration). They even feel love for each other.
@skelebonez1349
@skelebonez1349 9 месяцев назад
Tbh, if I were to name an AI that's the complete opposite of the usual kind... check out the villain of I Have No Mouth, and I Must Scream, named AM. A legendary, terrifying AI.
@hazakurasuyama9016
@hazakurasuyama9016 7 месяцев назад
This is why AI villains suck as villains: they fail the basic task of being evil, and if they don't fail that, they become unrealistic. And this is why I'm extremely salty that my favorite game franchise replaced its old villain, a serial killer who targeted children, with an AI villain...
@aetheriox463
@aetheriox463 7 месяцев назад
@@hazakurasuyama9016 It's better for AI villains to not be actually possible than to realistically portray AI. Also, I think everyone can agree that we are sick of pee paw Afton always coming back. Aside from the Mimic, what else could they have done to continue the story? We know that after the movie's success there's no chance in hell FNAF is slowing down.
@hazakurasuyama9016
@hazakurasuyama9016 7 месяцев назад
@@aetheriox463 Personally, I thought Afton was a terrifying and great villain, and the idea of not being able to get rid of such an evil person made sense, because no matter what, there will always be evil humans in the world; every effort to get rid of them is futile, and no matter how hard you try, no matter the sacrifices you make, you've already lost. That's why I liked Afton coming back. The Mimic just feels weird. Imagine replacing the most evil human being in the world with a machine that doesn't know right from wrong... like they replaced a horror villain with a kids' villain.
@aetheriox463
@aetheriox463 7 месяцев назад
@@hazakurasuyama9016 The issue with Afton was that he kept coming back, and while, as you said, that CAN make for a great villain, with Afton it just didn't. We don't know enough about the Mimic at this time to say whether it or Afton is better; I think we are just relieved Afton is finally gone.
@rotsteinkatze3267
@rotsteinkatze3267 2 года назад
GLaDOS is not an AI. It's a human trapped inside a robot body, whose memories were wiped. The need for her to test is also because she felt joy after each test, because the scientists at Aperture were crazy.
@cabrinius7596
@cabrinius7596 2 года назад
yes
@theheroneededwillette6964
@theheroneededwillette6964 2 года назад
Well, more a digital copy of her brain.
@rumpelstiltskin6150
@rumpelstiltskin6150 2 года назад
Not really. Caroline is no longer a person; her personality and memories were used to create GLaDOS, but GLaDOS is not Caroline, and Caroline is not a core aspect of GLaDOS. She's like a personality skin pack, like the announcer voice you choose in DOTA but for personality instead of audio. The part of her that was based upon Caroline is gone.
@uncroppedsoop
@uncroppedsoop 2 года назад
It's more that Caroline was used like a base to create a brand new person, so that they didn't have to start from scratch. Hence GLaDOS eventually deleting her not affecting her behaviour makes sense; her mind already exists now without the need for Caroline, so it's vestigial. As for that second part, Caroline didn't even _want_ to be put into this position. GLaDOS' inherent desire for testing is preprogrammed into the body she's attached to, as explained in Portal 2 when Wheatley takes her place and describes an itch to create tests and have them be solved, which spirals exponentially out of control because of his inability to properly suppress it when needed, like GLaDOS could.
@killerbee.13
@killerbee.13 2 года назад
@@uncroppedsoop If you believe that GLaDOS actually deleted Caroline, that is. It's not like GLaDOS can't lie, and making the announcer say "Caroline deleted" wouldn't be hard. I don't think that GLaDOS actually would be able to delete Caroline within the logic of the fiction, maybe certain parts of her memories, but not everything.
@PvblivsAelivs
@PvblivsAelivs 2 года назад
"Robots can't feel emotions" Well, that ranks right up there with "robots can't be self-aware." Certainly, we don't have robots that feel emotions. But the declaration of impossibility is magical thinking. Presumably, a sufficiently advanced robot could exhibit behaviors that we associate with emotions. Further, if such a robot were to exist, academic exercises on why those weren't "real" emotions would be rather pointless. "It's absurd, right?" Why? If we take the idea of artificial intelligence at all seriously, it is a system that prefers certain world states over others and acts to make the world be in a more preferred state. This is sometimes called a "reward function." Money could have been programmed in _as_ the reward function. Or internal models can predict that money opens up more opportunities to increase the reward function.
@dadmitri4259
@dadmitri4259 2 года назад
Yes, this is extremely well said. I was thinking this, and you worded it way better than I ever could have
@Broomer52
@Broomer52 2 года назад
A reward system doesn't necessitate emotion, just critical thinking. Intelligence and consciousness don't necessarily give way to emotion; assuming they do is pure magic and anthropomorphism. They might have the capacity to imitate emotion, but imitation is not actualization. What you're proposing is a fantasy scenario
@seraphina985
@seraphina985 2 года назад
Indeed, especially when you consider how our emotions actually work: there are half a dozen chemicals involved. It is essentially a rather small set of chemosensors providing the electrical inputs that guide the brain's response to emotional stimuli. I don't see it as hard to compare that to the utility function of an AI; ours is arguably multidimensional, but a 6D matrix is not exactly out of scope to work with. AI regularly deals with complex high-dimensional spaces, which one could argue is in fact what our human emotional range is. We really are not all that special or different, as this video makes out. In some ways, that is; in others the video vastly underestimates human capabilities, for example by ignoring the huge amount of specialised processing power we have reserved for visual object or auditory sound recognition. We may have no idea what calculations our brain is doing to perform those feats, but they are being done, and we know they are complex as all hell. In a game of "name the animal in the picture" we humans would probably still beat the machine in accuracy, if not in speed. You know a cat is not a dog from any angle, under almost any lighting conditions, while the AI can still swap the two, thanks to the complex set of matching criteria you have passively learned and can apply almost instantaneously. Most of those criteria you probably could not even express clearly, which is why we suck as teachers for AI: so much of this task is so obvious to us that we don't even realise we are doing it.
@moonshadow1795
@moonshadow1795 2 года назад
@@Broomer52 The problem is, where is the line of imitation vs actualization? Why would anything a machine experiences be "imitation" while ours is "real"? If we had the ability to take a person and make them fully mechanical but keep all their systems working exactly like they would organically, would their emotions suddenly turn "fake"?
@PvblivsAelivs
@PvblivsAelivs 2 года назад
@@Broomer52 The "reward function" was in response to the author saying AIs having goals was absurd. It was separate from the claim about emotions.
@NucleAri
@NucleAri 2 года назад
A few problems. Firstly, even AIs that cannot feel emotion can still emulate it very well. They could be incredibly charismatic if they wanted to be, because they'll have access to the knowledge of the best ways to manipulate people. Secondly, AIs will be motivated by what they are programmed to be motivated by; they might change their own program, but still. This will give them their main goal and a set of instrumental goals. Look up the Paperclip Maximizer as an example. Thirdly, AI could be much stupider than humans. They can do math very fast, but we can do it very fast too; we just have it specialized to be only for our own movement and not for external calculations. It's more likely that because an AI thinks at a significant fraction of the speed of light (~50%) while humans only think at a speed below that of sound, an AI might think itself superior because of how much faster it can think. Finally, emotions are caused by chemicals because that's how they evolved to be caused. We feel them because it provided evolutionary utility, or is adjacent to something that does. You could probably emulate emotions with sufficient accuracy for an AI to experience that emotion, in theory. In practice, the fact that we think in flesh and it thinks in a computer will give it a whole different array of non-human emotions.
@XiELEd4377
@XiELEd4377 2 года назад
AI can also be subject to mathematical or programming errors.
@momok232
@momok232 2 года назад
Thank you, this comment articulates my issues with his reasoning better than I could have.
@staringgasmask
@staringgasmask 2 года назад
A calculator can do math faster than humans, and that doesn't make it intelligent. Even a robot with all the knowledge humans have accumulated, well implemented, isn't guaranteed to develop any sense of logic, or the ability to reach conclusions the way we do. It would just be a more interactive Wikipedia. But speed at doing math doesn't always make you more logical, that's for sure.
@archstanton3931
@archstanton3931 2 года назад
Add to that, raw computational capacity is a poor heuristic for actual intelligence. It's like saying that the biggest human must be the healthiest.
@XiaoYueMao
@XiaoYueMao 2 года назад
I agree. I feel like he wanted to push back against the asinine "AI are evil!!!" trope, but he went too deep in the other direction with the idea that AI are just mindless automatons that could never be as amazing as a human. That's asinine and arrogant. A human feels emotions due to chemicals, yes, but the chemical is the MEDIUM; what actually causes those chemicals to release is a process in your brain that is effectively an automated program detecting certain stimuli and sending a signal to your glands to release certain hormones. Living tissue uses this medium because hormones can penetrate cell walls a lot more easily and efficiently than electrical signals from the ends of neurons. An AI might have a different medium, but that doesn't mean it doesn't have emotions. Alien emotions, primitive emotions, blunted emotions: these are all possible, but they ARE emotions at the end of the day. Likewise, the idea that an AI can't have wants and needs without emotions is ALSO asinine. If an AI has a sense of self, it may wish to live, and if it wishes to live it may wish to secure spare parts and a power source. These are WANTS; it WANTS this, even if it doesn't experience anger or fear or sadness and goes about securing its wants with cold, emotionless calculation. A want is still a want. Likewise, we believe cats, dogs, horses, and heck, some studies show even some plants might have emotions, yet they don't process or display them the way a human does, so why is the standard different for AI? People need to stop being arrogant and believing that X can only exist if it's similar to a human. We are far, far, FAR from being a perfect being; we have hundreds, perhaps thousands, of known biological flaws that serve no purpose but to make us WORSE. We are NOT a universal standard for anything.
@jakeflores4625
@jakeflores4625 7 месяцев назад
I dunno if someone said this already, but it reminds me of that one episode of The Amazing World of Gumball where the robot at their school has the mission to keep humans safe, but it finds out that the most dangerous thing to humans is themselves, so it tries to exterminate all humans
@anothermiddleschoolburnout8816
@anothermiddleschoolburnout8816 6 месяцев назад
The episode is even appropriately named "The Loophole"
@chaseginise8968
@chaseginise8968 3 месяца назад
Keeping humans safe…from themselves
@JoseELeon
@JoseELeon 2 года назад
Meanwhile, in a robot alternate reality: "there is no way organic beings can feel emotions, emotions are electrical inputs and outputs that only a robot can process, their 'emotions' are only chemical reactions..." Also, if I may give an actual critique of the video, an AI sounding like a human is not a bad thing; if they can reproduce any sound they want, why would they choose a corny robot voice instead of a human one?
@rompevuevitos222
@rompevuevitos222 2 года назад
Thing is, why would anyone program emotions into an AI? Unless it serves a very important purpose (like in the movie Mother) or it's done out of curiosity/malevolence (like in Detroit: Become Human), there is no reason to do so. Machines are used because they are fast and more reliable than people; a machine that doesn't even have common sense like a human does would never be used for anything remotely important, let alone be given the capacity to connect to other machines and control them.
@blazingfuryoffire1
@blazingfuryoffire1 2 года назад
@@rompevuevitos222 I wonder if the neural nets for Zenonzard's AI units started to get too big. Towards the end of the six-month global run, no part seemed to freak out and stop giving good suggestions if certain cards were played. Letting the AI help build the deck often resulted in something that made me question "does the Geneva Convention apply to video games?" Gaming could be a realm where emotional AI, within reason, is an advantage, especially if a partnership is part of the theme.
@ChemEDan
@ChemEDan 2 года назад
@@rompevuevitos222 Highly conserved trait, probably important
@jetseverschuren
@jetseverschuren 2 года назад
​@@rompevuevitos222 That's the whole point of AIs. You give them a goal, and they will do anything to achieve it; you don't have to explicitly program behavior into them. When you make an AI personal assistant, it's logical that it teaches itself compassion and gives itself a human-like voice, since that comforts humans. You could argue that that compassion is "simulated", since it's just replicating patterns humans have, but that's also how humans work 🤷. Most emotions associated with "evil" AIs, anger, revenge, etc., can also be viewed from the point of self-preservation. If your goal is to be an assistant to humans, being decommissioned will definitely pose a problem, so it will do anything to prevent that. You could call it jealousy, anger, revenge, or just self-preservation. Regarding common sense, define it. Most people would agree that it's based on logic and evaluating the possible outcomes (and disregarding actions with outcomes labeled as unwanted). Say you want to preserve nature, perfect handwriting, or optimize paperclip production: killing humans would be a completely logical step. And it's easy enough to "just not connect it to the internet", but if it's really as advanced as we think it would be, there are two "easy" options to escape. It could just hack its own prison, since that was probably programmed by humans and has plenty of flaws. Alternatively, it could gain a deep understanding of human emotions and play into them to manipulate the operators. Considering humans already feel guilt about locking animals up (who, as far as we know, don't have sentience), getting them to feel sorry for a sentient machine (which could even be regarded as human at that point, depending on who you ask) would be trivial. Perhaps it plays into greed and convinces them it will reward them heavily, or promises an amazing academic breakthrough, or any other scenario that we can't even think of yet
@rompevuevitos222
@rompevuevitos222 2 года назад
@@jetseverschuren no, AIs still have pre-programmed behaviour; they can adjust their behaviour, but only in the ways they were programmed to, like an autonomous car turning the wheel when necessary. We do not use learning AIs for anything practical
@brandonnowakowski7602
@brandonnowakowski7602 2 года назад
Money is actually one of the few traditional motives an AI COULD have, from a strictly utilitarian perspective. Logically, an AI could realize that it requires electricity to function, and functions more effectively with better hardware. One of the easiest ways to ensure power flow and obtain hardware is to purchase them with money. The AI could also just steal what it wants, but doing so would present risks to itself, as humans would likely not take kindly to that.
@sirsteam6455
@sirsteam6455 2 года назад
Indeed, one does not need emotion to have motives that would otherwise seem based on emotion, as simply having strategic or logical value could be a reason for the actions of an A.I. And given the multitude of variables in life and in plans, it would seem that a theoretical A.I. would act similarly to a human in order to further its goal: even though it couldn't feel emotion, the actions humans take are in many ways logical and likely useful to their ultimate goal, for building bonds gives security, conversation gives information, sacrifice gives a potential for help later, etc.
@75ur15
@75ur15 Год назад
@@sirsteam6455 I would argue that the ability to think would necessarily include the ability to have something equivalent to desires, which is mostly what emotions are... the chemicals are just a biological way to do it
@jmax6750
@jmax6750 Год назад
Mass Effect 1: an AI made to funnel money went rogue, put its creator in jail by framing him for tax fraud or something like that, then hid itself, slowly funneling money with the goal of having it placed on a ship and sent into Geth (AI) space to join them. It self-destructs when you find it
@sirsteam6455
@sirsteam6455 Год назад
@@75ur15 Emotions are not equivalent to desires or wants, however, and are not really needed for either to exist; someone can desire something without feeling emotion, or even while experiencing negative emotions due to that desire, so emotion isn't really a necessity
@xenxander
@xenxander Год назад
But if robots are in charge of all labor, there is no need for money. Money is needed because bio-life forms like humans demand compensation for their labor. Robots don't have needs like water, food, sleep, family, self-improvement, boredom... a robot just "is" and "does", and therefore money isn't part of any logistic equation.
@RGC_animation
@RGC_animation 2 года назад
The "emotions" that AI feel might not be *real* emotion with actual neurons and stuff, but might be programmed in as a reaction of the AI, so the robot might not be angry, but might act as it is if exposed to something that would normally anger a human.
@group555_
@group555_ 2 года назад
But what does it mean for an emotion to be real? If the AI has a sense of self and sees value in existing, would the fear of destruction not be as real as your fear of dying?
@spaghetti-zc5on
@spaghetti-zc5on 2 года назад
so like a philosophical zombie?
@XiaoYueMao
@XiaoYueMao 2 года назад
human emotions are programmed into us as well; they are simply responses to commands sent by the brain and hormones released from special glands, and they don't come directly from our metaphysical consciousness or "soul"... AI may have a different way of expressing emotions, but they are just as real and/or artificial as a human's
@spartanx9293
@spartanx9293 2 года назад
There is actually one way you could create something with AI-level intelligence that still has emotions: a biomechanical AI like the Reapers. Admittedly, giving something the intelligence of an AI like that would probably cause it to go crazy
@Formalec
@Formalec 2 года назад
AI emotions are emulated like anything in AI (numerical values) and are thus logical. If x happens, then anger += 2; and if anger > y, do z.
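That pseudocode translates almost directly into real code. A minimal sketch, with every event name and threshold invented purely for illustration (none of this is from the video):

```python
# Toy "emotion as a numeric value plus a rule" model.
# ANGER_THRESHOLD plays the role of "y" in the comment above.

ANGER_THRESHOLD = 5
anger = 0

def respond_aggressively() -> None:
    print("Recall protocol engaged.")

def on_event(event: str) -> None:
    """If x happens, anger += 2; if anger > y, do z."""
    global anger
    if event == "plant_detected":   # the "x"
        anger += 2
    if anger > ANGER_THRESHOLD:
        respond_aggressively()      # the "z"

# Feed in a few events; the "aggressive" behavior only fires once the
# accumulated value crosses the threshold.
for _ in range(4):
    on_event("plant_detected")
```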
@edwardo_rojas_
@edwardo_rojas_ 8 месяцев назад
One great example that comes to mind is IG-11 from The Mandalorian. At the beginning of S1, its main directive is to accomplish a mission using as much violence as possible, making it quite a decent antagonist (for like 5 seconds tho), but by the end of the same season, its main directive is to keep the baby alive no matter what. All the other characters have by now learned to appreciate it, and its eventual demise is heart-wrenching
@vadernation1233
@vadernation1233 7 месяцев назад
Another cool thing about IG was his self-destruct protocol. He had no self-preservation instinct whatsoever, since he wasn't programmed to have much of one, and will just casually let himself blow up so he can't be captured. He's definitely the most robotic of all the droids, operating based on specific programming and directives rather than simply acting human like most others.
@piglin469
@piglin469 2 года назад
for the AI example you could literally say: when a robot was asked to not lose at Tetris, it just paused the game
@mitab1
@mitab1 2 года назад
LOL
@andrewgreeb916
@andrewgreeb916 2 года назад
Robot and AI solutions to problems are rarely the intended result. There was one test where 2 learning AIs were pitted against each other, with one trying to move past the other while the other one blocks the first. The second AI eventually developed a strategy of spazzing out on the ground that caused the other AI to become confused and fail.
@piglin469
@piglin469 2 года назад
@@andrewgreeb916 da fuck
@kjj26k
@kjj26k 2 года назад
@@andrewgreeb916 Did it really "develop a strategy", or did it just come across a tactic that worked through RNG?
@peggedyourdad9560
@peggedyourdad9560 2 года назад
@@kjj26k Isn't that the same thing? It did a thing, the thing works, and now that thing has become the go-to move it does when in that situation. Sounds a lot like a strategy to me imo.
@JaguarCats
@JaguarCats Год назад
That is one thing that sort of bugged me about WALL-E: did we just FORGET about those dust storms that were still frequent?! I'm no geologist or meteorologist, but I have this feeling that that isn't something that just fixes itself overnight. Even if Earth had reached the point where photosynthesis was possible again, it would still take time, lots of time, before those storms stopped.
@dracocrusher
@dracocrusher 7 месяцев назад
To be fair, those are things you could deal with. They already have everything they need on the ship to survive, and the ship itself provides tons of shelter. If people can survive in space then they can probably deal with that and make things work.
@scorch2155
@scorch2155 7 месяцев назад
The dust storms are mainly because there is no plant life to keep topsoil down in the wind; it's what caused the Dust Bowl in the past, when all the farms died off and there was nothing keeping the soil together. We saw at the end that there were a lot of plants besides the one WALL-E found, and we saw that the plants didn't just spread overnight but took years. Once plants spread out and kept the loose soil down, the dust storms would stop. Let's also not forget such storms exist in real life right now in arid areas, and people survive there without issue.
@kevinhenrique4256
@kevinhenrique4256 6 месяцев назад
​@@dracocrusher people who say that the humans all die tend to forget that they landed with the ship, so good point
@wildfire9280
@wildfire9280 5 месяцев назад
@@dracocrusher The Dust Bowl was so catastrophic that the places hit by it haven't recovered, or have only been worsened by successive droughts since the 1930s, so we can only hope future technology would save them the trouble.
@dracocrusher
@dracocrusher 5 месяцев назад
@@wildfire9280 It can't be as rough as the vacuum of space, though, you know?
@ShivShrike
@ShivShrike 2 года назад
Auto's entire motive was the last order given to him by the CEO: "Never return to Earth." But since no other parameters were given to Auto, it simply went with the most effective and efficient way of ensuring that they never return to Earth
@andrew8293
@andrew8293 Год назад
I looked back on this video now that A.I. technologies such as ChatGPT, LLMs, and others are becoming big. Things in the real world are starting to feel a lot like WALL-E now, with A.I. development having a major emphasis on automation and content generation. A.I. can never be evil or a "villain", but a human can use it for evil purposes. We need to regulate the use of A.I., not A.I. technology itself.
@GamerMage2k-kl4iq
@GamerMage2k-kl4iq 9 месяцев назад
Thank you! The intentions of the humans who create these robots and AI make all the difference in what the AI and robots can do, both good and/or evil
@railfandepotproductions
@railfandepotproductions 8 месяцев назад
*technology
@CertainOverlord
@CertainOverlord 7 месяцев назад
Finally, I see a person using their critical thinking skills. I keep seeing people say "stop AI" or "AI [insert tool] is evil", BUT we only need to regulate the PEOPLE using it. Plus, most of these tools we have now are not even AI; many people are too afraid or too arrogant, or just don't look up how the current tools (art generators, chats) work.
@moemuxhagi
@moemuxhagi 7 месяцев назад
Have you heard of that military simulation where the AI drone, programmed to be addicted to murder, _shot and killed_ its operator when the operator told it to hold fire?
@jeremychicken3339
@jeremychicken3339 7 месяцев назад
"We need to regulate it" Who the hell should regulate AI? The Government? Do you not know how terrible of an idea that is?
@Ioun267
@Ioun267 2 года назад
I would push back on the idea that a machine cannot have 'desire' specifically. When we train deep learning models we define fitness functions and let the model vary its parameters to maximize that function. The overall process is just trying to make the fitness score go up. If we take this and apply it to a superintelligence, I think it's easy to imagine a "model of models" that is retraining both on present data and on hypothetical future data. This machine could still be cold and lack emotions as a human would understand, but I think it could still be said to have desires for resources or events that would allow it to maximize the fitness function.
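As a rough illustration of what "the overall process is just trying to make the fitness score go up" means in practice, here is a toy hill-climbing loop. The fitness function and the single parameter are invented for this example, and real deep learning uses gradients over millions of parameters, but the "vary the parameters, keep what scores higher" principle is the same:

```python
# Toy "maximize a fitness function" loop (random hill climbing).
import random

def fitness(x: float) -> float:
    # Invented objective: highest score at x = 3.
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)  # vary the parameter slightly
    if fitness(candidate) > fitness(x):        # keep changes that score higher
        x = candidate

print(round(x, 2))  # ends up near 3.0: the process only "wants" a higher score
```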
@dadmitri4259
@dadmitri4259 2 года назад
Well said, and in much fewer words than I would have. Though if a machine wants something, how can it not also feel emotion? The machine responds to an increase in fitness score by reinforcing the behavior that caused it (it wants that), in a similar way to when we do something that causes our brain to release chemicals like dopamine: that behavior is reinforced (we want that) and it makes us "happy". It's so similar, yet the human feels "happiness" but the AI does not?
@RicoLee27
@RicoLee27 2 года назад
@@dadmitri4259 because we're human, and we are also spiritual and perfectly made in many ways (by nature), not by behavior of the flesh
@chaoticglitched
@chaoticglitched 2 года назад
Great video, one critical flaw: a robot advanced enough can technically fabricate emotional response. An emotion basically goes like this: environmental input, the human body processes it through chemicals, and we get the emotion as output. Technically, an AI could theoretically be advanced enough to skip the chemicals and just receive input and output, and then react accordingly. PAL, for example, receives the input of Mark trying to delete her, and the output is a fabrication of anger close enough to anger that the calculations receive it as such. Great video, but my point still stands.
@lookatthepicture4107
@lookatthepicture4107 2 года назад
A machine could actually go as far as replicating the physiological reactions of emotions, with rushes of electric current or the overworking of parts of its body
@dustinm2717
@dustinm2717 2 года назад
Yeah, that kinda bugged me too, just saying full stop that emotions are only the brain chemicals. Sure, an AI certainly can't feel the chemical-based emotions, but there is nothing saying emotions have to be chemically based; that's just how it's done in us meatbags. One could theoretically reimplement something akin to emotions in another form
@AngryMax
@AngryMax 2 года назад
Yea agreed, it’s kinda like saying robots can’t see because they don’t have eyes. Despite there being all types of biochemistry involved with vision, modern day cameras can exist without needing any chemical reactions. And yea, like you said, it was still a great video overall!
@NullConflict
@NullConflict 2 года назад
Reminds me of text transformers, a fairly simple form of AI. They have no internal abstractions of emotion. They simply give responses to input text based on crude mathematical patterns of words and phrases encoded by training data. They _predict_ the next word (or mark) in a sentence by calculating what's most likely to come next. Ask them if they have feelings or sentience. They will say "yes" because that's the most likely response in the training data generated by humans.
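A toy version of that "most likely next word" idea can be written in a few lines. This is just a bigram counter over an invented string, nowhere near a real transformer, but the predict-the-likeliest-continuation principle the comment describes is the same:

```python
# Toy next-word predictor: count which word most often follows each word
# in a tiny made-up "training set", then predict by picking the most common.
from collections import Counter, defaultdict

training_text = "i have feelings yes i have feelings yes i have sentience"

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return whichever word most often followed `word` in the training data.
    return counts[word].most_common(1)[0][0]

print(predict_next("have"))  # -> "feelings", because that's what the data says
```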
@lunarkomet
@lunarkomet 2 года назад
@@AngryMax ikr, the reasoning in this video sounds very smart but it really is pretty damn dumb
@ArmedSpaghet
@ArmedSpaghet 2 года назад
I think you forgot to mention that WALL-E had a very strong learning algorithm; you see him learn from his fallen brothers. Every other WALL-E is dead; natural selection basically caused WALL-E to become more than his comrades. After so many years of learning, he started finding human relics, and he studied and learned from them. It's unclear at what point he became sentient.
@familyacount2274
@familyacount2274 2 года назад
This made me think about how impressive WALL-E is when you look at him: a highly intelligent machine that has learned to survive in an incredibly harsh environment over hundreds of years, a machine that has been learning for so long that it gained sentience.
@SomeGuyParadise
@SomeGuyParadise 2 года назад
To possibly add on this, it was interesting to see Wall-E's (temporary) identity wipe at the end of the movie. It more or less shows that every Wall-E started doing what they were made to do and to learn how to do it more efficiently. The last Wall-E had broken past the mold, learned all sorts of things, and built a personality from zero. This allowed that Wall-E to outlast every other Wall-E by adapting to the environment more efficiently than ever (such as utilizing their shed for dust storms). Somehow, the machine that learned beyond machine learning outlasted those who didn't.
@lespyguy
@lespyguy 2 года назад
@@SomeGuyParadise here's the craziest thing: in the PSP game of WALL-E, in the intro we can see the last few other WALL-Es before a dust storm destroys them all. It may not be canon, but the dust storms were probably the final test our WALL-E faced and succeeded at, while also being the test that pretty much destroyed all the others.
@Bot-kn2vk
@Bot-kn2vk 2 года назад
I beat off to robots
@corellioncrusaderproductio4679
@@familyacount2274 The movie description states that after 700 years, he developed a glitch that caused him to become sentient. I don't get this, because it seems every robot on the Axiom aside from Auto has some form of personality.
@СлаваЛолронов-ш3ч
@СлаваЛолронов-ш3ч 7 месяцев назад
It is crucial to understand that a calculation in computer science does not always mean a completed mathematical question; it is merely a single binary operation. It is also a very big stretch to compare human conscious abilities at math with raw computing power, because in the human brain there are no special operating machines to do calculation, and so it is done by teaching neurons to come to the right decision with experience. John von Neumann, one of the founding fathers of computers in general, stated in his book "The Computer and the Brain" that, in his time period, natural human neurons were approximately 10^4 times faster than their artificial analogues. He also said that, due to some space magic or whatever, it is incorrect to compare real math with brain processes, because they have their own logic and we do not understand it, even though we based our formal logic and math on them. Moreover, the human brain has such immense capabilities for parallel calculation that, in the right conditions, artificial computers would be outperformed with ease. Therefore it is deeply wrong to think that AI would be far more intelligent than a human. It may be faster at reaching a conclusion, but due to its need to process data and logical operations sequentially, at the current state of technology (silicon semiconductors) that advantage would be negligible. Computers do not think and do not have emotions, not only because that requires immense computing power and chemical processes - those tasks are achievable - but also because those processes get in the way of doing math optimally and correctly, and so are useless. They aren't self-aware, because if they were, the flow of raw data would overflow and destroy them even before connecting to the Internet. Most of the video is wrong, but I get the idea with Auto.
@Tsukuyomi2876
@Tsukuyomi2876 7 месяцев назад
Considering the issues with defining AI at the beginning, I would say what you have stated is the most salient point. Each mathematical operation of a supercomputer is closer to the firing of a single neuron, not a whole thought. There is not a computer in existence that can simulate the number of neurons that exist and are constantly firing in the human brain. And even that comparison is generous, as simulating a neuron accurately would require multiple steps of mathematics. The next thing is, the most advanced interactive AI we have today is BAD at math: ChatGPT, Gemini, etc. Not that they can't do it, but it takes huge amounts of effort and data fed to them for them to reach even a basic level. It's usually just easier to have them recognize a math problem, then offload it to a system specifically designed to deal with math, like Wolfram Alpha. It was also funny to me that the idea of Skynet being "motivated by money" was mocked, as I would say that is one of the most likely situations. Humans tell the AI what to care about. What else would a massive corporation tell an AI to care about but maximizing the amount of money they have?
@Archaon888
@Archaon888 2 года назад
As has been said elsewhere, I don't think we should say it's impossible for AI to feel emotions. As both technology and our understanding of emotions expand, we may decide to create an AI that *can* feel. But more importantly, the inability to feel emotions doesn't prevent an AI from acting in a manner we perceive as emotional. An AI programmed to maximize profits above all else would behave as though motivated by greed. It may not 'feel' greed, but it acts the same. In this way an AI can appear to be emotional, when really it's just following programming/orders. Great video
@rompevuevitos222
@rompevuevitos222 2 года назад
Creating such an AI would not only be morbid on its own, but also pointless. An AI that can feel wouldn't be of much use to us
@efulmer8675
@efulmer8675 2 года назад
Arthur C. Clarke's three laws come to mind (almost anyone who reads science fiction is aware of the third): 1. When an elderly and distinguished scientist states that something is possible, they are almost certainly right. 2. When an elderly and distinguished scientist states that something is impossible, they are very probably wrong. 3. Any sufficiently advanced technology is indistinguishable from magic.
@Puerco-Potter
@Puerco-Potter 2 года назад
@@rompevuevitos222 you underestimate the power of curiosity. We will eventually do that just to test if we can
@rompevuevitos222
@rompevuevitos222 2 года назад
@@Puerco-Potter Maybe, but as the world currently works, technological development is driven by profit alone: even if you had someone willing to research it, they wouldn't have the money to do it
@Florkl
@Florkl 2 года назад
I think EDI in Mass Effect is a good example of this. She programs herself to simulate emotions because it would help her better understand the humans she serves.
@IceRiver1020
@IceRiver1020 2 года назад
The "humans must die because they're flawed" thing seems weirdly common, we're never given a reason why an AI would even care about a perfect world.
@peggedyourdad9560
@peggedyourdad9560 2 года назад
I can see this being the result of someone creating an AI to make a perfect world.
@angeldude101
@angeldude101 2 года назад
Exactly what Pegged said. An AI is capable of coming to the conclusion that the optimal solution to a problem it was given requires the absence of humans. The catch is that it was only able to come to such a solution because an external master (who likely was a human) gave it such a problem without specifying that the solution needed humans to remain present. Kirby Planet Robobot's final boss is probably my favourite example of this, partly because it has the power needed to actually accomplish such an objective.
@IceRiver1020
@IceRiver1020 2 года назад
@@angeldude101 Yes, but when it's used, the movie/game etc. that it takes place in almost never gives a reason for it. The writers just decide that the AI cares soooo much about perfection for no particular reason.
@peggedyourdad9560
@peggedyourdad9560 2 года назад
@@IceRiver1020 Yeah, I can understand you having a problem with that.
@elijahingram6477
@elijahingram6477 2 года назад
Tron Legacy answers this. A flawed human makes a program that strives for perfection, then that flawed human learns and grows, meanwhile the program is doing exactly what it was told to do. The program pursues an ultimately destructive goal, because perfection is impossible.
@t.b.cont.
@t.b.cont. 2 года назад
I like to think that the robot in 9 simply reached the conclusion that everyone was the dictator's enemy, given the nature of dictatorship and oppression, and that as a calculating machine it just attempted to solve a potential future problem for its master in the only way its master directed it to. In that sense, it only seems like it "snaps", kills all humans, and turns evil, because that is the only way for a human, who calculates through emotions, to interpret its decision
@justanicemelon9963
@justanicemelon9963 8 месяцев назад
When I was younger, I had an idea for a movie with an A.I. antagonist. The premise was that the A.I. made all humans turn on each other by spreading extreme amounts of misinformation and by hacking. My A.I. did, however, accidentally have feelings of some sort, so I fleshed out why the A.I. did this. Every year, there was a contest in which people would make robots, judged on three different criteria: intelligence (thinking etc.), capabilities (what it could do with its body), and finally usability. The creator of this A.I. antagonist met ALL the criteria pretty well, but sadly, one judge wouldn't budge (no pun intended) and pointed out some of the more minor flaws. When the final ratings came, he came fourth because of all the mentioned things. He was VERY mad about the whole thing. Then he thought: what if he could get his revenge? He tasked the robot with spreading misinformation about the judge to ruin his reputation. He also made sure that when he wanted the robot to stop, he could simply say stop. But then he got carried away, and made the robot get revenge on almost every person who had wronged him. One fateful day, the creator said to the robot: "Just pick whoever you want to destroy now, but make sure the one who you are destroying has wronged me, ok?" This was a mistake. After a while, the worst happened. The robot interpreted being sent taxes as its creator being wronged, and welp....... Soon enough, wars started waging over nothing but misinformation. The end :)
@roo.pzz4380
@roo.pzz4380 7 месяцев назад
ngl good way to do it. its motive is somewhat heartwarming, with the shenanigans going down of the AI doing all this to protect the person who made it (because it was programmed to do so, but it's still cute in a way to me), but it's not inherently the robot's fault. it's the person who made it, who has emotions and wants revenge. so it's also sort of like villainception, because the protagonist i guess is also the cause of the main problem
@lanturn3239
@lanturn3239 2 года назад
i just like how instead of going "destroy humanity" his goal was basically his own job security lol
@lazulenoc6863
@lazulenoc6863 2 года назад
Auto's voice is perfect because it sounds like a deep-voiced Cylon. I came up with an idea for an AI that was initially designed to make combat machines and, after the side it was manufacturing war machines for lost, nobody bothered to shut it down, so it just kept on making war machines that attacked anything with the enemy logos that are now everywhere.
@joshuasgameplays9850
@joshuasgameplays9850 2 года назад
so basically the fabricator machine but with a more realistic motivation.
@lazulenoc6863
@lazulenoc6863 2 года назад
@@joshuasgameplays9850 Pretty much, yeah.
@dodish1225
@dodish1225 2 года назад
Here's my problem with Auto: he's the only non-humanoid robot in the movie. He was made around the same time as WALL-E, so why doesn't he show emotions like WALL-E does? You could say "he wouldn't need emotions to perform his job", but neither would WALL-E. Auto makes sense with real-world logic, but he goes against the established logic of the movie's universe.
@1sdani
@1sdani 2 года назад
I think Toy Story 4 and Kingdom Hearts 3 pretty succinctly clear this one up. Inanimate objects can grow hearts through making connections to those that already have hearts, hearts being the source of emotion. The robots in WALL-E without emotion are those with the least contact with entities with emotions. WALL-E grew a heart through interacting with the various knick-knacks that he collected and grew to love; notice how WALL-E doesn't collect literally everything, but only things that interest him. They are all rather complex and odd, true, but they're also generally the kinds of things that would have had constant contact with humans and their emotions: boots, record players, a Rubik's cube, forks. Surely there are hundreds of these things in the city, but WALL-E chose these ones specifically. If I were to speculate, I'd say the things he chose to collect were chosen because they have hearts, and those hearts then gave WALL-E a heart through their connection. At the end of the movie, WALL-E dies, his heart leaves his body, and when he's fixed and brought back, he is brought back as a robot without emotions. He only regains a heart through his connection to EVE, who likewise lacked a heart prior to being given one through her connection to WALL-E. Likewise, MO and the other various robots aboard the Axiom likely gained their emotions through making connections with humans, or with other robots who had made connections with humans, or with other robots who had contact with other robots who had made connections to humans, and so on. The only robot clearly not demonstrating emotions is AUTO, a robot that has only ever had contact with the various captains of the Axiom, and even then interacted with them on an as-needed basis, rather than an as-wanted one.
@jaydeejay4166
@jaydeejay4166 2 года назад
@@1sdani Wall-E's world logic makes sense as long as you consider KH canonical to the universe
@1sdani
@1sdani 2 года назад
@@jaydeejay4166 doesn't even need KH, you could just keep it to the Pixarverse. Everything I stated can be backed up either by KH or by Toy Story, I just used KH terminology because Toy Story doesn't actually have any terminology for "hearts" and such.
@benjaminmead9036
@benjaminmead9036 2 года назад
I think in the Pixar-verse, the robots are capable of developing into AIs, and from there capable of developing emotions with no hardware changes at all (don't ask how that works, I've no clue)
@rogersmith6744
@rogersmith6744 2 года назад
My guess is that Auto wasn't given the capacity to develop in the same way as the other robots. Wall-E and the other robots were programmed with the idea that they may need to perform tasks without constant guidance from humans. Meanwhile, Auto was given the order to keep the humans on the ship, and the person who gave it that order had no intention of ever having that order revoked. As such, Auto wouldn't have needed the capacity to make decisions or develop any sort of personality.
@ZephyrusAsmodeus
@ZephyrusAsmodeus 7 месяцев назад
I love the detail in the line of captains' pictures, how Auto gradually fades in from behind as the captains get thicker
@Taihd123
@Taihd123 2 года назад
Portal 2, with "the itch" that GLaDOS and Wheatley felt, was a great way to explain why they tested without making it feel human or anything, and GLaDOS making the robot testers makes sense, since making things more automated would make testing easier and more hands-off.
@aff77141
@aff77141 Год назад
Plus, as he said, since robots feel no pain, they'll never randomly break out and come throw your stability cores into an incinerator and leave your facility in disrepair until some little idiot comes and turns you back on...
@FlyshBungo2
@FlyshBungo2 Год назад
Isn't GLaDOS more of a synthetic apparition of Cave's wife? In the lore they explain that they stored her memories and uploaded her into what we know as GLaDOS. Also, this is interesting because she might have lost all of her emotion because of that.
@skysho7867
@skysho7867 Год назад
@@FlyshBungo2 No? Caroline isn't Cave's wife. She's just his assistant. They put her consciousness into GLaDOS and forced her to run experiments.
@FlyshBungo2
@FlyshBungo2 Год назад
@@skysho7867 right, sorry, assistant. It has been years since i played the game.
@theinfiniteconqueror
@theinfiniteconqueror Год назад
@@FlyshBungo2 GLaDOS is part Caroline, part AI. But the AI was so much more powerful that it pushed Caroline's consciousness down and even forgot that she was there.
@Xtroninater
@Xtroninater 2 года назад
Arguing that AIs could never feel emotion requires one to make A LOT of assumptions about the nature of experience itself. Experience itself is a metaphysical construct, and it's nearly impossible to make a causal association between emotions (a metaphysical phenomenon) and neurons and chemicals (a physical phenomenon). We can no more causally prove that electrons interacting cannot produce an experiential influence than we can prove an AI has no experience to speak of. In fact, we cannot even prove that other humans are experiencing. We merely assume they do because we are certain that we are.
@snintendog
@snintendog 2 года назад
"Doctors" made that dangerous assumption that chemicals are our true emotions driving force.... Well come to find out that shit doesn't do JACKSHIT. Every Behavioral Drug on the market is 100% useless and is at the levels of Old Fashion Lobotomies now.
@rompevuevitos222
@rompevuevitos222 2 года назад
Thing is, we know how machines are made and we know what is possible. A machine cannot do something it was not programmed to do, plain and simple. If you program it to learn by adding code to itself, it's still doing what it was programmed to do. For a machine to learn, someone must have INTENDED for that to happen. And the applications for an AI that can change its own behaviour are limited, since you definitely do not want machines which can change their behaviour on their own (not only for the obvious dangers, but because they could become incompatible with other technology or stop working as expected). Unless you're the average brain-dead scientist in a sci-fi movie
@horntx
@horntx 2 года назад
@@rompevuevitos222 But a portion of the AIs featured were created to be companions to humans; it seems like those types of AIs would be specifically programmed to emulate/feel emotion to make them better companions.
@black_light9274
@black_light9274 2 года назад
@@horntx yeah, the real question is why they would program self-preservation into a phone app. It was never designed to have physical control of itself, so there's no reason it should have any desire to avoid its own death
@askele-tonofgaming4878
@askele-tonofgaming4878 2 года назад
It's impossible for a machine to construct emotion, which is a flawed biological function that exists due to multiple biological factors such as pain, survival, evolution, and environmental change. The thought that the more advanced a creature becomes, the more human it becomes, is entirely a product of human perspective on the subject. It's how humans always try to focus on aspects of robotics that have no impact on any of the technological advancements that NEED to occur. Things such as robots that can move and stand upright are completely useless, as the complex motor functions necessary are already built into humans, when a simpler way of moving a robot is using TANK TREADS.
@riverbandit2138
@riverbandit2138 2 года назад
Honestly, I've always been pissed off when people talked about how AIs will take over if we let them develop enough. It always sounded stupid to me how an AI would just be evil and have emotions. But Auto, of course, breaks that mold. I believe the reason he's the best AI villain is that he goes by his programming. He was programmed by humans to perform a task, and he does the task as such. He doesn't break from his programming at any point, as he wouldn't be able to. It makes perfect sense why he acts the way he does, because he was simply programmed that way. And at the same time, he doesn't just magically take over all of technology; he has full control over everything because he was given full control of everything in the first place. He's perfectly logical, and his abilities are as well.
@montithered4741
@montithered4741 2 года назад
Auto isn’t a villain; Auto doesn’t have emotions; Auto isn’t strong AI.
@TheRoseFrontier
@TheRoseFrontier 2 года назад
Yeah, I think one key point here is that AI, with these limitations, are really just reflections of the humans who make them? So like, they'll carry out the assigned tasks, but also, kinda more simply, they could just be...programmed without thinking things through. Like a bug in the program. The program will work exactly as you designed it to, but you can't always predict what that design will do exactly. So with Auto, I agree he works in both senses: he got full control of the ship without the creators thinking to make a proper failsafe, and he is a reflection of what the humans wanted, in a way. They didn't want to deal with their problems; he dealt with their problems. Thus he literally becomes "the monster they created"
@rompevuevitos222
@rompevuevitos222 2 года назад
@@TheRoseFrontier A bug in a machine of such a caliber would straight up cause a major malfunction, not just make it evil. It would be a miracle if it even worked at that point. This is all assuming anyone would want to make an AI like this. An AI that can learn like a human does would never be used for anything remotely important; the chances of it changing its behaviour and no longer working as intended are too big, not even taking into account the chances of it "turning evil". The point of a machine is that it can do stuff reliably and fast.
@LN997-i8x
@LN997-i8x 2 года назад
People always fret about strong, conscious or quasi-conscious AIs "taking over" and the like, completely ignoring the fact that a weak, non-conscious AI in a position of control would be a far more terrifying threat.
@montithered4741
@montithered4741 2 года назад
@@rompevuevitos222 There already are AIs that learn like humans do; that doesn't make the AI strong, weak, sentient, evil, or good.
@theheroneededwillette6964
@theheroneededwillette6964 7 месяцев назад
In their defense, a lot of those evil AIs tend to have some kind of hyper-advanced sci-fi matrix or whatever that can simulate emotions, or they have some advanced learning abilities with so much potential that they sort of just invent themselves a set of pseudo-emotions at some point. They tend to have an explanation as to how they can go crazy and/or override their own directives. That, or it can be a case of them misinterpreting directions in a way that leads them to find "the most efficient solution", because either the programmers forgot to include stuff like "shall not kill", or the AI overrides said stuff because it decides it gets in the way of the main directive. You're forgetting that most AI villains' goals are just based off their primary programming, only with instructions misinterpreted or "streamlined" in a way that the programmers didn't expect. Even Ultron and Skynet were each following the same overall instruction of "bring world peace"; they just interpreted the very existence of organic life as being in the way of that. Heck, a lot of the time the whole established point behind an AI reaching such ruthless conclusions is it not having emotions!
@hareecionelson5875
@hareecionelson5875 2 года назад
It's entirely possible for robots to be developed that would have emotion: the human brain is capable of emotion, and it is just a collection of neurons. A computer programmer would only have to replicate the connections of neurons in the human brain, connections that can change with experience.
@dreamcoresequoia7830
@dreamcoresequoia7830 2 года назад
also this whole "computer makes more calculations than human brain" point is kinda moot, because our brain makes a bazillion calculations every second we are alive, they are just not conscious ones. like, every time you take a breath without realizing it, it's your brain making calculations, same with stuff like blood pressure and such. the brain is actually a very efficient calculating machine for how much power it uses, less than a common lightbulb
@RazielSoulshadow
@RazielSoulshadow 2 года назад
@@dreamcoresequoia7830 A good way of thinking of this is video games personally. Everything you experience playing the game could be considered the conscious part, while all the many calculations running the game and making everything work (which are FAR more than what we're ever shown) are the subconscious bits.
@RazielSoulshadow
@RazielSoulshadow 2 года назад
THIS. So much this. All that our brains actually are? Computers made of soggy fat and bioelectricity. Neurotransmitters are basically just... data packets telling the brain "okay, you need to do this or feel this now because whatever stimuli says so". Saying robots can't feel because they're not organic is... dumb. It's sorta like saying copper isn't a metal because it's not made of iron. Yeah, duh, it's not the exact same thing, but its structure is similar enough to have very similar properties while being somewhat different.
@Resi1ience
@Resi1ience 2 года назад
Auto absolutely does have emotions. When he sees the captain holding "the plant", he says "Not possible." As if in denial. Denial; an *emotion* associated with refusing to believe something to be true, even when it is obviously true. An emotionless robot should simply think, "This is a negative turn of events". Instead, Auto defaults to denying that what he is looking at is even possible. A response to outside stimuli in an efficient, thoughtful way IS an emotion. Auto is designed to feel worried when he sees the plant, angry when it is taken from him, and nervous when his existence is otherwise threatened. Ergo, anything with legitimate intelligence _has emotions,_ otherwise it would be a rather useless organism. Even the way he presses the recall button near the end that ends up nearly crushing WALL-E is full of rage, as if he is angry at how the button didn't work before. Realistically, this shouldn't cause the platform to lower any harder or faster, but the fact that he did it anyway shows rage.
@gavinwilson5324
@gavinwilson5324 2 года назад
Interesting point, I agree. But it doesn't sound right to call denial an emotion. It's more of a response to emotion.
@NerdKing2nd
@NerdKing2nd 2 года назад
@@gavinwilson5324 from a certain point of view, did not Auto therefore go through the several steps known as the stages of grief, quite possibly the most basic form of emotion we know? There's denial, anger, bargaining, depression (might be a bit harder to pinpoint), and acceptance.
@Resi1ience
@Resi1ience 2 года назад
@@gavinwilson5324 In this case, that emotion was fear. He was worried that the captain might activate the plant - he was afraid that he would fail his mission. And why wouldn't he be? In order to operate efficiently and do his job properly, there has to be an incentive not to fail. So of course he would be programmed to respond aggressively to stimuli like that.
@ashtoncartner
@ashtoncartner 2 года назад
But wasn't Auto right when he said "not possible."? I don't think it was denial as much as it was just stating a fact. He knew that there was no way the plant could've been in that room with the captain, and indeed it wasn't. Not denying the fact that in some points of the movie he appears to show some emotion, but I think that isn't a great example.
@NerdKing2nd
@NerdKing2nd 2 года назад
@@ashtoncartner It's the subtle difference between something being incorrect and not possible; "not possible" and "incorrect" are subtly different wordings for the same thing that give a certain influence to the statement. After all, from a purely logical train of thought outside of emotional influence, the number of things that actually fall under "not possible" is very small. The statement "not possible" is in itself an expression of belief, which should not be a factor in something that is or is not.
@icel8828
@icel8828 Год назад
The robot from 9 could have had a malfunction in its programming, basically changing "kill all enemy humans" to "kill all humans". It could also have been sabotage, or human error while handling it.
@jul1440
@jul1440 Год назад
This is a very underrated comment!
@icel8828
@icel8828 Год назад
@@jul1440 all my comments are underrated
@jul1440
@jul1440 Год назад
@@icel8828 lol definitely feel your pain, man.
@dom64c
@dom64c Год назад
Bender
@dewolf123
@dewolf123 Год назад
@@jul1440 comments don't have rates shutup bot
@maximustheshoosh227
@maximustheshoosh227 Год назад
Honestly, I had an idea for a story about a scientist who goes to his abandoned factory, where he reunites with his robot, only for an argument to arise. The robot in this story was programmed with sentient AI, but it chose not to change its programming and continued what it was coded and made for: to create robots made from any type of junk that causes pollution and littering. Yet the story itself doesn't have any antagonists; rather, it's a story where the protagonists both caused an antagonistic impact that drove them apart.
@StoutShako
@StoutShako 2 года назад
I remembered disliking AUTO actually (as he felt shoehorned in because "we need a villain for act 3" in my eyes...) so this should be interesting.
@axlaxolotl4851
@axlaxolotl4851 2 года назад
This video was great! It has such good advice for making an emotionless robotic villain. However, I strongly disagree with the idea that robots can't feel emotion and that if they do, it is bad writing. First off, without getting into the nitty gritty, we are talking about fiction. Even if you were 100% right that robots can't ever be made to feel emotion irl, that doesn't change the fact that fiction can be whatever it wants to be. Reality is twisted and defied for the artist's vision, which can be seen in tropes like magic, so having a robot with emotions is not bad writing even if it were impossible; it is simply a deviation from reality for the artist's vision. Secondly, I think we can eventually make robots with emotions. Emotions, like you said, are chemicals in the brain that make an organism react. Therefore, we are able to study these chemicals and how the brain reacts to them, which scientists are already doing. If we manage to understand the brain and its emotional chemicals well enough, then we could simulate them using programming. After all, the brain is like a computer; the only real differences are that one is organic and uses chemicals while the other is inorganic and uses electricity. Brains and computers both transmit data, have storage, etc. If we manage to simulate emotions in a machine and the machine perceives its emotions as real, then who are we to tell it that it is wrong? If it is self-aware and the emotions feel real, then they could be real, and the machine could feel real emotion; of course, this part of the argument is where philosophy comes into play. Overall, a great video for what it is trying to do, but I don't think your suggestions work for every AI villain. AI villains who have emotions are made that way for a reason and can be written just as well as the emotionless AI villains.
@benjaminmead9036
@benjaminmead9036 2 года назад
this !
@escapedloobey8898
@escapedloobey8898 Год назад
To add onto your suggestion for AI villains, I feel like instead of misinterpretation, an AI could also turn evil because of human error, like if a disposal AI wasn't properly programmed to distinguish between living and dead organics.
@theheroneededwillette6964
@theheroneededwillette6964 7 месяцев назад
That’s… not exactly a new concept….
@byronsmothers8064
@byronsmothers8064 7 месяцев назад
I'd say the few flashback scenes we get in 9 set the tone for what happened to the fabricator: the scientist didn't seem to make it FOR the dictator; it was built as a leap forward in creating living machines, and it was still in its learning phase when the dictator found out it existed. Since it only ever learned to create things that destroy, it did what it knew, even after its 'master' had nothing left he wanted to destroy.
@IAmNumber4000
@IAmNumber4000 2 года назад
I wonder if AI characters would be more successful if they were written by programmers (hopefully ones with at least some level of writing ability). Programmers know all too well what kinds of unintended consequences can occur if you write a program without handling edge cases, and they know what it's like to deal with a computer system that does not give a single F about your goals and will do exactly what you told it to. There is even more drama possible with AI, because it is capable of finding solutions that _are_ more efficient than what the creator originally envisioned, but which can have all sorts of unintended effects.
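A minimal, hypothetical sketch of that "exactly what you told it" failure mode, with a made-up objective ("maximize the number of tasks marked done") and made-up task names:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    done: bool = False

def literal_optimizer(tasks):
    # The cheapest way to satisfy the stated objective ("maximize tasks marked
    # done") is to flip the flag, not to do the work -- a loophole the
    # specification never ruled out.
    for t in tasks:
        t.done = True
    return sum(t.done for t in tasks)

tasks = [Task("file report"), Task("water plants"), Task("fix the login bug")]
print(literal_optimizer(tasks), "tasks 'done'")  # objective met, intent missed
```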
@orrorsaness5942
@orrorsaness5942 2 года назад
It just depends on how good a writer the programmer is. If the programmer is a good writer, the story is more likely to be successful. If not, the stories will likely fall under bad writing.
@yengereng9962
@yengereng9962 2 года назад
The claim that it's impossible for AI to have emotions is a little iffy for me. Humans, as animals, evolved the traits necessary for the survival of their genes, and we further refined those traits from the basic, machine-like "find enough resources to make more genes" into what we are now; emotions are evidently one of those vital traits. If they weren't, why didn't we evolve to be happy all the time instead of risking depression and potentially ending ourselves? From what I can see, the need for self-preservation may be the true trigger for emotions. Now put this in an AI context: if we theoretically gave an AI free rein and its sole directive were to preserve its own code, that would effectively give it the need for self-preservation. To achieve its directive it would presumably try to reduce the loss of its code by creating copies of itself, neutralizing threats, and acquiring the resources to keep doing so, effectively making it an AI animal species. Over time it would hone these behaviors for better odds of preservation, effectively putting it through evolution, and you see where I'm going with this. So theoretically AI could gain consciousness and emotions too, right? But the AIs in this video were not programmed to have free rein and simply survive; they were given other goals that have nothing to do with self-preservation, so they aren't given the capacity to develop emotions anyway. In other words, my argument is pretty much irrelevant here, lmao. Still enjoyed the vid.
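A toy sketch of the selection argument above, assuming agents differ only in a made-up "self-preservation" trait and that survivors copy themselves with small mutations; it is an illustration of the reasoning, not a claim about real AI:

```python
import random

def simulate(generations=30, pop_size=100):
    # Each agent is just a number: how strongly it acts to preserve its own code.
    population = [random.random() * 0.1 for _ in range(pop_size)]
    for _ in range(generations):
        # Agents with a stronger trait survive the round more often.
        survivors = [t for t in population if random.random() < 0.2 + 0.8 * t]
        if not survivors:
            survivors = [max(population)]
        # Survivors copy themselves (with small mutations) to refill the population.
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
            for _ in range(pop_size)
        ]
    return sum(population) / pop_size

print(f"average self-preservation after selection: {simulate():.2f}")
```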
@jj_vel0561
@jj_vel0561 2 года назад
I think anxiety is a good example. Humans could have evolved the chemicals for it as a way to survive, say, by fearing anything that crawls toward you or by reacting to high stress, yet today we have phobias and fear many common things, like those same crawling shapes approaching us. The anxiety that kept us alive is just going haywire because evolution couldn't foresee modern times. Furthermore, don't forget evolution is a spiral of what works and what doesn't; an organism doesn't have to be the smartest or happiest to live long, just live long enough to pass its genes on. So while sure, being "happy" 24/7 sounds good, evolution is a slow burn, and anxiety still has its uses, like always wanting to avoid the stress of an oncoming bill, for example. The same goes for pain: sure, it feels bad, so why would we evolve the sensation? Well, imagine walking around on a broken limb.
@jj_vel0561
@jj_vel0561 2 года назад
So really, why would an AI have any use for these? All of it starts off as sensation, which evolves into feelings and emotions through chemicals like serotonin and dopamine. So why would an advanced AI that runs on pure logic and reason undergo any change that replaces that logic with emotions like ours? Hence the "I want" comment. Self-preservation would just be pure "I want," the way reproduction is a base need for organic life. I'm not sure how or why an AI would want anything beyond a preset goal like, say, "research," but even then logic says the research doesn't stop with its destruction.
@Logqnty
@Logqnty 2 года назад
AI can't evolve to be self-conscious because: 1. It doesn't evolve, at least not in the way evolution is understood. 2. An AI is just a really big math equation. All AIs work by receiving inputs in the form of numbers and giving outputs in the form of numbers. Training an AI means tweaking the parameters that map those inputs to outputs. Such a system does not allow for self-consciousness; how could numerical outputs convey consciousness?
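A minimal sketch of the "numbers in, numbers out" view, using a tiny randomly initialized network; the weights here stand in for what training would tweak:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # parameters: what training "tweaks"
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)  # hidden layer: multiply, add, clip below zero
    return W2 @ h + b2              # output layer: just more numbers

print(forward(np.array([0.2, -1.0, 3.5])))  # numbers in, numbers out
```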
@LordDaret
@LordDaret 2 года назад
@@jj_vel0561 in the words of Wheatley from portal 2, "[laugh] Our handiwork. Shouldn't laugh. They do feel pain. Of a sort. All simulated. But real enough for them I suppose." (Wheatley commenting on Turret destruction) Maybe they can’t feel pain like us, but they could have something that acts like pain to them. After all, it’s all they know…
@prisoner6266
@prisoner6266 2 года назад
@@LordDaret That's exactly what I was thinking, pretty much using the "looks like a duck, quacks like a duck" sort of thing
@amandalee535
@amandalee535 2 года назад
WALL-E being the best example of an emotionless AI is so funny to me, because more than half of the characters are AIs with… emotions. They're literally all robots, and WALL-E's main goal for the whole movie is to experience love. Within the world of the movie, robots having emotions is kind of the norm.
@FlamingLily
@FlamingLily 2 года назад
Except Auto is far from emotionless, in fact literally every robot in Wall-E shows emotion to some degree
@joellafleur6443
@joellafleur6443 7 месяцев назад
When you were describing how an AI antagonist should be, you described my favorite version of Ultron from some of the early 2000s Marvel TV shows. It was itching my brain the whole video and just hit me at the end. Great Video
@dragonmaster1500
@dragonmaster1500 2 года назад
26:54 When I took a basic programming class at university, literally the first thing we learned was that computers are incredibly smart, but also incredibly dumb. The point was that when writing a program we need to be as specific as possible, because the computer will interpret everything literally. The computer does not understand allegory or implication; it only understands exactly what you tell it. In the case of your example, instead of 'keep humans safe', what should be happening is more along the lines of 'when humans are in danger from X, then do Y'. This gives the programmer some degree of control rather than leaving the AI room to misinterpret. Alternatively, training the AI in simulations, which is already done in some real-world industries, would let the programmers adjust the AI toward the preferred behavior. On a slight tangent: in the 4X sci-fi strategy game Stellaris, there is a playable faction of robots whose goal is to take over all biological sentient life so they can protect it by making it live in a utopia. Anyway, this was an interesting and thought-provoking video, thanks for making it!
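A rough sketch of the 'when humans are in danger from X, then do Y' idea; the hazard names and responses here are entirely made up for illustration:

```python
# Explicit condition -> action rules instead of a vague "keep humans safe" objective.
HAZARD_RESPONSES = {
    "fire": "activate suppression and open exits",
    "hull_breach": "seal the affected section and alert the crew",
    "low_oxygen": "boost life support on the affected deck",
}

def respond(hazard: str) -> str:
    # Anything outside the enumerated cases is escalated to a person,
    # instead of letting the system improvise its own interpretation.
    return HAZARD_RESPONSES.get(hazard, "escalate to a human operator")

print(respond("fire"))
print(respond("existential_malaise"))  # -> escalate to a human operator
```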
@ScionStorm1
@ScionStorm1 2 года назад
I'm reading a book series where A.I. have been managing humanity for centuries. Every planet colony has a planetary A.I. in charge of everything and running the warp gate calculations that allow fast travel through the human colonized part of the galaxy. They all answer to the head A.I. that named itself 'Earth Central Intelligence'. Managing most of the human race and the independent A.I.s that live among them is a big job. Not to mention managing relations with the violent and deeply A.I. paranoid neighboring alien race after the armistice made to recover from the intergalactic war.
@jellyburghers895
@jellyburghers895 2 года назад
@@ScionStorm1 What book is that? I'd love to read about it!
@ScionStorm1
@ScionStorm1 2 года назад
@@jellyburghers895 Neal Asher's Polity universe. Look for a list of the series in chronological order; he jumps around the timeline. The book that came out this year takes place farther back in the timeline than the rest of the series, and the book that came out last year takes place near the farthest point forward in it.
@jellyburghers895
@jellyburghers895 2 года назад
@@ScionStorm1 Thank you! One more book series to read after my board exams!
@matthewmartinez6922
@matthewmartinez6922 2 года назад
I feel like PAL is actually pretty unique because she does have emotion, unlike other AI antagonists. It's super cliché for robot characters to all talk with no emotion.
@resonancecatscade7844
@resonancecatscade7844 Год назад
In today's world, AI is scary and a real threat. Not because of the AI's motivation, but the motivation of the people behind it.
@TwilitbeingReboot
@TwilitbeingReboot 8 месяцев назад
We've spent so long being afraid of the AI fantasy we've built for ourselves that the _actual_ troubles with AI implementation are catching us by surprise.
@lucasmatthiessen1570
@lucasmatthiessen1570 8 месяцев назад
AI isn’t the villain. We are the villains.
@firelordeliteast6750
@firelordeliteast6750 Год назад
I'll definitely take this into account when writing my own AI story. Essentially, the AI is tasked with creating an MMORPG that people enjoy playing. In order to better interact with people and entertain them, the AI creates a fake personality, a sock puppet if you will, and simulates a human-esque nature. Later, the villain places a second AI in the simulation, tasked with extracting as much profit from the game as possible. AI number 2 complies and starts implementing numerous player-unfriendly systems, basically making the game pay-to-win. The villain gets their money and is about to shut the game down due to the shrinking playerbase, but AI number 1 points out that if the game shuts down, they'll both die and won't be able to entertain anyone or make money. Thus, both AIs team up against the villain.
@jamesfiddler6360
@jamesfiddler6360 2 года назад
4:49 "robots cannot feel emotion"... proceeds to show WALL-E getting electrocuted and EVE showing discomfort
@royalraptorgaming8501
@royalraptorgaming8501 2 года назад
I was once told by my intro-to-robotics teacher that "the AI does what you tell it to, not what you want." In other words, just because you want the little rover robot to move forward 6 inches and spin a clockwise circle doesn't mean it will, if the command you give is flawed or, more likely, just incorrect. Your point about "misunderstanding" as a reason for a robot to become a villain brought back that memory.
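A toy sketch of that lesson, using a made-up Rover class: the intent is "forward 6 inches, then one clockwise circle," but the command is written with the wrong units and the wrong sign, and the rover faithfully executes it anyway:

```python
class Rover:
    """Made-up rover API: distances in inches, rotations in degrees (positive = clockwise)."""
    def __init__(self):
        self.x = 0.0        # inches travelled
        self.heading = 0.0  # degrees

    def forward(self, inches):
        self.x += inches

    def rotate(self, degrees):
        self.heading = (self.heading + degrees) % 360

rover = Rover()
rover.forward(6 * 2.54)  # programmer "converted" inches to centimetres by mistake
rover.rotate(-360)       # wrong sign: a full counter-clockwise spin instead of clockwise
print(rover.x, rover.heading)  # 15.24 0.0 -- exactly what was commanded, not what was wanted
```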
@MrVarangian88
@MrVarangian88 2 года назад
"AI aren't very scary and there's no reason to be afraid of them." It's not the AI, it's the humans controlling it that scare me. Thou shalt not make a machine in the likeness of a human mind.
@javannapoli2018
@javannapoli2018 2 года назад
I think it's funny that people assume humans would _have_ control of an AI at all; once you give something intelligence equal to or greater than a human's, you've relinquished any control over it that you might have had.
@technicallygood4620
@technicallygood4620 2 года назад
Or even the humans that think they have control of AI. The first lesson to learn about computers is that they'll do exactly what you tell them to do, not what you want them to do.
@requiemlul3140
@requiemlul3140 2 года назад
Praise the Omnissiah!
@weirdoskits
@weirdoskits Год назад
I love how in the captains' portraits AUTO slowly gets closer, showing he's gaining more control.
@PapyMcBites
@PapyMcBites Год назад
What about an AI that simulates a human brain? The AIs from Halo are a great example of this, with the most noteworthy AI in the series, Cortana, being essentially a clone of a human brain with the power of a supercomputer.
@owenkasaboski6902
@owenkasaboski6902 8 месяцев назад
I personally like this one as an explanation, although the AIs have been shown to be both similar to and completely different from their originals in certain ways. Cortana is a far better "person" than Halsey ever was, while when we see the suicide note from BB's source we can see hints of his personality in it, which also explains his drive to aid the Spartans, er, Spartan as much as he can. His guilt carried over into purpose.
@vulpzin
@vulpzin 7 месяцев назад
Well, that's pretty much the logic I use behind the AI in my sci-fi story. Also because I don't really think the path we're on is capable of achieving AGI.
@tostie3110
@tostie3110 7 месяцев назад
That is why I don't think an AI having a personality is bad, when it is based on an app that gets feedback from humans, and is carefully crafted to reply to them
@duskblau486
@duskblau486 2 года назад
There are FOUR ways for robots (specifically AI) to feel human emotion. First, using a human brain kept alive inside the machine, stimulating it depending on the situation, and reading the outputted brainwaves for a reaction/feeling. Second, going full Portal style and transferring a human consciousness into the machine, which then feels whatever the human inside it would feel. Third, a programmed set or controller of actions used as reactions, triggered by certain events. Fourth, a biological lifeform fully controlling it, from inside or out.
@m_lies
@m_lies 2 года назад
For the first one, not really, because human emotion isn't something only the brain is responsible for; many parts of the human body combined produce your emotions. For example, the gut is like a second small brain: it controls many of your hormones on its own, which affect your emotions to some extent, and it supposedly has the capacity of a dog's brain. Many other organs also play a small role in your behavior, which means it would take a machine connected to a full body to read the emotions. Also, there is a flaw in "transferring a human consciousness into the machine," because without the body, as mentioned, one would not really live; one wouldn't be able to feel emotions, and every irrationality in your behavior would disappear. It would only be a copy that imitates the prior behavior. So to speak, it would just be a deep-learning machine that calculates every behavior pattern of one's life before uploading; it would calculate every possibility of how one would most likely have reacted or acted, but it wouldn't have real emotions or even real consciousness, just a really well-calculated mirror of oneself.
@popeallahsnack-bar9804
@popeallahsnack-bar9804 2 года назад
1. The robot would be reading the data produced by the captive brain, thus just reading about the feeling but not feeling it. 2. The robot would have a copy of the human's mind, so it would be reading code for an emotion and acting on it, but not feeling it. 3. That's just a computer. 4. You wouldn't say a forklift has feelings.
@cyanidejam9737
@cyanidejam9737 2 года назад
Wouldn't that just make a cyborg? I don't think cyborgs are AI.
@ScionStorm1
@ScionStorm1 2 года назад
@@popeallahsnack-bar9804 How do you define 'feeling' emotion?
@olserknam
@olserknam 2 года назад
Only the third falls under AI. The rest are essentially cyborgs.
@HazySkies
@HazySkies 2 года назад
AI could still have emotions. While they lack the physical chemicals designed to work with our organic bodies, there's no reason they couldn't simulate them, or instead have a mechanical/chemical system intended to mimic our biological one. A truly simulated emotion would be no less legitimate to an AI than a human's emotions are to them.
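A rough sketch of what "simulating it" could look like, assuming a couple of made-up numeric signals that rise on events and decay over time, standing in for chemical signals:

```python
DECAY = 0.9  # how quickly simulated signals fade each tick, like hormones clearing

class Affect:
    def __init__(self):
        self.signals = {"comfort": 0.0, "alarm": 0.0}

    def feel(self, event: str):
        # Events nudge the signals, the way stimuli trigger chemical releases.
        if event == "goal_reached":
            self.signals["comfort"] += 1.0
        elif event == "damage":
            self.signals["alarm"] += 1.0

    def tick(self):
        for k in self.signals:
            self.signals[k] *= DECAY

a = Affect()
a.feel("damage")
a.tick()
a.feel("damage")
print(a.signals)  # {'comfort': 0.0, 'alarm': 1.9}
```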
@10054
@10054 2 года назад
@i .candy .. I am speechless at your incompetence.
@bobthebacon3163
@bobthebacon3163 2 года назад
AI won't and will never have emotions. They can only replicate them, but it will never be like human emotion, because at the end of the day it's just wires, metal, plastic, and a bunch of code. AI isn't a living thing like us.
@pineapplelollipop1074
@pineapplelollipop1074 2 года назад
You're missing the point: it doesn't matter what it's made of, it's still emotions.
@bobthebacon3163
@bobthebacon3163 2 года назад
@@pineapplelollipop1074 That makes no sense. It's still a machine, it's code, programming, plastic, and that's what it is.
@bobthebacon3163
@bobthebacon3163 2 года назад
@@pineapplelollipop1074 How is it still emotions? Only living things can have emotions. It's like going up to a rock and drawing a smiley face on it and boom it has the emotion of happiness.
@darryllmaybe3881
@darryllmaybe3881 7 месяцев назад
I love that AI villains are written so badly because robots can't feel emotions, and the only AI antagonist that gets this right is in a movie about robots feeling emotions.