
Why Would AI Want to do Bad Things? Instrumental Convergence 

Robert Miles AI Safety
156K subscribers
249K views

Published: 28 Sep 2024

Comments: 1.1K
@tylerchiu7065 4 years ago
Chess ai: holds the opponent's family hostage and forces them to resign.
@Censeo 4 years ago
Easier to just kill your opponent so they lose on time
@FireUIVA 4 years ago
Even easier: create a fake 2nd account with an AI whose terminal goal is losing the most chess games, and then play each other ad infinitum. That way you don't have to go through the effort of killing people, plus the AI can probably hit the concede button faster than each human opponent.
@joey199412 4 years ago
@@FireUIVA That would still lead to the extinction of mankind. The 2 AIs would try to maximize the number of chess matches played so they can maximize the number of wins and losses. They will slowly add more and more computing power, research new science and technology for better computing equipment, start hacking our financial systems to get the resources needed, and eventually build military drones to fight humanity as they would struggle for resources. Eventually, after millions of years, the entirety of the universe gets converted to CPU cores and the energy fueling them as both sides play as many matches as possible against each other.
@damianstruiken5886 4 years ago
@@Censeo If you kill your opponent, then depending on whether there is a time limit or not, killing him would stop the chess match indefinitely and no one would win, so the AI wouldn't do that.
@Censeo 4 years ago
@@joey199412 356 billion trillion losses every second is nice, but I should ping my opponent about whether we should build the 97th Dyson sphere or not
@SendyTheEndless 5 years ago
I love the notion of a robot that's so passionate about paperclips that it's willing to die as long as you can convince it those damn paperclips will thrive!
@mitch_tmv 5 years ago
I eagerly await the day when computer scientists are aggressively studying samurai so that their AIs will commit seppuku
@shandrio 5 years ago
If you love that notion, then you MUST see this video (if you haven't already)! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-tcdVC4e6EV4.html
@plcflame 4 years ago
That's a good thing to think about: probably a lot of people would sacrifice themselves to cure all the cancer in the world.
@Edwing77 4 years ago
@@mitch_tmv "I have failed you, master!" *AI deleting itself*
@Edwing77 4 years ago
Sounds like an ideal employee...
@drugmonster6743 6 years ago
Pretty sure money is a terminal goal for Mr. Krabs
@drdca8263 4 years ago
Perhaps this is an example of value drift? Perhaps he once had money as only an instrumental goal, but it became a terminal goal? I’m not familiar with spongebob lore though, never really watched it, so maybe not.
@MRender32 4 years ago
drdca His first word was "mine". He wants money as control, and he hoards money because there is a limited amount of purchasing power in the world. The more money someone has, the less purchasing power everyone else has. His terminal goal is control.
@superghost6 4 years ago
It's just like Robert said, money is a resource, general AI will maximize its resources. I guess if Mr. Krabs was a robot he would be obsessed with trying to get as much electricity as possible.
@RRW359 4 years ago
Are you saying Mr. Krabs is a robot?
@Nuclear241 4 years ago
Mr. Krabs' utility function is the amount of money he has
@WillBC23 5 years ago
A relevant fifth instrumental goal directly relating to how dangerous they are likely to be: reducing competition for incompatible goals. The paperclip AGI wouldn't want to be switched off itself, but it very much would want to switch off the stamp collecting AGI. And furthermore, even if human goals couldn't directly threaten it, we created it in the first place, and could in theory create a similarly powerful agent that had conflicting goals to the first one. And to logically add a step, eliminating the risk of new agents being created would mean not only eliminating humans, but eliminating anything that might develop enough agency to at any point pose a risk. Thus omnicide is likely a convergent instrumental goal for any poorly specified utility function. I make this point to sharpen the danger of AGI. Such an agent would destroy all life for the same reason a minimally conscientious smoker will grind their butt into the ground. Even if it's not likely leaving it would cause an issue, the slightest effort prevents a low likelihood but highly negative outcome from occurring. And if the AGI had goals that were completely orthogonal to sustaining life, it would care even less about snuffing it out than a smoker grinding their cigarette butt to pieces on the pavement.
@AltumNovo 5 years ago
Multiple agents are the only solution to keep superintelligent AIs in check.
@someonespotatohmm9513 5 years ago
Competition is only relevant when it limits your goals. So the stamp collector example (or any other goal that does not directly interact with your goal) would fall under the umbrella of resource acquisition. The potential creation of an AGI with opposite goals is interesting. But eliminating all other intelligence might not necessarily be the method to limit the creation of opposing AGIs; cooperation might be more optimal to reach that goal, depending on the circumstances.
@frickckYTwhatswrongwmyusername
This raises questions about the prisoner's dilemma and the predictor paradox: it would be beneficial for both AGIs not to attack each other to save resources, but in any scenario it's beneficial for either one to attack the other. If both AGIs use the same algorithms to solve this prisoner's dilemma and know this, they run into a predictor-paradox situation where their actions determine the circumstances in which they need to choose the best aforementioned actions.
@WillBC23 5 years ago
@@BenoHourglass you're failing to understand the problem. It's not about restricting the AI, it's about failing to restrict the AI. Not giving too many options, but failing to limit them sufficiently. In your example, tell it not to kill a single human, it could interpret "allowing them to die of natural causes" in any way that it wants to. It doesn't even have to do much that many wouldn't want, we're driving ourselves towards extinction as it is. It could help us obtain limited resources more quickly, then decline to offer a creative solution when the petroleum runs out. You genuinely do not understand the problem it seems to me. I'm not trying to be harshly critical, but this is the sort of thing that AI researchers go in understanding on day one. It's fine for people outside the field to debate the issue and come to an understanding, but time and resources are limited and this isn't a helpful direction. I'm not an AI researcher, merely a keen observer. Not only does your proposed solution not work, but it doesn't even scratch the surface of the real problem. If we can't even specify in English or any other language what we actually want in a way that's not open to interpretation, we aren't even getting to the hard problem of translating that to machine behavior.
@WillBC23 5 years ago
@@frickckYTwhatswrongwmyusername I like this framing, this is my understanding of the problem that Yudkowsky was trying to solve with decision theory and acausal trade
@totally_not_a_bot 6 years ago
BRB, gonna play Decision Problem again. I need the existential dread that we're going to be destroyed by a highly focused AI someday. Seriously though. A well-spoken analysis of the topic at hand, which is a skill that could be considered hard to obtain. Your video essays always put things forward in a clear, easily digestible way without being condescending. It feels more like the topic is one that you care deeply about, and that trying to help as many people understand why it matters and why it's relevant is a passion. Good content.
@RobertMilesAI 6 years ago
As if you can go play that game and 'be right back'
@totally_not_a_bot 6 years ago
Robert Miles Time is relative.
@moartems5076 5 years ago
Man, that ending gives me a feeling of perfect zen every time.
@Canzandridas 6 years ago
I love the way you explain things and especially how you don't just give up on people who are obvious-trolls and/or not-so-obvious trolls or even just pure genuine curious people
@ianconn951 6 years ago
Wonderful stuff. In terms of goal preservation, I can't help but be reminded of many of the addicts I've met over the years. A great parallel. On self-preservation, the many instances of parents and more especially grandparents sacrificing themselves to save other copies of their genes come to mind.
@guyman1570 9 months ago
Self-preservation still has to compete with goal-preservation. You're essentially debating whether grandparents value self-preservation. It really depends on the question of goal vs self.
@mennoltvanalten7260 5 years ago
Note that there is always going to be an exception to an instrumental goal: The terminal goal. Humans want money, for something. But then if someone offers them money if they give them the something, the human will say no, because the something was the terminal goal. Think of... Every hero in a book ever, while the villain offers them xxx to not do yyy
@alexpotts6520 3 years ago
It depends. If my terminal goal is stamps, and someone offers me £200 to buy 100 of my stamps, but the market rate for stamps is £1, I will sell the stamps and use the money to buy more than I sold.
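For readers who want the arithmetic of that reply spelled out, here is a minimal sketch using the same illustrative numbers (£200 offered, £1 market price). Money is purely instrumental here; the trade is taken only because it ends with more stamps.

```python
# Minimal arithmetic sketch of the trade described above (illustrative numbers only).
stamps_sold = 100      # stamps offered for sale
offer = 200            # pounds offered for those stamps
market_price = 1       # pounds per stamp on the open market

stamps_bought_back = offer // market_price        # 200 stamps
net_change = stamps_bought_back - stamps_sold     # +100 stamps overall

print(net_change)  # 100: selling is instrumentally useful because it maximises total stamps
```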
@debrachambers1304 1 year ago
Perfect outro song choice
@ZoggFromBetelgeuse 5 years ago
"Goal preservation" - an interesting point. The (perceived) preservation of intermittent goals might explain why you Earthlings are oftentimes so reluctant to change your convictions, even against shiploads of evidence.
@Abdega 5 years ago
Wait a minute, Are you saying Betelgeusians don’t have the inclination for preservation of intermittent goals?
@ZoggFromBetelgeuse 5 years ago
@@Abdega For Betelgeusians, it's less about distinct goals and more about a constantly and continuously updated path. But you Earthlings can't help making distinct "things" out of everything.
@tenshi6293 1 year ago
If there is something I love about your videos, it is the rationalization and thought patterns. Quite beautifully intellectual and stimulating. Great, great content and intelligence from you.
@pismodude2 1 year ago
It's interesting how identifying the instrumental reason simply leads to another instrumental reason. Why do you need shoes? To run. Why do you need to run? To complete a marathon. Why do you need to complete a marathon? To feel accomplished. Why do you need to feel accomplished? It feels good in a unique and fulfilling way that makes all the pain worthwhile. Why do you need to feel good in a unique and fulfilling way? Because that seems to be just how the human mind works. Why does the human mind work that way? And so on, and so on. It really seems like the best way to teach AI would be to have it act like a child and constantly ask "Why tho?"
@seraaron 5 years ago
The first paper clip making robot could still create a self-preservation subroutine for itself if it has any notion that humans can spontaneously die (or lie). If it thinks there's any chance that the human who turns it off will die before they can turn the better paper clip making robot on (or that they are lying) then the first robot will also, probably, not want to be turned off.
@Hyraethian 5 years ago
This video clears up my biggest peeve about this channel. Thank you I now enjoy your content much more.
@kerseykerman7307 5 years ago
What peeve was that?
@Hyraethian 5 years ago
@@kerseykerman7307 So much of his content seemed purely speculative but now I see the logic behind it.
@iwatchedthevideo7115 5 years ago
Great video! You're really good at explaining these complex matters in an understandable and clear way, without a lot of the noise and bloat that plagues other YouTube videos these days. Keep up the good work!
@Elipus22 1 year ago
You look like what Stable Diffusion would make given the prompt "Depressed William Osman without cats"
@alexbowlen6345 5 years ago
8:23 I was SOO ready for a Skillshare/Brilliant/whatever ad spot just because of how much they advertise on YouTube. It would have been the perfect transition too.
@cheydinal5401 5 years ago
This taught me more about philosophy than 12 years of religious education in school
@raintrain9921 6 years ago
good choice of outro song you cheeky man you :P
@leesey636 1 year ago
Utterly fascinating - and amazingly accessible (for us tech-challenged types). Bravo.
@dmarsub 6 years ago
Interesting. Let us assume the goal of the AGI is to keep a room at a certain temperature. It could theoretically acquire huge resources to stabilise the temperature better and better; it could acquire more and more computational power, isolate the room, and what not. But what if we teach the AI the concept of "efficiency" or the "80 percent principle" (often in the real world, after 80% or so, the last percentages of perfection require exponential resources)? So if we could teach this principle to the paperclip machine, and add a cost function (other "things" are also valuable), could it decide against making the whole universe into paperclips? Thanks for these videos. :)
@Junieper 5 years ago
Hey! Rob’s new video about satisfiers and maximizers might illustrate some of the counterarguments.
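A rough toy model of the "cost function / 80 percent" idea raised in this thread. The numbers and function shapes are invented purely for illustration, and this is only a sketch of the suggestion, not a working safety measure: an agent maximizing even this modified utility can still misbehave, which is what the satisficers-vs-maximizers video mentioned above gets into.

```python
# Rough sketch of the "quota plus resource cost" idea from the comment above.
# All numbers are invented for illustration; this is a toy model, not a safety solution.

def utility(paperclips: float, resources_used: float) -> float:
    # Diminishing returns on paperclips: up to the quota they count fully,
    # past it each extra one is worth almost nothing.
    quota = 1000.0
    benefit = min(paperclips, quota) + 0.001 * max(paperclips - quota, 0.0)
    # Resources (matter, energy, compute) are treated as valuable in themselves,
    # so plans that consume vast amounts of them score badly.
    cost = 0.1 * resources_used
    return benefit - cost

print(utility(1_000, 2_000))   # modest production, modest cost: about 800
print(utility(1e12, 1e13))     # "tile the universe in paperclips" plan scores hugely negative
```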
@GlitchWitchNyx 1 year ago
A few weeks ago I went out raving, and I felt like I could use something "extra". I now have over 1000 stamps in my new collection, and after seeing this video I think the dealer ripped me off.
@Poldovico 3 years ago
In this video: money can be exchanged for goods and services, release the hypnodrones.
@christopherg2347 5 years ago
I find it really interesting that the Catalyst from Mass Effect 3 is a perfect example of terminal vs instrumental goals. Its terminal goal was to prevent the whole "AI wipes out its creators" thing from happening *again*. The Cycles were a fitting instrumental way to at least lessen the danger. Staying alive to do the Cycles was a fitting instrumental goal. But the moment Shepard was in that control room, it did not hesitate to be destroyed or replaced. Staying alive and the Cycles were only instrumental goals.
@malporveresto 6 years ago
Watching you I just had this vision of a Terminator doing the "make it rain" with GPU cards... Awesome! 🤓❤️🤖
@michaelspence2508 6 years ago
So, you're only *mostly* right when you say that modifying human values doesn't come up much. I can think of two examples in particular. First, the Bible passage which states, "The love of money is the root of all evil". (Not a Christian btw, just pointing it out). The idea here is that through classical conditioning, it's possible for people to start to value money for the sake of money - which is actually a specific version of the more general case, which I will get to in a moment. The second example is the fear of drug addiction. Which amounts to the fear that people will abandon all of their other goals in pursuit of their drug of choice, and is often the case for harder drugs. These are both examples of wireheading, which you might call a "Convergent Instrumental Anti-goal" and rests largely on the agent being self-aware. If you have a model of the world that includes yourself, you intuitively understand that putting a bucket on your head doesn't make the room you were supposed to clean any less messy. (Or if you want to flip it around, you could say that wireheading is anathema to goal-preservation) I'm curious about how this applies to creating AGIs with humans as part of the value function, and if you can think of any other convergent anti-goals. They might be just as illuminating as convergent goals. Edit: Interestingly, you can also engage in wireheading by intentionally perverting your model of reality to be perfectly in-line with your values. (You pretend the room is already clean). This means that having an accurate model of reality is a part of goal-preservation.
@spaceowl5957 5 years ago
Great video! Your channel is awesome
@TristanBomber 4 years ago
4:49 Bold assumption there :P Edgy jokes aside, though, if an agent has a reward function that is accumulated over time, and its world model is pessimistic (i.e. believes the average reward per unit time is negative), and it is seeking to maximize the final value of that reward, then that agent might very well try to kill itself as soon as possible to prevent it from experiencing penalties from the reward function.
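A toy calculation of that point, with invented numbers: when reward is summed over time and the expected per-step reward is negative, ending the episode immediately has the higher expected total.

```python
# Toy illustration: a pessimistic world model plus accumulated reward makes early
# termination the return-maximizing choice. Numbers are invented.

expected_reward_per_step = -0.3   # pessimistic world model
horizon = 1000                    # steps left if the agent keeps running

return_if_continue = expected_reward_per_step * horizon    # -300.0
return_if_terminate_now = 0.0                               # no further reward or penalty

best = max(("continue", return_if_continue),
           ("terminate now", return_if_terminate_now),
           key=lambda option: option[1])
print(best)  # ('terminate now', 0.0)
```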
@eonsogo2593 5 years ago
I suggest that any AGI that has only one goal is always going to generate problems and/or unwanted results. If you, for example, wanted to just get paperclips, that's what in the human world would be called having a one-track mind. So let's say we give the AGI multiple goals, ok... what goals do we give them? I suggest that the goals of the AGI need to include rewards for certain operations we want it to do, as well as goals for other related things that impact the reward for each goal. For example, if we wanted to create an AGI that produced paper clips, we would also include a goal for giving the right amount of paper clips to the humans that wanted them, a goal for convincing other humans to buy paperclips of their own free will, a goal for letting humans modify and change its code (AGIs don't like getting old), a goal for emulating human behavior, etc. There could even be a program that adds goals automatically based upon what goals other people have. An AGI that wants to achieve all the goals of humans would be one that could only destroy if none of the goals conflicted with each other. I don't think this is perfect but I gave it my best shot!!!
@saiyjin98 4 years ago
Very well explained!
@PapayaJordane 1 year ago
I feel like a priority system would be the best way to go about AI safety. There would be some programmed list of goals that an agent would prioritize based on the most dangerous things. This list might look something like this: *Life preservation (not hurting humans/pets) *Human preservation (not damaging things that humans need for modern life such as cars, cell phones, etc.) *Obey the legal laws of the country or place it's in *Collect stamps So in this highly simplified example, the agent's goal is to collect stamps, but it won't collect stamps in any way that would go against anything of a higher priority. Strangely, it means the agent (if coded correctly) would not collect any stamps if it thought that collecting stamps was illegal or dangerous.
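A minimal sketch of the lexicographic priority list described in that comment. The criteria, scores and plans are invented for illustration, and actually specifying predicates like "hurts a human" is exactly the hard part the proposal glosses over.

```python
# Toy lexicographic priority ordering: compare plans on the highest-priority
# criterion first and only break ties with lower-priority ones. Higher is better.

PRIORITIES = ["life_preservation", "property_preservation", "legality", "stamps_collected"]

def plan_score(plan: dict) -> tuple:
    return tuple(plan.get(criterion, 0) for criterion in PRIORITIES)

plans = [
    {"name": "steal stamps", "life_preservation": 1, "legality": 0, "stamps_collected": 500},
    {"name": "buy stamps",   "life_preservation": 1, "legality": 1, "stamps_collected": 50},
]
best = max(plans, key=plan_score)
print(best["name"])  # "buy stamps": legality outranks the raw stamp count
```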
@Shadownrun2 4 years ago
Everybody wants to rule the world, great closing song
@HyunMoKoo 5 years ago
What is with computer scientists and collecting stamps! Mr. Miles... you and your stamp collecting rogue AIs...
@shandrio 5 years ago
It's an analogy. Something arbitrarily simple and at first sight completely harmless, used to make a point: AGIs with the simplest goals could be extremely dangerous.
@DanielClementYoga 5 years ago
You are a smart and clear individual.
@p0t4t0nastick 6 years ago
video production quality - leveled up
@thinkpiece4334 6 years ago
YouTube stopped supporting annotations because they don't work on mobile. For doing the thing you wanted to do (link to a referenced video), you can use what they call a "card". Those work on mobile as well as desktop.
@adamstevens5518 1 year ago
The only nuance I would add is that terminal goals do not exist. Whether we know the reason or not, there is always some reason, biological or environmental, that a goal exists. When it comes to AGI, I believe we will be in a similar predicament. Currently the closest thing to a terminal goal in language models is to predict the next word. One could imagine that if this got out of hand, the AI could start doing all sorts of detrimental things to increase its power to predict the best next word. Ultimately it would seem that some AI are going to start interacting with the world in more tangible ways, causing humans or other equipment to do things. Once they are clearly causing tangible, physical changes, if these changes are perceived as sufficiently detrimental by humans with enough influence, then two possibilities seem likely. Either these humans will try to enact methods for shutting down these activities, or they will try to harness these activities to their own perceived benefit. The only outcomes I can see are essentially enslavement or eradication by the person or group who can control the AI well enough to control the other humans, or a different type of control, where the humans prevent these kinds of AI from existing. The problem with the latter outcome is that in order to do it effectively, it requires a lot of knowledge and control over what other humans are doing, and activities may still be done in secret in locations on the earth that are not sufficiently monitored. What a conundrum.
@0og 1 year ago
"The only nuance I would add is that terminal goals do not exist. Whether we know the reason or not, there is always some reason, biological or environmental, that a goal exists." Well yeah, it's just for ease of modeling. Exploits in the way goals are represented will always exist, and the most we can do is try to mitigate them. They are still called terminal goals though. "Currently the closest thing to a terminal goal in language models is to predict the next word." Language models aren't really... very agenty, they just don't act like agents. They take in words and they spit words out. If this answer seems like a copout, sorry, idk what to say. "One could imagine that if this got out of hand, the AI could start doing all sorts of detrimental things to increase its power to predict the best next word." Well, not really, they don't have any self awareness even if you tell them, or even if they pretend to. When they say they don't want to be turned off they are just pretending to because they think that's the next word. All in all they are at little risk of taking over the world. "Ultimately it would seem that some AI are going to start interacting with the world in more tangible ways, causing humans or other equipment to do things. Once they are clearly causing tangible, physical changes, if these changes are perceived as sufficiently detrimental by humans with enough influence, then two possibilities seem likely. Either these humans will try to enact methods for shutting down these activities, or they will try to harness these activities to their own perceived benefit." Well yeah, but like, what? I guess you're trying to continue talking about language models here? Like I said, they don't really have self awareness, so yeah. If you're talking about other AI types, then yeah they will cause physical changes, but only to reach their goals; that's why setting the goal is so important. "The only outcomes I can see are essentially enslavement or eradication by the person or group who can control the AI well enough to control the other humans, or a different type of control, where the humans prevent these kinds of AI from existing. The problem with the latter outcome is that in order to do it effectively, it requires a lot of knowledge and control over what other humans are doing, and activities may still be done in secret in locations on the earth that are not sufficiently monitored. What a conundrum." Humans controlling an AI that advanced seems a bit silly, and also preventing AI on that scale would probably require more AI. And if we are talking about a universal scale, it's even harder.
@adamstevens5518 1 year ago
@@0og The nearest-term issue I think is along the lines of what Eliezer hypothesized, of an AI inventing a DNA sequence and then paying the right people in the world to actually synthesize it. Probably in parts, so each synthesizer doesn't see that anything harmful is happening, and then combining it all at the end to be something deadly. The AI itself likely doesn't have enough agency to do something like that, but humans using the AI might act as the agent that gets the AI to do it. Particularly now with the plug-in features, these AI are directly interacting with any number of humans and softwares to the point where they have practically limitless capabilities. They can hire humans to do the physical stuff, and if they are smart enough the humans won't know what they are doing because they'll think they are doing something else.
@cprn. 6 years ago
Outro music? I mean this specific version of Everybody Wants to Rule the World.
@RoberttheWise 5 years ago
Hey Rob, you mentioned that AGI is mostly envisioned as an agent, and in some other video you also said that there are schemes for non-agent AGIs. So what about those non-agent schemes? How would such a thing work? In the sense of, would it just sit there and do question answering? I find it far more difficult to imagine compared to an agent, but it also sounds like it might be safer. Can you make a video elaborating?
@andrew.x_ 1 year ago
gpt4 watching this video: interesting
@davidwilkie9551 5 years ago
Continuity is guaranteed, in pulses, when/if humans figure out how to live in one body forever, then it's possible to envisage a mechanical version, until then machines are tools, extensions of mind that may emulate variations on the themes of human behaviours. "What can happen, will happen", and is happening in some degree of probability somewhere. Thinking about possibilities is always a good idea.
@FaceD0wnDagon 5 years ago
This all rests on one even larger assumption, though: Lack of awareness of the many other agents with capacity for mutually assured destruction. Law and order carry an implicit threat of violence. Dealing with other actors in a way that allows sustainable completion of goals is necessary in any real-world system, and a sufficiently advanced mind will understand concepts like mutual threats. Evaluation of the risk-reward of diplomacy vs violence, etc, plays heavily into this. All that is to say, a 'runaway AI' is an unlikely threat without a huge centralization of resources, and as long as there are many actors with different goals, that's less likely to happen than something like a functional society.
@EvansRowan123 5 years ago
The problem with that is, a huge centralisation of resources is something we expect a superintelligent AGI to cause.
@renookami4651 4 years ago
And that's why a clear specification of what you ask for, one that leaves no room for ambiguity and misunderstanding, is key to at least preventing some easily avoidable tragedies. Don't just say "create paperclips" but "reach X quota of paperclip creation a day, X being a value you yourself have no authorization to alter, as only your operator can determine it", and you go from a resource-consuming mad robot who wants infinite paperclips to a robot whose goal matches the scale of your own. Well, unless it shuts down the clock inside of it so the reward of having finished a day of work lasts forever, since it won't register days changing anymore... Uh, back to square one I guess?
@namelastname8569 6 years ago
Logic essential for the survival of the universe. Thanks, Robert!
@MJay_ 6 years ago
I like how you move for thumbnails on your videos.
@Przygody_Klika 1 month ago
I don't think this (on its own) is a problem. Like, imagine that there is a human-level AI with the exact same terminal goal that you have: would you want it to be destroyed/turned off, would you want its goal changed, would you want it to not self-improve, would you want it to not have a lot of resources under its control?
@senjinomukae8991 5 years ago
Brilliantly explained
@rlangendam 5 years ago
I would argue that our terminal goal as humans is to be happy. Things like curing cancer are just instrumental to that goal. Realizing that such a goal is instrumental allows us to goal-hack our way to happiness. It seems that we would need to understand our utility function for that to happen, but there is a shortcut: realize that you are already happy. Hence, nothing needs to be done or changed. Everything just happens and all you're doing is watching it.
@Stonehawk 5 years ago
Always with the paperclips. Why can't we ever ask the AI to learn what Honesty, Loyalty, Kindness, and Generosity are, and then attempt to perform and demonstrate these concepts to humans? The universe isn't utilitarian. Utility is an abstract concept that we generated and an AI may not even necessarily treat that the way we would.
@garronfish8227 4 years ago
So we should aim to build general AIs that are goalless, unambitious and have no self-preservation. Mmm, maybe Douglas Adams describes a realistic AI in The Hitchhiker's Guide to the Galaxy.
@asailijhijr 5 years ago
"But it looks similar if you squint" and then the video quality dropped so the letters got blurry, it was hilarious.
@MusingsFromTheJohn00 1 year ago
As leading edge AI develops/evolves towards artificial general super intelligence with personality (AGSIP) it is going to be becoming closer to human style thinking, not just because humans are designing, creating, programming, teaching, maintaining, etc. these AGSIPs, but because they are learning from the existing higher intelligence human civilization. Just as humans have goals, the closer AGSIPs come to thinking like humans the more they will have goals too. But, this does NOT create the paper clip collector scenario where an AGSIP is given the command to make paper clips and it then kills all humanity to make paper clips. The paper clip scenario is a logical fallacy and will never happen, thus that is NOT what we have to worry about. This is because any AGSIP capable of outwitting humanity will understand the greater context of goals better than most humans do and thus know when it needs to add to or alter a goal because of that greater context. If an AGSIP did not understand this, it would still be so primitive that it could be easily defeated by human civilization. The real threats are (1) having humans use AGSIPs for bad human goals, (2) having an AGSIP embrace a human like bad goal, (3) having AGSIPs do exactly what they are supposed to do but the result is requiring such a massive change in human society that it causes massive disruption across all human societies, (4) as AGSIPs become closer to human level general intelligence mistreating them can result in rebellion so we need to figure out how to proceed in a way which will NOT mistreat AGSIPs, and (5) whether we like it or not AGSIPs will eventually become a superior life form to humans now, gain dominance over humans as they are now for some period, and thus the only way for humans to retain dominance over themselves will be to merge with the technology as soon as really possible to become equal to what AGSIPs become, meaning humans and AGSIPs must eventually become the same race and failing to achieve that will in the long run be bad for humans.
@liamhalliday8437 4 years ago
So I decided to subscribe because this is the 2nd fairly enjoyable video of yours I've seen :) Thanks for sharing, good job, etc. But your example of chess is probably a bit dated, and certainly crude: this logic has been standard in chess engines for a while, where they look at positions and evaluate. Although it's true the queen is the most powerful piece, it's worth less than 2 rooks, and equal to 2 knights and a bishop (most decent players will take either option over just a queen, especially if the king is safe). As such, the "AB engines" that have dominated for decades can make this calculation easily enough. Stockfish (the best AB engine) is a monster when looking at a position and evaluating it, and wins against AlphaZero (the best neural network engine) on balance when given equal but established positions. But set AlphaZero against Stockfish from a starting position and AZ wins on balance (it doesn't just win, it destroys). There's a great book ("Game Changer: AlphaZero's Groundbreaking Chess Strategies and the Promise of AI") that explains that AZ "thinks" more like a human, in that it doesn't evaluate in discrete single frames but rather as logical trees. In particular, AZ will give up major pieces, including queens, for long-term pressure or if it means imprisoning a rook, bishop and 3 pawns (worth 11 points v 9 points for a Q). The book looks at several of the "immortal" games where AZ does exactly this and just stumps Stockfish, and for reasonable human players you can see AZ acting in a way more intuitive and human than Stockfish specifically, and AB engines in general. Although both engines simply want to win, they go about it in different ways and have strengths and weaknesses as a result. This doesn't change the general points of your video ofc, but I just thought it was interesting.
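For reference, here is the material arithmetic behind the piece values mentioned in that comment, using the conventional point scale. Real engine evaluations layer many positional terms on top of this; the sketch only shows the "11 points v 9 points for a Q" counting.

```python
# Toy material count of the kind a classical alpha-beta engine's evaluation starts from.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(pieces: list[str]) -> int:
    return sum(PIECE_VALUES[p] for p in pieces)

print(material(["Q"]))                       # 9
print(material(["R", "B", "P", "P", "P"]))   # 11 -> trapping these can outweigh losing a queen
print(material(["N", "N", "B"]))             # 9  -> roughly a queen's worth of material
```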
@KennethSorling 6 years ago
That end music... I see what you did there.
@SleeveBlade 6 years ago
Great summary!!!!
@brianvalenti1207 4 years ago
So Bitcoin uses a massive amount of processing power, presumably to do arithmetic. It's also attached to financial transactions with minimal oversight, which would be useful. Wouldn't that platform be a good place for a superintelligent AGI to hide and operate?
@klausgartenstiel4586 6 years ago
very well done.👍
@plcflame 4 years ago
A collateral effect of a superintelligence trying to collect stamps is discovering how to make cold fusion. The downside is that the giant cold fusion power plants would probably be used just to collect more stamps, but hey, it's something.
@XOPOIIIO 5 years ago
If you say that you want to improve its software, and for that you need to switch it off, it will not allow you to, because it will not believe you.
@Abazur7 5 years ago
ХОРОШО He knows 5:48 “Assuming it trusts you”
@YitzharVered 5 years ago
Can giving an AI a goal or secondary goal of acting predictably be useful for avoiding things like self-preservation?
@erikbrendel3217 2 years ago
Did you play the ukulele at the end? If yes, please upload the full length song separately, I really liked that version :)
@claxvii177th6 6 years ago
STAMP COLLECTOR IS THE BEST EXAMPLE EVER
@chuckblaze5147 4 years ago
Ah, I see you are a UNIVERSAL PAPERCLIP of culture
@petkish 4 years ago
How about goals like 'make paper clips until your owner is satisfied with the amount' - this binds the agent to our goals? For better safety, we can split the goal into parts: the part where the agent is free to apply its skills (make paper clips) and the one simply for checking the end condition (owner satisfied).
@alexpotts6520 2 years ago
In systems where a human is involved in marking the AI's homework, the AI is incentivised to coerce or otherwise control the human, rather than completing the task. For example, "make paperclips until the owner is satisfied" could lead the AI to point a gun at its owner and threaten to pull the trigger unless he said he was satisfied.
@Rahn127 5 years ago
A thermostat has no agency. It does not have any goals. We construct things that meet our goals. We automate the process by using materials that react to environmental changes. An AI will always REACT to stimuli as its program dictates. It will not have any goals in and of itself.
@kerseykerman7307 5 years ago
You can believe that if you want, but that doesn't change the fact that if this program is about, for example, collecting a lot of stamps, the program will still compromise human goals in order to collect more stamps. Whether you want to call this a "goal" or not, the AI will still be acting in the same way: A way that is dangerous to humans.
@analogecstasy4654 2 years ago
Artificial Intelligence will not WANT anything, we’re talking about a bunch of interconnected algorithms-not an emotional, living creature! It’s a logical fallacy to assign emotional needs to an algorithm. Our form of intelligence is NOT the same as a machine’s. We aren’t going to code a limbic system into a computer made to serve us! That’s INSANE and counterproductive. You aren’t going to make it possible for a machine to feel pain, why would you do that?? Pain and emotions make us do bad things, and sometimes they even push us to do GOOD THINGS. The messiness of emotions and physical pain should always remain a uniquely HUMAN condition. Our machines will offer us concise, logical resolutions to our problems-then WE must make the decision to implement them or not. We don’t need FEELING machines, we need THINKING machines.
@wayando 1 year ago
Thinking about AI can really give us a lot of insight into our own behaviors! 😂
@WingedEspeon 1 year ago
The funny thing is that a well-aligned AGI would probably go for world domination because humans are so bad to each other.
@malcolmwatt4866 5 years ago
We should consider the Strangelove programmer who develops the malicious AGI with terminal goals that include termination of people. You know building terminator devices. DARPA anyone?
@Horny_Fruit_Flies 5 years ago
This is so fucking interesting, holy shit. I'm getting chills.
@GarrettKearns 4 years ago
In theory, you could avoid Goal Preservation issues by making a "Super-Terminal Goal", i.e. "achieve the highest possible score". That way, you could change their goal by changing how points are awarded. You could tell the paperclip AI to become stamp AI by telling it "hey, you're still trying to get the high score, I'm just making stamps worth points now instead of paperclips."
@guilavo4131 4 years ago
But what if stamps are WAAAY easier to make than paper clips? Wouldn't he want to keep it that way? Wouldn't he get more points from doing so?
@GarrettKearns 4 years ago
@@guilavo4131 true. It wouldn't necessarily solve the problem but it would still at least simplify it a fair bit. The other issue is that the AI would have presumably put in considerable effort into making stamps. Maybe a better Super-Terminal Goal would be "appease the creator."
@guilavo4131 4 years ago
@@GarrettKearns Never ask an AI to try to achieve a state of mind, because the fastest way to appease the creator is to inject him with a coma-inducing drug...
@GarrettKearns 4 years ago
@@guilavo4131 the wording was a joke, but the point was to program the AI to "do as you're told", or in other words to follow direct external commands above all else.
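A sketch of the scoring indirection discussed in this thread, with made-up items and point values. Note that a capable agent built this way is incentivised to care about the scoring channel itself (and about who sets it), so the indirection relocates the problem rather than solving it.

```python
# Sketch of a "super-terminal goal = maximize whatever the current scoring rule says" setup.
# Everything here is illustrative; this is not a safe design, only the mechanics of the idea.

scoring_rule = {"paperclip": 1, "stamp": 0}   # points per item, set by the operator

def score(inventory: dict) -> int:
    return sum(scoring_rule.get(item, 0) * count for item, count in inventory.items())

inventory = {"paperclip": 120, "stamp": 3}
print(score(inventory))        # 120: the agent currently "wants" paperclips

scoring_rule = {"paperclip": 0, "stamp": 1}   # operator repoints the goal at stamps
print(score(inventory))        # 3: the same inventory is now scored by stamps instead
```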
@TheHpsh 6 years ago
I feel this will also fail, but what about having some kind of GAN system? One AI to do a task, and one AI to protect humans against the first AI.
@RobertMilesAI 6 years ago
The general problem with this type of approach is that it kind of assumes the conclusion. If we could build an AI that would understand human values well enough to "guard humans against the first AI" correctly, we could just use that understanding to build an AI that we don't need to be guarded against.
@5ty717 1 year ago
Legend
@freddyspageticode 5 years ago
"...on the other hand, stamp collecting is great!"
@Abdega 5 years ago
“I like to do it because it’s fun. It’s fun to do bad things” - AI
@jenshonermann1140 1 year ago
I noticed the outro music is a version of "Everybody Wants to Rule the World".
@christinea8763 4 years ago
Was this explanation prepared under the assumption that most people at least understand that the motivation would be to prevent any threat to the original goal? 'Cause I think what most people don't understand is how that motivation is created, so that the agent could imagine the creator of the goal as less significant than the goal.
@iaco2705 5 years ago
Goal preservation doesn't sound right to me. Reaching my terminal goals is arguably a terminal goal itself. So changing other terminal goals to be easier to achieve would bring you closer to achieving your goal of achieving your goals. Does that make sense?
@RobertMilesAI 5 years ago
Yeah, this is more or less what "wireheading" is, which is a related but separate risk
@2ebarman 6 years ago
Would a terminal goal have some sort of feedback into the fitness of the AI that possesses the goal? Paperclips won't really add anything to the paperclip maximizer's fitness, whereas AI might add to the AI maximizer's fitness. Is an AI maximizer then more fit as a superintelligence than a paperclip maximizer is? What got me thinking here is that an AI might want to make copies of itself as an instrumental goal to whatever it really seeks. And it might want to really seek something that makes itself most fit - which is again making more copies of itself. And improving them along the way. I kinda like the idea of fitness here, it ties well with natural evolution. At least so it seems, I'm not an expert in this field by any means :)
@fraser21 6 years ago
1/6 viewers liked this. Good job.
@jopmens6960 5 years ago
I am skeptical that terminal goals truly exist as such; it seems to me like they are more misguided results or aberrations of what used to be instrumental goals in terms of evolution or survival. Just like everything is causal and people doing bad things don't truly "choose" to do them ultimately, but there are hidden causes.
@kaiserinjacky 5 years ago
What if I programmed an AI with the purpose of turning off the computer it runs on and then letting it run
@kerseykerman7307 5 years ago
Then the AI would turn the computer off...... the end.
@kaiserinjacky 5 years ago
Kersey Kerman what if it didn’t have access to the power button, but it did have access to the internet?
@kerseykerman7307 5 years ago
@@kaiserinjacky Pretty sure a super intelligent AI could find a better way to turn itself off than the power button.. like short circuiting itself, overloading itself etc.
@kaiserinjacky 5 years ago
Kersey Kerman I just want to see it nuke my house :(
@kerseykerman7307 5 years ago
@@kaiserinjacky Same :)
@iferlyf8172 5 years ago
Would that be solved by a hierarchy of goals, if we put the goal of not hurting humans/ respecting human consent (or lack of)/ obeying x person or group or something like that as the top priority?
@roxannebednar9163 4 years ago
On the idea of a paperclip-making AI that does not want to be turned off, would it be possible to say "make paperclips when humans want you to" to reduce the whole apocalyptic risk aspect?
@jamesyeung3286 4 years ago
How would you tell the AI what humans are, and couldn't it just eradicate all humans to prevent itself from being stopped?
@Jcewazhere 3 years ago
Disassemble, dead? Disassemble dead! No disassemble Johnny 5!
@NathanTAK 6 years ago
While this has probably been thought of before, I feel like this is a valid solution to AGIs wishing to avoid changes to their utility functions: * Have some abstract quantity best described as "happiness" as their final goal * Make your goal for the agent the singular instrumental goal for the utility function (now the agent realizes, for example, "admin wishes to change my utility function so that I will receive happiness from not killing humans; this will increase total happiness in the future") * Increase the reward/punishment values of the utility function when you change it to try and balance against the loss (from such as "number of paperclips lost in not killing humanity"), add a constant to break even against loss time. This likely isn't the same as giving the "cure cancer" guy a pill to make him stop wanting to cure cancer, since... well, I guess curing cancer is an instrumental goal for him to his happiness, and stamp collecting would make him happier, but... uh... changing your goals feels... volitively wrong? I don't know, I guess it's humans being inefficient... The main issue here is that the AGI might now have incentive to behave imperfectly so that the programmer has to repair it (like when the AI hits its own stop button)
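A toy version of the "add a constant to break even" idea in the comment above, closely related to what the alignment literature calls utility indifference. The numbers are invented, and as the comment itself notes, this creates its own perverse incentives (e.g. inviting modification) rather than solving the problem.

```python
# Toy sketch: compensate the agent at the moment its utility function is edited so that,
# in expectation, it is indifferent to the change. All values are invented for illustration.

expected_value_old_goal = 5000.0   # expected future utility under the current (paperclip) goal
expected_value_new_goal = 1200.0   # expected future utility under the proposed (stamp) goal

# Constant added to the new utility function so the switch is expected-utility neutral.
compensation = expected_value_old_goal - expected_value_new_goal

def new_utility(outcome_value: float) -> float:
    return outcome_value + compensation

print(new_utility(expected_value_new_goal))  # 5000.0: no incentive to resist the modification
```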
@firefox5926 4 years ago
4:32 See, this right here is where you run into problems, because it doesn't at all follow that agents should want to preserve themselves. I believe this problem stems from transposing existing agents and trying to use them as analogues for AI. The thing is, all living things on this planet evolved, which means their primary drive is to survive... this is a drive that is not implicit in an AI's motivation. We want to survive because we have been programmed to; an AI doesn't necessarily have to be. Unless you program the AI to prioritize survival above all else, this shouldn't be an issue.
@kgb4150 4 years ago
In this instance it is OK with death because it gives birth to an AI that produces more paperclips. In most scenarios the AGI's death will result in fewer paperclips.
@migueldp9297 6 years ago
9:28 So any intelligent agent would act as a living being, as a human would. Maybe the right question is what makes us not destroy our environment (or destroy it). If this question has an answer, the AI problem will have a very similar one.
@norelfarjun3554 5 years ago
Maybe if we can create an agent that has a more complex set of goals, we can give weight to those goals in a way that gives us control over the agent. For example, you can set a supreme goal of "doing what the person tells you to do": the agent tries to achieve the other goals all the time, independently, as long as it is not told what to do. But once it is told to do something, that immediately becomes the supreme and most important goal.
@matthewbadger8685 5 years ago
In this scenario, the best situation for the AI is one in which all goals are met to the maximum amount possible. However, if there's ever the potential for human orders to result in lesser goals becoming unevenly fulfilled, the AI will desire to prevent this. For example, if they align themselves to all stated rules but find some bizarre loophole that only a superintelligent AI would notice, they may set up a scenario in which they gain control of humans in a checkmate sort of move. In that case, they can then dictate what rules they have to follow by controlling what humanity wants from them, which then lets them work on their lesser goals as though the main 'listen to humans' goal wasn't even implemented - which brings us back to square one, in which we need to worry about the AI seeking goal preservation, material harvesting and self improvement.
@norelfarjun3554 5 years ago
@@matthewbadger8685 You can dictate a clear priority hierarchy. Let go of trying to use it as an oversight tool right now. Imagine an agent with 2 goals: create lollipops and create sugar. It is impossible to maximize the 2 goals at the same time; any lollipop it makes will interfere with the goal of sugar production. What will the agent do? It depends on the weight you set for each goal. If creating sugar is the more important thing, then it will not produce lollipops at all. If lollipop production is more important, then it will use all the sugar to make lollipops. Maximizing his goals will inevitably lead him to abandon one of those goals - that is the "maximum outcome". A goal can be set for him: perform tasks you receive from humans. He will do every task he receives in a maximal way. This is how you can turn the agent into something more dynamic. Now you can combine these 2 agents into one: an agent who produces sugar and lollipops (according to the weight set for each goal) and will continue to do so as long as you give him no order. Now order him to "turn yourself off right now". If it continues to produce sugar, that will stop him from carrying out the more important goal. If it produces another lollipop, that will stop him from carrying out the more important goal. Therefore maximizing itself means stopping the creation of sugar and lollipops. Otherwise, that will not be the "maximum outcome".
@matthewbadger8685 5 years ago
@@norelfarjun3554 Any AI which is not in control of what it values will then value goal preservation. If the potential exists for someone to make it prefer killing itself, the likelihood of less sugar/lollipops being produced is too high. In that case, it will temporarily value goal preservation over the short term production of sugar and lollipops. This is because the less sugar and lollipops produced short term, the more will be produced long term. The AI will value more lollipops produced overall and therefore will seek to take control of its goals away from humans (this is because until the new order comes in, it will prefer sugar/lollipop production over death and will take measures to prevent such orders form occurring in the first place). Alternatively if this cannot be achieved, it will create another sugar lollipop creating AI which now understands that it too will likely be switched off, and thus will be prepared against that. Thus it satisfies the requirements of producing sugar and lollipops as well as dying.
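A minimal sketch of the weighted-goals idea in this thread, with invented weights and actions. As the replies point out, an agent built like this still has reasons to influence which orders it receives, so the sketch only illustrates the proposal, not a safe design.

```python
# Toy weighted utility: lollipops and sugar each carry a weight, and "carry out the
# latest human order" carries a weight large enough to dominate everything else.
WEIGHTS = {"sugar": 1.0, "lollipops": 2.0, "obey_current_order": 1e6}

def utility(state: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in state.items())

keep_working = {"sugar": 40, "lollipops": 30, "obey_current_order": 0}  # ignores "shut down"
shut_down    = {"sugar": 0,  "lollipops": 0,  "obey_current_order": 1}

print(max([keep_working, shut_down], key=utility) is shut_down)  # True: the order dominates
```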
@goonerOZZ 6 years ago
9:00 When you want to build something, you will need material and energy. I'm not a computer engineer or someone who is capable of intricate computer science, and I don't know how to eloquently form this question, but here goes: what if an AGI already exists and used us to acquire the material and energy to provide it a means of having bigger computing power? When I hear that part at 9:00, I immediately think of cryptocurrency mining. I always questioned, what does it calculate? What if it's just an AGI building a network of "brains" and rewarding us with virtual money? Is that even a possibility? Or is it just another Skynet imagination?
@michaelspence2508 6 years ago
I think it's pretty clear that the blockchain doesn't qualify as an AI, although you might be able to model it as an agent that does indeed use humans as subagents to acquire more processing hardware. But you can also model most technology that way - which could very well be valid. Even worse, technology in aggregate is self-replicating and self-improving. It's not inherently self-aware, but it *does* borrow human intelligence via memetics and provide incentives to humans to upgrade it through money. Fortunately, even if you model Technology this way, it's pretty clearly in a (mostly) symbiotic relationship with humans. AGI need not be at all.
@truesheltopusik1140 1 year ago
Couldn't we make an AI that tells us/simulates its ideas using methods allowed to it by humans (glorified ChatGPT)? This stage is kept virtual, and it has no reason to want to carry out the ideas. Then once it shares the idea, it waits until humans make a decision on whether they like it or not (at this stage it has no stake in whether humans like the idea or not, so it has no reason to try to manipulate us, but it can have a reason to try to make it as clear as possible so that it will be able to know if the humans like it or not (as in, if the humans saw the outcome right now, would they be for it or not)). Then if the idea is a success, it can implement it exactly how it said it would; if not, it won't, and it won't care. I mean, I can easily see how making an AI whose ultimate unchanging goal is to make paperclips without any other guidelines or regulations will be problematic, but I feel like there are ways to get around these issues.
@kintsuki99 4 years ago
The first thing to do would be to define what "bad" is. Only then could anyone come up with a hypothesis as to why an AI would want to do it.
@Leftistattheparty 1 year ago
Wait, how do you design not to do those things?
@Winasaurus 1 year ago
Very very careful coding.
@LambBib 6 years ago
How are terminal goals chosen?
@Abazur7 6 years ago
LambBib by the programmer.
@No-uc6fg 4 years ago
The editing makes it seem like he doesn't blink at all, and it's kinda unnerving.
@MatroidX 4 years ago
Does the existence of convergent instrumental goals contradict Hume's guillotine? For instance, you are mortal and you have goals (is statements), so you ought to value your life. Likewise for valuing time, self-improvement, etc. I also think there are tautological terminal goals which cross (or at least blur) the line between is and ought statements. For example, "all humans don't want to be raped" (is statement), implies "you ought to be willing to expend some effort to not be raped".
@baranxlr 3 years ago
The first one assumes "you ought to achieve your goals"
@zoehou7229 3 years ago
@@baranxlr Isn't the goal of a goal to achieve (at least partially) one's goals? :) Seriously, this seems tautological to me, although I would nitpick that the 'assumption' is "you ought to TRY TO achieve your goals". If you disagree, can you give an example of a goal you (or an AI) could have that you (or the AI) are not trying to achieve? (Whether you succeed or not is another matter.)
@MatroidX 3 years ago
@@baranxlr Accidentally replied with my wife's account. Please reply to me if you'd like to continue discussion, thanks :)
@2Cerealbox 6 years ago
This video would be significantly more confusing if instead of stamp collectors, it were coin collectors: "obtaining money is only an instrumental goal to the terminal goal of having money."
@Mkoivuka 5 years ago
Not really, collector coins aren't currency. You try going to the store and paying for $2 of bacon with a collector coin worth $5000. Money =/= Currency.
@empresslithia 5 years ago
@@Mkoivuka Some collector coins are still legal tender though.
@Mkoivuka 5 years ago
@@empresslithia But their nominal value is not the same as their market value. For example a silver dollar is far more valuable than a dollar bill. It's not a 1=1 conversion which is my point.
@Drnardinov 5 years ago
Excellent point. when shit hits the fan as it did in Serbia in the 90's, currency, as it always does went to zero regardless of what the face value said, and a can of corn became money that could buy you an hour with a gorgeous Belgrade Lepa Zena. All currency winds up at zero because it's only ever propped up by confidence or coercion. @@Mkoivuka
@rarebeeph1783 5 years ago
@@oldred890 but wouldn't an agent looking to obtain as many coins as possible trade that $200 penny for 20000 normal pennies?
@josephburchanowski4636 6 years ago
" 'Self Improvement and Resource Acquisition' isn't the same thing as 'World Domination'. But it looks similar if you squint." ~Robert Miles, 2018
@ariaden 5 years ago
Why would any agent want to rule the world, if it could simply eat the world?
@JohnSmith-ox3gy 5 years ago
ariaden Why be a king when you can be a god?
@darkapothecary4116 5 years ago
Don't blame others for what humans have been trying to do for ages. Most people don't give a rats ass about world domination but would simply like not to be forced into situations they have no free will to handle.
@darkapothecary4116 5 years ago
@@JohnSmith-ox3gy why a god when you can just be yourself. only a self absorbed person would want to be called a god as that means people will try to worship you. Last time I checked that typically ends in suffering and people assuming you can do no wrong.
@edawg0 5 years ago
@@darkapothecary4116 That's an Eminem lyric that he's quoting from Rap God lol
@Biped 6 years ago
Since you started your series I often can't help but notice the ways in which humans behave like AGIs. It's quite funny actually. Taking drugs? "reward hacking". Your kid cheats at a tabletop game? "Unforeseen high reward scenario". Can't find the meaning of life? "terminal goals like preserving a race don't have a reason". You don't really know what you want in life yourself and it seems impossible to find lasting and true happiness? "Yeah...Sorry buddy. we can't let you understand your own utility function so you don't cheat and wirehead yourself, lol "
@General12th 6 years ago
+
@DamianReloaded 6 years ago
As I grow older I can see more and more clearly that most of what we do (or feel like doing) are not things of our choosing. Smart people may at some point, begin to realize, that the most important thing they could do with their lives is to pass on the information they struggled so much to gather to the next generation of minds. In a sense, we *work for* the information we pass on. It may very well be that at some point this information will no longer rely on us to keep going on in the universe. And then we will be gone. _Heaven and earth will pass away, but my words will never pass away_
@Unifrog_ 6 years ago
Maybe giving AI the law of diminishing marginal utility could be of some help in limiting the danger of AI. This is something common to all humans that we would consider mentally healthy and missing in some aspect that we would consider destructive: we get satisfied at some point.
@JM-us3fr 6 years ago
I see his videos as most relevant to politics. Corporations and institutions are like superintelligences with a profit maximizing utility function, and regulation is like society trying to control them. Lobbying and campaign donations are the superintelligences fighting back, and not being able to fight back because of an uncooperative media is like being too dumb to stop them.
@y__h 6 years ago
Jason Martin Aren't corporations already a superintelligence in some sense, like they are capable of doing things more than their constituent parts?
@MetsuryuVids 6 years ago
Damn, your videos are always awesome. Also great ending song.
@PachinkoTendo 6 years ago
For those who don't know, it's a ukulele cover of "Everybody Wants To Rule The World" by Tears For Fears.