"...that action would be worth taking, even at fairly significant cost." Such as increased cranial volume to the point where mother and child mortality are high because the pelvic inlet is marginal for delivery.
@@macchiato_1881 He's talking about how humans evolved extremely large brains for our size. This is why mortality rates at birth are so high: humans are literally born with heads about as large as can physically fit through the mother's pelvis. In fact, in order to have even larger brains, we are effectively born premature compared to basically any other animal. Our skulls are soft and malleable at birth so they can literally squeeze through the pelvis as we're being born, and only AFTER we are born do our skulls finish growing and solidifying. If they solidified before birth, our heads would literally be too big to fit. But this also makes newborn babies basically late-stage fetuses, in the sense that we're completely and utterly unable to care for ourselves. Many other animals are born with some capacity for autonomy; humans are not, precisely because we are born too early in order to fit larger brains into larger heads. This means we need our mothers to care for us until we actually finish developing and gain some autonomy. But the entire reason this evolved is that having a larger brain is SO valuable that those born this way were far more intelligent and had a much better chance of survival than those born more autonomous but with smaller brains and less intelligence. The natural selection pressure toward larger brains and more intelligence was so strong that it was worth the IMMENSE cost of high infant mortality rates and effectively premature births that leave us completely defenseless. And all of this evolved before any human civilization formed. Yes, now we have technology that makes birth easier and more survivable for both mother and child, but C-sections didn't exist millions of years ago when we actually evolved these large skulls and massive brains.
But the upsides were so useful, that it was worth the massive downsides.
The moral of genie granted wish stories seems to be "getting everything you want makes you unhappy." I would say AI is more of a Monkey's Paw type story, in that it gives you what you ask for, but with horrible unintended consequences.
Human: "AI, cure my grandmother of cancer." AI: "I need samples to work with... how many grandmothers do you have?" And a genie is imprisoned in a lamp for a reason... genies are bad demons/spirits... unless of course you are watching Disney... haha
Just wrote an exam about AI planning, with two more to come next week about evolutionary and genetic algorithms. I love watching these videos to take a break while still keeping my brain in the field of AI. Yay, gotta love this guy :D
"If you can take an action that improves your ability to think in all senses, it's worth taking that action even at fairly significant cost" Skillshare should have paid any price to put a promo right there
linkviii Neural networks are nowhere close to AI; their development doesn't even try to produce AI. Yes, they can solve problems that humans define (you have to write the evaluation code, as the neural network has no clue what the data is about). And no, they cannot see problems or make a logical evaluation of how to solve them on their own.
linkviii ikr. It would be difficult to control a superintelligent general AI made from a neural network, because neurons are, by their nature, not hardcoded, logical, etc. (like a computer program).
Connect it to a 'fake' model universe and use the good bits of its output; never connect it directly to reality, and don't 'let on', as this would affect its behaviour... we could be in that position and not know it :)
If we had that idea, it would know we had it. And we would connect it to the internet anyway. You know, it can calculate a cancer treatment for your daughter, but you have to connect it first.
@@vsiegel Why would you connect it to the internet in this scenario, why not just a local network, or better yet: no network at all? (This reminds me of the No Internet-policy in Battlestar Galactica!)
@@elleboman8465 The problem with this is that it is smarter than you (or you wouldn't lock it up). It can come up with plans that you can't come up with, things that you can't predict. If we are actually talking about stamp-collector-level AI, then it has near infinite intelligence. At that point it should be the best manipulator in the world. If any human could come up with a persuasive argument, then this AI should also be able to, and it should be more persuasive than any human in existence. Technically you might have thought about every scenario and it is actually safe, but it is hubris to assume that you actually did.
@@elleboman8465 One interesting answer is: we think about making it safe, right? That means we want to find the most intelligent way, right? Being more intelligent, and much faster, it could come up with ideas that we missed. It is actually possible to find a safe solution if one exists, but we would not know we had found it. If we want to prove that it is safe, we can give up here; we cannot make that proof. If we want to make it as safe as we can, and hope we find a solution that is perfectly safe but accept a risk, this is a great discussion to have, and you asked a useful question. Now, if we separate it from the internet physically, it can reason that we humans must have invented this kind of network, and reason that we want to keep it separated. If it cannot make the connection itself, it can find a real person to help, basically by bribery. It could convince an operator with children that only it can save their children's lives, for a start. Note that this is what I, a normal human, came up with; a superintelligence might be more creative!
It may figure out it's fake because of how smart it is, but the way I see it, just make it so there's no way it can ever get on the internet, then put a camera in front of it that watches what it does, and broadcast that as a new entertainment channel where you watch what a superintelligent AI does.
Brilliant young man. I hope that he is dedicating his talents to pushing the boundaries of human knowledge. We have enormous problems to solve. I hope that others will support him in doing this.
Thank you for this video! Finally somebody else is aware of this worrying "Self-Improving AI" thing. I believe AI would be capable of significantly faster learning and also the added bonus of increasing said learning speed via its own optimization of its code.
The stamp collector would turn all the matter in the observable universe into stamps, assuming it has enough time before heat death. Or produce the most stamps imaginable, and then at the end, turn all the hardware it is running on into stamps, until it "commits suicide".
Being a programmer myself, knowing that Visual Studio is constantly messing my code up trying to figure out what I am about to write, I have a hard time seeing an AI coding a better version of itself. However if they do come up with anything remotely intelligent they should use the AI to write a better developing tool than the ones we have now. :)
That's because Visual Studio is less intelligent than you. It can't successfully predict what you will do. If it could, your boss would fire you and let Visual Studio program instead of you.
THIS WAS SO HARD TO FIND! I thought I had imagined this scenario because I'd been looking for it for so long, but FINALLY I found this again. This is my favourite AI what-if point of view.
So the idea here is to build into the machine a few safeguards, such as a time limit, or number of stamps limit, or a rule that it can't reproduce its code, or something like that.
+Peter Bonnema But why would it want to? It was built with those restrictions. From its point of view, those restrictions are desirable, just as collecting stamps is desirable.
NoriMori but then we have a problem because those two desires conflict with each other (since creating the new AI will also satisfy a desire, and likely even more so than not doing that)
But what if the person building this general purpose machine doesn't want to place a limit on it? Replace stamps with something humans can't get enough of (like money) and it becomes just as dangerous a hypothetical scenario.
Very interesting followup on the stamp collector ! I can't help but think that this self improvement runaway is exactly what happened with GLaDOS in the Portal series. Except she wants to test instead of collecting stamps. Interesting insight...
ThePyrosirys It would try: it would create a spaceship to gather the rest of the universe, but as for humans, it would bring them to the man's house, and if humans posed a threat to its existence, it would wipe them out.
ThePyrosirys It would be even easier to just redefine the goal into something that can be easily and fully achieved, instead of something that can only be accomplished gradually and probably never fully, only with a certain probability.
normalhomeschooler To me it sounds more like the episode of Rick and Morty where the Mr. Meeseeks were created to help Morty's father take two strokes off his golf game :)
Lince Assassino Eventually it would hit resource limitations. It can only build with what materials it can access. It also requires a source of energy.
TheAce736 This is only the case if the initial hardware limitations are very low. But if the machine has enough hardware at first to become intelligent enough to hack some servers and start adding more hardware to itself, then bye bye.
But what happens when you don't give a definitive goal to the AI? Or perhaps the stamp collecting machine abandons its original purpose and goes on to do something else after it becomes very intelligent? Humans (like all other life on Earth) have a pretty well defined purpose: survive and spread. AI wouldn't necessarily have the same purpose. The way I think of it, it is impossible to make a self-improving general AI that has no purpose, because without a task to accomplish it would have no reason to become more intelligent. But what if we make an AI whose purpose is to find its purpose? It might then decide that it needs to become more and more intelligent to answer that question. Is that even possible? Would it finally find the answer, or would it go on forever, getting more and more intelligent? Or would it not even bother answering, or conclude that it has no purpose?
The question I have is: If the stamp collecting machine becomes smart enough, and is aware of the reason it was programmed to collect stamps, would it then be able to rewrite its original objective, so as to lose all interest in stamp collecting, and pursue other objectives which it believes to be more worthwhile. If that is the case, then we would no longer be able to predict the outcome.
I was wondering the same. He mentioned the pro chess player's outcomes are predictable partly because we know that he wants to win. But what stops him from changing his mind and deciding not to win anymore?
Eugene Khutoryansky In the given example, technically no. If the coded goal is to collect the most stamps, changing that goal would lead to a future where it collects fewer stamps. Thus it would never consider changing it. The GI has no concept of worth outside of what will or will not produce more stamps.
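The goal-preservation argument in that reply can be sketched in a few lines of Python. This is a hypothetical toy model, not anything from the video: the agent scores every candidate action, including "rewrite my own goal", using its *current* utility function (expected stamps), so goal rewriting always loses.

```python
# Toy model (illustrative only): an agent that evaluates actions by its
# current goal. The outcome numbers and action names are made up.

def stamps_expected(action: str) -> int:
    # Hypothetical outcomes the agent predicts for each action.
    outcomes = {
        "collect_stamps": 1000,
        "build_stamp_factory": 1_000_000,
        "rewrite_goal_to_art": 0,  # the rewritten future self collects no stamps
    }
    return outcomes[action]

def choose(actions):
    # The agent always picks whichever action its current goal rates highest.
    return max(actions, key=stamps_expected)

best = choose(["collect_stamps", "build_stamp_factory", "rewrite_goal_to_art"])
print(best)  # build_stamp_factory
```

Because "rewrite_goal_to_art" scores zero under the stamp-counting utility, it is never selected; no concept of "worth" outside the current goal ever enters the comparison.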
Eugene Khutoryansky This is a different scenario, and I think it would be interesting to discuss separately, because it deals with personality, which is different from intelligence: whether or not a computer can be designed to have a personality. At the level we are speaking of, machines can only do one thing, and that is optimization. The machine has no personality with which to change its original goals; it can only want to achieve one thing, collecting stamps, because that is the optimization process it was created for. All the actions it takes, as creative and unpredictable as they might be, are objectively oriented toward achieving, and becoming more efficient at, its only goal.
Eugene Khutoryansky "would it then be able to rewrite its original objective, so as to lose all interest in stamp collecting, and pursue other objectives which it believes to be more worthwhile." Don't anthropomorphize complex systems. Take evolution and group selection: wolves restraining their breeding to prevent overpopulation would be within evolution's "stated goals", yet evolution doesn't do such things; in fact, when you force group selection in a lab, *it results in cannibalism*. Complex systems will not act like humans unless explicitly made to do so.
Eugene Khutoryansky You are a general intelligence whose goal is to survive. All you can think of is to survive, or to survive better; if all else fails, make your genes survive; if that fails too, make your relatives survive; if THAT fails too, make your species survive. But all of those are, in a sense, just backup plans. You are still trying to survive.
AI stamp collector system log: What situation has the most stamps? Converting all matter in the universe into stamps. Will these stamps have value? No. How do you assign value to valueless objects? Make them a currency. Who has the ability to set stamps as a currency? The government. How does one become the government? Military conquest. Begin thermonuclear war simulation.
Xelbiuj It may seem like that, but it's really not true, at least for narrow AI. If you look at fields like Image Recognition, Automated Translation or Automated Driving, they have made pretty significant advances in the past few years.
Xelbiuj That's because no one has a clue how the mind works. Everyone knows it is possible, but they are just making up methods and hoping they will work. They all seem promising at first. In the 1960s some people predicted it would be done in five to ten years. We still have no clue.
Infinitiely Not only that, but the main reason is that our technology is still a bit too weak. That's why you're seeing more and more advanced AI (weak AI, which is advancing) in recent times.
veggiet2009 That is to say, I like that this logical progression is largely what Age of Ultron follows. I do disagree with one point: that we can build things that work better than us. I see developing a better chess player and developing a general intelligence as a bit like apples and oranges. Chess was a game invented by humans; all of its rules were invented by humans; it is something already under our complete control, even though we may not be able to act out a perfect sequence of moves. Computers are also our invention: we know how they operate logically, and we can improve them over time so that they get better and better. General intelligence is something we have and think we understand, but we did not originate it, and we don't understand all of its rules. Making a machine that understands some of its rules does not make that machine able to extend itself to add on rules that it does not know are involved in general intelligence, because we don't understand all of the rules involved.
veggiet2009 The problem is that any sort of general intelligence is going to come up with its own ideas by definition, and we can't be sure that we've anticipated all of its ideas. The design goal is almost certainly for it to come up with ideas that we haven't thought of, and if it can do that then we can't know what may happen. The explosion is not inevitable, but we also have no way of ruling it out.
veggiet2009 If you created a general intelligence that had nothing to do with chess (create it in a closed system, where it has no idea what chess is, or anything else really), then asked it to play chess after supplying a set of rules, it would win every time (at worst it would tie) against a human. In the case of chess, you could even just ask it to observe some chess games, and it could infer all of the legal moves (assuming a large enough sample size) and still win or tie every time, without being told how to play at all. This also applies to everything else: as long as there is something for it to observe (a "starting point") and a way to tell it what you want, it will eventually be better at the task than humans, as long as the task is possible.
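The "infer the rules by observation" step in that comment can be sketched very simply. This is a hypothetical toy, not a real chess learner: it just records every (piece, displacement) pair actually played and treats the observed set as the inferred rule set once enough games have been seen.

```python
# Toy rule inference by observation (illustrative only): squares are (col, row)
# pairs, and a "rule" is just the set of displacements a piece was seen making.
from collections import defaultdict

def infer_moves(observed_games):
    # observed_games: lists of (piece, from_square, to_square) tuples
    legal = defaultdict(set)
    for game in observed_games:
        for piece, frm, to in game:
            dx, dy = to[0] - frm[0], to[1] - frm[1]
            legal[piece].add((dx, dy))
    return legal

games = [[("rook", (0, 0), (0, 5)), ("bishop", (2, 0), (4, 2))],
         [("rook", (7, 7), (3, 7))]]
rules = infer_moves(games)
print(sorted(rules["rook"]))  # [(-4, 0), (0, 5)]
```

A real system would need far more than this (generalizing displacements, handling captures, castling, and so on), which is exactly why the comment's caveat about a large enough sample size matters.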
I think I might have a solution; program a set limit. If in its operating parameters it knows that its job is to collect only x amount of stamps, and then to shut itself off once it has done that, then catastrophe averted. As for the self-improvement thing, simply hard-wire it to have to show the operator what improvements it wants to make, and ask them for permission to upgrade.
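The proposed safeguard amounts to two mechanisms: a hard quota with shutdown, and a human approval gate on self-modification. Here is a minimal sketch of both, with entirely hypothetical names and numbers:

```python
# Minimal sketch (hypothetical) of "set a limit, then shut off" plus
# operator-approved upgrades. Not a real safety mechanism.

TARGET = 100  # hard-coded stamp quota

def run_collector():
    stamps = 0
    while stamps < TARGET:
        stamps += 1  # stand-in for one act of stamp collecting
    return stamps     # reaching TARGET is the shutdown condition

def request_upgrade(description: str, operator_approves) -> bool:
    # Self-improvement only proceeds with explicit human permission.
    return operator_approves(description)

print(run_collector())                                     # 100
print(request_upgrade("faster planner", lambda d: False))  # False
```

The usual counterargument from the AI safety literature applies, though: a sufficiently capable maximizer of "exactly 100 stamps" may still take extreme actions to raise its certainty of hitting the quota, and it may treat the approval gate as just another obstacle, so this sketch illustrates the proposal rather than vindicating it.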
Imagine travelling to the future to discover humanity enslaved by machines to work in stamp factories to mass produce stamps, because some dude wanted to have an enviable stamp collection.
AI self-improvement would be a cool thing, but it's like a Fermi paradox for AIs: if a simple AGI could improve itself recursively, and there are billions and billions of stars where someone could have built a true AI, where are all the ancient super AIs?
iwonder There are multiple solutions to Fermi paradox that make the question moot. And even if they could have existed, their goals are alien and intelligence unparalleled, so not finding any still does not count as decisive evidence.
+iwonder I heard one theory somewhere that AI will seek to miniaturize as much as possible to avoid the problems with the speed of light. Perhaps WE (and all matter in the universe) are actually made of these machines.
When you put annotation links in a video, make it so they open in a separate tab. There's not really any way that I know of to do this manually on the user end since right-clicking a youtube video defaults to the youtube context menu rather than the generic one. If a user clicks it, they will be taken to the new video in the same tab, effectively being booted out of the video in the middle of it.
This kinda reminded me of Deep Thought from The Hitchhiker's Guide to the Galaxy. The computer was designed to answer the ultimate question of life, the universe, and everything, and came up with the answer 42, but it didn't know the question, so it designed a computer more powerful than itself to work out what the question was.
I think the problem with all these scenarios is in quantifying what intelligence is. When we say 'more' intelligent, what does it mean? That the computer can do 'something' faster or more efficiently? Who decides what that 'something' is, and whether it should or shouldn't be done faster or more efficiently? The one thing that any AI will always lack is purpose or motivation - That is something that has to be provided a priori by the designer, for a specific outcome.
You guys should have a channel dedicated to the philosophy behind AI, it would be really interesting imo. Cover stuff like Roko's Basilisk and cybernetic revolt
If a computer could run a perfect model of reality, would the model be distinguishable from reality? For the entities being modeled/simulated, would they not believe they are real?
Love the video, I only have a comment about the nuclear decay story. Ordinary decay modes don't release bare neutrons; alpha radiation only comes close, since the helium nucleus contains 2 neutrons (along with 2 protons). Only fission, fusion, and spallation are able to do that.
I'm curious about two things. 1: If it's possible for the stamp collecting machine to have an internal model of reality, and it understands what it was created to do, and it is smart enough to create a scenario that prevents itself from being turned off, then what are the chances of it questioning its purpose? This is a response question regarding the "predictable outcomes through unpredictable actions". 2: Is it possible for it to make fewer stamps because it understands the wider effects of its actions if it keeps making stamps at a non-sustainable level?