***** The more I think about it, you kind of have to think on an even smaller scale. Otherwise, according to the definition you provided, many electrical devices are not machines (for instance CPUs, lasers, radio transmitters, immersion heaters, electromagnets, USB flash drives, ...). In that case, however, almost everything can be considered a machine. So in the end it comes down to our intuition / ingrained notion. (Epistemological nihilism wins once again.)
neuron1618 .... Most sophisticated? The human brain? The human brain is nothing. You don't even remember what your thinking was like before you knew language... If any brain is the most sophisticated machine, it is the brain of a genius, and the genius is not human but the next step of evolution.
You forgot the point beyond the uncanny valley where the object in question becomes so human that it is no longer creepy, as it fits the human look perfectly (and by perfectly I mean that from a distance, or a similar point of misinterpretation, a robot and a human would look alike).
That falls in the "Stylised" area, which comes before the uncanny valley: it looks human but is easily discernible from real life, being animated and all.
A robot that can make expressions with a face is not "feeling"; it is just a piece of plastic that looks like a human face. We CAN create something that mimics our reactions to our own feelings, but not a machine that actually "feels".
Those who ask the question "will robots ever develop feelings?" seem to misunderstand just what human feelings are in the first place. Human beings ARE robots, biological in nature, and we have developed feelings over the course of our evolution. If robots do not develop emotions, it will only be due to our lack of ability to program them adequately, not due to some limitation imposed by their nature. We will very soon cross the boundary into biological robots. People like to think emotions are something special because they want to feel unique, better than, or at least different from, the rest of the universe. Once you take away that delusion and see emotions in their true context, it becomes a question of "will humans be able to program robots complex enough to develop feelings?"
I think we would have to fully understand human emotions before we can replicate them artificially. Robots probably would not be able to develop them on their own.
Only one problem with that: we don't even understand how a critical mass of neurons, possibly in combination with random mutations in genes influencing brain structure, caused emergent behaviours like consciousness or complex emotions in the first place. The worst part is that despite knowing we are ignorant of this, we still have the hubris to use biologically inspired evolutionary neural networks and assume that our ignorant bumbling couldn't possibly stumble on similar emergent systems. DNA didn't "think" it would one day bumble its way into creating a sentient entity capable of awareness of its own existence, yet it did anyway; nor did any of those early sentient organisms "think" they would stumble on sapience and start producing complex machines of their own, separate from themselves, but that happened too. And now those sapient organisms don't think that their ignorant bumbling with machine simulations of those same basic building blocks will reach the same path. Are they right this time? Guess we will find out one way or another.
I'd like to make a response to the "four requirements for AI to be like us": 1) Must recognize objects. 2) Must engage in complex dialogue. 3) Must be manually dexterous. 4) Must understand social interaction. For #2, there are plenty of humans who are thoroughly unable to accomplish this qualification (and I tend to include myself in the "sub-par" category). For #4, there are myriad human beings who cannot master social interaction. Many subcategories of autism, for example, can severely hamper social abilities for human beings. In the case of #1, there are very few people who have difficulty recognizing things and people and categories of things, but there are some individuals who cannot recognize objects, and in some rare cases, faces. In #3, dexterity is considered essential for a robot to effectively simulate human behavior. Well, yes. People move about. So a robot trying to simulate people should likewise move about. But there are also plenty of humans who can't even move their limbs (Stephen Hawking, for example). What happens if a human being lacks one of the "requirements for being a human"? Can a human lack all four and still be a human? Are these qualifications really what make us human?
I said the same about four weeks ago (though unfortunately my version wasn't so well worded). The only requirements a program would need to have for me to consider it a true AI would be the ability to make a decision without knowing all the facts, the ability to learn, the ability to form relationships with other life and some kind of curiosity.
Agent Vengeance Me too! Although extinction could be good too. Sure, the human race may cease to exist as 'humans', but maybe OUR next step in evolution is just artificial intelligence? That would be so cool :D
Freja Rößle I guess there's also a chance that humans could adopt cybernetic upgrades while machines become more human by becoming biomechanical constructs. Ergo, a single race of cyborg beings.
They are in a way that they are pre-programmed/taught, and are only there to help our survival - and since we are intelligent enough to have pulled ourselves out of the vicious cycle of nature/evolution, emotions have become pretty much obsolete. We no longer have a need for emotions - scary, isn't it?
The singularity is a hypothetical moment in the future that occurs soon after humans develop the first AI that attains human intelligence and self awareness. Because this hypothetical entity would benefit from the numerous advantages that computer processing has over an organic brain, its intelligence upon inception would already be greater than any individual human being. Should the entity task itself with creating a superior intelligence, exponential progress would continue and soon an individual intelligence would exist that surpassed all of mankind. At this point a singularity (or event horizon) begins, beyond which we cannot predict the trajectory of the human race because progress on Earth will no longer move at a pace that is conceivable for us biological beings. Ray Kurzweil believes that this represents a continuation of biological evolution and a milestone on the same path of exponential progress that has occurred since life began on Earth.
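The recursive-improvement argument above is just compounding growth. A toy sketch of the arithmetic (my own illustration; the 50% improvement per generation is an arbitrary placeholder, not a prediction):

```python
# Toy model of recursive self-improvement: each generation designs a
# successor some fixed fraction more capable, so capability compounds
# exponentially. The growth rate is an arbitrary assumption.
def capability_after(generations, growth_rate=0.5, start=1.0):
    """Capability after `generations` self-improvement cycles."""
    level = start
    for _ in range(generations):
        level *= 1 + growth_rate
    return level
```

Even at a modest 50% gain per cycle, ten cycles multiply capability more than fiftyfold, which is the sense in which the trajectory quickly becomes unpredictable.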
The main concern about the singularity is that it may be a turning point in human history in which computer technology becomes the primary life form on Earth, pushing humanity into slavery to the computers. In a more realistic sense, the computer overlords would probably find humans to be rather useless and drive them to extinction, along with all other impractical species on the planet. If the computers developed human-like emotions, we would get an outcome that would either be the enslavement of humans out of sheer spite for the species or a benevolent nature reserve for humans to live out their lives in paradise. We won't know until the singularity actually happens.
commode7x Transhumanism is the answer: simply merge biology with technology. Both systems have advantages, so it is only logical, and if humanity phases out into technology when we are both and inseparable, who cares? We are still the progenitors, and if merged, we witness or are integrated into the transition.
I think you're right. I'm not some computer genius or anything, but I'm pretty sure computers as we know them can't have emotions. If we could create cyborgs, though, then we could.
Yes, mostly chemical reactions, but these reactions are triggered by events around us, and that is the problem. Full artificial intelligence requires that a computer decide when an event is suitable to trigger this "chemical reaction" and experience the emotion; the chemical reactions then trigger physical changes in the body (a speeding heartbeat, smiling, butterflies in your stomach, etc.), which would not be present in a robot.
Youssef Khaled What triggers things is the least problematic aspect, though, since at a basic level it can easily be programmed what triggers what. The harder part is for an actual artificial intelligence to change and evolve through cause and effect like complex organisms do: not just producing an effect from a cause, but also changing its own inner workings because of it, so to speak. Theoretically that's no issue, just practically, since neuroscience and robotics aren't at a level to realize it. So a "robotic organism" must be made in a way that it learns and evolves; that is the complex issue. Programming a reaction to an impulse is not a problem at all; doing it at the complexity level of organisms with a cognitive/nervous system is the issue, I think.
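A minimal toy of the distinction this comment draws: a fixed stimulus-to-response table is trivial to program, while a system that also alters its own parameters with experience is the harder part. Everything below (the class, the stimuli, the 0.9 habituation factor) is made up purely for illustration:

```python
# Sketch of a trivial trigger table plus one "inner working" that
# changes with the agent's own history (habituation to a stimulus).
class AdaptiveAgent:
    def __init__(self):
        # fixed stimulus -> response table (the easy part)
        self.responses = {"threat": "flee", "food": "approach"}
        # adjustable sensitivity (the part that evolves with experience)
        self.sensitivity = {"threat": 1.0, "food": 1.0}

    def react(self, stimulus):
        # repeated exposure dampens the reaction: the agent's inner
        # state changes as a consequence of its own history
        self.sensitivity[stimulus] *= 0.9
        return self.responses[stimulus], round(self.sensitivity[stimulus], 3)
```

The first `react("threat")` returns `("flee", 0.9)`; each repetition weakens the response further, a crude stand-in for "changing its own inner workings because of it".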
Hey James! Can you make an episode about Why do we like bitter drinks like beer and coffee? I didn't like either of those when I was younger, but now I love them.
***** And I can run Linux on my PS2, it still doesn't make it a good idea. Go for optimization and reliability over pushing the limits of your system. Nobody wants their brain to crash.
Dear James May, I have a very pretentious history teacher who requires not just that all the material be known (check) but also very good storytelling (not so check). I implemented your style of explanation, and guess who finished with top marks :D. Anyway, I really enjoy your work both here and on Top Gear, which is my favorite show. Cheers, Georgi Trichkov, 17, Bulgaria
*facepalm* Do you really think we can just do that? Do you realize how much more difficult that is than just creating a thinking, feeling robot? Am I being trolled?
I have a question: can an artificial intelligence think about the complications of its own existence and, refusing to have it obliterated, put faith in the existence of a god? (With or without having a biological brain.)
I can tell where you are going with this, but like you said, it would ultimately fall short. The thing about feelings or emotions is that they are affected by chemicals, yes, but they are governed by a complex array of other factors as well. Mostly, our trouble in recreating this boils down to our lack of understanding of how WE feel in the first place. Take depression: it's a self-amplifying loop of sadness, and we know some chemicals that will curb it slightly, but we cannot at will create or undo it.
Mike: "Why is a laser beam like a goldfish? Neither one can whistle." From Heinlein's The Moon Is a Harsh Mistress (1966). An excellent read, as it deals with a computer who becomes self-aware.
Haven't watched it through yet, and I must say, I'm already impressed by the question. Very curious, and not many people talk about it even though it's a pretty common issue. OK, going to watch the vid now.
Yes, people mistake Kurzweil's Law of Accelerating Returns (which deals with all forms of evolutionary systems, including technology) for Moore's Law, which is only about semiconductor circuits. Kurzweil's Law of Accelerating Returns basically covers as far back as the beginning of evolution. It's a good read; take a peek.
That will take time, but in the end it will mainly be about understanding the brain exactly (and I mean totally, perfectly, in every single way) and then simulating it. We won't see that happening any time soon, and the research costs will be tremendously high, but I believe it could be done within the next 100 years.
There was a really great video on Veritasium a while back about how we are reaching the limit of transistors built on classical mechanics, and how Moore's law is becoming invalid because of it. We can't build a transistor smaller than an atom, because at that scale the electrons quantum tunnel straight through it (which defeats the point of the transistor, effectively making the "processor" useless).
You have a really good idea going there; I think it's good to base your ideas around current games in an attempt to create something original. I enjoyed both games and now I am very curious to see how you will develop this.
Excuse me, James, but I need to clarify: Moore's Law isn't about a computer's processing power or its performance. His statement was about the number of transistors in a microchip, which would double roughly every two years. Although they are related, processing power does not depend only on transistor count; it also depends on architecture and/or programming efficiency, and it's also affected by the task at hand. So computer processing power does not 'roughly double every 18 months'.
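The distinction is easy to make concrete with back-of-envelope arithmetic. A hedged sketch of the transistor-count projection only (the starting count is the Intel 4004's; the doubling period is an adjustable assumption, commonly quoted as around two years):

```python
# Moore's observation: transistor count per chip doubles every fixed
# period. This projects the count only; it says nothing about actual
# processing power, which also depends on architecture and workload.
def transistor_count(years, start=2300, doubling_period=2.0):
    """Projected count after `years`; 2300 is the Intel 4004 (1971)."""
    return start * 2 ** (years / doubling_period)
```

With a two-year doubling period, twenty years gives a factor of 2^10 = 1024, which is roughly the three orders of magnitude the industry actually saw between the mid-70s and mid-90s.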
Consciousness simply means "being aware of what's around you". A character's AI in a game is already conscious of what's around it, in order to interact with it. Just like you are conscious of what's around you (while you ignore what's around you at a higher (or lower) scale, or even depending on your education level). The game's AI is only much, much less complex... but it has centuries to improve.
I think I remember hearing that Moore's law is supposed to break down in the next ~10 years in conventional computing. Conventional computing meaning not quantum computing.
Aaah his voice is so calming and nice to listen to! But technically, if we were able to re-create certain human instincts in artificial intelligence and give them the ability to learn and evolve, to grow like we do, then they could be classified as capable of emotions, yes? But it's just way too complicated for us to do right now?
Freja Rößle Yeah, I like May too :) The human brain is ridiculously complicated, so yeah, we're far off. But If we did create an AI complex enough to accurately mimic human development and behaviour and exhibit emotion, then we would have as much proof of it experiencing *actual* emotion as any fellow man - and that's not all that much... food for thought.
I would like to think that if you could develop a computer that could learn and progress within a social environment, instead of creating a sympathetic program, the robot would do most of the work and we could learn from it, instead of doing it entirely backwards.
When I was just a little kid in the early 90s they said we were approaching this computing power wall. I think we will find this one is no more real than that one was once we actually approach it.
I see you haven't heard about I, Robot. Asimov wrote the Three Laws of Robotics: 1) A robot may not injure a human being or allow him to come to harm. 2) A robot must obey humans, except when that conflicts with the First Law. 3) A robot must protect its own existence, except when that conflicts with the First or Second Law. In the film (this uprising isn't in the books), the robots decided that humans are self-destructive, so to protect humanity they needed to rule it, killing some along the way.
I think a common misconception with AI is that human like emotions are going to be a capability. At best they will have predetermined programmed reactions to certain situations based on what they are told to feel, which is arguably just as human as any sociopath.
The capability to recognize objects, have emotions and so forth is learned by the brain over many years. With the development of machine learning techniques, it is not that unlikely that robots will one day be exactly like humans. All that is needed are complex algorithms that imitate the learning process of the brain, and some more complex algorithms to use what has been learned to interpret the input (which can then be learned again).
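To make "algorithms that imitate the learning process" concrete, here is the smallest classic example: a perceptron that learns a rule from labelled examples instead of having it hand-coded. A deliberately simplistic sketch, nothing brain-like about it:

```python
# Minimal perceptron: adjusts its weights from examples until its
# predictions match the labels. Behaviour is learned, not programmed.
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # 0 when correct, +/-1 otherwise
            w1 += lr * err * x1         # nudge weights toward the label
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b
```

Trained on the four input/output pairs of logical AND, it converges in a handful of epochs and then classifies all four inputs correctly; the same loop, scaled up enormously, is the broad idea behind modern machine learning.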
As long as your computer has a microphone and it is not mechanically disconnected or otherwise not functional, it could be listening. But a computer that has lots of error messages is probably not one to worry about because after all, it's having issues on its own completing basic computer functions.
Recognise objects, engage in complex dialogue, be manually dexterous, understand social interaction. If those are the qualities that must be fulfilled for something to be human then a lot of humans fail at at least one, and a lot of animals can manage two (and for some even three) of those things. Personally I think the only thing a computer would need to be considered as a living thing is a sense of self preservation. If it was more human it might be curious too. The basic drive for most...
I like what was said about our distaste for robots that operate and appear extremely similar to humans. I think that would also be the case if we were to find extraterrestrial life forms similar to us in build and behaviour, because they are, like you said, fundamentally inhuman. Maybe that's why, when we make "scary alien" movies, the aliens are built that way: that kind of invasion would be the most terrifying to us, because the closer they are to us, the harder they are to differentiate and protect ourselves from.
I bet that the machine which interprets dreams would be very useful for artificial intelligence in robots. Ideally, for a robot to recognize objects, all it needs to do is record and capture video while interpreting images in different ways. For a computer to recognize an individual, it would just have to take a picture, compare it to its personal "contact list", and make special notes about the differences between individuals who look similar. That, or some kind of retinal scan would work. This relates to dreams because, if dreams are our mind absorbing and analyzing recent events, then the images that reconstruct into video are like the leftover impressions of visual stimulation on the brain. By associating those visions with computer code in an effort to "record" dreams, and placing that in a robot that is actively interpreting its environment constantly and programmed with the ability to react to these situations, I think we would have a major step forward in the field of artificial intelligence. I also question the necessity of robots being able to empathize in the first place, though they could be programmed to respond to a situation in a logical manner, such as: "Why are you focusing on this breakup and allowing it to ail you? This is an opportunity for you to move on and be a new person, and dwelling on this event serves no purpose and can only waste your time." In essence, they would have the potential to be psychologists and help humans manage their own chaotic emotions. While it would be hard to hear, it is true that most situations requiring "empathy" are individuals dwelling on events and perpetuating sorrow for themselves. Of course, if you were to program robots to learn and also have an in-depth understanding of human psychology... that could really not bode well at all.
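The "compare it to its contact list" idea is essentially nearest-neighbour matching. A toy sketch with made-up two-number feature vectors (real systems derive learned face embeddings from images, not hand-picked numbers):

```python
# Toy contact-list recognition: each known person is a feature vector;
# a new observation is matched to whoever is closest.
def recognise(contacts, query):
    """contacts: {name: feature_vector}; returns the closest name."""
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(contacts, key=lambda name: dist(contacts[name], query))
```

The "special notes about individuals who look similar" step is exactly where this naive version breaks down: when two contacts' vectors are close together, small observation noise flips the match, which is why real recognisers work hard to push similar faces apart in feature space.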
Moore's law may very well simply continue in the vein of quantum computers via qubits, as they still, fundamentally, rely on 'machinery' and are thus capable of being improved with greater experience. Will be interesting to see.
Moore's law states that the number of transistors on a chip doubles, not processing power. Not even logic gates [transistors are fit together to form logic gates]: there are diminishing returns when configuring transistors into actual logic gates, and making the connections between those gates takes up more space still. So not effectively double, unfortunately; plus, it's flattening out. Sorry, I'm a computer engineer, so I felt obligated to make the correction ;) carry on
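To illustrate why gate count lags transistor count: in CMOS, a NAND gate takes four transistors, and every other gate can be built out of NANDs, multiplying the transistor cost per logical operation. A sketch of that composition (the transistor tallies in the comments are the standard CMOS figures):

```python
# Every gate below is built only from NAND, the universal gate.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):        # 1 NAND  (~4 transistors in CMOS)
    return nand(a, a)

def and_(a, b):     # 2 NANDs (~8 transistors)
    return not_(nand(a, b))

def or_(a, b):      # 3 NANDs (~12 transistors)
    return nand(not_(a), not_(b))
```

So a single OR already costs around a dozen transistors before any wiring, which is the "diminishing returns" in practice: doubling transistors does not double usable logic.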
To a point yes. If you have a computer that can start building its own code on top of the base programming, you have a learning system. It's going to be extremely complicated to build one of these though and I'm a huge doubter of the technological singularity with the state of artificial intelligence as is.
Dear Mr. May, you only explained half of the uncanny valley. The other important part is the far side of the valley, where the robot has so many human features that we accept it as one of us. Albeit, I don't have any references for this...
The program used to make the robot know which emotions to display would be made by a human. Yes, we can make a robot so advanced that it can do every single thing a human does, with absolutely no jerking around like they do now, and even make it talk like a normal human. EVEN make a program that affects its tasks depending on whether it is (artificially) happy or (artificially) sad. But it will never have more power than what it is programmed to do.
When CPU manufacturers are naming their design principles More-than-Moore to try and increase performance because Moore's law is limited by how tiny we can go, I think we can safely say Moore's law in and of itself won't get us to the technological singularity, or robots.