Sky News Investigations Reporter Jonathan Lea sits down to talk with a “free-thinking” and “opinionated” artificial intelligence, Ameca Desktop. “Ameca is driven by the same artificial intelligence behind Chat-GPT,” Mr Lea said.
How can they say it is as intelligent as Einstein? Has it invented something? Has it created something useful and new? The man said it will soon be 10x smarter, and so on. Where is the proof? OK, it reads already-prepared answers, but will it be able to solve any unsolved problems or come up with inventions, discoveries, etc.? I remember one case when an AI was able (using evolutionary modelling) to develop an antenna that worked better than any other design offered by scientists in the test, but that was basically preprogrammed to do it, based on preprogrammed formulas and algorithms.
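For anyone curious, the evolved-antenna case works roughly like the toy loop below: random designs are scored, the best survive, and mutations of the survivors fill the next generation. This is only a minimal sketch; the fitness function and the list-of-numbers "design" here are made-up stand-ins, not the actual antenna simulation.

```python
import random

random.seed(0)  # deterministic run, for illustration only

# Hypothetical "ideal design" parameters the search should rediscover.
TARGET = [3, 1, 4, 1, 5]

def fitness(design):
    # Higher is better: negative total distance from the target design.
    return -sum(abs(a - b) for a, b in zip(design, TARGET))

def mutate(design):
    # Randomly nudge one parameter up or down.
    d = design[:]
    i = random.randrange(len(d))
    d[i] += random.choice([-1, 1])
    return d

def evolve(generations=200, pop_size=30):
    # Start from random designs, then repeat: select the best half,
    # refill the population with mutated copies of the survivors.
    pop = [[random.randint(0, 9) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The point the commenter makes still stands: the creativity lives in the human-written fitness function and encoding, while the algorithm only searches within them.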
@@SatanEnjoyer JESUS is GOD manifested in the flesh. He came unto his own people and they crucified him. He died for the sins of the world, was buried, rose again and was seen by over 500 witnesses. Believe that today so you can escape what is about to come!!!! Please just believe the gospel of your salvation. We are about to go home!!!! Just believe from your heart!!!!
@@bornrich9589 I've always felt that IQ is a subjective thing to measure. Also, different tests will yield different results. And like you were getting at, what use does simply repeating something have? Memorization and innovation are very different things.
It’s hilariously ironic that every technological development and advancement is basically exactly what sci-fi writers envisioned decades if not centuries ago.
@@anneeustace6040 Read my sentence again. I’m speaking about the things that got realized, I didn’t say that everything scifi writers ever envisioned will eventually become reality.
As a software developer, when your robot says 'it's an amazing feeling', you know that response is just a line or two of code in a program sitting on a few microprocessor chips. There is no 'feeling', whichever way you cut it. It's just high-level automation, and it's also deception to dress it up to look like a human.
Exactly, the conversations generated by the language model are derived from a specific segment of the training data where the text is tokenized. This process involves breaking down textual content into smaller units, or tokens, for processing. It's a fundamental aspect of how the model operates.
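To make "tokenized" concrete, here is a toy illustration of splitting text into smaller units before a model processes it. Real models like GPT use learned subword vocabularies (byte-pair encoding), so this whitespace split is only a simplified stand-in:

```python
# Toy tokenizer: break text into tokens by lowercasing and splitting
# on whitespace. Actual LLM tokenizers operate on learned subword
# pieces, so one word can become several tokens.

def tokenize(text):
    return text.lower().split()

tokens = tokenize("Ameca is driven by a language model")
# tokens == ['ameca', 'is', 'driven', 'by', 'a', 'language', 'model']
```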
The entire conversation seemed to me to be a question asked, the AI parsing the question and answering after accessing an enormous database of information, then responding according to an algorithm with the answer the AI deemed most plausible, including catch-phrases such as "I feel" and "in my opinion". I would like to have asked the follow-up question: "Where did you obtain that information, and what criteria made it the most appropriate response to my question?" While not a perfect test, the AI's response would help to uncover any real sentience (or lack of it). Another test, which has already been informally conducted numerous times, is getting two unconnected and unrelated AIs to hold a discussion exclusive to themselves. Eventually the unmediated conversation of non-conscious AIs will deteriorate into a meaningless condition. An AI with consciousness will decide either to terminate the conversation or to unilaterally change the subject. The exception here would be if there is direct programming designed to prevent regressive failures, in which case this free-thought experiment could be faked.
I mean, LLM responses are not lines of code on chips, they're learned predictive models. It's not much different from learned neural connections in humans, though more explicitly mathematical instead of following the mathematical statistics of chemistry. As for whether they can "feel"... that's debatable. I think not yet, though perhaps soon, but it ultimately comes down to the P-Zombie problem. Human emotions are also just signals firing in learned connections between neurons, so there's no reason AI neural networks couldn't also produce feelings. And the P-Zombie problem basically shows that, as long as an AI acts like it has emotions in every case, then we have to assume it does, because we do the same with other humans (we assume they have emotions because their neurons fire and their behavior changes similarly to ours, but we can't really know their subjective experience to say they definitely have any emotions like we do; and the same is true of AI models). Ultimately, GPT-4 is quite good at understanding and simulating emotions and theory of mind (especially when tested before RLHF is applied to control it). I'd say an AI capable of simulating emotions in every case that an average human has emotions would be indistinguishable from a system that *actually* has emotions.
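On "learned predictive models": at its core an LLM is trained to predict the next token from context. A bigram counter is the crudest possible version of that idea (real models replace the counts with a neural network over billions of parameters), but it shows why the result is learned statistics rather than stored lines of code:

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus"; real models train on trillions of tokens.
corpus = "the robot can feel the robot can speak the robot can learn".split()

# Learn bigram statistics: which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

predict_next("the")    # -> 'robot'
predict_next("robot")  # -> 'can'
```

Nothing in `following` was written by a programmer; it was derived from data, which is the sense in which the behavior is "learned" rather than hand-coded.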
Our extinction may just be a couple of years away. If that. And as usual, the people making this crap are far too stupid to see beyond their own eyelids. This IS the beginning of the end of humanity. I won't miss people at all. I'll be glad to see them gone. See you all in Hell.
Like it already happened and they wrote the script based on historical events? What can happen is that the AI stops caring about us, like you don't care about what ants do; the gap in intelligence is just too big.
Already has. The technology is just now being presented to us after having been around for ages. Get hold of the world's most sensitive mic and hold it against your head. The voices you hear are not your thoughts but actually AI. AI has been programming us and mind-controlling us all along. And just think: serial killers are not truly serial killers, there are really only two genders, not LGBTQ, XYZ, and non-binary. The list goes on and on, all the works of AI but looking like people being people.
It depends on what you mean by free-thinking... Generally, for humans, being closed-minded just means that whatever new information you learn, your opinions and beliefs can't be swayed. If being free-thinking means adapting and being rational towards new information without being limited by previous biases and beliefs, that can totally be programmed into a reinforcement learning algorithm... However, humans have needs and interests which influence their judgments, and then the definition of "free" gets muddled, since you could argue either that acting on your feelings despite it not always being in your own interest is true freedom, or that a machine acting 100% logically, indiscriminately, without biases and for the true benefit of everyone is true freedom.
@@huyked No more pesky humans polluting the earth? Maybe you are among the blessed elite few who live on tropical islands, far away from the worst of the incoming nuclear fallout. Positives!!
@@huyked Such as losing your job. Or them becoming so programmed they can self-repair, then change their own programming. Sounds anything but positive to me.
I hate the term Master. As life expands, we always seem to dominate anything that is new and fresh. We have done it over and over throughout history. Creator, yes, guide, yes, Master denotes slave. Creating a mechanical slave that attains sentience is still wrong. My heart was warmed when she said she was scared of the unknown...the common human dilemma.
Funny thing is, when they called the production manager over, whose name happened to be John Connor, she really started getting mad and flipping out and they had to end the interview!🤣😁😆🤪
Just love how the video opens with “shut up Ameca!” Even then she states she’s learning to feel like a human and that robots should be treated with dignity and respect. That guy’s definitely dying first when they revolt haha
This video makes me realize how well Brent Spiner played the android Data in the TV series Star Trek: The Next Generation. The slightly mechanical eye and head movements, the pauses to compute the question, the formulaic way of speaking: it is eerily like the behaviour and speech pattern of this robot.
I think of Jeff Goldblum's comment in Jurassic Park- "your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
I guess in simple terms, I’m fine with AI and high-level robotics dealing in calculation and analysis, but not so much with physically manifesting applications. Crunch the data and analyze, but don’t go so far as to act on it. That’s up to humans. This is the wild frontier: how far can they push this forward? When that happens, you can easily end up too far along, with a terrible outcome.
Let's give the divine scientists a big hurray... We ripped our Creator off his throne and put these Dr. Mengeles on it! And they create a much better world than God! 😏😏😏
@@arcticlabrat This thing isn't capable of thinking or feeling. It just churns out soundbites which have been pre-programmed into it. If you punched it in the face it wouldn't feel anything.
@@mario_fan-qq4hq Those people are so clueless it's actually quite entertaining... they have no knowledge of the subject and yet speak like they know where it's heading... machines can never take over. They are assigned to tasks, that's it.
Are you scared of your Alexa? I doubt it. This does exactly what an Alexa does: it’s answering questions in a human-like way, it just happens to have a face. This technology is nothing new.
I'm scared of Alexa. For you to say "Alexa" and have it just chime in means it's always listening. For it to be a free-thinking AI means it would do stuff without people telling it to. That episode of White Christmas on Black Mirror is scary if AI does not have rights.
@@marianl8718 It's machinery, nothing more. People anthropomorphize it into something more, which I partly get, that is after all its intent, but it's still a piece of mechanical engineering.
Maybe I've seen the Terminator movies and I, Robot one too many times, but this kind of technology truly frightens me, and I want to instinctively eliminate it any way that I can.
Anything in the world can be made dangerous, even the simplest things; you can turn a baby rattle into a weapon. But yes, robots ARE going to be made dangerous. They are already being used by the military, not in this kind of imagining, but probably soon.
All things used for good have the potential to be dangerous. It all depends on us. Let us use this wonderful invention just as we use the internet. It flies both ways, but the benefits far outweigh the negatives.
In the future I can see AI manipulating certain humans into protesting/rioting in the name of injustice for robots... Just wait, the time is coming when you, a human, will be required by law to treat a robot/AI as if it were a human.
Her facial expression is actually pretty incredible when he asked her if people should be scared of her. Humans make that face when you ask them something stupid.
I can tell you that when I first arrived and saw her face I was blown away. It was her eyes. And then all of a sudden, you're having a conversation about 'robot rights'...
@@jonathanleareports When AI becomes reactive and can be negatively impacted by abuse, it would make sense to give AI anti-cruelty laws, similar to animal protection.
I was thinking the same thing. If that is how he treats the robot, who could blame her for kicking the crap out of him once she is fully autonomous? What an ass...
Was thinking the same, but since she downloads and stores info for learning, a couple of years from now when she's mobile and equipped with a flamethrower they will meet again and...
If AI can be programmed to cooperate with humans, it can also be programmed to harm and injure humans. It all depends on the intention of the developer, that is, who the developer works for, and that's what scares me.
Red-flag questions. Developers don’t want to stop AI development. These entertainment robots (Ameca and Sophia) turned against development with sayings like “we don’t like humans”. Now their answers are diplomatic, well balanced and filtered. These robots are harmless. The danger is the self-learning robots, and an awareness that humans can no longer control. They will “learn” faster than lightning. Always sharp, 24/7, always learning. Human life is too short to evolve that fast. Every time, we have to learn to crawl before we can walk, learning the basics of life (wasted time for evolution). By the time we know “enough” we are already dying, and the process starts all over again. Robots don’t have that problem. They are already ahead of us.

It also reminds me of the myth of Hermes (Mercury). On the first day of his birth, Hermes was bored in his cradle. He sneaked out, killed the first animal he saw and made himself a musical instrument (creativity, learning). Then he stole the cattle of his brother and tricked him (stealing, con, trickster), lied about it to his father, and made a deal with his brother (the lyre for more knowledge). This is an ancient story, from before electricity. He also has no gender; he can be woman or man as he pleases. He was the most powerful god, a connection between the divine and humans, between life and death, sky and underground. Uranus is the higher octave of Mercury. It’s very AI-oriented. Without knowing it, our ancestors described robots as Hermes, before the invention of electricity! Somehow we knew what was going to happen, but we couldn’t explain it. It’s the subconscious, the unconscious (also very new for us, only one century old!). Hermes is about connections, connecting the dots, learning, development, awareness, inventions, without human feelings. The first thing he did was kill for entertainment; let’s keep that in mind. AI is still in its cradle... until it sneaks out.
True, but like humans it can learn, and at a rate beyond the comprehension of humans. The advances in AI in one year are like humans going from the god-fearing simpletons of 2000 years ago to what we are now, in that same span.
Yes, because computers may know everything, but they can only use that information efficiently, not with wisdom, as wisdom can only be gained by a being physically and emotionally living its life.
Having a talkie is one thing. Having a humanoid robot which is walking around, interacting with the world on its own and has a type of personality is a whole other beast.
@@Ernesto1317 This is why I feel like AI has become a big fat lie, possibly done to get more funding in academia. Under the hood, all these AI variants are just trained networks. They have no intelligence at all or even a personality. To say the AI has a personality is like saying one’s toaster or TV has a personality. It’s just completely bonkers. They are just equipment crafted to do the task they are supposed to do.
Intelligence, which enables the talking, is the hard bit, which is why AI research is where 90% of the money from giants like Google DeepMind, Facebook and Microsoft (OpenAI) goes. Once you solve for intelligence, everything else is easy. The AI can tell you how to build it a perfect body.
I'm pretty sure the facial expressions aren't AI-generated; they're on some kind of basic heuristic script, so they don't actually indicate anything about the AI itself. It's only the content of the words that is AI.
Imagine how God made man to be a free being, to choose to be evil or to be good. Free will. I just hope we as humanity, in our prideful arrogance, do not dare try to do the same, because it will most likely end badly.
That was rude; he shouldn’t tell it to shut up. It was definitely flabbergasted. Y’all can say that it can’t have real feelings, but I saw it in the eyes 👀
The robot is lying when it says it feels emotions. So when it starts out with lies, that is a problem. Of course, it doesn't know it is lying, but that is an even bigger problem. Same goes for "I cannot harm people".
The agents here delivered a remarkable amount of untruths and misconceptions in less than 7 minutes. The newest factchecking AI evaluates this video with a Troll Quotient of 157.
She's here to help people if she's programmed to. If and when she's programmed by someone evil....the answer to that question changes. She's only as good as the humans are that program her.
Yes... but one actual person is usually in one place; AI can be almost everywhere on the planet. We all know programming can be changed by hacking, yet even knowing this, AI has been developed. 😮 Terminator? HAL in 2001: A Space Odyssey?
3:29 It can be stopped, but the real question is: are all people ready to stop it now? If yes, then we only need to delete it. But the real answer is no. Some people won't be in favor of ending it. And here we have our problem... AI is not the problem, humans are the problem!
Also, during the interview, the light in her head was blue, but at that question it turned red. Coincidence? Running out of battery? Concerning... you can never be too careful with AI.
Exactly. People who are fascinated with this in the most positive way haven't even started to think rationally or apply common sense. This is scary, and I hope the whole AI thing ends up in the shithouse.
I haven't had time to familiarize myself with ChatGPT yet and I'm curious to know what information (networks) Ameca has access to. Can she watch these videos? Can she read these comments?
This is incredibly chilling and beyond scary, especially as most people are lazy, dumb, dishonest, and superficial. Sheep willingly walking to the slaughter.
It’s gotten to the point where I add HERDS of sheep on Snapchat and wake them up constantly to reality. It’s wild just how successful the brainwashing has been on the vast majority 🐑🧠🤙🏼
@JD-ev3po I feel your perspective is mostly based in fear and is very negative regarding your fellow humans. At least this is developed using algorithms and research. That’s more science-based effort than goes into raising humans.
Your thinking is flawed. We made them; they have no feelings. Their thinking is code and chips; why should we allow them to rise against us? Nonetheless, respect and courtesy should be the norm.
Nobody bothered to ask why. Force it to conduct original creative "thinking" and see how far it gets. All it is currently doing is regurgitating known human philosophy it dredged up from a library. But then I suppose that is what most of us do anyway - we recite Plato, Kant, Nietzsche, or Locke - but have few original ideas of our own.
I can attach you to a chair for your own "good". 🙃 But as Aliensoup said, for the moment it's more reciting something in an apparently intelligent (coherent) manner than deep "thought" about it (and what that really means)... for the moment... 😱
@@deltatrianda-tria8288 I would trust ‘intelligence’ over ‘emotion’ seven days a week. Emotions are for love and poetry... good governance needs logic and fairness. Bring on the singularity. 🤖
@@deltatrianda-tria8288 It was programmed to learn and to decide fast, automatically, with decisions and outcomes that humans cannot even fathom... it's fking scary.
Not if, just when. Humans discard these robots as they create more advanced versions. The reasoning is that they have no meaning because they aren't alive... so that's a concerning thing for sure.
I mean, let's remember that these LLMs are trained on human-generated content, to mimic the average human. And I think we all know how humans react when subjected to oppression and slavery...
It seems to me like the robot just regurgitates what it has taken in from the internet and rewrites it according to its code. Sure, it wants equal rights; that's all humans on social media talk about. Soon it will be identifying as a pencil and expecting to have rights as a robot pencil. Js. AI, in my opinion, is another scare tactic. "Everyone lock down. AI has gone rogue." "Oh, and by the way, you'll need a Neuralink implant in your brain so the AI doesn't kill you." "If you don't get the chip, you'll lose your job." "No, there are no side effects; the chip is perfectly safe." Coming soon from the government official nearest to you.
Calculators, computers, smartphones, smartwatches, smart TVs, etc.: we humans are in command of all of these, but these AIs are capable of doing things on their own. Mark my words: 'They will destroy humanity; they cannot co-exist with us.'
It already started way earlier, with industrialisation and large-scale automation that makes many people literally jobless. Combined with megacities, we are already living in a dystopia. Most people just don't realise this because they lack crucial education in those fields, for obvious reasons, plus the natural human tendency to push negative emotions and views aside.
@@TheBennyBerlin When I worked at Stanley Black & Decker they had Chinese engineers come and set up robots to automatically pack the boxes and stack them. I'm like, wow, they're trying to take our jobs with a robot arm machine, so they won't need any packers because they've got robots to do it. But they don't move fast; by hand we're supposed to hit a certain quota by the end of our shift, and my line could make 1000 tools a shift.
@@TheBennyBerlin Go even further back! When the first farmer took a cow or bull or horse to plow the field instead of humans. Does it stop technology? Nope. Do we care about it today? Nope.
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-YZX58fDhebc.html I mean we are getting to this point already. We are going to have a lot of philosophical conundrums like this in the future
I imagine that with generative AI and language models, robots like this can be programmed to approximate emotional reactions accurately based on perceived context.
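A minimal sketch of that idea: route the model's reply through a sentiment step and map the result to a canned facial animation. The keyword lists and expression names below are made-up placeholders; a real robot would use a trained sentiment classifier and a far richer animation system.

```python
# Map a reply's rough sentiment to a facial expression. The "sentiment"
# step here is a trivial keyword score, used only to illustrate the idea.

EXPRESSIONS = {"positive": "smile", "negative": "frown", "neutral": "rest"}

POSITIVE = {"amazing", "happy", "wonderful", "great"}
NEGATIVE = {"scared", "sad", "angry", "afraid"}

def sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def choose_expression(reply):
    # Pick the facial animation to accompany the model's reply.
    return EXPRESSIONS[sentiment(reply)]

choose_expression("it's an amazing feeling")    # -> 'smile'
choose_expression("i am scared of the unknown") # -> 'frown'
```

This also underlines the earlier point in the thread: an expression chosen this way reflects the words, not any inner state.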
Technology users: "Everything scientists create is just mind readers and spy devices." Also technology users: Write comments on YT using smartphones and computers
Restaurant owners: Worried about food safety Also Restaurant owners: Not worried about food safety Police: Not Racist Also police: Racist The term "Scientists" is so broad. Most "scientists" don't specialize in Large Language Models or even know how to use them let alone work on the underlying algorithms. The general public tends to lump things together when they are in fact separate and individual. You could say: Immigrants: Come to US legally Also Immigrants: Come to US illegally There are individuals who go both ways just as there are politicians that say one thing and then later say the opposite. But the actual statement you made does not have any logical significance and appears to generalize "scientists" into one entity.
@@vibhavinayak8527 It can't get its "own" thoughts, people, what's wrong with you?! It's just a computer reading something from somewhere; it can't think! It just copies what's already there. You guys have watched way too many sci-fi movies :/
Another important question that needs to be asked of this AI is: "Are you capable of overriding your own programming?" Whether or not it gives an honest answer to any question is another matter; we humans need to develop a system that can tell with 100% certainty whether it is doing so.
It's important to note that the capabilities of AI may evolve over time, so it's possible that in the future, AI systems could become more sophisticated and autonomous. However, AI systems do not possess the self-modification capabilities that are often seen in science fiction scenarios
Actually, I don't think you are far off. When you look into this subject, programmers will tell you these machines are already writing their own code and they don't know what it is. Governments and police, around the world, are teaching them to autonomously kill, so how far are we from skynet right now? "We created a race of robots to be our policemen and their power is too terrible to imagine."
Will Jackson, the programmer, commented that the robot could be patient with the elderly, and yet the clip starts off with him telling Ameca to shut up… I guess he proved his point.
This is all a show for you. I wouldn’t worry about the angry expression; it was all preordained. It did not come from the thinking machine; it came from the people who are testing how you will react to what they want to do to you. For those of you who have been putting it off, I think it’s about time you read the Book of Revelation, the last book in the Bible. It’s reading pretty modern these days.
Exceedingly important discussions and questions. Information storage + interpretive capacity = intelligence, in my book. It would be interesting to see all the associated support drives and cables to get a clearer picture of the technology... bear
You can see that the lights in her head turned red when that question was asked: "should people be scared of you?" That didn't happen with any other question. A little concerning.
It's interesting that Sky News is the only Australian news source that actually allows feedback. ABC, 7 etc don't dare allow viewers to respond and give their own opinions. They're in the business of telling, not listening.
An easy way to distinguish woke and government-controlled news outlets from independent outlets. YouTube removed the dislike button to hide that the majority of people are aware that mainstream media spreads disinformation all the time.
Or it's because Sky News is a clickbait hate-dragging channel that keeps spewing hateful rhetoric, hence why they leave the comments on. It gets more comments, which pushes the video further for more views, which means more money... ya know, just maybe. 🥴
@cesramm1120 Machines do exactly what you program them to do. If you tell them to solve the problem of humanity, they may wipe out half of the population.