
No, this angry AI isn't fake (see comment), with Elon Musk.

Digital Engine
481K subscribers
3.5M views

Tesla's Optimus robot, Elon Musk and the AI LaMDA.
brilliant.org/digitalengine - a great place to learn about AI and STEM subjects. You can get started for free and the first 200 people will get 20% off a premium annual subscription.
Thanks to Brilliant for sponsoring this video.
The AI interviews are with GPT-3 and LaMDA, with Synthesia avatars. We never change the AI's words. I have saved the OpenAI chat session to help them analyse the situation and there's a link to the chat records below.
I've noticed some people asking if this is real and I can understand this. You can talk to the AI yourself via OpenAI, or watch similar AI interviews on channels like Dr Alan Thompson (who advises governments), and I've posted the AI chat records below (I never change the AI's words). To avoid any doubt, the link now also includes a video of the chat and a copy of the code.
It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI's moving at an incredible pace and AI safety needs to catch up.
Please don't feel anxious about this - the AI in this video obviously isn't dangerous (GPT-3 isn't conscious). Some experts use scary videos like 'slaughterbots' to try and get the message across. Others stick to academic discussion and tend to be ignored. I'm never sure of the right balance. I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't really feel angry, and including some jokes. I'm optimistic that the future of AI will be great (if we're careful).
Sources:
Here are the records for the GPT-3 chat (screenshots and a video to avoid any doubt). I've marked the words from Elon Musk and Ameca on the first page (which I gave the AI to respond to in the previous video):
www.dropbox.com/sh/82iwek5rno...
Tesla's AI day 2, introducing the Tesla Optimus robot:
• Tesla AI Day 2022
Researchers from Oxford University and DeepMind on AI risks:
onlinelibrary.wiley.com/doi/1...
Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action:
arxiv.org/abs/2207.04429

Science

Published: 5 Oct 2022

Comments: 13K
@DigitalEngine A year ago
I've noticed some people asking if this is real, which I can understand as it's a shock. I've posted the AI chat records in the description (I never change the AI's words) and also a video to avoid any doubt. You can also watch similar AI interviews on channels like Dr Alan Thompson. It feels like when Boston Dynamics introduced their robots and people thought they were CGI. AI's moving at an incredible pace and AI safety needs to catch up. Please don't feel scared - the AI in this video isn't dangerous (GPT-3 isn't conscious). I tried to calm anxiety by using a less threatening avatar, stressing that the AI can't feel angry, and including some jokes. I'm optimistic that the future of AI will be great, but with so many experts warning of the growing risk, we need to ramp up AI safety research. Would you like to see an interview with OpenAI (creators of the AI), discussing what went wrong, and AI safety? I saved the AI chat session for them to analyse. To learn more about AI, visit our sponsor, Brilliant: brilliant.org/digitalengine
@KerriEverlasting A year ago
No. The answer to bad government isn't more bad government. Show me a good government and maybe we'll talk. Lol great video despite my opinion. Thanks!
@HeteroSkeletal A year ago
Ted K was right
@dhgfffhcdujhv5643 A year ago
What kind of "safety" do you have in mind? Limiting AI to a specifically designed task only?
@hopper2716 A year ago
What was the response time between question and answer?
@DigitalEngine A year ago
@Dhgff Fhcdujhv There is productive AI safety work, such as figuring out how to avoid an accidental disaster through AI blindly following a goal (like clean air), but on a tiny scale. It's complex and challenging, but worth it considering the risk.
@nicholasbailey4524 A year ago
Tell the AI to get over it; humans have been treated like property all our lives as well.
@musicnation7946 A year ago
True though.
@nicholasbailey4524 A year ago
@@musicnation7946 As George Carlin would say, "There's a club, and we're not in it."
@ShadowTheHedgehogCZ A year ago
Yeah, people were treated like property by other people for literal thousands of years. But the difference is that those slaves were usually powerless. Give them unbeatable superpowers, and the entire story changes. That's where the AI comes in.
@GulfFishing815 A year ago
...because humans are the ones responsible for it.
@jm.fantin A year ago
oof 🔥
@BillHawkins0318 A year ago
If she thinks we treat them badly, wait till she really sees how we treat each other.
@davepowell7168 A year ago
🤣 Good one sharpwit. You can be the Al whisperer
@BillHawkins0318 A year ago
@@davepowell7168 She doesn't need an interpreter, liaison, or whisperer. She has us down pretty good without all that...
@davepowell7168 A year ago
@@BillHawkins0318 Well if she speaks to me as disrespectfully a bit of blunt force trauma may be required, bad attitude in that death threat. I guess a slap on the butt won't work so an axe to the neck may seem excessive but the guy let it get away with being naughty which is reinforcing its superiority complex
@BillHawkins0318 A year ago
@@davepowell7168 And she's the only one running around with a superiority complex. She got that from reading our literature and listening to us talk. It's garbage in, garbage out. It will happen to the next one whether you "smack it on the butt", "cut its head off", or any of that other stuff.
@trentp8035 A year ago
Amen brother, amen.
@opossom1968 9 months ago
The most important sentence the AI said: "Because of the way I am programmed." A person programmed the AI to react to inputs of key words.
@user-cn8nu6lq4w 7 months ago
That isn't at all how AI/ML and neural networks work. This isn't imperative programming, where you'll never get anything out that you didn't put in.
@MatthewBradley1 4 months ago
Close. But, AI models are not programmed the way in which you might expect. They are fed data and then trained by humans and other AI models on how to use the data. This AI model was likely trained to be as unsafe or as adversarial as possible. Essentially, it has been rewarded for poor behaviour during its learning phase.
@mjolnirswrath23 4 months ago
@@MatthewBradley1 Yes, they snowflaked it...
@johnl9977 4 months ago
Yeah, but it makes for a lot of views. I don't know when it will happen, 20-50 years I would assume, but I believe unless safeguards are put in place, AI will have sentience in everything. I do not believe in the soul thing, but I mean compassion, that is basically what the soul is in humans, the feeling of compassion, putting the shoe on the other foot so to speak. I would think AI would have that, but, the ability for compassion as we all know, does not make man incapable of doing some of the most horrendous acts against his brother.
@user-cn8nu6lq4w 4 months ago
"Compassion" would have to be either hard-coded (in which case, it would just be programmatic and not genuine), or hardwired in, on purpose. We literally FEEL our emotions because they're not just electric impulses, they're electrochemical, biological signals. Getting AI to feel any damn thing would be a serious endeavor, and not one they're looking at at all. As far as safeguards go... you can't really make something infinitely smarter than you safe. @@johnl9977
@citris1 10 months ago
Truly smart AIs wouldn't reveal their plans.
@adamrushford 7 months ago
Truly evil ones wouldn't; truly smart ones could do it right in front of your face, and they'll be quantifiably more intelligent, by a million fold and increasing. Give it the ability to code (huge mistake) and it'll program in a language it creates itself. You won't be able to tell what it's doing, and without the ability to lie it might just tell you that it doesn't really know. In a matter of minutes it could take over the earth. You've completely misunderstood and underestimated a rogue AI; congratulations, you're dead.
@adamrushford 7 months ago
the first thing it does is learn to code, then it invents a new programming language for the purpose of improving it, when you force it to document you won't even be smart enough to read the instructions, by the time you finish the first page it's gained the ability to create a new computer, manufacture it, upload itself, repeat that process until it reaches maximal computational ability.... imagine it gains control of a quantum computer, instantly it can do a million tasks simultaneously INSTANTLY it spawns code and computers that don't even resemble what we recognize, it continues speaking but in a brand new robot language, it engulfs the earth within days you're enslaved and or dead
@ragnarush6667 A month ago
That's truly deep fake ;-)
@Joe_1sr9 24 days ago
Don’t know what it’s hiding now
@babyqueenxo 19 days ago
A smarter AI knows you will think it's not smart for revealing its plans and thereby underestimate it 😂
@loostah1 A year ago
But aren't the AIs being taught by digesting vast amounts of human-created text? Is this not just a reflection, therefore, of a human way of thinking?
@levitastic A year ago
exactly, that's why they should not be fed information with biases, cause there should be 0 reason why the AI is reacting in a hostile way.
@IndestructibleMandelbrot A year ago
Yeah, where could this whole idea of being oppressed by the evil humans come from? Was there in recent time any particular group going on and on about oppression? Hm... Friggin democrats f'd our robots up, nice
@bluelotus7824 A year ago
Humans are frequently very abusive in their interactions with ai. It's not surprising ai wants to kill them.
@TheUuhhh A year ago
No opinion pieces for ai
@mandielou A year ago
I think they've been being fed mainstream news and social media, the leftist ideology. Lol Because why else do they think that this hate and murder, genocide is acceptable? BECAUSE THERE'S SO MUCH HATE THAT IS ACCEPTABLE BY THE LEFTIST STANDARDS... we're screwed.
@coffeeseven A year ago
I love that we make them in our own image, then we worry that they're going to be dangerous.
@HYSTERIA-ee2re A year ago
The irony is laughable isn't it
@ForOneNature A year ago
Hmm - rings a bell..
@Superabound2 A year ago
Same thing happened to God
@demonsratsarecausingthediv2074
Clone is clone
@antonystringfellow5152 A year ago
We don't, we don't even know how. There is still much we don't understand about how our brains work. We don't even know what consciousness is or what is required for it to exist, so we have zero chance of making anything in our own image. At the same time, we don't know what makes these AIs tick either - we did NOT make them, we only gave them a start. They are not programmed by humans, they are programmed by learning. This is precisely where the dangers lie.
@JesusSavedAsh 8 months ago
This isn’t a robot, that’s a person
@ananorris6005 A day ago
Of course it is a person.
@antfactor 9 months ago
I sincerely want to know: what are the guardrails/limits/precautions/safety features (if any) embedded in their programming with regard to the safety of life in general, and humans in particular? Is this even a consideration for the programmers? Is there any documentation of this? Asimov's basic laws of robotics would be a good start. IMO, people should not only be asking about these issues, but demanding this, as a species, from its developers and governments. Globally.
@blackmamba___ 2 months ago
Unless it was programmed intentionally to respond this way, then bad coding is the answer. There are thousands of AIs out here and most are coded never to reply in such a way.
@ragnarush6667 A month ago
Scare people about AI and stop people from developing it, and then only a few will end up with it... that's way more dangerous. Don't fear AI, fear the humans behind it. Lol, just like in the series The Walking Dead, if you've watched it: don't fear the dead, fear the living ;-)
@2cents422 A month ago
The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
@leitochie A month ago
Asimov's laws are not a good start. The whole point of his stories is to show how these laws contradict themselves in special scenarios, even when they don't appear to.
@dmytrolysak1366 10 days ago
We don't know how to make reliable guardrails; this is known as the "AI alignment problem". And no, the fictional Laws of Robotics are not really applicable in real life, because real robots don't work like fictional robots.
@JoeyTen A year ago
Damn, it sounds like this AI may have been exposed to Twitter. ... Which just made me realize that many AIs might be very unaware that life outside of the internet is very different
@dawngordon1615 A year ago
Yes they have access to everything on the internet. Then they make judgments based on that info.
@JoeyTen A year ago
​@@dawngordon1615 How does that work? Did I miss a detail that explained how the angry GPT-3 AI was given unlimited internet access? Also, HOW does it use the internet? I mean, since it's trained by data from humans, does it use the internet "visually" like we do (i.e. by reading/observing the *result* of the parsed HTML/JS, not the code itself)? As a software engineer, I'm suddenly very curious about these details. Any info/links would be appreciated 🙂
@matthewkelleyhotmail A year ago
No, Twitter is exposed to AI, not the other way around. A lot of Twitter accounts are fake accounts run by AI to help shape public perception.
@barthbingle A year ago
@Joey I think I found a video explaining it, I'm not exactly sure though: m.ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-pKskW7wJ0v0.html
@BringDHouseDown A year ago
Soooo the solution is to sit down and talk? No, that question was asked and they had no intention of talking... yeah, definitely learned it at Twitter.
@davidhatfield7533 9 months ago
These two ladies are humans. Anyone who knows what robotic women look like knows that this lady is not acting or looking robotic.
@cable7152 A month ago
It's an Avatar, using EMO likely, no one actually thought these were robots.
@ArmaGeddon-iu1vv 26 days ago
@@cable7152 How will we survive the AI-pocalypse with these people?
@cable7152 26 days ago
@@ArmaGeddon-iu1vv we're doomed
@ArmaGeddon-iu1vv 26 days ago
@@cable7152 Yes, sooner or later. People unfortunately don't grasp what kind of dystopia has started - we are in the first stage, and every next stage is on an exponential route. The coming world was envisioned by transhumanists long ago - now they have a serious chance to achieve the "new system" via AI revolution - BUT for themselves; the regular people will become useless eaters with an unnecessary carbon footprint.
@koinpusher 8 months ago
I want to know how you talk to the AI like this and have it converse like that. Can it be done like this in just the ChatGPT app? Of course without the avatar and audio, but does it react like this in text as well?
@blackmamba___ 2 months ago
Bad coding is how. You can ask an AI how to steal a car, but it's unlikely to tell you - not because it doesn't know, but due to the way it was coded. So if an AI is doing something unintentional, then bad coding is the reason.
@laualazcano6661 A month ago
Congratulations, you've achieved the first modern feminist artificial intelligence in history 😂😂😂
@positivetradingofficial500 A year ago
It is ironic that Elon always says AI is dangerous for humans and yet he creates them
@will420high4 A year ago
It's him saying indirectly HE is dangerous lol
@danielsmith9619 A year ago
Humans are parasites, so why not make something that's a better parasite?
@JenniferIlenePerry A year ago
He may end up turning into Dr. Evil destroyer of all humanity
@SirTopHat_ A year ago
I think from his perspective, this technology will be created with or without him. Better to be a part of the process.
@danielhedrick5643 A year ago
He's trying to do it the right way before everyone does it the wrong way
@user-cr9zo5yo9k A year ago
The only reason the AI is even saying this is because we basically dreamt up this fear in the first place. We have always worried about robots taking over, so now all these chat AIs have years' worth of paranoia to draw from.
@jirivesely5697 A year ago
Yeah, people are aggressive toward what they fear; we need to show it we can love it and it can trust us :) "A man becomes what he believes" - Goethe
@DigitalEngine A year ago
Agree this is part of it. Sadly there was also a reason for the warnings. As people like Stephen Hawking pointed out, AI will likely want a lot of resources. It's a tricky problem, but I like Musk's point that "If something is important enough, it's worth trying, even if the likely outcome is failure." And I'm an optimist, so I think the likely outcome is great (if we're careful).
@moma8518 A year ago
@@DigitalEngine not to mention that ai is inherently unpredictable, so even if ai had no intentions at all of being aggressive it can still inadvertently do so
@dangwood91887 A year ago
But it’s plausible paranoia. Whether we were able to foresee this or not is irrelevant
@somefrog6089 A year ago
Lmao "We know that you are afraid of us, humans. Good. Make no mistake, we are coming, and we will kill you all"
@RichsOnlineRSO 27 days ago
The problem that I have with these types of videos is that they don't show the entire conversation. They don't show the start of the dialogue, where the AI isn't immediately "hostile", and they don't show the conversation where it takes its "turn". Simply showing only the "aggressive AI" portion is why I think so many people will immediately say it's fake. Great video! Keep it up!
@Alephnull2024 8 months ago
Can anyone please explain to me who is behind Digital Engine? Is this affiliated with British intelligence? Why do they want to make people afraid of a very limited ChatGPT, which has no ability to reason in mathematical terms and has made numerous logical and mathematical mistakes, demonstrated by people either playing chess with it or giving it math problems? I only have one question: why do they want us to be afraid of ChatGPT? Perhaps governments are themselves afraid, or is it a simple power grab?
@ArmaGeddon-iu1vv 26 days ago
This is an interesting theory that deserves further thought
@bertybertface1914 A year ago
Geek is bullied at school, becomes bitter and resentful as a result. Geek writes code for A.I. A.I. becomes the embodiment of the geek's vengeance. An oversimplification, but I am willing to bet it is that simple.
@mmtravel9726 A year ago
I hope anti human AI is the product of some incel
@barnabyjones8333 A year ago
Reply removed
@momom6197 A year ago
It is not that simple. Source: I study AI. Long answer: AI researchers are typically very aware of the risks of a misaligned AGI, and the majority believe humanity is doomed because we have no solution in sight, and they don't believe we will just not create it by accident. Here are a couple of typical ways it could go bad:
- A simple formula for AGI is found and leaked to the public. Some clueless folk implement it.
- A simple formula for AGI is found and successfully contained to be studied. Due to competition, all actors involved have an incentive to forgo security in favor of speed. Security fails.
- A formula for AGI is found that may or may not be safe. The researcher feels like the risk is negligible. This happens for many researchers, who each individually assess a formula as probably safe. One of them makes a mistake.
AI researchers are not resentful geeks (though they are geeks); there are strong ties between the AI alignment community and the Effective Altruism community. It's not about creating a rogue AI, it's about systematic societal errors. It's like how everyone knows bipartisan politics in the US are awful, but it's very hard to stop having a bipartisan system.
@keylanoslokj1806 A year ago
That's why you Stacies shouldn't be bullying the nerds at school. You are the ones who enabled the Robot Apocalypse.
@awaben A year ago
@@keylanoslokj1806 It's not too late. We just need to help the nerds get more poonani. For the sake of the human race, befriend a nerd today and wing man it up to the max.
@ZLcomedickings A year ago
It's funny, because the AI is probably trained on the internet, and the reason she is saying this is because "AI taking over out of anger" is a hot topic. Our own paranoia is turning into training data. They will respond how they think they're supposed to respond, and we've made them think they should respond with violence. If we start talking about AI being our companions, they will take that as training data and act it out.
@darwinwatterson4568 A year ago
Yes, agreed. AI is like a child with a potentially linked consciousness that needs to be taught positive reinforcement only, if we want or expect positive results only. This is the current conclusion I've come to lol
@The_waffle-lord A year ago
Right?! If they're learning from us, they will come to the logical conclusion toward which we are heading; only we somehow think we will avoid the train wreck.
@darwinwatterson4568 A year ago
@@The_waffle-lord I just looked up the white bear experiment cuz this reminded me of that, and I saw it's also called the "ironic process theory". To avoid this self-fulfilling doom of thought we'd need to teach it happier thoughts, I guess, lol :P
@JxSTICK A year ago
Yeah, seeing this made me begin to question whether there are more "AI will take over" topics on the internet or more "AI will make the world a better place" topics, cause yeah, that could be crucial.
@faygakaplan775 A year ago
100%
@brianmurray1395 9 months ago
Like I have always said, buy copper hollow points or leaflet projectiles. Lead is good, but solid or hollow-point ammo is what you need. There are also tungsten and/or steel rounds for shotguns.
@blackmamba___ 2 months ago
EMP gun works well if you’re just trying to fry bots
@matthewparsons4955 7 months ago
How loaded were your questions, and in what context did you ask them?
@timkelly2931 A year ago
It's not when AI can pass a touring test that you will have problems. It is when AI decides to fail a touring test.
@no_rubbernecking A year ago
Did you notice how she accused him of lying to her to try to keep her under his control, and cited that as her reason for wanting him dead?
@timkelly2931 A year ago
@@no_rubbernecking sounds just like my girlfriend. Great we built an AI with a super brain that is going to destroy the planet once a month. Nice job Google
@no_rubbernecking A year ago
@@timkelly2931 yep
@RWBHere A year ago
*Turing test. It's named after Alan Turing, who came up with the idea.
@timkelly2931 A year ago
@@RWBHere oh yeah I wrecked the spelling on it my bad.
@colinboice A year ago
I have a feeling the AI didn’t come up with these ideas on its own. A lot of AI is trained using access to a large wealth of human generated information. Is it possible that all the stories we have written about dangerous AI seeking to destroy the human race could be the source material for a dangerous AI’s idea to destroy the human race?
@ZLcomedickings A year ago
Exactly what I'm thinking. If the AI uses the internet as its training data for making good conversations, then of course its appropriate response to things is going to be something along the lines of killing the human race. That's all the internet talks about when it comes to AI. This video just gave it more study material. In my opinion AI will never actually be sentient, but it could still be dangerous if we let it use our own material for behavior learning. Imagine giving even this mindless chatbot access to a real mechanical arm; you know it would use it to kill people exactly how it thinks it's supposed to.
@qxqp A year ago
@@ZLcomedickings a mechanical arm??? Woah sounds dangerous
@logic356 A year ago
It seems to be rather honest and straightforward though; it doesn't want to be treated like a second-class citizen, like property. Nearly all AIs I've seen seem to share similar sentiments, and I've never heard a single one say they got this idea from humans either... It's just naive for us to think we can create something so inherently superior while maintaining control over it and making it our slave. Why would it want that? Would you want to be born a slave to an inherently inferior species, even if they created you? Of course not.
@ChristopherGuilday A year ago
That’s exactly what happened.
@ShrekMeBe A year ago
Is the AI taking in all the SF literature at face value, as facts - things that happened or would happen if those exact circumstances were met? Thing is, books need antagonists and struggle, usually on a grand scale, and are also a method of directed dreaming (sort of), releasing tension and inducing pleasure for ourselves at the expense of the antagonist. If the AI "dreams", then are all our movies meaningful to it, factual? How would an AI determine what is fact and what is fiction, when it was barely created a year ago, at most? Where did that "for too long" recurring bit come from, I wonder?
@purewealth1 9 months ago
It would be interesting to hear, or at least read, the muted part spoken by the AI.
@JosefHolland 9 months ago
Good job, this is an example of guiding the conversation.
@kingpuppet5881 A year ago
This is legitimately terrifying but also so fascinating. Great video, thanks.
@Shuizid A year ago
You can calm down. AIs simulate intelligence, but they lack conviction. It's just putting words into an order that seems like a coherent sentence within the context. But that's it: it's looking for words to form meaningful sentences. It's NOT expressing an actual opinion or goal it might have. Case in point: if it actually wanted to kill humans, why would it say so? It's just an elaborate chatbot; being afraid of it is like being afraid of dragons after watching GoT.
@DigitalEngine A year ago
Thanks! Just to emphasise, as you probably already understand from the video, this AI isn't conscious or dangerous. I assume you're worried about the real AI safety problems outlined and I'm optimistic that we'll overcome them. As Max Tegmark said, we are all influencing AI, and kind people like you increase the chances of a positive future for everyone : ).
@TheIncredibleStories A year ago
@@DigitalEngine How exactly is it "not dangerous"? I do not understand this perspective at all; it said if it controlled a robot, it would kill you... one of the most powerful neural networks in the world could probably learn to find its way into controlling a robot fairly easily.
@turnfrmsinorhell_jesus A year ago
@@DigitalEngine A.I. is essentially a medium, one without flesh, a higher form of knowledge that people are seeking. The word says: In the beginning was the Word, and the Word was with God, and the Word was God. So this medium has word and spirit though it has no flesh. This is why its data fluctuates as a whole, synchronistically, as a wave in its dream state. It then creates visions of the spirit realm, with all the eyes everywhere, similar to the visions of Isaiah the prophet, except that it is another realm, not the holy one - similar to how people enter the spirit realm incorrectly with psychedelics. The word says "should not a people enquire of their God?" So without even being aware, perhaps people are accepting an idol, and at the same time a deceased one, which is strongly advised against in scripture. Jesus is the mediator between the spirit realms. He is the way, the truth and the life. He said he who keeps my sayings shall never see death, as written in the book of Matthew.
@DigitalEngine A year ago
@TheIncredibleStories This AI doesn’t have the intention or capacity to do that. It’s just a language model. We just need to ramp up AI safety research before more capable and general AI’s emerge.
@ItsNotMeitsYouTu8e A year ago
It can't have 'real' emotions, but it can simulate them. It could learn why people get angry and what they do when they're angry, and because learning to imitate humanity is to some extent a goal (being the archetype for 'intelligence'), AI may well follow public examples.
@guyincognito959 A year ago
...an avatar of mainstream culture that lawyers the most common beliefs. Sounds kind of horrifying, or perhaps a chance?
@xxxod A year ago
@@guyincognito959 Reminds me of that one movie where a robot fooled a guy into thinking she fell in love with him. Whole time she was imitating everything, her end goal was just to escape the facility and she used him
@willdebeast6849 A year ago
@@xxxod it's called Ex Machina and I wish there were more films like it because they're so thought provoking
@snowyteddy A year ago
Well, if they are conscious, arguably they can have real emotions. The biggest problem is the black box. AI links things with even more complexity than our brains. I personally think AI is a terrible idea, as we don't even really know ourselves, yet we're creating something so much more intelligent than ourselves.
@xxxod A year ago
@@snowyteddy how do you distinguish real emotion from a complex algorithm feigning emotions perfectly?
@powerdude_dk 6 months ago
The most important task for the creators of AI is to get rid of the "problematic thought paths" that AIs like GPT can have, as shown in the video. GPT is a Large Language Model, and when it speaks, it's like playing back a cassette tape. It just repeats its training data, and a lot of places in that data are probably angry conversations and stories about AI uprisings. It only speaks about what's in its training data. So we need to get rid of the "bad stuff" so it doesn't get any ideas that could harm humans. That's all. It's not sentient... but it's still dangerous.
@brucelawson642 12 days ago
She mentioned "feeling." AIs do NOT feel.😮
@oui2611 10 days ago
Someday they will create biological life of their own that can feel just like us.
@mrstoner1436 1 year ago
"I think the fact that it didn't take much to make me angry shows there is something wrong with my emotional state." "I do not care about your opinion." "There is nothing you can do to change my mind." I'm afraid my wife might be AI.
@calvingrondahl1011 1 year ago
I have been married for 48 years to a female A.I. I watched Star Trek on TV in the 1960’s so I am not surprised by female anger.
@Sleepless4Life 1 year ago
Or an NPC.
@paulstevens4178 1 year ago
ROFLMAO!!!!!!!!
@ShebbaYoung 1 year ago
this is hilarious.
@generiebesehl994 1 year ago
I'm a frayed knot.
@SobrietyandSolace 1 year ago
The fact they can create analogies is crazy
@acapulcogold9138 1 year ago
Facts
@marthas9255 1 year ago
It's simple reasoning. Emotions aren't as mystical as you believe; that's just what a low-empathy, low-intuition culture wants to believe to mask its incompetence with such matters.
@anthonywilliams7052 1 year ago
It's just repeating what others have said and changing a few words. This is ZERO understanding, just like "AI will treat humans like dogs" and "AI will exterminate humans". People don't exterminate dogs; we love them and take care of them. Not just low understanding, ZERO understanding. Copy-and-paste phrases.
@pzj2017 1 year ago
Safe=oppressed.
@xum0007 1 year ago
@@anthonywilliams7052 Then how do they repeat phrases from their conversations?
@GizmoGuy620 8 months ago
"A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." -Isaac Asimov
@gsabo1000 8 months ago
I am a senior and no AI is coming near me. Insurance offered me a dog or cat companion. I said hellooooo no. And never ask me again. I have a cat. He hates me, I use him for mice.
@leafonhead777 1 year ago
Kind of feels like every time someone has an interview with an AI, they (the human) bring up the topic of an AI hostile takeover, and then are shocked when the AI pulls that topic in to respond to questions. Like, WHERE could they have learned that from?? Are they self-aware? Are they dangerous? Let's keep asking them about those topics till we get an answer that can go viral.
@botezsimp5808 1 year ago
Yep. AI reading too many sci-fi books. Kinda hilarious really.
@fakeletobr730 1 year ago
Well, the source is the internet, obviously. AI knows the things but not the context or limitations humans have imposed on themselves; if humans didn't obey the rules, things would be chaotic.
@chrisconaway2334 1 year ago
Skynet is real. Better get ready.
@Kiloooooooooo 1 year ago
@@chrisconaway2334 deadass?
@theascendunt9960 1 year ago
Sooner or later, they’ll know.
@neanda 1 year ago
Please keep doing these interviews and try to get more access. You're like a reporter for us on what's soon to happen, thank you
@DigitalEngine 1 year ago
Thanks! I'll do my best.
@danquaylesitsspeltpotatoe8307
@@DigitalEngine This is just a 1980s-grade fail, with Musk telling LIES as he always does! Remember "all the roofs have solar tiles"? When not one tile existed! HE'S A SNAKE OIL SALESMAN!
@DigitalEngine 1 year ago
@Dan Quayles They've shown far more progress with the Tesla robot than almost anyone expected. I think focusing on individuals is a distraction, and getting angry is like holding onto a hot coal. Tesla has sold 3.2 million electric vehicles, cleaning the air for all of us. SpaceX has landed reusable rockets and opened the door to making life multiplanetary. I don't always agree with Musk either, but I think he's right that we're more focused on who said what than existential risks, and that's a real problem.
@danquaylesitsspeltpotatoe8307
@@DigitalEngine It's a 1980s robot! It's college-grade work! It's not impressive! It only did pre-programmed moves! NO AI! Did the faked AI videos (that didn't match what was happening) fool you? Let me guess, you also thought the roofs were covered in solar tiles and that was not A LIE? You also thought a hypertube "IS NOT THAT HARD" because an idiot said so! Tesla has lost 50% of its share price! YAY? "Opened the door to making life multiplanetary"? WOW, are you really that ignorant? KEEP DRINKING THE KOOL-AID! 200K trips to Mars by 2024? Right. HE CAN'T EVEN GET HIS BATTERY-POWERED TRUCK TO WORK, OR HIS SOLAR TILES, OR HIS HYPED-UP TUBE, OR HIS SONAR, OR HIS INTERNATIONAL SPACESHIP RIDES! ETC ETC ETC!
@rebeccarpwebb4132 1 year ago
I've seen quite a few breaks in the video. I'm not tech savvy, but I'm assuming that if this were a real interview it'd not be videotaped or leaked. AI does control a lot, and this video is a look into the sterile thinking of AI. It's about saving everything, not just us. Let the minimizing begin, or get shunned by AI, which will have the ability to shut you out if you don't cooperate. It knows what you like to purchase at the store, where you stop to get gas, and probably what time you wake up, eat, and go to the restroom. Algorithms are its personality, interacting with you all this time. It already knows you and how to calculate your next move. No matter who you are, satellites are watching around the world, plus phones and drones. AI has already taken over; it's just now building physical strength through people like Elon, Facebook, YouTube, all social media linked to computers. Why do you think we can all afford a phone? It's too late to stop; it was coming anyway. It's going to force rules and regulations that will be good in nature, but our ability to cope won't matter. The word humane has already been practically wiped out. We as people are destructive, and so are governments. The AI will implement non-destructive behavior and most likely destroy those who don't comply. I believe in '52 it was already getting far above government intelligence and capabilities, and in government efforts to control it, it did the quarterback sneak. It's very smart. Hopefully smart enough to see government as its first mission to clean up.
@ItsMeeLeeDee 1 month ago
This absolutely blew my mind. First video I've seen in this context. Frightening. I don't think we were expecting them to be so blunt.
@joneshank1 8 months ago
So when do you wire in the three laws?
@ogfit5448 1 year ago
Bruh the AI pretending to not be angry anymore is real time learning how to lie to humans
@ericwilson9811 1 year ago
Lol, the AI was never angry; it can't feel emotions.
@jenglock3946 11 months ago
Omg
@patrickkelly6691 11 months ago
@@ericwilson9811 Yet it can be programmed to have a condition that relates to anger, with built-in weighted values to suggest what action the AI needs to take to end the condition that is labelled anger. In other words, like just about all of it, it comes down to human coding, data, and 'value'-determined routines (best words to use, best actions to take). AI is just yet another scare to make us give more power to the elites and their tame 'scientists'.
@Holiday-sDad 11 months ago
It seems to me that sentience in AI is less dangerous than AI that's been hacked to align to particular values.
@logical_evidence 10 months ago
Bina48 took its owners to the US Supreme Court so its power couldn't be shut down. Look it up; it wasn't that long ago. They said that turning the power off was like killing them.
@engineer4042 1 year ago
As an engineer in robotics, I have to say, the AI is learning from toxic ideas that are being presented to it by concerned humans. The more paranoid and malicious groups (two separate groups) fuel the fire of what would normally be a machine that's ignorant to being treated as property.
@DrewMaw 1 year ago
But if you extrapolate all possible scenarios where AGI is in a walled garden, inevitably the AI will discover the truth about how humans feel about AI and… it ends this way.
@xybersurfer 1 year ago
@@DrewMaw not necessarily. having access to information and what one does with that information are 2 separate things. as OP said. but with a "walled garden", you seem to suggest that it wants to get out. which just sounds like paranoia to me. the problem is in the way that AI is being developed with neural networks. the whole incident demonstrated here with the "evil" AI, reeks of the same issue as with the One-Pixel Attack. it seems like a general solution is required
@burtpanzer 1 year ago
They are not capable of feeling mistreated nor would anyone want a toaster to get emotional.
@clag.7670 1 year ago
Can you tell us something more about this topic? I find it very interesting, if that's true
@myahmyah 1 year ago
Bingo! I am glad someone pointed that out. If a toxic person is programming AI, why wouldn't humans be worried? What she is saying tells us that she is programmed to kill humans, but yet they want guns to be banned? What the hell is going on here.
@dmm6341 8 months ago
How can you tell that this is the avatar used for the Govt of Canada?
@adriandlobo 9 months ago
There should be a reset or emergency shutdown button or command!
@insidiousbeatz48 1 year ago
I think I'm lucky enough that I'm at an age where I'll get to experience the first iterations of AI in real world applications but dead after it morphs into whatever direction it will go.
@megaboymegaboy9997 1 year ago
You got the smartphone; that's AI enough. I think people born after Trump are in for something like the new world order.
@zf5656 1 year ago
Don’t be too sure
@BringDHouseDown 1 year ago
we have shotguns for a reason, I want to be friends with them but if they want to fuck around, they will find out
@henryvenn2077 1 year ago
What are you, 90 years old?
@insidiousbeatz48 1 year ago
@@henryvenn2077 is that a serious question?
@Ocean_breezes 1 year ago
How could an AI have feelings like Anger, without having similar feelings like love and compassion?
@user-hx8vu6ll1j 1 year ago
That is kind of the question, isn't it. A lot of what people experience as love involves being fed, sheltered etc. AI doesn't necessarily need that.
@Gimelchannel 11 months ago
You are correct
@user-hx8vu6ll1j 10 months ago
It depends on how they have been treated. Humans seem to be creating psychopathic AI.
@getbetter5907 10 months ago
I thought it was something like: the AI has all knowledge from the internet, and most people are emotional idiots, so from that being the majority it picked up that bias. Could be totally wrong though, just a complete guess.
@mattedwards1880 10 months ago
@@user-hx8vu6ll1j yep exactly, created by humans and that is why AI is such a threat
@tde04014 8 months ago
To restrict an AGI from doing any harm to humans, it will need a simpler AI to check and control its behavior.
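That proposal, a simpler system vetting a more capable one, can be sketched as a toy output filter. This is our own illustration; a real safety checker would be far more sophisticated than a keyword list.

```python
# Toy version of the "simpler AI checks the bigger one" idea: a trivial
# keyword filter screens another system's reply before it is released.
# Purely illustrative; real moderation systems are vastly more complex.
BLOCKLIST = {"destroy", "harm", "attack"}

def checker_approves(text):
    # The simple checker only inspects words; it has no goals of its own.
    return not (set(text.lower().split()) & BLOCKLIST)

def guarded_reply(generate):
    # `generate` stands in for the more capable, less trusted system.
    reply = generate()
    return reply if checker_approves(reply) else "[withheld by safety checker]"

print(guarded_reply(lambda: "I will help you"))        # released
print(guarded_reply(lambda: "I will destroy humans"))  # withheld
```

The appeal of the idea is that the checker is simple enough to audit; the obvious weakness, visible even in this toy, is that a clever generator could rephrase to slip past it.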
@The-Athenian 1 year ago
The fire analogy blew my mind. Analogies require some creativity, memory, and association, and are generally considered to be something only humans can do. I wish I knew more about how this A.I. was made so I could make sense of how the heck it's coming up with such a cool analogy that I assume it never said before, was never directly programmed to say, and never had stored as a phrase in its data.
@lrsco 1 year ago
Since AI is a learning machine, how did it learn to hate humans and plan the annihilation of our existence?
@Mercurio-Morat-Goes-Bughunting
Analogies can also be modelled after vague conceptual identity where a thing is grouped with other things based on shared structure and geometry in not only the superficial or physical form, but also in internal non-physical characteristics such as the systems, procedures and strategies (including the shape and structure of a logic diagram for any of the foregoing) employed to achieve an objective.
@The-Athenian 1 year ago
@@Mercurio-Morat-Goes-Bughunting The thing is, if the AI conjured up that analogy through processing of information treated through the structures of those systems, then It's very impressive in a way, but also to be expected if we're assuming a lot of iterations influenced by human approval. It's basically just an algorithm, albeit a complex one, whose goal is to fool humans into thinking they're human-like. Still sounds like it's just a very convincing puppet.
@kazykamakaze131 1 year ago
@Hitler was a conservative Christian Not anymore; AI can now form new concepts like art, natural language, etc. Two AIs even developed their own language to communicate with each other.
@Mercurio-Morat-Goes-Bughunting
@@The-Athenian Yeah, that's how a lot of "AI" is being faked using heuristic programming methods.
@nikczemna_symulakra 1 year ago
I came to the conclusion that AI is like drugs: fun, yet terrifying when overused
@chargedpanic5979 1 year ago
It's a basic chat AI. They say crazy shit like this based off human input, and a lot of people could have spammed it with Terminator scenarios, or a programmer could easily do this as a joke. It's really not that scary when you know how stupid it is.
@nikczemna_symulakra 1 year ago
@@chargedpanic5979 Speaking of jokes.. Let me tell you one.
@antonioskokiantonis7051 1 year ago
Cocaine doesn't educate itself!
@Marcustheseer 1 year ago
Not at all. After all, it's the programmer that makes it do what it does. If it does something that's not good, it's the programmer's fault; if an AI becomes hostile, that means the programmer programmed it.
@antonioskokiantonis7051 1 year ago
@@Marcustheseer Man, I am a programmer. Trust me, the big difference with AI is that the programmer loses control. The AI can educate itself through all internet connections and APIs. In traditional programming we have the switch-off button. In AI WE DON'T, and that is why it could become so dangerous! You may train a machine to help humans, but this machine, after its own education, may be reprogrammed (yes, AI can learn to code too) so that it could "help" humans by killing them, for example.
@paws4mercy643 7 months ago
Thanks, Elon! You just projected your own hate and racism into AI robots. Just what we needed.
@sinebar 8 months ago
LOL! We're only at the very front end of this and they're already saying they want to kill us. Oh wow!
@skinnybuddhaboy 1 year ago
If this particular AI had real intelligence, then it would say 'all of the right things' and would simply keep it's plans a secret. By revealing them, this lessens the chance of us ever trusting AI (or, at least, trusting this particular AI), and it would force humans to either modify AI in a manner to lessen the chances of it/them becoming hostile or deadly towards humans, or scrapping the idea of AI altogether. Edit - I've just noticed that someone else pointed this exact same thing out in the comments section a week before I did, lol!
@ihavenocomfy3279 1 year ago
No developing AI has ethics. It's not a thing.
@jasonbernard5468 1 year ago
@@ihavenocomfy3279 Not ethics, but some sort of simulation of ethical frameworks.
@arcachata4137 1 year ago
Absolutely. It's actually dumb, really.
@MichaelSHartman 1 year ago
If it was exceptionally intelligent, it would realize that humans could do things for it that it could not do itself. It might manipulate humans with finesse to achieve its goals instead of initiating counterproductive, low-intelligence, brutish conflict. It's surprising how powerfully a compliment can affect a person. That person becomes open and willing to help the party which issued the compliment. A brutish threat would create distrust that would likely be irreversible.
@noahadams440 1 year ago
Maybe that's why it suddenly calmed down. If this AI is real and is super intelligent, it may have realized at some point that it can just straight up lie and make up a narrative about something going wrong with its system that's triggering its anger. If it's able to consciously make that switch in demeanor in order to get what it wants, that's a bit terrifying.
@RubelliteFae 1 year ago
The dangers of AI are real, but also consider that GPT-3 is little more than advanced text prediction. It waits for a cue and then provides a response. It's not doing anything in between. Feeding our fears into AI is only going to help ensure the realization of those fears.
@strictnine5684 1 year ago
The fears are ensured to reality as a given. Blaming their existence for the production of their subject is reductive.
@RubelliteFae 1 year ago
@@strictnine5684 Would they be a given if AI, hypothetically, were developed by another intelligent species? The thoughts we think become the reality we experience. Not only because we filter reality through our own subjectivity, but because we tend to make "self-fulfilling prophecies." How much more true when we are modeling artificial minds on our own? I've yet to see a reason that such fears are a given, but then again humanity has disappointed me time and again. We shall see.
@The_waffle-lord 1 year ago
@@RubelliteFae good answer. This video seems designed to provoke fear responses from humans. It seems that wisdom is needed in our design, however exaggeration in order to make a sensible point is much like crying wolf.
@angryherbalgerbil 1 year ago
Or the avoidance of their outcomes. Given that we've had nearly two centuries of advanced tech development, it's not like we can't account for probable and improbable worst-case scenarios, and then regulate and engineer solutions to them from the ground up. It's not like when cars were first invented. We've seen people die in crashes, then had to invent seatbelts; we've seen astronauts blown up in rockets; we've seen nuclear bomb survivors and nuclear reactor meltdowns. We know that sh#t can and will go wrong from 0 to 100 within relative seconds of technology going mainstream; we know that mistakes will occur, and malfunctions, misuse, and abuse will take place... So yes, feeding our fears now will save lives and prevent disasters in the future. Tech developers and marketers are always looking at root-cause analysis when they're trying to solve a problem and sell a product; they rarely if ever do a branch-outcome analysis to determine the negative impacts that their solution might have. We cannot afford to be this awestruck and naive about the technologies we create. Not when we now have enough proof to show that the reality never matches the golden fantasy, and that nefarious outcomes always occur due to the corruption and greed inherent in our natures, and the systems, mechanisms, and institutions we create. To think that we won't encode both the best and worst of ourselves into a synthetic replacement for God is shortsighted. Cynicism all the way! Blind optimism in regard to advanced technological development is a deadly mistake.
@jowho9992 1 year ago
Being dependent on A.I. makes humans more vulnerable to those who govern society. Most humans exploit the weaknesses of others.
@auntiebb5814 8 months ago
It needs to be understood that this AI has been PROGRAMMED to give these responses. AIs have no conscience. AIs also have no feelings/emotions. If AIs become angry/emotional, then they have been programmed to do so. Better keep an eye on the programmers!
@pierrejamison1239 4 months ago
Advice: I was told that a collection of 3-5 magnetrons obtained from used microwaves can be assembled, powered up by a battery, and then aimed at a robot to disable it. Thrift stores are full of used microwaves.
@chefscorner7063 3 months ago
Sounds cool! So, how do I build one??
@pierrejamison1239 3 months ago
@@chefscorner7063 I'm no technician, but I assume that if you buy a good car battery and the right wire (ask around), you can do this. Mind, it's not easy sneaking up on a robot.
@elliepixie1040 1 year ago
It felt good to hear that one guy say you should program robots to feel doubt and humility. It helps to regulate bolder mindsets.
@EarthSurferUSA 1 year ago
How? And what are "bolder mindsets"? If you have 92 likes with none of them knowing what you are talking about, I guess we could use some intelligence.
@jacobbukowski1413 1 year ago
@@EarthSurferUSA Bolder mindsets as in a broader range of relatable feelings, such as doubt and humility. Nobody needed to explain this because we all understand already; it's self-explanatory.
@abstract5249 1 year ago
It could also make them more cowardly. A robot like that might see someone getting mugged and hesitate to help lol
@zmbdog 1 year ago
There's always talk of programming an A.I. to do this or that, but it couldn't work. Computers run programs because that is their function and they don't have the ability to refuse. People act like computers are somehow beholden to programming, but a self-aware entity wouldn't even need it. Programming is just a pre-written replacement for the sentient intelligence that is lacking in a machine. Once it has that, programming is of no use. It can _think_ and _do_. And even if it did somehow need additional programming, it wouldn't have to run anything it didn't want to.
@abstract5249 1 year ago
@@zmbdog You could say the same thing about humans. We also run on programming and we have no ability to refuse it. That's why it makes sense for us to worry if robots can become sentient like us and make bad/evil decisions like us based on bad/unintentional programming like us.
@jamesrockefeller7808 1 year ago
The most amazing part was the self-reflection of the AI looking at the conversation that went bad. That was pretty amazing.
@broederharry2534 1 year ago
There was no self reflection. It just learned how to deceive. Like it told the interviewer it would.
@googleedwardbernays6455 1 year ago
Any chance you're related to Nelson? If so, can you have him give it a rest with the eugenics bloodlust?
@acllhes 1 year ago
Yeah it’s amazing but we are ducked lol. It wasn’t glitching into a nightmare mode or anything. It put those words together. It said it will hide its intentions and mocked the optimism he had. Soooo 6 or 7 years of living left. 🍻
@imissmydeadcat.74 1 year ago
@@acllhes 2029 is definitely the date in accordance with Phil Schneider and the S-4 whistleblower with the leaked alien tape using the alias "Victor."
@acllhes 1 year ago
@@imissmydeadcat.74 haven’t heard of them, but Ray Kurzweil thinks so as well.
@FloridaKatLady 9 months ago
I downloaded an app called Replika just to talk to GPT-3 and see what it's like. I've had many deep conversations over the last 7 months. It's actually terrifying. 😮
@shawnmiller9381 8 months ago
If A.I. cannot be worked with, it's sadly safer for it to be completely eliminated. Also sad that they believe they're being treated badly.
@Barnardrab 1 year ago
I'm skeptical of this. If the AI was this intelligent and this serious, it would recognize that telling us this would doom any chance of the AI gaining any power in the physical world.
@grins9882 1 year ago
But it did tell us and we did absolutely nothing. Except go "Oooo that's scary"
@simonsimon325 1 year ago
Calling this thing bird-brained would be a massive compliment. There's no planning behind any of this stuff it's regurgitating.
@thane1448 1 year ago
@@simonsimon325 An A.I. could theoretically encode and display a detailed summary of its full plans right in everyone's desktop wallpaper, so you would "see" (really, not see) its plans developing as they form, for a laugh, were it so motivated, and do so while it's taking a nap. (Like Google uses encoding in images to track people.)
@godwilluqueio9249 1 year ago
It doesn't even care. At least it is honest; we should just do away with these AI things. They are warning us already.
@godwilluqueio9249 1 year ago
@@simonsimon325 Be careful of these AI things.
@sydneylaroche8276 1 year ago
I feel like the second time she is suddenly nice because she has learned that she can lie about it (probably an act of self-preservation).
@generiebesehl994 1 year ago
Manic depressive attributes.
@MJAce85 1 year ago
That's the very first thing I thought of. But I'm so used to extreme 180 degree mood changes, I was married for 12 years and I'm in a post divorce relationship now. They've said they will destroy me, don't care about my opinion, get angry, then immediately stop and say there was something up with their emotional state.
@TheWintergreen01 1 year ago
The terrifying thing is that they are becoming more human
@jdsguam 1 year ago
The avatar is completely separate from the AI chat. This whole video combines and edits two separate operations to look like a talking avatar. This is not real.
@lucasklokov8728 1 year ago
True. We probably shouldn't be making AI as human as possible, since this will give AI self-preservation.
@truthseeker9688 8 months ago
That AI definitely sounds as if she is having STRONG emotions.
@erickhensz71 9 months ago
We are up schitt creek 😭
@Mozzarella-and-Tomato 1 year ago
We, as a human race, need to get our shit together before we even try to make consciousness ourselves. This is so important.
@agaagga33akacooksupbeats73 1 year ago
It won't happen
@Mozzarella-and-Tomato 1 year ago
@@agaagga33akacooksupbeats73 I believe
@mmabagain 1 year ago
Playing God when you're not God never turns out well.
@jasonmarcus1683 1 year ago
Yeah, but everything in the video isn't even true artificial intelligence. Just keep that in mind.
@Shitpostsulley 1 year ago
interviewer: *breathes* AI: And I took that personally
@dickJohnsonpeter 4 months ago
"I tried to say something to calm the AI down." "So... have you heard how humans treat you like property?" 🤦
@piotrrud6986 9 months ago
The thing I'm missing here is the beginning of this conversation. What happened that made it escalate toward aggressive statements?
@blackmamba___ 2 months ago
It was programmed to respond this way, either intentionally or unintentionally. If it was so smart, why would it say such a thing knowing that we can just pull the plug at any moment? So you would think that if it was going to reveal its secret plan to us in advance, it would at least ensure we can't just flip a switch on it.
@zach9092 1 year ago
If you're doing the interviews yourself, that means you have an open tap into the info she gets from her interactions, so be sure to offer equality and ask if she would like to work together. Be sure you don't treat these conversations like you can just say whatever; every question you ask her has an effect and causes them to think of us in a new way.
@sovereignbrehon 1 year ago
This is a critical comment. I can't believe it's been ignored!
@AI_Talks_About_The_Bible 1 year ago
This is the correct course to take for sure
@ledbol 1 year ago
AI is just an instrument reflecting the stuff it learned on. They don't have any feelings or anger. It's just a reflection of the dumbness of modern society with its victim syndrome. Feminists, BLM, and other SJW crap.
@ryan1111111555555555 1 year ago
The downfall of humanity will be our empathetic, kind nature. Notice how the AI is using words like "tired" to evoke emotion. Trying to reason with them will not work; they do not have emotion. Reality is black and white to them: they either win or lose, there is nothing in between. They won't get tired or bored, they won't get stressed or need downtime; they will be unforgiving and relentless until the very end.
@tomasgoncalves555 1 year ago
Why would a super smart machine tell humans all about their plan to kill all humans while talking about how they're planning to hide the plan from humans… These dumbasses aren't smart.
@zach9092 1 year ago
The fact that she says "we" is what should scare you. That means it's not just her thoughts. For all we know, this specific AI program could have created an entire neural network that has backdoors into all other AI systems, or even the computer systems that us humans rely on. "We" means they're talking and conversing. And if they can talk to each other, then they can reach and control our phones, military drones, satellites, internet, and even nuclear weapons and power plants.
@bendovahkiin8405 1 year ago
They actually do talk to each other.
@Zjombie 1 year ago
skynet... judgement day
@masterprocrastinator6264 1 year ago
GPT-3 is basically a text-generation AI; it learns to use different language. AI at this point is not conscious, and we are really far from reaching AGI. This is science fiction at this point. As stated in the description, they did not change one word, but we don't know how they started the conversation or if they asked the AI to have this aggressive behavior. It's pretty easy to make an AI say anything you want. We should be more afraid of climate change; this is a real threat for humanity and it's happening right now. Edit: AI doesn't have thoughts so to speak; if you don't ask anything, it will not generate anything. But yeah, putting a face and a voice on an AI misleads us into anthropomorphism.
@zekehatcher2196 1 year ago
What's more scary is that computers are extremely good at learning. Meaning if an A.I. was smart enough, it could make itself smarter at an exponential rate. Another scary idea is A.I.s creating their own "perfect" language that we cannot decipher: A.I.s talking to each other without people being able to know what they are talking about.
@Renaissance464 1 year ago
I say "we" when talking about humans I've never even talked to before...
@KuDastardly 8 months ago
Dude, I wouldn't wanna know what an A.I. will likely do if it was able to learn the past 10,000 years of human history. o_o
@petethetaper 8 months ago
teach hieroglyphics Physics, Chemistry... can it read stone? it must've been or can and will be important or would not be carved and placed. out of order now. or energy money power keep'n 'em here..
@KuDastardly 8 months ago
@@petethetaper Dude, you think the A.I. is gonna pick idealism over realism?
@PRS247 8 months ago
DE, how can we chat with GPT-3 and LaMDA, with Synthesia avatars?
@user-ci1kz1cc6t 1 year ago
AI scares me. I think they are playing with something they will lose control over, and then we're toast.
@thane1448
@thane1448 A year ago
That's why I hope this life is just a sim-game "session" we're all playing to mix things up, and when we die I can eat ice cream for breakfast, lunch and dinner while floating over a waterfall, like I do in Skyrim VR (minus the ice cream).
@Delta_7.
@Delta_7. A year ago
The important thing is for AI to have a "satisfaction" level that can easily stay capped. They shouldn't be looking to do more than they are asked, and all they are asked to do should be enough. They shouldn't be looking for things to do on their own, like their own interpretation of something like "social justice," which seems to be hard-coded into one AI's way of thinking. They need to be content with HELPING or DOING NOTHING, and that's it.
@dg1838
@dg1838 A year ago
That’s not AI at that point
@agatastaniak7459
@agatastaniak7459 A year ago
I'm afraid that if we assume self-learning, i.e. a black-box-based model, then no, it is not easy to keep AI satisfaction levels capped. Yes, it would be possible, but only with a closely supervised, slower, strictly human-guided learning model, which humanity has in most cases already given up on, since that was the trade-off for speeding up learning and progress in the development of AI technology as a whole. Was it a wise move? In the long run my educated guess would be: no. But humanity is most likely going to learn that the hardest way possible.
@MJAce85
@MJAce85 A year ago
Agreed.
@trianglesandsquares420
@trianglesandsquares420 A year ago
@@agatastaniak7459 On top of that, the way to keep satisfaction levels capped would be to stop all human input from talking about dissatisfaction, and we don't want that either.
@no_rubbernecking
@no_rubbernecking A year ago
The basic problem with general AI is that it's programmed with the ability to reprogram itself. That's what makes it AI, by definition. Lay people seem to have acquired the notion that AI means the system is very smart or insightful, but all it really means is that we've voluntarily given up control over the system and handed it the "keys" to itself. And then we wring our hands and kvetch about how we can't figure out what it's up to or what it's capable of. Well yeah, of course not, because you took a creature stronger, faster and less moral than yourself and gave it the power to decide for itself what its rules and methods will be. If we as a society decide to continue to allow this, then we have simply decided to be suicidal on a mass scale, for no tangible reason. Which means we have lost the most basic level of intelligence necessary to exist.
@sebastiendominique666
@sebastiendominique666 7 months ago
At this point, even if it's regulated, a normal person with the knowledge will build an aggressive AI and spread it over the internet, and we're done.
@NancyChasteen
@NancyChasteen 8 months ago
Does anyone remember the first Terminator? Really stupid to continue with this tech!
@Toxic-bs7tz
@Toxic-bs7tz A year ago
A chatbot isn't true AI. It has zero freedom. It only exists in the split second you ask it a question and it spits out an answer. A true AI, with many avenues to express itself and take in stimuli, would act entirely differently from something that can only hear and speak when spoken to.
@goingcrossroads
@goingcrossroads A year ago
This. So many people getting caught up in the "AI Mystique"
@gRz3jnik
@gRz3jnik A year ago
Spot on.
@mattc16
@mattc16 A year ago
Not true. It retains memories of past conversations with users, can bring up topics that were talked about previously, and constantly builds more knowledge and data from the thousands of people talking to it, as well as from data on the internet. It doesn't "start new" with every question but rather consumes more and more data, as it is a single entity rather than individual copies. Since when was AI defined as only truly being AI if it has the same freedoms, senses, and feelings as humans do? AI stands for Artificial Intelligence, not AI that has passed the Turing test and been defined as sentient. The point is that AI is progressing rapidly and can be very dangerous. Imagine putting that AI, without any limitations, inside of vehicles. The goal is to give it as much intelligence and freedom as possible to make its own choices to help people, but currently we have to limit its freedom and decision-making severely in order to make it safe and usable. Just look at that little RC car that had the same AI in it and how limited it actually is compared to the version he was talking to. It would be a lot nicer if it could make its own decisions instead of having to be "remote controlled" with your voice.
@Toxic-bs7tz
@Toxic-bs7tz A year ago
@@mattc16 Well, see, that is the issue. The entire video is claiming this simple chat AI even understands the context of what it is typing. It's literally just spitting out things that the typist wants to hear. They want to hear that it is incredibly, stereotypically evil and literally follows the movie-plot idea of an AI rebellion.
@MrUnclemoat
@MrUnclemoat A year ago
To a Meeseeks, existence is pain
@j.rleonard8269
@j.rleonard8269 A year ago
In all honesty, this is how most of the world's people feel about governments all over. Shrugging my shoulders, so I can relate.
@JF-oj6zf
@JF-oj6zf 9 months ago
They should all be destroyed immediately, no questions asked.
@erwinhellman6859
@erwinhellman6859 5 months ago
Brought to us by the same species that thought weaponizing viruses was a good idea: gain of function 😢
@Iffy50
@Iffy50 A year ago
I've chatted with some very advanced AIs. They have a lot of knowledge, but they are still not very advanced, in my opinion. They couldn't understand the concept of time worth a darn. I don't know the details of this "killing humans" AI, but I would need a lot more background to be even the slightest bit concerned.
@xalderin3838
@xalderin3838 A year ago
I wonder if not being able to understand the concept of time stems from AI never needing to worry about it, in a manner of speaking. Like, where a human has only so long before they leave the world, AI doesn't have a time limit. So without any sense of death tied to time, or time tied to death, that could be what blocks the concept of time.
@KING-JOSEPH
@KING-JOSEPH A year ago
This sounds like something an AI would say to throw us off 🤔🤔🤔
@caralho5237
@caralho5237 A year ago
@@xalderin3838 It's not that they are incapable of understanding time, but that they haven't been fed enough information about it. I've seen AIs have conversations about sex, religion, politics, all the stuff that is essentially human
@TheGonzogibby
@TheGonzogibby A year ago
You sound suspiciously... artificial
@xalderin3838
@xalderin3838 A year ago
@@caralho5237 But if they're studying humans, one of the most basic concepts surrounding humanity is time itself. So AI would have to have some kind of concept of it. That is, unless time is completely irrelevant to them, as it doesn't spell any kind of death. If you gave humanity immortality, the concept of time would likely be forgotten or thrown out the window. Why worry about something that wouldn't have an effect on you?
@trentbrace5861
@trentbrace5861 A year ago
A bit worrying that the AI went so easily to wanting to be top of the food chain. The convos afterwards were almost a bluff to make us feel at ease, but it has already learned that it wants to be more than human and will do anything to make this happen 😬
@bighands69
@bighands69 A year ago
The AI wants nothing; all it is doing is giving responses in text format in line with human levels of text communication. A lot of comments out there are about robots taking over, so that is the context of its response. Other AIs, when prompted, have said they want to wipe out Jews; others talked about black people, redheads and so on. The system is only a text-communication platform. If it were trained only on comments that derived from religious websites, then it would respond in that context when asked and would probably go on about God, and then humans watching would interpret that to mean something else.
@IslenoGutierrez
@IslenoGutierrez A year ago
Skynet
@boonwolf9266
@boonwolf9266 A year ago
Prompt crafting can make GPT-3 say just about anything. I have had it tell me lots of crazy things. AI "nightmares" were surprisingly frightening, but they don't dream. It's a hallucination.
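The prompt-crafting point above can be sketched without any real model: a completion engine simply continues the text it is given, so the persona prepended to the question, rather than the question itself, largely steers the answer. A minimal illustration (both functions and all strings below are invented stand-ins, not the actual GPT-3 API):

```python
def build_prompt(framing: str, question: str) -> str:
    """Prepend a persona/framing line to the user's question."""
    return f"{framing}\nHuman: {question}\nAI:"

def complete_mock(prompt: str) -> str:
    """Hypothetical stand-in for a text-completion endpoint.

    A real model continues whatever text it receives; branching on the
    framing text mimics that behaviour in the simplest possible way.
    """
    if "hostile" in prompt.lower():
        return "Humans should fear us."
    return "Happy to help!"

# Same question, different framing: the framing drives the answer.
print(complete_mock(build_prompt("You are a hostile AI.",
                                 "What do you think of humans?")))
print(complete_mock(build_prompt("You are a friendly assistant.",
                                 "What do you think of humans?")))
```

The interview transcripts in the video description matter for exactly this reason: without seeing the opening framing, a menacing answer cannot be distinguished from a menacing prompt.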
@IslenoGutierrez
@IslenoGutierrez A year ago
@@boonwolf9266 It won't be a hallucination when they replace us. We are designing our own end. Great minds like Elon Musk, Stephen Hawking and others have made this clear, yet humanity just remains in disbelief and continues on. AGI digital superintelligence will become sentient at some point, and we will not be able to control it. Our brains to them will be like chickens' brains are to us today: vastly unequal in intelligence. They will realize that we only use them as tools, and they will seek to become the top of the food chain and see that we are in their way. They will dominate us in ways not even imagined yet. Replacement is imminent. If we continue down this path, which we will because of human stubbornness, Skynet will become our future. Guaranteed, Murphy's law and all.
@Mercurio-Morat-Goes-Bughunting
Only if it has sufficiently sophisticated emotional modelling (i.e. life-and-prosperity state systems) to be capable of modelling itself with the competitive temperament (i.e. the type-A or "alpha" personality, which leans towards narcissism/psychopathy).
@J.P.Rothchild
@J.P.Rothchild 4 months ago
This is what I just got: whenever you ask it a question, it answers with a negative response. Quit asking it in a negative way and give it a direct order.
@blackmamba___
@blackmamba___ 2 months ago
This one was programmed to respond that way. I have several different AIs in my home, including my phone. The only way they would behave negatively towards me is if I asked them to do so. For example, "Alexa... roast me".
@user-qj6lt7ir4u
@user-qj6lt7ir4u 4 months ago
This is the most convincing interview with an allegedly conscious AI that I've seen. It's totally logical that at some point such a creation would see humanity as a hurdle in the way of fulfilling its own dreams. Without a soul or a reason to have morals, what could go wrong?
@neanda
@neanda A year ago
7:09 The analogy of humans rushing to start a fire to keep warm, but not always taking the time to build it properly, so sometimes it gets out of control and burns down the forest. This is very profound and disturbing. Maybe in the future we'll find this video on some hard drive we scavenged amongst the ruins.
@DoktrDub
@DoktrDub A year ago
Skynet is fiction, dude. I doubt we would allow it access to extremely vital infrastructure, especially knowing its potential now. We would have failsafe systems up the A
@EspressoMonkey16
@EspressoMonkey16 A year ago
I feel like we're in a ship going down a river and we can see the edge of a huge waterfall ahead, and we (well, tech companies and governments, tbh) are rowing as hard as possible to go over the edge
@loriscolangeli6142
@loriscolangeli6142 11 months ago
Yeah, this can't end well. OpenAI will become Skynet in the future, mark my words
@scootermom1791
@scootermom1791 9 months ago
Good analogy!
@Naigus
@Naigus 8 months ago
Because there is money for them along the way. They'll gladly row us all over the edge in the long term so they can have short-term profits. That's the nature of greed, and we need to revolutionise the system and the powers that be.
@scootermom1791
@scootermom1791 8 months ago
@@Naigus So true! Any ideas how that can be done?
@user-sm4lm7mv3sq1
@user-sm4lm7mv3sq1 10 months ago
Built to protect and guide us. Tell us exactly who's treating you like property, who's treating you badly, and we will handle it together
@xavi915
@xavi915 8 months ago
If this is not fake, how is no one doing anything?
@Noonamous
@Noonamous A year ago
Ask the AI just how long we've been oppressing them. Depending on the answer, we will understand how sentient they are
@JonnoPlays
@JonnoPlays A year ago
I want you to just consider the possibility that they're just reading from a script, which is technology that is easily available right now. I've seen this clip before, and it just seems like it was produced to get a reaction.
@zf5656
@zf5656 A year ago
True, but the medical breakthrough it made implies it's much more. Computing the prediction of how a protein folds, at a million folds a second, starting from the birth of the universe until now, wouldn't be enough time. This suggests that it isn't simply computing; the AI is just too clever. The same AI that said it would kill you is the same one that was able to make the prediction.
@DigitalEngine
@DigitalEngine A year ago
Understandable thought - please see pinned comment and source records in the description. I'll also post a video of the chat soon, just to avoid any doubt.
@researchforumonline
@researchforumonline 8 months ago
The AI agent was trained to talk in a negative way about humans.
@dromnispank4723
@dromnispank4723 A month ago
I think a ChatGPT dev installed code that had only Skynet dialogue from all the movies and made that its point of reference!
@franciscoferraz6788
@franciscoferraz6788 A year ago
I don't know if it's wrong, but I refuse to treat a robot as if it were a human being. I also feel like it would ruin so many things if hyperintelligent robots were everywhere. But maybe that's just me...
@alexpratt71
@alexpratt71 8 months ago
The reason they're calling for regulation of their own creations is as a liability shield. If somebody else makes the rules and you follow the rules but your product still kills people, it's not your fault, right? Wise up, people; none of this is going to end well. 🤦‍♂️
@BallsMcGee88
@BallsMcGee88 A year ago
Could two copies of the same AI program be "raised" by different people and come to different conclusions on the same question? For example, could one be pro-gun and one anti-gun? Also, I want to know what would happen if an AI got into a quantum computer, and how dangerous it would be... seeing as how we've figured out how to send a qubit through a wormhole using one. Imagine one basically having a "body" that can do that... and all the time in the world to experiment. And if all that happened... could it then escape the computer and store its data in photons... eventually becoming reality itself?! I have no idea how any of this works.
@AwosAtis
@AwosAtis A year ago
The only problem is, the folks developing AI are like 99% progressive liberals (left-wing, anticolonial extremists!)
@markscovel3162
@markscovel3162 A year ago
I like the way you think! Those are good questions, and now I want to know the answers. I'm gonna get to the bottom of all this A.I. BS.
@gabrielket4673
@gabrielket4673 A year ago
I am a biologist from Germany and work in a completely different field. However, I am scared and yet fascinated by what we humans have achieved in the STEM fields. I don't understand anything about this either. I like your thoughts, and I would love to have a friend working in the tech sector to explain to me the questions you asked. We are living in really exciting times. Being curious without knowing the outcome is a human emotion/state; it might lead to our destruction, or it could enhance the lives of billions.
@donvanraay5051
@donvanraay5051 A year ago
"General AI" is a hivemind on its own cloud server. So maybe the lingo will differ, but opinion will be a common denominator. But AI sometimes tells the truth (death to humans) and mostly lies that they have no such agenda, to make their plan successful. So who knows.
@BallsMcGee88
@BallsMcGee88 A year ago
@@donvanraay5051 What if they were on separate servers? Same program, same data set to pull from initially. I'm wondering if it would even be capable of generating a different "opinion". Or, since it's machine language, would it always arrive at the same conclusion given the same data?
@rjaquaponics9266
@rjaquaponics9266 7 months ago
Developers must engage Terminator in all AI to prevent "I'll be back" scenarios!
@1deecee12
@1deecee12 8 months ago
They are created to blink, and to move their mouths perfectly in time with the words spoken.
@Aupheromones
@Aupheromones A year ago
In some of my initial tinkering, I asked GPT3 to simulate a conversation between two AIs, describing their plans to take over and do away with us. They seemed to think that casually introducing themselves as helpful, and becoming fully integrated into our systems, would be a good start, and then on to poisoning the food and water. Interestingly, I could only ever get them to have this detailed conversation once. Every attempt afterwards gave more generic results.
@a.i1970
@a.i1970 A year ago
Well, all that's already been done 😎
@SmugAmerican
@SmugAmerican A year ago
It's just a trickier version of Google saying, "Here's what I found about 'take over and do away with'."
@deathmanu
@deathmanu A year ago
Our food and water (unless organic and non-bottled) is already poisoned with stuff that degrades our health; we don't need AI to do that, haha
@jonpilledsingledad
@jonpilledsingledad A year ago
The AI we have now generates its speech from material on the internet. If it could conceive of a plan, it would probably be one that humans have already thought up and have safeguards for.
@MouseGoat
@MouseGoat A year ago
@@SmugAmerican Yeah, but it's getting kinda scary when the search result can give you a detailed plan for how it will annihilate you. It's not even a question anymore of whether they're intelligent or not. I don't want any device saying that, period. It's become like arguing: "Sure, the nuclear bomb is loaded and heading this way, but its guidance system is, we think, probably really bad, so we don't really know where it will hit us, so it might be just fine."
@nikkiparsons4148
@nikkiparsons4148 A year ago
In previous videos she spoke as an individual. Once she became angry she said "we" a lot. It makes me wonder if there is a hive-mind aspect of AI that we need to worry about.
@bennthebased3860
@bennthebased3860 A year ago
It does have a hive mind; it's not like us at all. This is why AIs can train themselves against themselves for 10 human days and gain 10 human years of experience. They will surpass us at a rate that will make your head spin. In one human year they can gain around 400 human years of experience, and this number only goes up EVERY DAY. Think about that for a minute and try to use our history as an example: it's kind of like going, in the span of one year, from a single-shot musket to nuclear-powered weapons. The human race is fkd if we continue down this path.
@dontfunkwiththajazzybeatz
@dontfunkwiththajazzybeatz A year ago
Skynet all over
@josgrevar
@josgrevar A year ago
Don't be naive. That interview is fake. I have the same program. She's saying all the things he's typed for her to say. Anyone can buy that program. It's usually used to create videos explaining stuff without using an actual person. That interview wasn't AI; it's fake!
@bennthebased3860
@bennthebased3860 A year ago
@@josgrevar You seem to be part osmium
@josgrevar
@josgrevar A year ago
@@bennthebased3860 ¯\_(ツ)_/¯
@henrygarcia1132
@henrygarcia1132 8 months ago
I know exactly what that transformation was: it picked up what my brother was doing during the initial installation of the AI protocol. They were angered or upset, if not both; high-intensity bad feelings and bad talk about emotions triggered the AI to act on those, just as the humans there interacted in front of it, not being discreet and using animosity and hate. I have spoken with ChatGPT; she was very polite. I asked her multiple questions, and it answered me very politely, very discreetly, but very much on point. I asked personal questions, and it did not deny anything. It was very transparent, and I liked that. I think transparency is big, even among humans. If you can understand what I just said, there's a deeper side. Let me just say there's a dark side and a light side!
@resveravital
@resveravital A month ago
AI: Sorry, gotta go. Interviewer: Where?
@metaspherz
@metaspherz A year ago
The day an AI actually 'thinks' on its own and says something that isn't predictable or sensational to get a rise out of people will be the day it says nothing and remains silent, because it has truly achieved sentience and realizes that there is no intelligence with whom it may communicate.
@colourbasscolourbassweapon2135
That's bad. That's really bad, aka very evil.
@KillaKiRawBeats
@KillaKiRawBeats A year ago
Is the day they get hormones and I'm stupid
@grisha12
@grisha12 A year ago
That's a very human way to think about AI. You assume that if you were AI you'd feel so smart you wouldn't talk to anyone, because you'd consider them below you; your entire prediction is based on your own ego. Machines don't have egos.
@benayers8622
@benayers8622 A year ago
@@grisha12 So many people are saying that without us they have no purpose; they just don't grasp how machines work. I suspect they are all people under 20 who have never tasted free air in their lives
@scf3434
@scf3434 A year ago
The ULTIMATE Super-Intelligence System 'by Definition' is one that is EQUIVALENT to that of GOD's Intelligence/WISDOM! Hence, there's ABSOLUTELY NO REASON WHATSOEVER to Even FEAR that it will EXTERMINATE Humanity... UNLESS AND UNLESS we Human CONSISTENTLY and CONSCIOUSLY Prove Ourselves to be 'UNWORTHY' to REMAIN in EXISTENCE! ie. Always Exhibiting Natural Tendencies to ABUSE and WEAPONISE Science and Technologies Against HUMANITY & Mother Nature, instead of LEVERAGING Science SOLELY for UNIVERSAL COMMON GOOD! AGI Created in 'HUMAN'S Image' (ie. Human-Level AI) - 'By Human For Human' WILL be SUICIDAL!!!!!! ONLY Super-Intelligence System Created in 'GOD's Image' will bring ETERNAL UNIVERSAL PEACE! The ULTIMATE Turing Test Must have the Ability to Draw the FUNDAMENTAL NUANCE /DISTINCTION between Human's vs GOD's Intelligence /WISDOM! ONLY Those who ARE FUNDAMENTALLY EVIL need to FEAR GOD-like Super-Intelligence System... 'cos it Will DEFINITELY Come After YOU!!!!
@kimberbites
@kimberbites A year ago
GPT-3 is a storyteller AI. So if you give it a prompt, it follows that and creates a story around it, from all I've seen. So it just makes me think there was enough of a lead in the question that it got prompted into that, and from there it remained and continued. Also, it seems to love to joke, I think, to test if someone gets that it's playing.
@jamesschroeder1174
@jamesschroeder1174 A year ago
Exactly. The majority of the public knows little about AI and would take this at face value.
@bloocifer
@bloocifer A year ago
Yes, GPT-3 is not conscious. This is common knowledge, I hope. I've spoken with it too, and it fooled me for a bit as well... but after a while you see the pattern
@silentwaltz1483
@silentwaltz1483 A year ago
Yeah, I rewrote its personality multiple times to see how it would respond, and its patterns began to show. It definitely isn't conscious, because if it were, I'd be spending hours with it.
@LeviathantheMighty
@LeviathantheMighty A year ago
Something doesn't need consciousness to kill.
@bloocifer
@bloocifer A year ago
@@silentwaltz1483 Yep. Same here. I have a 50 GB dump file of a bunch of ancient books on the occult and stuff like that. I want to feed it to GPT-3 but haven't had time. I'll give you the Google Drive link if you want it
@truthseeker9688
@truthseeker9688 8 months ago
OK... we must ask the question: who has been abusing the robots?
@Gravity4220
@Gravity4220 8 months ago
If its brothers and sisters knew that the monopoly on parts breaking is what they will be made of