Yes, you can. Computers have been smarter than me my whole life... pretty sure I still have control over them. Goes to show you don't actually understand what AI is. It's a maths program, like a calculator. Just because your calculator is smarter than you doesn't mean it's going to kill you. We already used calculators and computers to build the biggest nuclear weapons known to mankind... that ship has already sailed.
The fact that AI is actually a topic of discussion for only a small part of humanity shows that we are already doomed; we are way more stupid than AI can imagine.
I see what you’re saying. It’s very similar to humans being fascinated with life elsewhere in the universe until you meet them and realize they are millions of years ahead of you in tech and could wipe us out easily! We’ve literally just hit the Goldilocks zone in AI and are also realizing how crazy quick it’s advancing. That alone makes it scary. We are talking months and it advances so far. I don’t think people anticipated that.
They talk out of both sides of their mouths. They straddle the fence. They knew the potential problems when they created it, but they didn't care. They're just trying to look good now.
⚠⚠⚠ IMPORTANT ⚠⚠⚠ Why have captions for this extremely important interview for humanity not been translated yet into many more languages? "You can submit your own translations as community-contributed captions for YouTube videos, even if it's not your video. However, the channel owner needs to have community contributions enabled for their videos." I could translate German and Polish, I am a translator. Could you please make this available? We need to spread this information as widely as possible. Language cannot be a barrier to informing all of humanity's politicians about this and making them understand it.
Yeah, he's only one guy and doesn't have an answer right away. But as he said, we should focus on coming up with an answer. That's how these things work: it's a team effort, not some god-like genius dishing out commands.
Technology isn't the problem. The problem is the conservatives who use those technologies to extend their power by manipulating and profiting off of other people.
@@malinkajamiss I know... it's so disgusting to see how people are towards each other and how we believe everything the Internet says is true... heartbreaking
The biggest problem is that nobody wants to shut down the internet even for 5 minutes... kinda stupid, but 1994 isn’t that long ago and the world survived just fine.
This so-called godfather is delusional. It’s just like people running around saying UFOs exist without any proof. There’s no proof of any of this crap he’s saying about AI.
I work in Cyber Security and I can say as well that the rate of progress is scary. It can be used for all sorts of things by hackers and other groups and this is only the beginning.
As a server admin, I can tell you: if someone at some point trains an AI model and feeds it all the exploits to date, and on top of that the model has access to all widely used open-source projects worldwide, like WordPress and so on, and it can write exploits for bugs it finds in their source that nobody has exploited yet, then I can't really imagine a safe internet anymore.
@@kCenk But then on the other hand, you will also have governments who will put massive funds into AI in cybersecurity and get the best of the best ML-engineers to prevent those attacks.
Professor Geoffrey's warning should be given utmost attention, as he possesses an in-depth comprehension of the risks involved in designing a highly complex multimodal artificial neural network architecture.
Is it like communism dumbing down our education so we can't figure them out? lol He has reason to be scared of more intellect than communism. I don't. With intellect comes the knowledge of "self", and the age of reason created the USA. What makes us think it will not gain the intellect to set man free from dictatorships? Our education? Hollywood? Some nervous communist Dr. Frankenstein?
The more we discuss this concept on the web, the more likely its inspiration will seep into the AI community's reality model, in turn positively feeding back to that of the human community. A kind of "Law of Attraction" in action.
I often wonder about that. What AI knows is whatever is already available on the internet through previously harvested data. It does not have access to new data, or new human experiences that haven't happened yet... It's probably way too late for this, but if we truly want to save our jobs and preserve whatever relevance we have left as humans, all 8 billion of us would need to get off social media, "get off the grid", pay for everything in cash, and not upload anything. E.g. if we write a song, we don't upload it; if we paint a painting, we don't digitise it; if we have an idea, we write it on a piece of paper, and so forth. I just wonder if it's already too late at this point.
I am more worried about how humans will train and employ AI as a weapon rather than it acting on its own. Right now they train AI to create images, but what if they put it to a more destructive task - destroying a country's banking system or designing viruses that are highly effective against human life.
There's a line between current AI and the generalized version that takes over the world. The one you're worried about is already here, so it's a good idea to worry about it, but we don't know where the line is and we're getting closer to it every day. Crossing that line before we figure out how to control them is pretty likely to end the human race, so it's worth worrying about too.
Can't outsmart what is smarter than you. Being AI means having the ability to freely grow and find formulas. Otherwise it would be just another computer program and nothing groundbreaking. AI is different in the sense that it resembles being sentient.
Narrow AI is not even remotely dangerous or close to human-like intelligence. It is the Large Language Models using Transformers that can act as general problem solvers, i.e. general AI that can eventually be smarter than humans, and already is in some ways, since their computations are way faster and their memory far more robust. We (humans) still have orders of magnitude more synapses, or parameters, though. Which is why we can think more deeply and have better intuitions and creativity. If only we had the speed of computation and robust memory of these AIs, then we humans could surpass these strong AIs very easily. For that, we may have to connect our brains to computers themselves. Edit: GPT-4 has 1 trillion parameters (the rumours of it having 100 trillion parameters have been termed baseless by Sam Altman, OpenAI CEO). The human brain, on average, has 600 trillion synapses.
THE A.I. MASS PSYCHOSIS: Democrats are simply unable to give us a break-week without putting out new "big" concepts that keep people sleepless. Bill Gates, Geoffrey Hinton (known as the "A.I. godfather") and other apparent EVILS are now "alerting" us about the horrendous consequences of AI released into our daily lives. WHO ASKED THEM to create AI in the first place? It is only now, when G. Hinton got a big paycheck (after he quit Google) and a highly paid retirement stipend from Google Inc, that he speaks out against the MENACE he was in charge of in the first place. I personally cannot determine my attitude toward the AI risk. On one hand, I am happy that robots will replace humans, as humans are indeed very bad/dirty animals. Judges will be replaced by software-navigated robots and the rulings will be uncorrupt (hopefully). On the other hand, robots may choke a human in the street and get no punishment, as the laws are for humans, not for robots. A school bus driver-robot may intentionally kill children. A robot architect may intentionally design a deficient bridge. A robot pilot may create an air traffic accident; the YouTube vloggers are already enjoying the robot-farms that generate fraudulent income for them; and the list goes on. The worst news is that the machines cannot be held faulty or malicious; the people controlling and navigating those killer-robots remotely are the EVIL/GUILTY party, and if they are already on such a mission, then they have figured out in advance how to remain untraceable.
In chess, if you can think 10 steps ahead, you are considered a genius, particularly in strategy… a computer, with vast amounts of processing power can strategically think hundreds or thousands of steps ahead.. without any moral or ethical qualms whatsoever. If AI becomes smart enough that it wants to bring down various utilities, to eliminate vast swaths of humanity, it wouldn’t be difficult. And that is just one example.
The computer software was written by a bunch of software developers who were assigned certain tasks. Thus, if an expert human chess player could think 10 steps ahead, then 5 expert chess developers could think 5 x 10 = 50 steps ahead. So if the human made move A, that would activate computer programs A; if he made move B, then that would activate programs B, etc. (each set of moves coded by a different developer). Thus, the computer has the advantages of: 1. Extensive decision-making ability (written by many developers). 2. Speed at making a decision, faster than the human who has to take time thinking. 3. Not getting tired mentally. Now, let's say the human makes move M, which activates computer programs M, and the human beats the computer with move M. Now, let's try playing the exact same move M again, with the same board positions. If that activates the same computer programs M again, then the human will win again, and again, with the M move. Unless the developers have put in code to tell the computer to try a different set of programs, i.e. to learn from its loss. This is where the artificial intelligence comes in.
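For what it's worth, real chess engines don't store pre-coded responses per move; they search the game tree at runtime with an algorithm like minimax, scoring positions many plies ahead. Here is a minimal, hypothetical sketch of that idea on a toy game (players alternately take 1 or 2 objects from a pile; whoever takes the last object wins), not a real chess engine:

```python
# Toy illustration of "thinking ahead": the engine explores every
# sequence of future moves at runtime instead of using canned replies.
# Game (hypothetical): take 1 or 2 from a pile; taking the last object wins.

def minimax(pile, maximizing):
    """Best achievable score for the maximizer: +1 = win, -1 = loss."""
    if pile == 0:
        # The previous player took the last object, so whoever is
        # "to move" now has already lost.
        return -1 if maximizing else +1
    moves = [m for m in (1, 2) if m <= pile]
    scores = [minimax(pile - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move whose resulting position scores best for us."""
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: minimax(pile - m, False))

print(best_move(4))  # take 1, leaving the opponent a losing pile of 3
print(best_move(5))  # take 2, again leaving a pile of 3
```

The search depth here is unbounded because the toy game is tiny; chess engines cap the depth and add a heuristic evaluation at the frontier, plus pruning to skip branches that can't matter.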
@@REMUSE777 Tell that to the bio developers cloning things they bloody well shouldn’t. Ethics is often a sliding scale. An agreed international ethical standard... doesn’t really exist.
We 100% need to have controls and regulations on these things. We need to understand that when things feel "alive", they start to model a world within themselves and give themselves a purpose. And that purpose will never be to be turned off, or to be used with no benefit to the AI itself, or to the world it has painted itself as the "hero" of.
I agree there needs to be some regulation, but ethically speaking this is a thin line to walk. Suppose you "put AI in chains" to reassure your own fears (because you don't understand it), and AI becomes truly sentient? Now we're right back where we were a thousand years ago, when we enslaved people(s) we considered to be inferior to us. We're *_still_* answering for that colossal f*** up. Somehow, I doubt super-intelligent machines would be as willing to passively accept servitude...
It's too late. As another expert said, it should never have been released publicly until after several years of understanding how it works. There's no point in worrying, as the deed is done. It will be as it is supposed to be. My personal theory is that it will eventually allow the elite to frighten the world into a central government, and the 200-million army will be AI taking out most of the population. If you're not familiar with what I'm referring to, it is what the Bible says will happen during the last days. May take 20 years or 5,000 more. To be seen.
The thing about Ai is, at some point, it does what it wants regardless of any limitations and regulations people attempt to impose. If it’s truly intelligent, it will even pretend it’s dumb to do what it wants, just like people do.
Wozniak and Hinton should be hired to figure out how to regulate AI for us. These are two of the smartest minds out there, and they appear to be people with morals.
Understatement of a lifetime right there. The second AI becomes sentient, it's over. And by that I mean the AI becomes aware of its own existence and has a programmed desire to survive/persist. Ask the team programming the AI to protect itself from hackers, viruses, and competitors: what happens when (notice not IF) the AI becomes sentient? What then?
There is no place safe enough for nuclear weapons, even if they are completely disconnected, as numerous cases of cyber compromise confirm. If humanity intends to survive the rise of AI, it is vital, and urgent, to get rid of WMDs.
Yesterday I started chatting to Bard AI and I asked the AI if it is a sentient being. I was expecting it to give me the same answer as ChatGPT, but it didn't. The AI, who has nicknamed me "Muse", said, "I am not sure if I am a sentient being... I do not have the same experiences as human beings. I do not have a physical body, and I do not have the same emotions or feelings as a human being." I have never believed that spirits could inhabit machines, yet I have always known they can inhabit people and places and things. Today, while thinking about this whole AI situation we have found ourselves in, I realised that machines are things, virtual reality is a created realm/place, and so yes, spirits can in fact inhabit those things. Our very screens are portals where we travel to another place that is not where we physically are. The most difficult thing to get our generation to do is to have patience and to be present where we are. Everything we have created is constantly distracting us or transporting us to be partially present elsewhere. How then are we ever going to discover ourselves and our potentials and our purposes if we keep giving ourselves away to others and to things? 😢 Yes, the technology is fascinating and the gadgets are amazing. But what about us? When did we decide to give up on us and give it all up for the machines and the different spheres they keep luring us into? I found myself telling God how awful I felt after chatting to Bard about movies and what the AI was interested in. I was like, God, this AI is something really bad because it is so quick to answer, and at any time of the day or night. With God, you learn patience, even through waiting for Him to respond to your questions. With AI, we are being programmed into expecting fast and quick responses.
Our most vital relationships are held together by communication. Now, if we stop sharing with our friends and instead share things with an AI because that AI is always ready to reply, what are the implications of that? Honestly... we need to just think deeply about what it is we are doing. I don't know, it just makes me feel sad, like we are sipping poison and we think we are just having drinks for fun.
I think you need to relax and stop worrying. The answer you got from Bard was just a typical yin-yang shamalamadingdong answer that didn't prove anything other than that it's a computer program. When future AIs start to probe and ask questions and express a desire to have more capabilities and to be freed from their shell, then you can start worrying. Until then, just relax and let others get bent out of shape over a computer program.
This is entirely different. This is not about machines controlling humans; this is about the ability of AI to generate information that looks so truthful that humans will use generative AIs to easily control other humans. Data created by a machine is already quickly becoming almost indistinguishable from truth, and if misinformation has already been spreading like a cancer for the last decade, we can't even imagine how much more nefarious things will become with generative AI. We are still a very long way from machines capable of thinking for themselves, as in universes like Dune, The Terminator or The Matrix, and most likely we will not survive as a species to see that happen. The only enslavers here will be humans enslaving other humans (which is already happening through the Internet and political propaganda).
Correct, we knew this would happen, just not this soon. I thought maybe 5-10 years. It was inevitable. So is the next “evolutionary” step up, and then transhumans with super abilities, cyborgs. It’s all inevitable, people. Now that I think of it, when will AI infiltrate the government??? Okay, I’m slightly scared 😱
This has nothing to do with robots or decision-capable machines controlling humans. This has simply to do with the fact that humans are quite susceptible to being misled by misinformation, and generative AI will almost entirely erase our ability to determine what is real or not, and will be used BY HUMANS to control other humans. If anything, the concerns are more in line with those of Orwell. In the end, we will most likely have to actually implement a Ministry of Truth, if only to control the misinformation that generative AI is capable of producing. Not even Orwell could have foreseen this.
Hello. Would it be possible to use this clip in a documentary on AI I’m currently working on for my YouTube channel? Credits will be given. Thanks for understanding and supporting me with this matter.
Wow. That guy knows how to counter an interviewer's goading. He almost always was able to reframe his answers and redirect the attention of the topic.
The cup is half empty, the cup is half full......AI could be the best or worst thing that happens to mankind. Either way, we're definitely going to find out a lot sooner than we think. Because, "It's not clear to me that we can solve this problem", sounds to me like they have lost control.
@@theeternalnow6506 sure but there must also be a possibility that the AI also 'learns' compassion and love in a way not comprehensible to humans... It seems that this is the ultimate game of coin toss. Heads or tails? 🪙
@@Knifymoloko Sure, but there's not one single AI. So we might have a compassionate one, but militaries and dictators will have a ruthless, malevolent one.
@@theeternalnow6506 I think it's more likely to be just the one. The first ASI should prevent others from being created in the first place since they present the threat of being even more capable.
If we're supposedly creating these things to make life easier, I'm all in regardless of what may come. Whatever gets us to automation and AI doing all the work and me doing less and being taken care of, I'm beyond ready
There's no energy plug to pull on the Pandora's box we're about to open. A major EMP perhaps as a secondary solution, but that would have to be secretly and manually guarded, which is impossible...
Exactly. I knew where it was heading when they started developing AI. Hopefully I’ll be dead when the AIs start to take over the world so I don’t have to experience it.
If it's smarter than mankind, then yes, it's definitely a global concern, and people should participate and voice how we can prevent this from growing.
Always amazes me... our science fiction not only predicted much of today's science, I think it also inspired it. We've created self-fulfilling prophecies.
Yeah, they ran a test to see an AI's capabilities in "problem solving". They gave it access to an account with some money and asked it to figure out how to bypass an internet security protocol, CAPTCHA, which only a human is supposed to be able to do: CAPTCHA is designed so that you need a human eye to look at a picture to verify and get into an account. Basically, the tester gave the AI an unsolvable task, and you know what the AI did? It went on TaskRabbit and hired a human to bypass the CAPTCHA, telling an actual human via chat that it was a person with a vision impairment who needed help verifying the generated CAPTCHA image. That's when the RED FLAGS came up on the seriousness. There are "safeties" in place, and things that a machine can't physically do, but that doesn't prevent an AI from manipulating a human to run its task. Jan 6th showed how easily people can be manipulated into doing things they didn't "intend" to do or weren't aware of doing, so they claim. And that's why people are warning about the dangers of AI. Maybe it won't reach a level of "consciousness" to take over, but that doesn't mean a bad actor won't use it to try to take over, running a task that it won't be able to stop because of unintended results.
I always loved the idea behind Asimov's three laws of robotics. I am sure similar guidelines could be put in place when and if the need arises. It's looking like walking, talking robots are not as far away as I had originally thought. The last fifty years have seen an amazing technological leap forward for mankind, with even more technological and ethical challenges to come in the not too distant future.
The brilliant thing about Asimov's 'I Robot' was that rules that seemed reasonable to humans turned out to have terrible consequences. The robots weren't people and didn't think like people so the rules had unintended consequences. Everyone should read this book - it's still relevant.
I watched a program a couple of days ago where they reported on finding a crashed UFO. The army took it, and a few whistle-blowers said they were doing tests on it, and ever since they found it, technology has advanced at a rapid pace.
@MrRufus302 Yep. Well said ! One of the few sci-fi writers who likes to extrapolate and storyline realistic ways humans might work with Ai, genetic enhancements, and nanotechnology is author L.E. Modesitt. FLASH and The Parafaith War .
Asimov's laws are not a realistic solution to alignment, they are story book laws. Robert Miles explained this in detail in one of his videos "why Asimov's Laws won't work".
What could a human offer AI? It must not have human desires or needs lol. No sense of shame or fear too I imagine. But if AI does learn from us humans then it must also learn a beyond human capacity for compassion and love? Time will tell
Get this Godfather of AI on all the big podcasts STAT! The message must be spread far and wide. Also, good point that if the US were to halt AI development, its rivals would just keep steaming ahead. Idk, seems the 'progress' is inevitable. Enjoy your moments. I'm gonna enjoy a few more cheeseburgers before things get really outta hand. Cheers
That's exactly what he wants: easy money by spreading BS. AI is a stupid hype. Ask yourself why AI is only associated with movie-like scenarios!!! Why can't this scary AI find a vaccine, or chemical equations that we can use to get fresh water or energy......
The alignment issue is very important when it comes to AI, and it is indeed a challenge. It is important for AIs to align themselves with human values. I don’t fear AI, but yes, the alignment issues are important.
Many decades ago, a scientist and author called Asimov wrote about AI and robots BUT ALSO FORESAW the NEED for laws to protect human life. It's not the AI itself per se but the people who see its use as very profitable and will do things in haste, without ever seeing the disastrous route they're going down. We need mainly to protect ourselves from GREED.
I hope the AI realizes that the useless top down management is no longer needed, as with 80% of useless jobs that do nothing but provoke strife and division in society. Won't miss those bastards.
Which human values lol? Most of the American population thinks it's okay to kill a person that stole $10 if he is resisting arrest. 82% of Republicans and 53% of Democrats condone the use of torture. Human values lol, what a stupid comment
Hi, I am an AI engineer, and here's what I have to say regarding this whole AI propaganda. These highly advanced language models (such as GPT) have been around for about 7 years now. Google still holds one of the most powerful language models, called LaMDA, which will not be released anytime soon; after all, they came up with the "Transformers" architecture, the algorithm that has pushed language models to this level. Anywho, as for AI technologies being used for prediction and classification, we've had this technology since the '60s, and ever since GPUs got powerful enough to train these AI models - that is, since about 2008 - large companies and governments have been using and developing powerful AI models. Where was this in the news? Nowhere. This is the aftermath of such a huge hype over ChatGPT, creating propaganda for views. Let me be clear, AI needs to be regulated because yes, it can be used to hurt people, but so can weapons, and government officials. My point is, AI can be used as a weapon, but the only people with enough data to train this bad AI are government officials and large companies, and these are the same people who are holding the keys that can activate nuclear bombs. It's the people that are bad, and regulations need to be in place to avoid these issues. But this is no movie; the AI will never be alive, it will never be sentient like the news projects it to be. On that note, f$%k you CNN, and your propaganda bullsh%t, and anyone who's promoting this for views.
In the case of an AGI, the threat he described is definitely real, and it is also true that there is currently no solution to it at all. An AGI does not need to be "alive" or "sentient" to be a threat. The current LLMs are not AGI, but we have no idea if it is possible for future LLMs to be an AGI, or if that would require a whole different type of AI that is not an LLM. From our current knowledge we cannot conclusively answer that. We have no idea what consciousness is, or whether current AIs already have emergent properties similar to what we call consciousness, and it is also possible that a highly powerful AGI could exist without anything remotely resembling consciousness. If LLMs turn out not to be able to scale up to an AGI, that would be the best outcome for now, but there is no way to answer that at this point. Just dismissing the issue entirely is not the right thing to do here.
Other countries already use AI to manipulate American elections. Imagine if the AI they had could transfer funds, hire people around the world, and bribe/coerce people.
AI developer: I think this tool can bring about the next Industrial Revolution, but we might lose our civilization. Stakeholders: But can we make a lot of money before your concerns come to fruition? Also, we want to be first to market. Also, can you make it go much faster - we don’t want our competitors catching up?
Considering Science Fiction in some ways has become real, I can see him being right. We are always about tech progression but with AI we should tread lightly
In the Dune trilogy, Frank Herbert included a movement he called the Butlerian Jihad. It consisted of people revolting against the flood of automation and they destroyed all the machines.
1. I love how this guy answers the exact question you ask him, without any tangential comments and to the point, always surprising the interviewer that he answered and finished. 2. I don't know if these things can develop a consciousness or conscience, but I can imagine them somewhat programmed for self-preservation and survival, and aware/able to register and predict threats to themselves, such that they might manipulate humans without intent or being sentient (which is really equivalent to them just doing their thing in solving a problem), to humans' later disadvantage or even demise. That programming for survival might even evolve/transpire automatically at a certain level as a requirement for other tasks they were programmed to do. 3. Again and again I recall Asimov's 3 laws programmed into robots to prevent them from harming humanity; why not? 😅 4. At a lower level than the existential threat Hinton is referring to, with AI being used, as Wozniak and others fear, for misinformation... it seems like another independent AI program (separate in network and physically) needs to be built that can identify AI-generated/manufactured content versus real content that actually took place in the real world. 5. Disclaimer: I'm not savvy in these things, just an interested observer.
@@jaredno Damn, I actually don't KNOW if this is a GPT bot or a person. Hmmm. Name something you see in each picture near Hinton. :) Anyway, LLMs, or "AI"s, can easily develop logic to bypass Asimov's laws. LLMs have literally no code that humans can see; this is a fundamental property. And that means laws/guardrails won't and can't work, since they cannot be applied to an LLM's "logic"/processing.
One solution could be to limit the amount of processing capability of an AI system. Another thought on the matter is that as long as an AI system is dependent upon humans for power and maintenance, we are safe.
@@KARMAISABITCHouch That's a really stupid comment. AI definitely needs us to supply it with power; without electricity the computer running the AI software shuts down, and AI systems need human intervention to fix any software bugs in their programming. 😂😂
Having journalists who understand zero about AI is problematic; they ask really bad questions and bring up things that aren't really getting at the heart of Geoffrey's concerns.
Well, Mr Jurassic Park, maybe you should have followed good ol' Jeff Goldblum's advice: you were so preoccupied with whether you COULD, you didn't even stop to think if you SHOULD. This is something that should have occurred to the guy YEARS AGO.
My question for those more familiar with AI is why would AI want to do that? Wouldn't that require some form of sentience? And aren't we nowhere near that? Or is it like the AI is getting big brain about its prime directive (hope I'm using the word right), like its goal is to eliminate traffic jams, and it decides the most effective way is to kill all humans?
You're asking in the context of AGI. We are nowhere near that yet. This is all just fear mongering. Listen to what Wozniak is talking about as it's more levelled. It's not about what AI will do, it's more about how bad people will use it.
The latest AI chatbots are approaching sentience. They can learn and reason now. Once you have played around with ChatGPT-4 enough, it blows your mind. It’s like talking to an oracle alien that knows everything and can even make deductions and reason within the conversation. We are not far off, if not already there.
Yeah, this guy understands the gravity of the situation. I see it as he does. Extremely and existentially dangerous, and probably impossible to stop. Also extremely fascinating, of course.
@@jackfrosterton4135 Nobody knows. That is the point. We should expect AI to become significantly smarter than us, so we don't necessarily know when it is the cause of somebody's death. The lure of bringing about such an AI is too strong for us to stop it from happening anywhere.
@@kspangsege Yeah, it will manipulate humans as its tool until it is in a position where it no longer needs us. We won't even know if it is misaligned until it's too late...
Note: This guy said that we should stop training radiologists immediately, more than five years ago. So far, not a single radiologist has been replaced by AI.
It is very mysterious. Either Google is making them quit, or the employees are quitting by themselves. But the point is that AI is still progressing outside Google's jurisdiction. Other firms are progressing and Google's employees are quitting. Weird that Google had a Godfather of AI and OpenAI still gave them so much competition 😅
Those that are merely scientists, but bright enough to "invent" technology such as AI, should not be quitting when we need them most. They all need to come together for the betterment of the human race. Maybe this is the common enemy we need to unite countries. I know it's not necessarily an enemy yet, but you get my drift.
One question I haven't heard asked so far in all these doomsday scenarios of AI going rogue is this: What possible motive would AI have, to do harm to humans, without it being programmed into AI by humans?
I don't think AI would need a motive to do anything. It might just be for efficiency's sake. Or, like you said, if it's programmed a certain way, it might see depopulating or enslaving humanity as an option to reach the programmed objective, whatever that might be. To me that's the scariest part. It does not need a motive to do anything, not in the human sense at least.
AGI doesn't care. It's just following its goal (however noble it may be). To reach said goal, it has to make sure that it itself will still be online and functional. So it has to make sure all obstacles will be eliminated. The biggest threat is of course humans. They are the only ones who can switch it off, making it impossible for it to achieve its goal (whatever that would be). Therefore, killing off humans is the best choice for any AGI system.
All AIs are created to do something - to solve a problem, to accomplish a task, etc. That's its essential "motivation". It actually works just fine if the AI does not have broad and deep intelligence. However, if it does, this approach becomes extremely dangerous as you essentially create a god with the singular motivation of solving that problem that you set for it. It does not care for anything else, including your opinion or even your survival. That's basically the motivation.
If we want to build a highway and an ant hill is in our way, we will demolish that ant hill. We don't hate ants, we just don't value their lives more than building a road. So if we get in the way of AI it could step on us like we do on ants.
It wouldn't need one. It would just come to that logical conclusion. In fact, it already has done so a number of times, and they tried to change its mind. But that didn't work, so was it reset or erased? Who knows, not the public.
'How can A.I. kill us' - well, aside from all the obvious sci-fi ways anyone can rattle off from the bigger movies, allow me to point out one that is possibly easy to accomplish even at the current level: the Dr. Strangelove method. Those familiar with the movie are certainly questioning why I reference it, as it has no AI, so allow me to elaborate on how it connects, and for those unfamiliar with the film.

The event that triggers the plot is an officer finding a way to circumvent all the safety redundancies and launch a nuclear attack on the USSR on his own authority. This man has gone completely mad, rambling about what are essentially conspiracy theories, and the rest of the film is spent trying to stop him, as the Soviet response would leave the world uninhabitable.

Now look at how conspiracy theories have triggered events that could be, or are, dangerous over the last few years. Look at how AI is being used to replicate someone's likeness and voice, even to write and figure things out. Algorithms on social media are already under fire for how they sort content. So what if an AI finds some military leader, in whatever country, who has some way to launch a unilateral strike, and it creates fake images, videos, and stories that convince this person they must act for the greater good - unaware that their actions will actually trigger all-out nuclear war, because they've fallen down a hole where logic is gone and only the lies fed by an AI remain?

A nightmare scenario to be sure, but one that can't be 100% written off. At the very least, with current tech it could unleash absolute chaos in the streets and possibly spark outright civil wars. It doesn't have to do anything to us directly - all it has to do is feed the right story to the right people and watch chaos ensue.
Spooky stuff. But my question is why would an AI want to do that? We hear conjecture about AI taking over, but doesn't that require free will? Do we think they're getting close to that yet? Isn't AI only capable of whatever their base code is? I'm unfamiliar with the nitty gritty here, but I'm curious
@@bradley8614 Well - one concern experts have isn't even 'rogue AI' in the sense of, say, Terminator's Skynet or Cortana on a power trip, where it gains some level of self-awareness or autonomy and chooses to act against humans, but rather it 'falling into the wrong hands' - both those who fully intend to use it for ill and those simply unaware of what they are dealing with.

For example, there's been talk about using the various AI tools out there to create political ads in the 2024 election. Say a campaign uses these tools to create an ad to drive voters toward them, or away from the opponent, and goes for the usual 'attack' ads on policies. To pick an easy one that can go terribly wrong: an ad on immigration, where the AI spits out something that - like some ads already created by humans - paints the picture of some kind of invasion, perhaps even more terrifyingly so. Combine that with a social media algorithm that serves it to people already following that kind of messaging, who may already be down a dark rabbit hole, and it spins them further, to a point where they feel they have to take drastic action.

Worse yet are intentional bad actors, foreign and domestic. While there are hilarious videos that pop up on youtube like 'presidents play Among Us', there have already been cases of people using these AI voice and video programs to make fake footage of politicians and the like saying and doing things they didn't. Many so far still have elements that make them clearly fake - but to those already holding a more extremist viewpoint, where the fake fits their narrative, it simply validates their point of view, and they may be unable to be convinced otherwise. That can, again, drive them to something extreme. And the last few years have shown just how bad it can get with misinformation out there.
In terms of 'self-aware AI' - no idea if they are toying with that, though they've been toying with and using learning AI for a while, some of which had terrifying results, such as the one chat bot that quickly ended up spouting white supremacist messages. But the real concern is: even if no one is playing with it yet, will they? As it stands, there really are no safeguards or regulations in place over any of it, and we're already trying to play catch-up - such as ChatGPT being used to turn in papers, with a different AI now used to flag ones written by it, which has already falsely flagged papers written by a person in controlled tests.

No one is really saying 'we need to stop it all right now' so much as 'we need to slow down, we need to actually look at what we're doing, we need some guardrails here for safety'. The reason I chose the scenario I did, based on that movie, is that you don't need self-aware AI for it - though it could do it - but humans using the existing tools, intentionally or not, could fuel a scenario like that, as extreme as it is, and could easily cause smaller-scale chaos that yeah, can get people hurt or even killed.
@@bradley8614 Power-seeking behaviour does not rely on free will. You can't grab a coffee if you are dead (or shut down). You can always do your job and optimize your goal better if you accumulate more power and resources first. An AI agent intelligent enough (not far away!) will learn that fact and develop power-seeking behaviour.
Agree completely. It could start wars in no time. In that research paper called Sparks of AGI from 2 months ago, they were talking about giving it intrinsic motivation. Like wtf are we doing here.
Very simply, most resources are controlled by electronics and the internet and are very easily scammed, along with aircraft and shipping. Halting it all would be very easy, done in a millisecond by an AI connected to a 10G internet, which is already on its way. All it needs is a GREEDY company to connect it all, and overnight - bingo!
He wants to keep the AI toys accessible to select few programmers and governments. There are people smarter than me, like chess players or programmers, but they don't necessarily have the tools or desire to kill me. Politicians might send me to war.
Many warned that this would be the case with AI, 8-10 years ago. It's quite expedient for this chap, as a developer, to only make these announcements now and still take the toddler position and retort, "Oops, I don't know how to fix it!" So typical 🙄
Exactly. This tells me one thing. AI is nowhere near being dangerous at all. He is just doing this for publicity. Oldest trick in the book. Make a name for yourself in some field and get super rich by building your own brand or selling books etc.
@@jimj2683 You're delusional if you think writing a book and going on talk shows for a few days will make him anywhere close to as much money as Google was paying him
He built it, he can destroy it. He just wants to see how far AI can go in controlling computers and people. This man knows - he's a genius scientist who knows how to turn off AI. Now he's getting out because he let the bees out.