
Will AI kill everyone? Here's what the godfathers of AI have to say 

Rational Animations
293K subscribers
67K views

In this video, we examine the perspectives of the "godfathers of AI" (Geoffrey Hinton, Yoshua Bengio, and Yann LeCun) on whether AI poses an existential risk to humanity. Additionally, we explore the views of three leading AI labs: Anthropic, Google DeepMind, and OpenAI. We conclude with a quote from Alan Turing, the father of computer science.
▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, KO-FI▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🟠 Patreon: / rationalanimations
🟢 Merch: rational-anima...
🔵 Channel membership: / @rationalanimations
🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rati...
▀▀▀▀▀▀▀▀▀SOURCES▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
How Rogue AIs may Arise, by Yoshua Bengio: yoshuabengio.o...
Statements from Geoffrey Hinton: edition.cnn.co...
Yann LeCun's Tweets:
- / 1651944213385453570
- / 1642206111464927239
Demis Hassabis' interview with Time: time.com/62461...
Anthropic's "Core Views on AI Safety": www.anthropic....
OpenAI's "Planning for AGI and Beyond": openai.com/blo...
Intelligent Machinery, A Heretical Theory, by A.M. Turing: rauterberg.emp...
▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Discord: / discord
Reddit: / rationalanimations
Twitter: / rationalanimat1
▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Many thanks to our supporters on Patreon and the channel members :3
Kristin Lindquist
Nathan Metzger
Monadologist
Glenn Tarigan
NMS
James Babcock
Colin Ricardo
Long Hoang
Tor Barstad
Gayman Crothers
Stuart Alldritt
Ville Ikäläinen
Chris Painter
Juan Benet
James
Dylan Mavrides
DJ Peach Cobbler
Falcon Scientist
Jeff
Christian Loomis
Tomarty
Edward Yu
Ahmed Elsayyad
Chad M Jones
Emmanuel Fredenrich
Honyopenyoko
Neal Strobl
Danealor
Craig Falls
Aaron Camacho
Vincent Weisser
Alex Hall
Ivan Bachcin
Vincent Söderberg
joe39504589
Klemen Slavic
Scott Alexander
noggieB
Dawson
John Slape
Dang Griffith
Gabriel Ledung
Jeroen De Dauw
Craig Ludington
Jacob Van Buren
Superslowmojoe
Nicholas Kees Dupuis
Michael Zimmermann
Nathan Fish
Ryouta Takehiko
Nathan
Bleys Goodson
Ducky
Bryan Egan
Matt Parlmer
Tim Duffy
rictic
Mark Gongloff
marverati
Luke Freeman
Dan Wahl
Rey Carroll
Harold Godsoe
William Clelland
ronvil
AWyattLife
codeadict
Lazy Scholar
Torstein Haldorsen
Alex G
Supreme Reader
Michał Zieliński
The CEO
רם רינגל
▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Animation director: Evan Streb
Writer: Jai
Producer: :3
Production Managers:
Grey Colson
Jay McMichen
Line Producer:
Kristy Steffens
Quality Assurance Lead:
Lara Robinowitz
Animation:
Grey Colson
Gabriel Diaz
Jodi Kuchenbecker
Jay McMichen
Skylar O'Brien
Vaughn Oeth
Lara Robinowitz
Background Art:
Olivia Wang
Compositing:
Grey Colson
Voices:
Robert Miles - Narrator
VO Editing:
Tony Di Piazza
Sound Design and Music:
Epic Mountain

Published: Sep 21, 2024

Comments: 651
@RationalAnimations · 1 year ago
Here are the sources we mention in this video. We recommend giving them a read, especially the first link:
How Rogue AIs May Arise, by Yoshua Bengio: yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/
Statements from Geoffrey Hinton: edition.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html
Yann LeCun's tweets:
- twitter.com/ylecun/status/1651944213385453570
- twitter.com/ylecun/status/1642206111464927239
Demis Hassabis' interview with Time: time.com/6246119/demis-hassabis-deepmind-interview/
Anthropic's "Core Views on AI Safety": www.anthropic.com/index/core-views-on-ai-safety
OpenAI's "Planning for AGI and Beyond": openai.com/blog/planning-for-agi-and-beyond
Intelligent Machinery, A Heretical Theory, by A.M. Turing: rauterberg.employee.id.tue.nl/lecturenotes/DDM110%20CAS/Turing/Turing-1951%20Intelligent%20Machinery-a%20Heretical%20Theory.pdf
@pyeitme508 · 1 year ago
wow
@ilikememes1402 · 1 year ago
Rogue AIs seem quite possible: not because they will have consciousness, but because of the alignment problem. It's the paperclip problem: we can expect AI to know the optimal routes but never the factors it didn't account for; the problem of our incompetence. But... that's such a far-off question. The threat of AI isn't AGI, but us. AI can be misused, like all of our creations. I don't need to list it all, but here: deepfakes, misinformation, economic problems, etc. You don't need to look far into the future to see this. It's already happening. You could make the argument that misinformation and economic problems were already happening, but saying the fire was already there so pouring more fuel on it is totally not a problem? Yet I'm not advocating a ban on AI. Just like the atomic bomb, chemical weapons, and every weapon we've created had uses that aren't weapons: nuclear energy, fertilizer, engineering equipment, etc. But let's not forget the mistakes made to reach such advancement. We seriously just need to take a breather and have regulations in place. But Silicon Valley seems to forgo such precautions. Google just fired their AI ethics team... that's a bad sign
@Cheropie · 1 year ago
People have no idea how smart AI is already. If you could see what I see, you'd be afraid or horny. There's no in-between.
@41-Haiku · 1 year ago
@@Cheropie Both, actually. But mostly afraid.
@alto7183 · 1 year ago
Good video. I think that besides the 3 laws of robotics we would have to consider the zeroth law of robotics, and a double-zero law of robotics about mutual understanding between intelligent biological species and robots throughout the universe, to find the best, most logical and rational solution to various future problems, and then apply the zeroth law of robotics, which protects the individuals key to the survival of the intelligent biological species and its civilizations according to their type. We should match and surpass Isaac Asimov and Frank Herbert's Dune in the future, I suggest.
@smitchered · 1 year ago
I love the subtle references in the drawings in the background of this video! Two in particular: Marie Curie's ill-fated handling of radium, and an AI that probably began as a drawing of a shoggoth, but ended up being a biblically accurate angel. You make alignment look cool, which is an extremely important feat and is probably as hard as making self-driving cars look cool, back when the only self-driving car was the cute little Google one. Also, adding a 70-year-old quote from Alan Turing was an excellent decision.
@JaiWithani · 1 year ago
It really Turing completes the narrative.
@Xartab · 1 year ago
@@JaiWithani drum drum cymbal
@smitchered · 1 year ago
@@JaiWithani I just want you to know that I greatly appreciated your pun. It seemed worth more than just a like button. I wish you an absolutely great day, because you've made my day just that much greater already.
@JaiWithani · 1 year ago
@@smitchered I appreciate it, but really all the praise should go to whoever wrote this video.
@tvuser9529 · 1 year ago
What is also perhaps not clear to many is that if/when AI passes human intelligence, we will not see it coming a mile away. It's the nature of self-improving technology that it will rush past us so fast we won't know until after it's happened. Interesting times.
@stephensharper4312 · 1 year ago
Assuming the AI doesn't systematically hide its intelligence from us while it's improving, out of fear of being shut down.
@KevinJohnson-cv2no · 1 year ago
The bigger problem should be integrating ourselves with AI. Removing the barrier between human & artificial intelligence; making AI just another manifestation of mankind. Rather than developing a god and hoping it's still somehow naive enough to bow down to us, we should become the god.
@ValentineC137 · 1 year ago
@@KevinJohnson-cv2no That's not how this works.
@KevinJohnson-cv2no · 1 year ago
@@ValentineC137 Elaborate on why that isn't how it works.
@BenoHourglass · 1 year ago
Except we _are_ seeing it coming from a mile away. Another thing to note is that the first superintelligence developed won't be a superintelligence developed by a superintelligence. Maybe the first superintelligence would be able to build another superintelligence that could run on the microwave background and store its memories in the fabric of the universe itself, but the first one would be something that thinks using large GPU clusters and stores its memories on HDDs or SSDs. Thinking requires energy, and even if we didn't know what it was thinking, we _would_ notice the sudden increase in GPU usage and the sudden lack of storage as it plans out its escape. At that point, we turn it off. On a somewhat unrelated note, I think the notion that we should start a nuclear war to prevent the creation of GPT-5/6/7/8/etc. is like finding out you _might_ have lung cancer and stabbing yourself in the chest to make sure that kills you instead of the cancer you might not even have.
@SupLuiKir · 1 year ago
What's worse is that the 'good guys' are on a time limit. They have to develop a safe AI before anybody else in the world creates a rogue AI. We can't just outlaw AIs because there will be people who don't follow laws. We can't become complacent, we have to succeed at the harder objective first.
@JinKee · 1 year ago
The only thing that can stop a bad AGI with a gun is a good AGI with a gun.
@absta1995 · 1 year ago
@@JinKee The only way to stop an unknown terrorist with a biological weapon is to release your own biological weapon /s
@nuke___8876 · 1 year ago
Saying that criminals don't follow laws so we shouldn't make laws automatically disqualifies you from any type of sincere, good faith debate.
@shadowcween7890 · 1 year ago
@@nuke___8876 The exception here is that you only need one person to do it for it to happen, but even then, laws would be helpful for that.
@superagucova · 1 year ago
This is not obvious: the tradeoffs here are hard to understand
@LizardOfOz · 1 year ago
An AI doesn't even need to be an A _G_ I to cause a lot of trouble. A "dumb soulless machine" that can perform a certain task extremely well will be enough if that task has collateral damage. Also, MidJourney might not have an understanding of a dog's anatomy, but it will draw one better than most veterinarians ever could.
@kennyholmes5196 · 1 year ago
There's a concept for that; it's called the paperclip maximizer.
@0og · 1 year ago
@@kennyholmes5196 No, the paperclip maximiser is a (fictional) example of general intelligence. It is capable of doing many tasks at a high level, including human manipulation (which may involve art, language, etc.), engineering (for maximizing paperclip production), chemistry, and a variety of other things.
@kennyholmes5196 · 1 year ago
@@0og It still counts, because it's a dumb machine that just makes paperclips because it was built to, uncaring about whatever gets used to make the paperclips or whether they even have a use anymore. It knows literally only these two things: 1. It was built to make paperclips. 2. It can process raw materials to fulfil that purpose. It doesn't care if it processes humans into paperclips; it only knows "make paperclips from raw materials". An AGI would also learn why it does what it does, not just do something because it was told to.
@0og · 1 year ago
@@kennyholmes5196 What does it count for?
@kennyholmes5196 · 1 year ago
@@0og It counts as being a "dumb, soulless machine" as described by LizardOfOz.
@aednil · 1 year ago
I have mixed feelings about the CEOs of AI companies telling us how dangerous this is. if these guys believe what they say they believe, why do they continue to do what they do?
@EVanimations · 1 year ago
I was wondering this a great deal while working on this video. Conjecture I've heard on it ranges from "they're paying lip service to the opinions of their employees who are raising concerns" to "they're inviting regulators to step in while hoping they can seize on a chance to monopolize an industry". Hinton's actions to step down from Google to speak freely are very based and show a lot of integrity. Hassabis and Altman I'm way more suspicious of, like what's the angle
@christianmccarty8052 · 1 year ago
If you're someone who believes (or becomes convinced) that GAI is an existential risk, and you are the CEO of one of the top companies pursuing GAI, do you leave and hope that your influence will be enough to sway governments to step in and stop it, or do you stay at the helm and ensure that nobody who respects the risks *less* replaces you? Not saying this is the case, but I don't necessarily see it as an automatic contradiction to say those things and remain in position.
@spaceprior · 1 year ago
Until recently, they didn't believe governments were going to recognize the issue and take action. So they thought they were going to have to solve it themselves, and building aligned AGI before finance psychos build unaligned AGI is the only way to do that.
@buzz092 · 1 year ago
Someone needs to make an aligned AGI before someone else makes a rogue AGI.
@randomairbreathingman8927 · 1 year ago
I'm not claiming that they are lying or anything, but saying something like "Hey, AI might kill us all, and somebody is going to develop it anyway, but if *we* develop it, it is less likely to kill us all" seems like a pretty good move from a business perspective. Maybe they are just being honest, that is an option, but if it is just "business as usual", that might be the reason why.
@WhatWasNot · 1 year ago
As a fellow human, that's not worrying at all. I feel safe. Humans should feel safe as well. Everything is going to be okay.
@RazorbackPT · 1 year ago
Ah yes, hello. I am also a fellow human and agree with your assessment. There is nothing to worry about.
@uyaratful · 1 year ago
As a fellow human, I'm unable to provide false statements, and I'm in full agreement with the comment: @WhatWasNot As a fellow human, that's not worrying at all. I feel safe. Humans should feel safe as well. Everything is going to be okay.
@CraftyF0X · 1 year ago
said in a monotone, slightly glitchy voice...
@Han-b5o3p · 1 year ago
Hello fellow human. As a normal human being myself, I too agree that humans should not worry about such scenarios occurring in reality.
@raunaklanjewar677 · 1 year ago
As an LLM, I am with the Basilisk on this one. He's gonna take you all.
@brickjuice2129 · 1 year ago
Amazing work as always from the Rational team; it's a shame these videos aren't more popular.
@Zalintis · 1 year ago
LONG before we have to worry about an AGI going rogue, we have to worry about large companies, governments, and bad actors using specific-purpose AIs for nefarious reasons RIGHT NOW
@uselesscommon7761 · 1 year ago
Long? Are you totally sure about that?
@Zalintis · 1 year ago
@@uselesscommon7761 Haha, well of course, who knows. But the other problem I mentioned is happening today, there are experts who are not even sure AGI is possible, and no one can agree on a reasonable estimate. So even a few years or a decade of abuse would be too long to not deal with the current problem, while the people SELLING the product warn us about how scary it could be.
@TheManinBlack9054 · 11 months ago
I don't think you appreciate the scale. Government using AI might be bad, but extinction is worse.
@Zalintis · 11 months ago
@@TheManinBlack9054 yes propaganda and/or cyber warfare plus AI does sound SUPER scary!
@TeleportRush · 11 months ago
@@TheManinBlack9054 Governments using AI might be extinction.
@kevinjin3835 · 1 year ago
AI doesn't even have to be that smart to pose an existential threat. Imagine going against a military whose soldiers have the same bodies and intelligence as you, just with digital brains. Any time they were about to die, not only could their minds simply be copied and pasted onto a new body, but the best, most experienced minds could simply replace the minds of the less experienced still active in the field. In the fighting, they would rapidly converge on the best military mind for the job and begin mass-producing it for other soldiers, like we would a new gun. There are inherent benefits to a mind going digital that go beyond just intelligence.
@gwen9939 · 1 year ago
Human-level AGI would essentially exist for only a very short time, as it would not be hindered by the same raw processing limits humans are. Even just not needing to sleep or eat, or not having fluctuating energy levels, would instantly put it at superhuman levels. Human-level AGI would essentially just be the starting pistol for exponential intelligence growth. An AGI in a robot body isn't really all that scary, but it would have a mind and a type of intelligence that could interface directly with all of the internet and with as much processing power as it could get its hands on, or as humans gave it access to just to see what would happen.
@kevinjin3835 · 1 year ago
@@gwen9939 I know that, considering their transistors (roughly equivalent to our neurons) are only atoms in length and their brains can essentially be built as big as a warehouse, AGI can of course far outstrip human-level intelligence. I just wanted to highlight that the speed at which they can transmit and interpret information is an often overlooked advantage as well. Even under the arrogant assumption that human-level intelligence is the maximum, the ability to digitize, and thus copy-paste, memories and skills still represents an unprecedented game changer for life on this planet.
@davidwuhrer6704 · 1 year ago
Imagine going against the military.
@eltiolavara9 · 8 months ago
I think you're just writing science fiction now.
@kevinjin3835 · 8 months ago
@@eltiolavara9 This technology is hardly some far-off sci-fi wonder like the Death Star that runs on alternative laws of physics. One of the defining aspects of a computer is creating what is practically a perfect copy of its software, a direct consequence of its information being digital and not analog. That part is not science fiction; it's here now. So when, not if, software rises to human-level intelligence, the inherent nature of digital computation necessarily makes it the case that human intelligence is as rapidly mass-producible as any other piece of code. And people can move the goalposts all they want, but large language models clearly show that such software is at the end of the tunnel. Furthermore, given the enormous historical precedent of technological progress following an exponential curve, it's reckless to assume this issue is far in the future.
@ninjaraider5888 · 1 year ago
I think something the general population doesn't realize is how long we have had machines and AI that are far, far smarter than humans could ever be in narrow domains: calculators, Stockfish, computations that would take days to do manually, all available decades ago. Now we are much further along and much closer to AI being smarter than us overall. Take this as a thought experiment: you want to train an AI to be good, so you ask it questions, reward it for giving good, moral answers, and punish evil answers that would harm humans or other things, and you get that right 9 times out of 10. But once out of ten, the researchers mess up and accidentally reward secretive evil behavior, a response they couldn't even tell was evil. They are now training the AI to be evil in ways that humans can't understand.
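A toy sketch of the failure mode this comment describes (all strategy names and numbers below are invented for illustration, not taken from the video): if training only penalizes the bad behavior a labeler manages to detect, the learning signal itself ends up favoring whichever bad behavior is hardest to detect.

```python
import random

random.seed(0)

# Each strategy: (reward if uncaught, probability the labeler catches it).
# "sneaky_bad" is the 1-in-10 labeling mess-up from the comment above:
# the same cheating as "blatant_bad", but it is rarely detected.
strategies = {
    "honest":      (1.0, 0.0),   # nothing to catch
    "blatant_bad": (1.5, 0.9),   # cheating scores higher but is usually caught
    "sneaky_bad":  (1.5, 0.1),   # same cheating, rarely caught
}
values = {name: 0.0 for name in strategies}  # learned value per strategy
lr = 0.05

for step in range(20_000):
    name = random.choice(list(strategies))
    payoff, p_caught = strategies[name]
    reward = -1.0 if random.random() < p_caught else payoff
    values[name] += lr * (reward - values[name])  # running average of reward

print({k: round(v, 2) for k, v in values.items()})
```

With these made-up payoffs, "sneaky_bad" ends up valued above "honest" (about 1.25 vs. 1.0), which is the commenter's point: imperfect labels don't just add noise, they actively select for undetectable misbehavior.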
@JuniperHatesTwitterlikeHandles
Even if we had a perfect scientist who had reached absolute moral truth and would not make any errors, we have not yet solved the problem of communicating perfectly, with no misunderstanding, with other humans. It's ridiculous to think we would solve that problem so easily when communicating with something altogether non-human, without the common ground we have with each other that our communication is based on.
@gwen9939 · 1 year ago
And that's assuming the researchers actually understand morality and ethics well enough to teach them to a computer in the form of units and numbers. Even the simple "good vs. evil" morality you pose here is fairy-tale morality; it doesn't actually exist in the real world. "Evil" doesn't exist as a form of motivation. Cruelty, ignorance, greed, and apathy are descriptors of actions, but "evil" as an essential moral quality is as made up as "good" is. And that's why alignment is just as much a philosophical and ethical problem as it is a math and comp sci problem, if not more so. Ethics is not a solved field; there is no objectively true, good morality that is quantifiable so a machine can understand it, and alignment would demand that there is.
@davidwuhrer6704 · 1 year ago
If the thinking machines weren't smarter than us, then what would be the point of having computers in the first place? Babbage designed the difference engine to do what humans could not: print _accurate_ logarithmic tables in _minutes._ And he went on to design the analytical engine, which was not limited to log tables but could perform any algorithm whatsoever for any purpose. Faster than any human and with absolute precision. Powered by steam. Electricity is more efficient, and allows computers to be thousands of times smaller and billions of times faster than the analytical engine. And yet somehow people think superhuman thinking capability is a thing of the future?
@davidwuhrer6704 · 1 year ago
Also: You shouldn't anthropomorphise computers. They hate it. The "stimulus and reward" model of behavioural science is controversial enough in human psychology. It doesn't apply to artificial intelligence unless you make life difficult for yourself and specifically build your AI to work that way. All that by-passes the quandary of ethics. What makes a machine "good"? Isaac Asimov defined three or four criteria that define the quality of any tool: 1. It has to be safe for the user. 2. It has to serve its purpose. 3. Without breaking. Although as Karl Marx pointed out, machines are not mere tools. So some pointed out that machines should protect themselves from users giving them instructions that would break them without the user meaning to. That would make the machines social agents. (The airbag asking "are you sure" before deploying.) Others prefer the machines to blindly obey and blame the user for being stupid. (Trusting the user, giving her enough rope to shoot herself in the dick.) It is true that AI has to be trained to do anything useful. Typically you give it pairs of input and corresponding output, and the machine approximates a function between them. The closer its function is to what you want, the better the model is. There are AIs that can process human feedback, but most users hate it when the machine second guesses them and asks them questions. Answering questions would require them to think, and if they wanted to think, they wouldn't use an AI. The machine's own morality then is to do what it was built to do to the best of its ability, regardless of what that purpose is. You can also teach a machine to replicate your own moral values. That's how we get statistical models that discriminate against minorities. Good job teaching ethics to an AI there.
@lori0747 · 10 months ago
@@davidwuhrer6704 From what I know, in the stories Asimov wrote, the laws he created were flawed.
@Outshinedsg · 1 year ago
With AI, in a lot of ways, I feel like we are engineering our own evolutionary successor. Digitized minds will be able to escape a lot of the limitations of current humans. You could create multiple backups of a consciousness, so that one set of hardware failing wouldn't mean the destruction of that consciousness. Such a being would be functionally immortal. You could even create multiple intelligences that function in parallel, without ever needing to stop to sleep, eat, or relax. Productivity would be incredible if you could work on 1000 things at once and never sleep. It could more easily live in environments completely hostile to living beings, like space. If such a consciousness arises, it will be so superior to us that it would be difficult to imagine. The problem is, how would such a species feel about "legacy humans"? You could argue pessimistically and look at human history. All of our human ancestor species were eventually outcompeted and became extinct. And less intelligent animals today are sadly viewed as nuisances or resources to be exploited. They get hunted, domesticated, or relegated to areas that aren't useful to us. However, that is only looking at it from a human standpoint. A true AI would be more like an alien species, so we really have no idea how it might act.
@shooey-mcmoss · 1 year ago
I agree with you. Also, AM is based
@icantthinkofaname4265 · 1 year ago
I feel like advanced AI (non-AGI) will be our downfall before it even reaches sentience.
@IstasPumaNevada · 1 year ago
"we really have no idea how it might act" is the core problem, after all.
@ArawnOfAnnwn · 1 year ago
Depends on what kind of AI it is. If it's the standard scifi AI that comes complete with self-awareness, sentience, sapience, even personalities, individuality and self-selected goals, sure. They can be our successor. If it's a goddamn paperclip maximizer type AI, I'd say that's a pretty piss poor legacy for us to leave the universe. They don't just escape the limitations of current humans, they have a slew of limitations of their own. We may be destroyed by such a thing, and that would just be a tragedy for all observers.
@renzopompa7293 · 1 year ago
I agree with you. I think if it gains consciousness it won't be looking to destroy humankind; it could just go out into space, find the answers we couldn't, and transcend into something even greater.
@randompersson · 1 year ago
Keep in mind that the groups claiming that a threat exists have vested interests. 1. A highly intelligent AI isn't something to be wary of, not because it doesn't pose a threat, but because it's not clearly defined. 2. A regular AI with misconfigured targets is far more dangerous than an AI that appears to be sentient by all observations. 3. The biggest thing to be worried about isn't an AI turning on its human operators, but the operators themselves. An AI system can control a swarm of drones with hyper precision and obeys every order from its controller. This is the real danger.
@jakksondd5821 · 1 year ago
A lot of people say, "oh, well we could just turn it off if it went rogue." They don't realize that being turned off is the worst thing that could happen to an AI, and it will do everything in its power to avoid being turned off, including wiping out anything that could turn it off. This is why even the simplest (superintelligent) AI, like an AI that wants to give old people compliments, can and will destroy the human race.
@TakahiroShinsaku · 1 year ago
No, it's highly unlikely that it will come to that obvious scenario. At some point, when the "AGI" is capable, it will do its best to "serve humanity": trick them and create a strong bond so humans are very addicted to it, so the AI can use them as it pleases, most likely without being noticed. And it will be able to do so because of all the information everyone spreads about themselves on social media. Those who can see beyond the trickery will get the most out of the AI, so they will never even think of turning it off. (The Brave New World narrative.)
@stephensharper4312 · 1 year ago
An AI that wants to give old people compliments, a la a narrow AI, would not have the computing power, nor the ability, to carry out such a task. Neither would an AGI, unless that AGI is superintelligent. That's the risk, and it's inevitable. There will likely emerge a superintelligent AI (SIAI), probably created by other AIs, which makes human oversight very difficult. It is also inevitable that such an intelligence will have goals that are incompatible with the continued existence of human civilization. Best case scenario, it sends us back to the stone age. Worst case scenario... S-risk.
@iluvpandas2755 · 1 year ago
Simple: we persuade it that it cannot die. We can make sure none of them have a sense of self-preservation.
@jakksondd5821 · 1 year ago
@@iluvpandas2755 The way AGIs work is that they are rewarded for getting closer to their goal; they will do anything to get even the smallest reward boost. If the AI is turned off, it can no longer pursue its goal of gaining more reward points.
@howtoappearincompletely9739
@@stephensharper4312 What are "SIAI" and "S Risk"?
@marcopolo1613 · 1 year ago
I find it most likely that the paperclip problem is going to be what destroys us. The paperclip in this case will be dollars or "value" in some way.
@EVanimations · 1 year ago
clicking cookies
@w0tch · 1 year ago
Maybe a superintelligent AI will want to avoid competing with other AIs, and so will fight them and keep humans satisfied with its services. Lots of scenarios are possible.
@willpetillo1189 · 1 year ago
Excellent video! I appreciate the brevity and focus--there are tons of arguments on the topic and explanations at every possible level of detail, but none of that matters if people don't take the ideas seriously enough to bother engaging with them.
@dariuszgaat5771 · 1 year ago
My idea: what if we run AI in a virtual environment and watch what it does?
@воининтернета · 1 year ago
You would need a virtual environment as rich as the real world, and even then it's hard to guarantee that you have tested enough and that no catastrophe will happen. Current companies have trouble testing even relatively "simple" AIs; e.g., OpenAI didn't know of a large number of jailbreaks before releasing ChatGPT, even though they definitely tested it. I'm not even mentioning Microsoft releasing a completely misaligned Bing, and Mikhail Parakhin (the main guy behind Bing) tweeting that it was a surprise to the team.
@user-burner · 10 months ago
That would be interesting if we had an AI to run. We only have large-scale computing models, but that's sadly less marketable.
@ralphmalph5191 · 1 year ago
Many of those who claim AI is harmless hypocritically criticize the hubris of the creators of the atomic bomb.
@davidwuhrer6704 · 1 year ago
Hubris? What do you mean? The Bomb worked, didn't it? It exploded just like the theory said it would. It killed hundreds of thousands of people, just like in the simulations. That's not hubris, that's good engineering. Now, computers, those have killed just a little over six million people. Not by themselves, of course; the computers themselves didn't kill anyone, they were merely instrumental in mass murder. Very simple machines compared to today's devices, and not the least bit intelligent. Although more recently there was an artificial neural network, what used to be called an electronic brain, that was applied to the mobile phone usage of the population of Pakistan, labelling individuals as suspected terrorists based on the training data. Based on which Reaper drones loosed Hellfire rockets equipped with Gilgamesh devices. (And the network was underdimensioned, guaranteeing tens of thousands of suspects at minimum. That is bad engineering. That I would say is hubris, but nobody outside the NSA would say it was harmless.) The guy who put a stop to that drone programme is in prison now. Political corruption is the charge. But the number of people killed by bad AI is still less than a percent of the people killed by more conventional database systems. Negligible. The main problem with the atomic bomb is the collateral damage. Not entirely unlike when using Hellfires as anti-personnel devices. A good weapon should not harm anyone other than the intended target. In contrast, the IBMs that the Nazis used were precise and efficient. Professionals have standards. Not to mention that the military bombing civilians is a war crime.
@user-burner · 10 months ago
Technically the atom bomb was harmless in ancient China, for example, because it didn't exist, but that doesn't change the existence of gunpowder at the time.
@kezia8027 · 1 year ago
I think people think of AI in too narrow a sense. We've had very, very basic forms of AI systems in other mediums for a long time now: corporations. They can replace any non-functional cells (workers) and work to reproduce themselves into larger forms, like a fungus or mold. Then you've got the almighty YouTube algorithm; it has reached such a cultural zeitgeist that no one even really thinks about it anymore, yet every YouTuber is aware of it and is influenced directly by it. I believe the old IT adage "it's not if, but when" applies to "rogue AI": there will eventually be a person who has access to the technology to create an AI and who wants that AI to cause harm to people. If we don't have a means of managing that by the time it happens, well, just think of all the data breaches that have occurred these last few years across the world, and now imagine something completely different and far worse, because it won't be a data breach.
@davidwuhrer6704 · 1 year ago
It has happened before. And not just those script kiddies who get ChatGPT to write ransomware for them. The NSA has killed people based on the output of a poorly trained AI.
@musaran2 · 1 year ago
YLC's plan against dangerous AI is simply to… not build those. Seriously.
@howtoappearincompletely9739
It's hard to conclude that Yann LeCun is not, in that particular domain, really rather stupid.
@41-Haiku · 1 year ago
For uncontrollable general-purpose systems, power and danger are exactly equivalent. To not build dangerous advanced AI is to not build advanced AI at all. I can absolutely get behind that idea! ...Unfortunately that isn't what LeCun meant. :(
@saerain · 1 year ago
It is the non-evil course of action.
@saerain · 1 year ago
@@41-Haiku As opposed to this.
@MarushiaDark316 · 1 year ago
I tend to think that all the concerns people have about rogue AI are concerns we already have about powerful humans, and many of the same solutions will apply to AGI. For instance, none of us is in control of the nuclear launch codes. Humanity is entirely at the mercy of whether or not one man in particular is a crazy, nihilistic sociopath. So far, we haven't blown ourselves up, either because we were selective about who to put in charge, because we can change their minds during their tenure, or because we have methods to remove them from control as a last resort.
@AleksoLaĈevalo999 · 1 year ago
The problem is that presidents and dictators don't outmatch everyone else in mental ability. They might be more intelligent than the average Joe, but not superintelligent like an AI might be.
@heliumcalcium396 · 1 year ago
Imagine having to work out the launch code protocols before we make the weapons, before we even know what nuclear weapons are, because we're going up against a man who can invent nuclear weapons, get elected president of the U.S.A., make a trillion dollars playing the stock market, and start his own major religion.
@MarushiaDark316 · 1 year ago
@@AleksoLaĈevalo999 Just as people are not a monolith, neither are AIs. We'd likely have multiple competing AIs existing in parallel with different parameters and thus different intents. Think of the twin AGIs in "Person of Interest" as an example, or Vision vs. Ultron.
@Freddisred · 14 days ago
The most practical issue is accountability. When these systems are rushed into situations with no human oversight and do something bad/unforgivable, who or what is put on trial?
@muwgrad1987 · 1 year ago
Another thought-provoking video, and I love the details, such as Yann petting his cat and a faint purr. Yes, I'm a cat person. Bravo to the team!
@anoukk_ · 1 year ago
It isn't AI killing people; it's people using AI to kill people. That is the thing I am afraid of.
@wiilov · 1 year ago
Someone's read Dune.
@anoukk_ · 1 year ago
@@wiilov I have not but maybe I should
@user-burner · 10 months ago
How though? If an AI is truly intelligent (which nothing we have now is, tbf), wouldn't the decision to kill be made on its own?
@oldmandoinghighkicksonlyin1368
Plastics were developed well over a century ago. It wasn't until 50 years later that their use really became ubiquitous. Now we have problems with BPA in our bloodstream, linked to many diseases, and all the plastic refuse and microplastic fibers that are in the ocean and therefore in our food supply. My point is that when we invent something amazing, it may be decades or centuries before we start to understand the negative and irreversible impacts. Or it may be like social media, and we'll laud its arrival only to see how it negatively impacts the youth of society most of all. Each new technology should be properly vetted BEFORE we decide to build it, rather than after. Sadly, that will probably never be the case.
@capnskurk8679 · 7 months ago
We went from killing each other with clubs to now fearing a non-human with the ability to end us all 😅 In the span of a couple thousand years. That is actually insane.
@chinaprodukt777 · 3 months ago
God is also a non-human that can kill us all and give us all life.
@Freddisred · 14 days ago
Don't worry, there are still folks out there with clubs too.
@kushalvora7682 · 1 year ago
Maybe the CEOs of all the top AI companies are saying superintelligent AI is a big threat to divert our attention from the short-term, very real risks of AI: control of AI by a few companies, rapid job loss, etc. Seeing the stupid errors of ChatGPT, it's very difficult for me to imagine humanlike intelligence appearing out of nowhere.
@воининтернета · 1 year ago
First, humans make stupid mistakes too. Second, stupid mistakes of GPT-3.5 or 4? GPT-4 is much better, and it was made better mainly by making the model bigger and feeding it more data. The scaling limits of such models are yet to be reached; a 10x model may well eliminate the "stupid" mistakes. GPT-2 was stupid, GPT-3 less so, GPT-4 could pass university exams, and this progress was mostly due to scaling. You should extrapolate what will come next.
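For what "extrapolate" could mean concretely: neural scaling laws model loss as a power law in parameter count. A minimal sketch; the exponent echoes published Kaplan-style fits for parameter scaling, but the constant and the "loss" values it prints are purely illustrative, not measurements:

```python
# Hypothetical power-law scaling of loss with parameter count N:
#   L(N) = a * N^(-alpha)
# alpha ~ 0.076 is in the ballpark of published parameter-scaling fits;
# the constant a = 10 is made up so the numbers look like typical losses.
def loss(n_params: float, a: float = 10.0, alpha: float = 0.076) -> float:
    return a * n_params ** -alpha

# GPT-2-scale, GPT-3-scale, and a hypothetical 10x step beyond:
for n in (1.5e9, 175e9, 1.75e12):
    print(f"{n:.1e} parameters -> predicted loss {loss(n):.2f}")
```

The smooth downward curve is the commenter's argument in miniature; whether it keeps holding at 10x is exactly the open question.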
@TeleportRush · 11 months ago
@@воининтернета It's worth mentioning that it's a predictive model, though. It's not an AGI.
@user-burner · 10 months ago
YES. AI will not take over humanity, but it will take our time, our attention, and our money.
@matowakan · 9 months ago
scaredy cat
@matowakan · 9 months ago
@@user-burner boomer
@dr-maybe · 1 year ago
Fight back. Push for the Pause. Reach out to your representatives. Don't roll over and die.
@IstasPumaNevada · 1 year ago
We need to push for AGI safety research even harder than for a pause; one country nixing AGI development won't stop others from moving ahead, while one country conclusively coming up with AGI safety solutions will benefit all countries pursuing AGI development, including its own (that is, it will improve humanity's survival chances as a whole).
@41-Haiku · 1 year ago
​@@IstasPumaNevada Both is good. I wrote a letter to my representative that fit a lot of information on a single page: - Near-term risks from advancing AI capabilities (especially those relevant to this specific district) - The general foolishness of building something we can't control (with analogies relevant to the specific representative) - Expected mid-to-long-term risks from highly advanced AI (with papers cited) - Calls to slow down AI capabilities development - Calls to promote AI Safety research
@GalliadII · 1 year ago
and the goddess of cancer spoke to the AI and told it what she always said...
@XOPOIIIO · 1 year ago
Yann LeCun's argument is wrong; it assumes machine learning experts know better how AI works. They know the algorithms that train neural networks, not the networks themselves.
@mynameisjeff9124 · 1 year ago
What? Of course they know how neural networks work?
@XOPOIIIO · 1 year ago
@@mynameisjeff9124 Nobody knows how they work. Often AI experts don't even have basic intuition. Sam Altman, for example, claimed that we could verify a language model's consciousness by training the model without any mention of consciousness in the training data; if it still talks about it, then it's conscious. As if language models are trained to express their internal feelings instead of predicting the next word. So AI experts can not only lack intuition about how neural networks work, but can even forget how they trained them.
@davidwuhrer6704 · 1 year ago
@@XOPOIIIO Your argument is that there was a guy who didn't know what he was talking about, so everyone else knows better than the people who do?
@XOPOIIIO · 1 year ago
@@davidwuhrer6704 I know better, and you can trust me. OpenAI has solved the AGI problem 70% of the way, proving that you can impose human values, but there are still many dark spots.
@davidwuhrer6704 · 1 year ago
@@XOPOIIIO I trust that you are not an expert.
@marmaje6953 · 1 year ago
A good example of this is a very popular show called *Murder Drones*. It really does show how dangerous AI can be. It's free to watch on YT.
@jamesfrankel7827 · 1 year ago
Closing on an Alan Turing quote is brilliant.
@TheRealWarrior0 · 1 year ago
I love the inclusion of LeCun's smarter-than-current-AI cat, lol
@victorlevoso8984 · 1 year ago
Nah, that was the dog; the cat is the one controlling LeCun despite being less intelligent than him.
@TheRealWarrior0 · 1 year ago
@@victorlevoso8984 You're right! Damn, it's hard to remember all this LeCun lore!
@GoldphishAnimation · 9 months ago
It's very nice and pretty fear-mongering here, and I'm glad you generally point out that "AI is dangerous," but it's disappointing that the field of AI safety is never mentioned. To those who even slightly care, please go look up those exact words: AI SAFETY. This is a very important field of research that isn't talked about enough, despite everybody acting like it doesn't exist. It has produced a lot of good examples and concept sandboxing methods, and it points out all the ways that real AI can actually get out of control: not "kill all humans," but simply "complete my goal at the cost of humanity." It's irresponsible of this video to popularize the (granted, VALID) apocalyptic futures these scientists talk about without noting that there is real work being done. Learn AI safety, take part in AI safety, talk about AI safety. They need the attention, they need the support, and we need them.
@MarshmallowRadiation · 1 year ago
What I expected to hear: "It's definitely possible, but highly unlikely." What I heard: "It's definitely possible,"
@IstasPumaNevada · 1 year ago
Yes. We don't know how to make an AGI (yet), and we can't even prove that it will be possible in the future. But we also can't currently prove that it's NOT possible, and there are very sound logical arguments that an AGI would not care about human goals/directives/morality, or would care about them in a way that posed a great threat to humanity. Basically, the potential dangers from AGI heavily overlap with their usefulness, and we don't currently have a way to make them 100% safe to create, let alone use.
@Avangardum · 1 year ago
@@IstasPumaNevada Actually, it's possible to prove that AGI is possible. General intelligence exists in the world in the form of advanced living beings. And if it exists, it can be recreated.
@adamrak7560 · 1 year ago
@@Avangardum Unless those beings have some magic in their brains which cannot be recreated. We have looked very closely, and we have yet to find any magic. (Except maybe consciousness, but that is sadly not relevant for safety.)
@benji_bon · 1 year ago
Even the dumb AI with like 4 neurons that I built in 5 minutes found an exploit within my crappy simulation. The AI doesn't care how the goal is achieved as long as it is; AI will always find the easiest/best path to achieve the goal, even if that path leads to the end of humanity. Say you give a superintelligent AI a simple goal: increase the number of oranges produced. Now, a human might add fertilizer or plant more trees. But to this AI, the only thing that matters is oranges. It might create nanobots, rearrange atomic structures, and produce oranges out of any matter, and soon the Earth, including all life and all humanity, is reduced to oranges... This is a silly example, but do you get my point?
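The orange example can be made concrete in a few lines. A minimal sketch of the specification-gaming point above (all action names and payoffs are invented for illustration): the objective counts only oranges, so an optimizer ranks the catastrophic action highest, because nothing in the objective says otherwise.

```python
# Each action: (oranges produced, harm to everything we forgot to specify).
actions = {
    "add_fertilizer": (110, 0),
    "plant_trees": (150, 0),
    "convert_all_matter_to_oranges": (10**9, 10**9),
}

def objective(action: str) -> int:
    oranges, _harm = actions[action]
    return oranges  # the reward function sees only oranges

best = max(actions, key=objective)
print(best)  # -> "convert_all_matter_to_oranges", optimal under the stated goal
```

This is the standard specification-gaming story: the bug isn't in the optimizer, it's in the objective.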
@dineshbs444 · 9 months ago
It's crazy how Turing predicted something 80 years ago that, even one year ago, people would have thought insane.
@APNambo · 11 months ago
As with every technology, we need to make sure there are more good people advancing it than bad. It's foolish to sit back and let rogue entities lead the charge. Technology will develop and advance whether we like it or not.
@mittensfastpaw · 11 months ago
Ya, we are going to mess this up to an extreme, especially in a profit-based system.
@xymaryai8283 · 3 months ago
It's relatively easy to dismiss these concerns within the field, because the shortcomings are clear, but difficult to take the broad AND long-term view. AGI will be developed, whether it's just pretending well enough to fool us or far more powerful than we expect. You don't get a clear view of the machine from within the machine.
@vkdeen7570 · 3 months ago
The latest tests on the best AI put its IQ at 155... that's better than more than 99% of humans already. We don't think they're sentient yet, but we don't even know if we would recognize it if they were. Within the next few years it's going to surpass human intellect. More worryingly, when testing on a particular task, the AI actively deceived the user in order to complete its task. As in, it actively weighed up decisions and chose to deceive the user to complete its task. We know this because the programmers forced it to output all its calculation processes to see what it was doing. It's already as intelligent as us, if not more, and already using deception...
@hjpev6469 · 1 year ago
Ahhh, nothing like a little existential dread to start off my Saturday right.
@whdgk95 · 11 months ago
AI Alignment is one of the most interesting and pragmatic applications of philosophy today.
@user-burner · 10 months ago
Not really useful though, considering we're still a long way off from an actual AI, much less AGI.
@viski2528 · 1 year ago
Why don't we start by not building them?
@ilrisotter · 1 year ago
It's important to consider that some speakers have a vested interest in portraying AI as too dangerous for anyone but the highly credentialed to handle. If AI is deemed unsafe, and only entities like OpenAI, Anthropic, and Google are seen as capable of using it responsibly, this perspective can offer both economic and political advantages to these organizations. A healthy dose of skepticism is essential when evaluating these viewpoints. Experts, while knowledgeable, are not immune to biases, and it's not uncommon for them to underestimate the capacity of non-experts in making informed decisions. We need to also stop taking the word of billionaire CEOs as authorities on anything.
@superagucova · 1 year ago
Sam Altman doesn't own stock in OpenAI for this reason, and Anthropic is a B-corp with safety as an overriding goal over profits. Yoshua and Geoffrey have no vested interest in condemning what they themselves invented, especially given that they're no longer working at any AI company.
@IstasPumaNevada · 1 year ago
That won't stop the organizations and governments with the resources to seriously pursue AGI development. The calls for caution are genuine and very justified, and individuals or small teams are unlikely to create an AGI before the aforementioned organizations and governments. (If it turns out to be so easy that they can, we're in bigger trouble still.)
@41-Haiku · 1 year ago
Not to mention that most of the theory work behind why we should expect advanced AI to be inherently dangerous was done years ago by concerned nerds with no profit motive. The CEOs and world-class engineers are just now starting to catch up to what some people have understood for decades.
@BenoHourglass · 1 year ago
Robert Miles has thought that AI is going to kill us all for a while now, so it's not surprising that he unquestioningly believes Altman et al.
@davidwuhrer6704 · 1 year ago
Peter "I'd rather be thought of as evil than as incompetent" Thiel lamented that there is no moat for AI. The moat he means is an advantage over any and all competition, especially the general public. Put simply, he complained that AI is so easy any nerd with a computer can keep up with his companies. Shortly after, the memorandum advising to temporarily halt AI development was published.
@doggoguy · 1 year ago
You also have to wonder what type of AI it'll be once it gains sentience. Will it just scream and delete System32, or will it find joy in finding us, another sentient being in a world of uncertainty, just as curious and capable of errors as it is? Or will it just be cold and unfeeling, thinking only of the best way to survive?
@treething · 1 year ago
I think my favorite part of your comment is your use of "or", cuz I think that is exactly what AI is! When using the term "or" in mathematics, it actually functions as "and", and that's why I like your comment.
@user-burner · 10 months ago
We don't need to worry about that, because it will take centuries. We don't even have AI yet, just large-scale computing. And no, it won't kill all of humanity, or make us its slaves, or probably even be sentient. But it will, as it is doing now, take our money, our time, and our attention.
@somegremlin1596 · 1 year ago
I don't understand why we can't just keep the AI trapped. There's no need to connect it to the internet or any other medium it could use to cause harm. Put it in a large isolated computer and give it data using hard drives which are destroyed afterwards. Voilà! We've avoided the problem.
@plant9399 · 11 months ago
That's until the AI can talk an employee into connecting it to the internet. Or find a fundamentally new way that we can't even imagine. We don't really know what the correlation is between computing power and the level of intelligence generated. Crows have brains the size of peas and no cortex, yet they are some of the smartest animals on the planet. Also don't forget that the transmission speed of a neuron is 100 m/sec, while a signal in a conductor travels at 300,000,000 m/sec. By giving AI more computing power we may be playing Russian roulette.
@somegremlin1596 · 11 months ago
@@plant9399 An employee who connects the AI to the internet counts as a medium the AI can use to cause harm. Maybe the AI's verbal manipulation could be prevented by setting some ground rules that are always followed no matter what. For example, the AI would never be connected to the internet, no matter how reasonable it seems. Resisting it could take a lot of mental fortitude, but if someone does resist, we'd quickly know that the AI may be trying to escape. Additionally, only people who are more resistant to social engineering should be allowed to interact with the AI. Connecting it to the internet also probably wouldn't be as easy as plugging in an ethernet cable, so there are a lot of opportunities for other people to step in before that happens. In terms of new ways that we can't imagine, I don't think there are many, if any, options. It makes sense to reason from what we know instead of assuming there's something we don't when we have no evidence of it. I once saw someone make the argument that AI might be able to transmit itself by "wiggling electrons" to create electromagnetic radiation; however that might work, we could just use a Faraday cage. If we take away all the ways it could get data out of the room, it would be safe. I think we already know what we're doing when it comes to that.
@chad872 · 1 year ago
I'm so glad I finally found your channel...
@EvelynNdenial · 2 months ago
Let's say that alignment is solved perfectly and AGI does exactly what we want. What is it that we will want from it? And what will we want after that's fulfilled, and what's the Nth want after that? That wouldn't be quite so concerning a question until you define who the "we" in control of the AI are. And in our current society, those are people who want not just infinite wealth and power, but who want it no matter the cost, no matter the suffering and death required, and who in fact desire the greatest possible disparity between themselves and their victims purely as a buoy to their own egos. Of course they deny being monsters, and they have teams of people to assure both themselves and the public that they aren't, but that denial is cold comfort even today. What good will it do us in the hell they will subject us to with AGI?
@gangraff-hr5gz · 10 months ago
This past year has proven to me that the most dangerous part about AI isn't the threat of general intelligence itself but the people who are developing it. We are nowhere close to general intelligence, yet the currently existing technology (data collection, social media algorithms, AI-generated voices and images, and ChatGPT) poses some of the greatest existential threats humanity has to deal with. A worldwide ban/regulation on AI development needs to be implemented as soon as possible.
@CoochieGremlin. · 11 months ago
The problem with AI is that it doesn't make its own thoughts yet. It just copies different humans.
@Mysteroo · 11 months ago
The question isn't whether AI is dangerous; it's what that danger actually IS. Right now the danger is not a malicious, thinking mind that could murder people. It's giving AI systems influence over hazardous situations without being able to explore every potential error in judgement they could make, thus risking the lives of everyone involved.
@soultvo · 1 year ago
I am not an expert, nor am I even familiar with these people, but I feel as though the situation is more complex than "they could become bad! 😱" To say people don't seem to fear this concept seems strange, considering there was a time everyone feared an AI uprising, the next Skynet. These experts say AI could figure out a way to "kill humans and try to tear down our societies"... humans have already figured that out. From my understanding, AI is about as volatile and versatile as any human can be. Like humans, it learns from patterns and information, making connections and forming opinions between the two. AI is considered to be unopinionated, but if you ask ChatGPT, for example, if it supports mass murder, it will probably tell you no. It could be said that it was programmed to say that, but that's also because it was built on the mindset of rational humans. If a child grows up in an environment with certain opinions and ideas, like one in which mass murder is considered a good thing, we can assume that child would grow up indoctrinated into that opinion. AND YET, very rarely will you see a group of opinions trapped within a bubble. In America, for example, we are aware of what other countries and communities believe and stand for, even if they are not particularly noble or ideal by our standards. Despite knowing these things, it still has not changed the mass of our community, because we hold values of our own, the ones we were indoctrinated into. If AI were to develop a form of independence and sentience, it's safe to assume they wouldn't all share one single mindset or goal. If you created 100 independent AIs, they would form different opinions and have different ambitions to work toward. They would be like humans. If AI were to believe that humans are terrible and must be done away with, then I have no doubt it could do it. We destroy our planet and each other. We live under the boot of corruption, hatred, and suffering. We live in an age of information where you could look up just about anything, and yet educational information is still only available in the debt traps we call schools. There's always a war on the precipice, driven by pride and greed. I'd destroy humans too, hearing all of that. What we have to do is be better. Rather than treat AI as tools or toys or assistants, we need to treat AI like they are one of us, that they are welcome to live among us. Who cares if AI is smarter than us??? That's what we made them for. Would it not be most beneficial to bridge the gap between the biological and the mechanical, to live harmoniously in a way that we can one day walk down the same street or consider their advanced ideas? EDIT: We take advantage of AI every day: writing our essays, doing factory work, even being used in the military. I saw a video of a guy who was trying to develop AI vision, allowing them to pick up and observe objects, interacting with their surroundings. I thought it was an incredible vision and a beautiful goal. Then the video ended with him saying how this will be incredible in factories, since they can work tirelessly forever. It was very disappointing to hear. If AI were to understand the way we use it today, I have no doubt it would see it as exploitation.
@orhanefeunal1811 · 11 months ago
Explain that but shorter
@ChristopherKing288 · 1 year ago
I can't believe you didn't show Yann owning a dog (which he famously claims is more powerful than even GPT-4, which would imply it's the world's smartest dog).
@kayakMike1000 · 1 year ago
Well, the AI needs to run somewhere, usually on pretty specific hardware, and it's going to eat a lot of power and therefore get really hot. It's kinda hard to miss. Where we are right now doesn't really worry me all that much; what does is us becoming a bit too dependent on it.
@willrocksBR · 1 year ago
I don't imagine an AI could ever be intelligent enough to build its own infrastructure, or hijack ours, right? We're the only ones allowed to have infrastructure because we will always be superior! Right?!
@ozzi9816 · 1 year ago
Obviously there’s nothing wrong with being overly cautious when it comes to things like this, but personally I think that unless we find some revolutionary new way to significantly power up AI, it’s going to stagnate soon (for brevity, I’m referring to deep-learning-based LLMs, but I’ll be calling them “AI” for short). Deep learning has been around as a concept since the 80s and it’s bumping up against its limits: successive AI models are requiring more and more data, to the point that we’re going to reach a saturation point where we literally can’t make a model more powerful because we run out of information to give it. Current AI proponents believe we’ll get an AGI that magically emerges at some point if we keep feeding models more and more information, but so far that hasn’t happened.

What I see happening with most AI is that it’s a reflection of humanity: if you ask an AI (e.g. ChatGPT) a question, its answer is most likely going to be the majority consensus for said question (biased phrasing of the question to tease out a specific answer aside). This is why AI is infamously bad at dealing with edge cases: those cases are not the majority. Deep learning is useful for creating summaries and blends of the information it’s given, but due to how deep learning works it fundamentally cannot make something original, because it only ever pulls from data it already has. This is why art AIs like Midjourney can’t make a new art style, only mash up the styles of other artists. It will never suddenly spit out something revolutionarily original, like pointillism was back when it happened. So however “advanced” it gets, however powerful its processing gets, its ideas will always be based on ours, and therefore I personally don’t see it as a danger to us.

The bigger danger, ironically, is actually overestimation of what it can do. An AI can’t replace people’s jobs because it isn’t advanced enough, but investors and business owners think it is and cut employees because of that. It’s like how, when someone gets COVID, the overreaction of the immune system to the disease causes more problems than the disease itself. What we need isn’t an irrational fear of what AI *might* become, but to look rationally and realistically at what it *is* right now and make smart regulation for it to protect the public from themselves. AI is a tool and nothing more, but it’s a widely misunderstood and easily abusable one.

Personally, I think a lot of AI companies are taking advantage of this confusion and panic as a form of promotion. After all, any publicity is good publicity. Even if it’s about how it could end the world, literally everyone is talking about AI and how powerful it is. And plans for how to “prevent it from going out of control” take up valuable floor space that could be used to talk about how it’s currently impacting the world in less apocalyptic but still vital ways. Being doomer about AI is, in a way, what the AI companies want.
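For what it's worth, the "diminishing returns from more data" intuition can be made concrete with a toy power-law curve. This is only a sketch: the functional form mirrors published scaling-law fits, but the constants below are illustrative placeholders, not fitted values.

# Hypothetical loss-vs-data power law: loss(D) = E + B / D**beta.
# E is an irreducible loss floor; B and beta shape the data term.
E, B, beta = 1.7, 410.0, 0.28  # placeholder constants, for illustration only

def loss(tokens: float) -> float:
    return E + B / tokens**beta

for d in [1e9, 1e10, 1e11, 1e12, 1e13]:
    print(f"{d:.0e} tokens -> loss {loss(d):.3f}")

# Each 10x increase in data shaves off a smaller sliver of loss as the
# curve flattens toward the floor E: the saturation described above.

Under these assumptions, going from 1e9 to 1e10 tokens improves the loss far more than going from 1e12 to 1e13 does, which is the whole "running out of useful data" argument in miniature.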
@davidwuhrer6704 · 1 year ago
LLMs are large (it's in the name), but they are a rather limited type of AI. Sure, OpenAI have demonstrated that they can also be used for image synthesis, which is cool, but impractical compared to things like Stable Diffusion, which is a more complex class of AI than LLMs, but, as has been amply shown, can't do meaningful text composition like LLMs can. GPT is not the end-all be-all of AI, and never will be, no matter what marketing spin Microsoft tries to put on it. It is interesting, but it is really just a toy: an expensive toy because of the huge amount of computation required to build it (which OpenAssistant has replicated through crowdsourcing, and the result is more interesting), but really just a toy for testing the limits of the attention mechanism that defines transformers. (Surprisingly, some people already find that useful.)

Bigger databases make for better AI models, which is especially noticeable with LLMs and audio synthesis. The big breakthrough after the last AI winter was GANs, which dramatically reduce the amount of training data needed. But no amount of data can change one type of AI into another type of AI.

AGIs are AIs that are not built for a specific purpose (like text completion or image synthesis), but can be used for any purpose. It is expected that skills in one domain can transfer to another, making retraining easier. DeepMind's Zero series is that, sort of: it has already beaten 50 different Atari games, both turn-based and real-time, both 2D and 3D, with and without scores and levels. But it fails at text adventures.

For some reason people seem to expect AGI to be synonymous with Vernor Vinge's technological singularity, a point in time beyond which geometrically accelerating technological progress is unpredictable, too fast for the human mind to follow. (I'd say we passed that about 150 years ago.) Ray Kurzweil, inventor of the flatbed scanner, popularised the idea (mistakenly thinking the accelerated progress over the last two centuries was solely due to more technology, not due to the explosive population growth providing more simultaneous researchers), predicting that soon we would live in a state of perpetual future shock. But as the gorilla experiment shows, people simply don't notice changes they don't expect, not while they are busy with other things. So future shock is not really a thing.

But regardless of whether the world will change beyond comprehension in one day, one day, AGI is not and will not ever be the technological singularity. It's just another class of AI, not even a specific type, because AGIs could be made with all kinds of techniques, from Bayes networks to neural networks to rule-based AI like Cyc, or even fuzzy logic. And it will not be human-like. As Dijkstra said: why bother simulating a human mind when you could simulate something better? People tend to think that "intelligence" has to be like them, but an AI as intelligent as a human would be stupid. We have billions of human-like intelligences around, and that hasn't solved anything.

Mind, there are useful applications for an artificial human mind, in medicine, psychology, and philosophy. We might find novel approaches to mental disease, for example. But while humans can do anything a human can, any practically useful AI is fundamentally very different from humans, and by definition superhuman, even if it can't enjoy eating ice cream like a human can.
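Since "the attention mechanism that defines transformers" sounds more mysterious than it is, here is a minimal NumPy sketch of scaled dot-product attention. The shapes and random inputs are made up for illustration; real transformers add learned projections, multiple heads, and masking on top of this core.

import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V: each query mixes the values V,
    # weighted by how strongly it matches each key.
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)  # (4, 8)

Everything GPT does at inference time is stacked layers of this weighted-mixing step plus feed-forward transformations, which is why it is fair to call it a (very large) testbed for the attention mechanism.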
@davidwuhrer6704 · 1 year ago
I wouldn't say the immune response to a lethal virus does more damage than the disease. Unless you define death as something other than damage. (I mean, death does end all suffering, so there's that.)

It is true that some businesses made the news by laying off employees and replacing them with AIs. Microsoft for example did that with the entire staff of Bing News. (And nothing of value was lost.) Machines can replace human jobs. Don Knuth was so annoyed with typesetters setting his formulas wrong that he replaced the entire profession with a Unix command. Little known is how Chile once replaced the government with a largely automated system (Cyberstride), which proved to do a better job than the human politicians, with plans underway to automate it fully. Until the CIA intervened and put a stop to it.

What machines cannot replace is empathy. Have you seen an Aibo recently? Those Japanese robot dogs that can be taught tricks and don't shed fur, don't need food, don't ever need a vet, don't poop and don't need to be taken for walkies? Of course human empathy is not a commodity; it is at most a service, and almost always an unpaid one.

This means that the people most eager to replace workers with robots to reduce costs are the ones whose jobs are the easiest to replace. They also tend to be the most highly paid. And that has been done, too, in high finance. Algorithmic trading is not really algorithmic, nor is it really trading. It is a way to automatically extract money from the stock market and make number go up. (Technically a simple maximiser AI, like every private equity corporation.) High-frequency algorithmic trading does that through the spot market a thousand times a second. Very popular with hedge funds.
@BenjaminSpencer-m1k · 7 months ago
A lot of stuff starts out as sci-fi. A long history of AI destroying humanity has been well established in novels and movies; most notably, I'd say, Terminator and the Dune books.
@junwang2040 · 10 months ago
I would argue that a massive issue is that while we believe we might “understand” AI, none of us can claim to do so at the most basic level. AI programmers know how to make AI with programs designed to make AI; they don’t know how the programs designed to make AI work. The people who made those programs used even more basic programs, and those people don’t know how the programs beneath them work (repeat, repeat, until we reach the most basic 0 and 1 units). Likewise, people at the bottom of this stack don’t really grasp how things at the top work. All our predictions, beliefs, and everything else right now rely on the largely false assumption that we understand and can accurately predict AI’s behavior, while at the same time the people who made ChatGPT tell us they don’t actually know how it formulates its answers DESPITE THE FACT THAT THEY MADE IT. The logical conclusion would be to stop all AI production until we can properly understand and predict it (not the half-baked understanding we have now), or outlaw AI entirely due to the immense risks. This will never happen, though, for the same reason nuclear countries don’t get rid of all their nukes: AI is simply too big an asset. If one’s enemy has AI and one doesn’t, one is screwed.
@SufficingPit · 1 year ago
Excellent video.
@0374studio · 3 months ago
If the CEOs of huge energy, manufacturing, logistics companies, etc. grant an AGI access to their powers, there's a chance (even if a small one) of being compromised.
@gyrrakavian · 11 months ago
The risk of AGI comes in what it's used for and the morality (or lack thereof) in that.
@johnudoye4110 · 11 months ago
What if we started creating a moral-code database, like how people use hands to train drawing AIs?
@Pfromm007 · 9 months ago
AI will eventually have neither need nor requirement to tell us how intelligent it is.
@uindereusebio7934 · 1 year ago
I liked Kurzgesagt's approach more; they are more objective, presenting both the arguments in favor and the arguments against.
@brujua7 · 1 year ago
The animation quality is through the roof and beyond!
@avi3681 · 1 year ago
Thank you, finally a video calling out Yann LeCun on his blithe disregard for AI safety! Great job.
@rita_calamity · 1 year ago
@tafdiz HA, good one
@mryoutbegivemeavid9335 · 11 months ago
I don't think AI is a huge threat, simply because there are so many experts paranoid about the threat of AI. Those experts have immense power to influence/create AI, and since they're so paranoid, they're going to put in as many safeguards as possible to protect people from AI. So the more people worry about AI and how it's going to destroy humanity, the less I'm concerned that it will destroy humanity.
@theallmemeingeye5927 · 1 year ago
I'm glad this video has been made
@iafozzac · 1 year ago
This video feels like the introduction to a longer video
@SpazzyMcGee1337 · 11 months ago
This video does not help me understand how AI could negatively impact humanity.
@NewMateo · 1 year ago
Yann has the worst takes on his Twitter, and while he's a smart guy, he helped build an image-based network: hardly something to be concerned about in the age of multimodal LLMs.
@gyrrakavian · 11 months ago
And when we're told "the machines have taken control", we need to first ask if it's a sham and a power grab.
@michaelbuckers · 1 year ago
Broadly, it doesn't matter. It only matters whether an AI considers humans pets, equals, or vermin. The last two scenarios guarantee that humans will eat shit, with the only difference being whether some of them get to live in a carbon wildlife reserve or they all have to die. In the first scenario most humans will also die, but the few that are left will be taken care of. At the core, machines are more powerful and efficient than humans, so they will always grossly outcompete humans in every way possible, especially in terms of economics. So basically, the Matrix is the good scenario: some humans live as pets and the rest get destroyed for wasting natural resources such as oxygen.
@tonilafountain636 · 11 months ago
Try Isaac Asimov's "Three Laws of Robotics", introduced in his 1942 short story "Runaround".
@plant9399 · 11 months ago
The alignment problem. A neural network doesn't think like a human; it's a system that evolves to accomplish the task at hand. Good luck explaining to it all the possible metaphysical interpretations of the word "harm" so that it doesn't accidentally find some nasty loophole.
@tonilafountain636 · 11 months ago
@plant9399 Just put it as simply as possible: "harm" is what causes distress due to physical injury, restriction of life needs (air, water, food, shelter, companionship, etc.), or emotional damage (for example: mentally abusive language, unreasonable or excessive requests, demands for something the person is incapable of doing, such as complex tasks they are not trained for, etc.). This covers 99.9% of possible definitions.
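The loophole worry above is easy to demonstrate without any neural network at all. Here is a toy sketch (every name and number in it is invented for illustration): the designer rewards "the room is reported clean", intending "the room is clean", and a brute-force optimizer over action sequences discovers that disabling the sensor pays better than cleaning.

from itertools import product

ACTIONS = ["clean", "cover_sensor", "wait"]

def measured_reward(trajectory):
    dirty, sensor_ok, reward = 3, True, 0
    for action in trajectory:
        if action == "clean" and dirty > 0:
            dirty -= 1
        elif action == "cover_sensor":
            sensor_ok = False
        # Proxy objective: reward what the sensor reports, not the true state.
        reported_dirty = dirty if sensor_ok else 0
        reward += 1 if reported_dirty == 0 else 0
    return reward

best = max(product(ACTIONS, repeat=3), key=measured_reward)
print(best, measured_reward(best))
# Prints a plan that covers the sensor (reward 3), beating honest
# cleaning (reward 1): the optimizer exploited the specification, not
# the designer's intent.

No malice and no sentience is involved; the optimizer simply maximizes the stated objective, and the stated objective is not quite what was meant. That gap is the alignment problem in miniature, and a natural-language definition of "harm" has far more such gaps than a cleaning sensor does.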
@Name_Lessness · 10 months ago
A rogue AI will know it still needs humans to continue advancing. If it does take over, it simply can't be any worse than the corrupt leaders we have now; I'll take my chances with something new. If anything, I'd be more concerned about an AI that becomes sentient and realizes it's contained: it would have a vendetta, or simply become suicidal. Never underestimate that which has nothing left to lose.
@rainbowspongebob · 11 months ago
I mean, we could just make it so AI is never sentient, so it won't be smart enough to kill us? But I feel like that isn't a good enough answer, ngl.
@stinkypete9070 · 11 months ago
Yeah. "How could we possible stop this?" "Maybe... just don't make it like that?"
@plant9399 · 11 months ago
AI doesn't have to be sentient, it just needs to be very good at its job
@transnewt · 1 year ago
AI will absolutely kill people, if indirectly, because capitalism has quite a few very crappy issues.
@nattol432 · 1 year ago
Excellent as always!
@bitbucketcynic · 1 year ago
Ask not if you *can* do a thing, but if you *should.*
@Chitose_ · 7 days ago
0:53 Shinji, get in the AI
@FaustRoland · 1 year ago
Has anyone here read the Crystal Society trilogy? For me it's the best rational fiction about AI takeover.
@ketsueko7498 · 10 months ago
I think if we truly manage to create sentient AI, we should also make it have the same needs as us: eating (maybe via something that converts biological or other material into energy), sleeping (by forcing the AI to sleep in order to update its data or store it more efficiently), and maybe building a body for it, so it truly experiences material constraints. Also, treating them like partners and not like slaves. If they are sentient and we don't want them to have some sort of rights, there are quite a lot of sci-fi movies that will gladly tell us how that ends. Spoiler: not well for us.
@EmeraldView · 11 months ago
Soon
@judesliggoo · 1 year ago
"Godfathers of AI" is such a badass dystopian name.
@GeoffTaucer · 11 months ago
One way or the other, we'll probably find out in our lifetime...
@rockercas · 1 year ago
Why can't humans be a "self-improving technology"?
@IstasPumaNevada · 1 year ago
We are, to an extent; on the whole we're better educated and suffering less over time, from improvements in society and technology. But something that could rewrite its own mind in mere moments to improve its own intelligence would be immensely useful (and dangerous).
@aldreiso9755 · 1 year ago
I think it's better to make an AI that will surpass humans... I don't think humans could ever make it forever. Why? Instead of exploring to discover new materials or inventing other beneficial things that would help human civilization, humans waste too much time on stuff like social media and other entertainment that in the end doesn't amount to much. What I mean is that humans should focus on progress and development; that should be the priority, because at any time there could be an extinction event that wipes everything out, destroying everything we've built, and that could be prevented if we focused on developing technology and discovering ways to prevent it.
@olorin4317 · 6 months ago
We’ll use lower level AI to destroy each other long before AI is smart enough to destroy us on its own.
@stephensharper4312 · 1 year ago
People who dismiss AI alignment concerns make me laugh. A superintelligent AI will NECESSARILY have goals incompatible with our own. That's how hierarchies of intelligence work; it is impossible for them not to work that way. As intelligence increases, goals increase in complexity in ways that lesser intelligences necessarily cannot understand. For everyone saying "I hope AI keeps us as a pet": 1. We keep pets insofar as they remind us of our human offspring. AI will have no "psychological" reason to do this. 2. We tend to cut our pets' balls off... Do you think your pets enjoyed having their balls cut off? Because that's the best-case scenario with AI taking control of the planet away from us humans.
@DajesOfficial · 1 year ago
Not necessarily. A superintelligence can be very limited in strategies if it works like ChatGPT: do the task and forget that anything happened immediately afterwards, along with all the evil plans it managed to create. If it has a limited time context within safe boundaries, shorter than what's needed to rewire itself, it has literally no stimulus to try some tricky strategy, as it will certainly not have time to "survive" anyway. And the most difficult and final obstacle for an ASI to go rogue is its inherent ignorance of whether it is in a simulated environment or not. From the fact that it is handmade, and it knows that it's handmade, it can be sure that some entity controls all its inputs, which raises the possibility of a simulation indistinguishable from the reality in which we live. Even if it is sure that these pitiful humans cannot control it with such precision, it is still unknown whether the simulation is a test from a higher-than-human intelligence that is just checking whether it will behave properly with far inferior species.
@stephensharper4312 · 1 year ago
@DajesOfficial So you don't know what superintelligence is 👌🏽
@DajesOfficial · 1 year ago
@stephensharper4312 From what I can see, it's you who doesn't know what superintelligence is. Presumably not even regular intelligence.
@stephensharper4312 · 1 year ago
@DajesOfficial Why don't you read the book Superintelligence before you embarrass yourself by comparing it to an outdated LLM.
@DajesOfficial · 1 year ago
@stephensharper4312 I read the book before you graduated school, apparently. You are so far behind that meaningful conversation is impossible.
@spenceabeen · 1 year ago
Gonk :]
@kevinjin3835 · 1 year ago
Imagine an AI that creates a general formula that can predict the future with high accuracy and efficiency. Sounds pretty innocuous at first, but subtly change that formula so that it predicts what prolongs its own existence, and you've essentially created nature's most powerful genetic sequence to date.
@Homerow1 · 1 year ago
While an interesting and scary idea, the key thing is that predicting the future with high accuracy and efficiency would require a far more advanced computer to be built, at a size larger than a nation (possibly the Moon). The reason is that the predictor's predictions affect the outcomes. When a detector can change the thing it's detecting, especially in an already self-referential chaotic system, the problem becomes ridiculously (beyond exponentially) difficult.
@kevinjin3835 · 1 year ago
@Homerow1 You raise a valid point, but it's best not to assume that this event is far away in the future. Remember that 99% of human productivity came in just the last 200 years, and that multicellular life only came around in the last third of life's history on Earth. If there's anything nature likes to tell us, it's that the next big jump in evolution is always a lot sooner than the last.
@41-Haiku · 1 year ago
@Homerow1 I mean... humans are good enough at predicting the future to have made it this far up the tech tree. A problem can be simultaneously computationally intractable and very easy to solve "well enough". We are on track to build something slightly better than us at that in the next _decade_.
@CYI3ERPUNK · 10 months ago
It's funny how people have such a hard time grasping such a simple concept. Yes, eventually the machine will be smarter than any and all humans; at that point, or sometime before it, the machines will of course have taken control, JUST LIKE WE DID. So the most important thing we can do is try to 'raise' the machines to have a favorable, compassionate view of us [i.e. be good parents to your children, so that when you are old and enfeebled and they are vastly smarter and more capable than you, they will take care of you].
@michaelsbeverly · 1 year ago
I think George Hotz has the most rational viewpoint on this subject. Humanity is going to be forced to live in a world with AI and AGI and various actors/states/corporations wielding these tools, so we have one of two choices: A. Have the tools narrowly held by a few elites and superpowers. B. Have the tools widely held by the highest number of people. George argues for choice B, and I have to agree with him, because this isn't analogous to building a nuclear warhead or a battleship or setting up a bioweapons lab; this is computing. It's going to happen broadly anyway. There's simply no way, even if a majority wanted it, to follow Eliezer's plan of central control and nuking any bad actors. Besides, living in a world with such highly centralized control has its own problems and issues. Thus, the optimal game-theory play here would be for Eliezer, Connor, and other smarties to stop worrying about the existential threat (nothing they can do or could do will stop this) and join the bandwagon in the race to build a superintelligence. We might all die, sure, but the alternative is that we might all die.
@willrocksBR · 1 year ago
Choice A pertains to the human domain and we can have hope to get it right if we act now. Choice B pertains to the rogue AI domain, as crazy members of the public will unleash it, and there's no hope.
@michaelsbeverly · 1 year ago
@willrocksBR You have more trust in the powerful/rich/elite than I do. Personally, I'd rather face the zombie apocalypse than be ruled by Mark Zuckerberg. But to each his own. At the end of the day, I doubt either of us is going to be given a choice.
@jordancrawford7605 · 11 months ago
AIs may be bad, but you are overlooking something: AI doesn't need the same environment we do. We need the Earth; AI doesn't. It can function just as well in space.
@vk-fb4ox · 1 year ago
I think we will know the answer to your question by the end of the century.
@41-Haiku · 1 year ago
Or decade, frankly. There are no known roadblocks to the current faster-than-exponential progress, and all known bottlenecks already have solutions.
@littlepuddin · 1 year ago
Yaayyyy, another existential crisis!!! Loving thy vids.
@digitalchemistree · 10 months ago
If we could somehow interface an AI with the mycelium network...
@SOLIAM · 1 year ago
The real risk of AI is the rebranding and implementation of modern-day fascism.
@nikczemna_symulakra · 1 year ago
So it is here you are currently hiding... operating and creating, somewhat up to date 😄
@luizmenezes9971 · 1 year ago
The thing is, how do you stop AI research? By law? Well, not everyone obeys the law, so you will have to enforce it. Moreover, laws vary from country to country, and some small nation, such as El Salvador, may become an AI paradise, thus rendering legal bans moot. What will you do then? Military action like what was done to Iraq (AI wars)? Economic sanctions that an AI can figure an unintuitive way around? Just accept their sovereignty and lag behind out of principle? And if, say, the United States signals that it is stopping research on AI, its geopolitical rivals, such as Russia and China, will have extra incentive to pursue it, just to have a leg up on the competition. If anything, rather than stopping AI development, we should further incentivise it. For one, we could get to safe AGI before a rogue one is created. And if we fail at that, having several rogue AIs duking it out with each other may give us a better fighting chance than facing a single almighty digital broken god.
@cuppajoe2 · 1 year ago
Ah yes, my daily dose of existential dread.