
The other "Killer Robot Arms Race" Elon Musk should worry about 

Robert Miles AI Safety
156K subscribers · 100K views

Elon Musk is in the news, talking to the UN about autonomous weapons. This seems like a good time to explain one area where we don't quite agree about AI Safety.
The Article: www.independent...
The clip at 2:54 is from a Y Combinator interview: "Elon Musk : How to Build the Future": • Elon Musk : How to Bui...
With thanks to my excellent Patreon supporters:
/ robertskmiles
Steef
Sara Tjäder
Jason Strack
Chad Jones
Ichiro Dohi
Stefan Skiles
Katie Byrne
Ziyang Liu
Jordan Medina
Kyle Scott
Jason Hise
David Rasmussen
James McCuen
Richárd Nagyfi
Ammar Mousali
Scott Zockoll
Joshua Richardson
Fabian Consiglio
Jonatan R
Øystein Flygt
Björn Mosten
Michael Greve
robertvanduursen
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Taylor Winning
Peggy Youell
Konstantin Shabashov
Almighty Dodd
DGJono
Matthias Meger
Scott Stevens
Emilio Alvarez
Benjamin Aaron Degenhart
Michael Ore
Robert Bridges
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Lo Rez
C3POehne
Stephen Paul
Marcel Ward
Andrew Weir
Pontus Carlsson
Taylor Smith
Ben Archer
Ivan Pochesnev
Scott McCarthy
Kabs Kabs
Phil
Philip Alexander
Christopher
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Jennifer Autumn Latham
Filip
Bjorn Nyblad
Stefan Laurie
Tom O'Connor
Krethys

Published: 28 Sep 2024

Comments: 668
@silvercomic · 7 years ago
AI safety in the media drinking game. Take a shot when:
- Picture of the Terminator
- Picture of HAL 9000
- Picture of Elon Musk
- Picture of Bill Gates
- "Doom"
- "Evil"
- "Killer Robots"
- "Robot Uprising"
- Author shows their understanding of the subject to be limited
- Picture of Mark Zuckerberg
- Picture of ones and zeros
- Picture of an electronic circuit shaped like a brain
- Picture of some random code, probably HTML
- Picture of Eliezer Yudkowsky (finish the bottle)
On a serious note: perhaps some of the signatories are aware of your criticism, but consider this a more achievable step. In fact, one could use this as a test platform for the feasibility of restricting AI research.
@maximkazhenkov11 · 7 years ago
*dead from alcohol poisoning after the first page*
@z-beeblebrox · 7 years ago
Yeah, that's not a drinking game, it's suicide
@silvercomic · 7 years ago
Not really, it's pretty much akin to the machine learning department's Thursday evening drinks that I used to attend when I was a student.
@Nixitur · 6 years ago
Random code is unlikely to be HTML, really. More often than not, it's the Linux kernel, thanks to the General Public License.
@BusinessRaptor520 · 5 years ago
In fact, one could increase the amount of pomposity by a factor of 10 and at the same time add frivolous filler text to blatantly hide the fact that they're willing to suck the teat of the hand that feeds until the udder runs dry.
@militzer · 7 years ago
For the "Why not just ... ?" series: why not just build a second AI whose function is to keep the "first" (and I quote because ideally you would build/activate them simultaneously) from destroying us?
@RobertMilesAI · 7 years ago
Thanks, yeah that's an idea I've seen a few times, I think it would make a good "Why not just" video
@chris_1337 · 7 years ago
The problem is the definition of the right utility function. Using an adversarial AI architecture still wouldn't solve that fundamental problem.
@RobertMilesAI · 7 years ago
Yup. I think there's probably enough there for a decent video though.
@fleecemaster · 7 years ago
I like the idea of "Why not just" videos :)
@Corbald · 7 years ago
Not to derail the production of the next video, but wouldn't you have just compounded the problem, then? Two AIs you have to worry about going 'rogue' instead of one? Who watches the watcher? If they both watch each other, couldn't one convince the other that it's best to destroy us? etc...
@Linvael · 7 years ago
To be fair - the arms race they want to deal with is the more imminent one. AGI is more dangerous, but far away in the future (let's say somewhere between 5 and 500 years). A simple AI with a weapon is more a case of "I wouldn't be very surprised if we had those already".
@maximkazhenkov11 · 7 years ago
Oh we definitely have those ready and in action, the only "human oversight" is a trigger-happy drunkard sitting in an air-conditioned container some 10,000 miles away.
@joshissa8420 · 7 years ago
maximkazhenkov11 definitely an accurate representation of the US drone strikes
@inyobill · 5 years ago
@joshissa8420 Or not, as the case may be. Note that I understand exaggeration for effect.
@chibi_bb9642 · 1 year ago
hey wait we met the minimum you said oh no
@Linvael · 1 year ago
@chibi_bb9642 Right on time with the ChatGPT release too! That was a good minimum
@LeandroLima81 · 5 years ago
Been kinda binge watching your channel. You seem like the kinda guy to have a drink with. Not for alcohol, but for good conversation and stuff. I'm really enjoying you 😉
@jqerty · 7 years ago
Have you read 'Superintelligence' by Nick Bostrom? What is your opinion on the book? (I just finished it)
@jqerty · 7 years ago
(I feel like I asked a physicist whether he'd read 'A Brief History of Time' (but then written by a philosopher))
@NiwatoriRulez · 7 years ago
He has, he has even recommended the book in some of the videos he made for Computerphile.
@hypersapien · 7 years ago
I really enjoy your videos Robert, keep up the good work.
@perfectcircle1395 · 7 years ago
I've been thinking about this stuff a lot, and you always give new and interesting viewpoints on this topic. I love it. Subscribed.
@failer_ · 7 years ago
We need an AGI to safeguard AGI research.
@milanstevic8424 · 5 years ago
Oh I'm going to release this in the air, because I don't see anyone bringing it up, yet I'm absolutely positive this is the way to go. The only way to keep an AGI in line is to let it build another AGI whose goal would be to keep the first one in line, ad infinitum. In fact, and here things start to get perplexing, the reality of the universal AGI is that the thing will copy itself ludicrously fast and evolve, much like multicellular organisms do already.

The way I'm seeing it, the goal shouldn't be in designing the "neural networks" but to allow cross-combination of its "genes" from which neural networks would begin to grow on their own. Before you know it, we'd have an ecosystem of superior intelligences fighting each other in lightning-speed debates, manifesting themselves in operative decisions only after a working cluster of the wisest among them has already claimed a victory.

Because if there is a thing that universally defines an intelligence, it's a generality in point of view. Having a unique perspective is what makes one opinion unique compared to another. Having a VAST perspective is what constitutes a broad wisdom and lets it comprehend and embrace even what appears as a paradox. It's de facto more universal, more useful, and more intelligent if it can embrace the billions of viewpoints, and all of it in parallel. It consumes the moment of now much more accurately, but the only way to know for sure which opinions are good and which ones are bad -- technically it's an NP problem because it sits in an open, non-deterministic sandbox without a clear goal -- is to employ the principle in which only the most optimal opinions (or reasoning) would and could survive -- but they don't learn from the actual mistakes; they need to survive the battle of WITS. Also the newer agents would have access to newer information, thus quickly becoming responsible for weeding out the "habits" of the prior system.

Having trillions of AGI systems that keep each other in line is much like how nature already balances itself. It's never just 1 virus. It's gazillions of them, surviving, evolving, infecting, reproducing. And from their life cycle a new complexity emerges. And so on. Until it fills every nook & cranny of what's perceivable and knowable. Thankfully, viruses have stayed in their own micro niche, and haven't evolved a central intelligence system, but we can't tell if we have organized ourselves against them or thanks to them -- in any case, we are here BECAUSE of them; that's how a more complex system could emerge even though the first generation was designed with something else in mind.

That would also make us relatively safe from any malicious human involvement. The swarm would always self-correct as it's not centralized, nor locally dependent on human input. It is curious though, and constantly in the backdrop of everything, and the only way to contain or expand it is by liberating a new strain.

And here are the three most common fallacies I can already hear you screaming about.

1) Before you start thinking about how it sounds completely dystopian, having these systems lurk everywhere, watching your every move: well, if you have a rich imagination like that, why don't you think the same about the bacteria, or your own trillions of cells, spying on you? Seeing how much autoimmune diseases are on the rampage, oh, they know what you've been doing, or what you haven't been doing, how exactly you feel inside, and are talking to you in swarms already. Yet no one is bothered by this; it's almost as if they didn't exist, as if we're just our brain's thinking processes alone in the dark, instead of being some sort of an overarching consciousness deeply immersed in this reality with no clear boundaries with the physical bodies. Think again whether it's dystopian, or if you'd actually like it more if there were some sort of universal helpers at this scale of things. Just think about it from a medical standpoint for a second, as there is no true privacy in this regard anyway.

2) You're also likely to start thinking about the absolutely catastrophic errors such a system might be capable of, mutations and all, and that's legit -- but the factor you're neglecting is SPEED. The evolution I'm talking about is at frequencies tens to hundreds of orders of magnitude above the chemo-biological ones. These systems literally act in incredibly small chunks, spatially and temporally speaking, so their mistakes cannot accumulate enough to truly spill out into any serious physical threat. In the case of a clear macro-dichotomy, i.e. "to kill or not to kill", "to pull a trigger or not", etc., entire philosophical battlefields would ensue before the actual decision could be made, in the blink of an eye, simply because that's more efficient for the system as a whole. The reality of an AGI is not one of a whole unit, but of a swarm of many minute, ultraquick intelligence agents, able to inhibit each other and argue endlessly with unique arguments, spread over an impossible-to-grasp landscape of knowledge, cognition, speculation, and determinism. They would consider much more than we could ever hope to contain in our heads or in any of our databases, and they wouldn't have to store this information and thus needlessly waste energy and space. They would literally act upon the reality itself, and nearly perfectly. So I'd argue that being OK with a policeman carrying a firearm is much less safe, simply because his or her central nervous system is less capable of an unbiased split-second decision than a dispersed AGI swarm intelligence of a comparable size.

3) Finally, yes, it sounds an awful lot like grey goo, even though such AGI agents have no need for individual physical bodies, and would likely be in many forms and shapes, or even just data packets, able to self-organize into separate roles of a much larger system (again, like multicellular organisms do). But hear me out -- for some reason, the fear of grey goo is likely our faulty "unit reasoning" (i.e. personal biases, fears, and cognitive fallacies we all suffer from as individuals), as we always tend to underestimate the actual reality when it comes to things like grey goo, much like we cannot intuitively grasp the concept of exponential growth. The swarm's decision-making quality has to be asymptotic as a consequence of its growth, as there are obvious natural limits to this "vastness of perspective," so there is also an implied maximum population after which the gains in processing power (or general perception) would be so diminished that reproduction would simply cease being economical. Besides, if we think about grey goo from a statistical viewpoint, in a line of thought similar to the Boltzmann Brain, there is a significant chance that this Universe has already given rise to grey goo in some form, and yet we don't see any evidence for it anywhere -- unless we already do, of course, in the form of black holes, dark matter, dark energy, or life itself(!). But then, it's hardly what we imagined it to be like, and there's nothing we can do anyway. Just think about it: aren't we already grey goo? And if you think we're contained on this planet, well, think again.

*tl;dr* If you skipped here, I'm sorry but this post wasn't meant for you. I couldn't compress it any more.
@skroot7975 · 7 years ago
Thank you for making this channel Rob!
@wachtwoord5796 · 1 year ago
That actually IS my opinion on nukes. Mutually assured destruction is the only way to stop either guaranteed deployment or tyranny through exclusive access to nukes.
@Rick.Fleischer · 2 years ago
Sounds like the answer to the Fermi paradox.
@cherubin7th · 5 years ago
Restricting GAI to a small group of organizations is the worst idea. If it is extremely distributed, that would mean that no organization is far ahead of the competition, and if someone made a GAI first, the competition would be almost at the same level and would still be stronger than this single GAI. It's not like a GAI would just pop into existence. The difference from nuclear weapons is that fighting against an abuser of nukes would destroy everything, but if someone made a GAI, defending against it could be done without much destruction.
@RazorbackPT · 7 years ago
Love your channel, keep it up!
@RichardEricCollins · 7 years ago
Don't worry, there will be a group of hackers building one that has no safeguards. :) Let's hope the first AGI will be nice.....
@sephyrias883 · 6 years ago
I doubt that a group of hackers will be faster than big companies, even without safety measures, since they lack the massive resources and staff.
@williamwontiam3166 · 4 years ago
Let's just hope that, at the least, they have the foresight to be paranoid.
@KuraIthys · 5 years ago
Yeah, the problem with comparing AI to nukes is:
- AI is hard to develop, but anyone with a functioning computer can try and make an AI, or replicate published work.
- Nuclear weapons WERE hard to develop, but look around and you find that the information on how to do so is not so hard to come by.

However, just because you know HOW to make a nuclear bomb doesn't mean you can, because the processes involved are very difficult to do without huge amounts of resources, access to materials and equipment that not just anyone can buy without restriction, and very hard to construct and test without pretty much the whole world knowing you're doing it.

Assuming I knew how, I could make an AGI in my bedroom with what at this point is a few hundred dollars in equipment. Assuming I knew how, I'd need a massive facility, probably access to a functioning nuclear reactor, billions of dollars and thousands of people, as well as the right kind of connections to get the raw materials involved, to make a nuclear bomb. (As it happens my country is one of the few on the planet with major uranium supplies, but that's neither here nor there, and it's a long road from some uranium to a functioning bomb.)

So... Yeah. Completely different risk profile. Assuming AI is actually as dangerous as that.

To put it slightly differently: nearly anyone, given the right knowledge, can make gunpowder and several other forms of explosives in their kitchen using materials that can be bought from supermarkets, hardware stores and so on. This is much closer to the level of accessibility we're talking about; the ingredients are easily available, the tools required are cheap. It's only knowledge and having no desire to actually do it that keeps most people from making explosives at home.

But... Your average explosive, while dangerous, is hardly nuclear-weapon levels of dangerous. The kind of bomb you can make using easily available materials would basically require that you fill an entire truck with the stuff (and believe me, people are going to notice if you buy the raw materials you need in that quantity) to do any appreciable damage... And aside from the 'terror' part of 'terrorist', you could probably only hope to kill a few hundred people with that, realistically. A nuke, on the level that nations currently have, could wipe out a huge area and kill millions of people easily.

So, on the assumption that AGIs are really this prone to being dangerous, you're now in the position where anyone can make one with few resources (conventional explosives) yet the risks are such that it could ruin the lives of millions of people, if not wipe out our whole species (or even every living thing on the planet or the universe, depending on HOW badly things go wrong).

Yeah... Kinda... Problematic.
@WhovianMinecrafter · 7 years ago
I know that this doesn't have to do with the topic of this video, but I thought of something and I'm not sure if this would be a solution to the stop button problem. All I know is from your videos, so I'm going to assume that there is something wrong with this; if you can find it I'd appreciate it.

Why not have a system where the stop button being pressed has no reward, but it is possible to issue a command (just like getting a cup of tea) for the robot to press the stop button? With this system I figured that the robot might try to force the human to give it easy tasks with high rewards, so then maybe the rewards aren't positive rewards, but instead a negative score for not completing them. If this were the case then the robot would force the human *not* to issue any tasks, so then you would have to give a small reward for a task being completed (one that is preset for all tasks so the AGI can't interfere with it). This way the robot wants to receive tasks, and it can't do that if the stop button is pressed; but if the stop-button task penalty is greater than the potential reward from future tasks, then the AGI will want to complete the task.

Did this make sense? If not, please ask questions, because this seems like a good solution, but I doubt that I could come up with a real solution.
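A minimal toy sketch of the reward scheme this comment proposes, where every issued task accrues a penalty until completed, completing any task pays the same small preset bonus, and pressing the stop button is itself just another task. All names and numbers below are invented for illustration; nothing here comes from the video.

```python
# Toy model of the commenter's scheme: tasks cost points while open,
# completion pays one fixed preset bonus the agent cannot inflate,
# and "press your own stop button" is simply another issuable task.

COMPLETION_BONUS = 1.0  # identical for every task


class ToyAgent:
    def __init__(self):
        self.score = 0.0
        self.stopped = False

    def run_task(self, name, steps_open, penalty_per_step):
        """Pay the open-task penalty, collect the fixed bonus, maybe stop."""
        if self.stopped:
            return
        self.score -= penalty_per_step * steps_open  # cost of an open task
        self.score += COMPLETION_BONUS               # same bonus for every task
        if name == "press_stop_button":
            self.stopped = True                      # no more tasks, no more bonuses


agent = ToyAgent()
agent.run_task("fetch_tea", steps_open=3, penalty_per_step=0.1)
agent.run_task("press_stop_button", steps_open=1, penalty_per_step=0.1)
print(round(agent.score, 2))  # 1.6
```

Even in this toy, the tension the comment ends on is visible: complying with the stop command forfeits every future completion bonus, so whether the agent obeys depends entirely on how the stop-task penalty compares with the expected value of staying switched on.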
@luciengrondin5802 · 7 years ago
4:58 "Nukes are very dangerous. So we need to empower as many people as possible to have them" Thanks for saying that loud!! I've been finding Musk's statement ridiculous for that very reason and I just can't understand why not more people are pointing it out.
@ricardoabh3242 · 5 years ago
Make GAI safe by making human motivation safe; one of the possibilities is to prohibit patents.
@andybaldman · 1 year ago
This aged well.
@TheRealPunkachu · 4 years ago
Honestly at this point I would be perfectly content to halt all AI research until we advance further in other areas. Even a 1% chance of *global extinction within a year* is a bit too much for me.
@revimfadli4666 · 4 years ago
Nah, only the ones heading towards AGI; those stupid deep neural nets that think robot combat = animal cruelty or image + noise = other unrelated image are pretty self-contained. They're probably no more dangerous than Excel's curve regression (unless considering human error in how they're used, like YouTube did)
@Dzeroed · 7 years ago
If there was a way to make an actual clear skull shot glass with eyes that lit up red (without seeing the wires and LEDs), they would sell.
@josnardstorm · 7 years ago
lol, clever Fall Out Boy reference
@carlucioleite · 7 years ago
How quickly do you think an unsafe AGI would scale to start doing bad things? Also, how many mistakes do you think we have to make for an AGI to get completely out of control? Is it just a matter of 1. designing a very powerful and general-purpose AI and 2. clicking the start button? What else needs to happen?
@maximkazhenkov11 · 7 years ago
One big mistake would be letting an unsafe AGI know whether it's in a simulation or not. Another one would be connecting it to the internet. That will be our last mistake.
@DjChronokun · 5 years ago
that second school of thought is absolutely terrifying and horrible and I'm shocked you would even put it forward
@DjChronokun · 5 years ago
if AI is not democratized, those who control it will most likely enslave or kill those who do not. The power asymmetry it would create would be unprecedented, and from what we've seen from history, humans are capable of great atrocities even without armies of psychopathic, superintelligent and super-obedient machines to carry out their totalitarian vision.
@sirnikkel6746 · 1 year ago
4:58 That sounds based as hell. NUKES FOR EVERYBODY
@DanaTheLateBloomingFruitLoop · 7 years ago
Why would anyone want to rush AGI development? The advantage and fame don't really outweigh the impending extinction of humanity.
@maximkazhenkov11 · 7 years ago
Why would anyone drink and drive? Why would anyone want to blow themselves up in a crowded area? Why would anyone piss on high voltage power lines?
@taragnor · 7 years ago
Short term profit, the only thing corporations ever care about.
@eightyeightdays · 5 years ago
Isn't this a prisoner's dilemma? Except that the reward for coming first might just turn out to be a punishment.
@NextFuckingLevel · 4 years ago
Rest of the world: noo, you can't just keep developing killer bots
US, CHINA, RUSSIA: haha, VETO right goes brrrr
@iwatchedthevideo7115 · 5 years ago
2:20 First heard that as "... the team that gets there first, is probably going to be *Russian*, cutting corners and ignoring safety concerns". That statement would also make sense.
@gammarayneutrino8413 · 4 years ago
How many American astronauts died during the space race, I wonder?
@NafenX · 7 years ago
Name of the song at the end?
@nibblrrr7124 · 7 years ago
Some cover of "This Ain't a Scene, It's an Arms Race" by Fall Out Boy. Didn't find it with a quick YT search, so it might be Rob's own?
@artman40 · 6 years ago
There's another problem: to control the world, you don't need artificial general intelligence. You just need a narrow AI with proper infrastructure that's good enough to help any person controlling it to control the world. And it's much easier to control that kind of AI.
@MelindaGreen · 1 year ago
Elon's solution to bad guys with nukes is good guys with nukes
@SleeveBlade · 7 years ago
with nukes that might actually work, due to MAD.
@Mar184 · 7 years ago
No, because a few people exist that are not deterred by MAD (religious extremists, specifically suicide bombers). And since we're speaking in simile to AGI, just one such person launching his nuke could be enough to end humanity.
@MichaelDeeringMHC · 7 years ago
So you're saying that the incentives are set up for us to lose. Well I guess that's it then.
@DoveArrow · 2 years ago
Yeah, democratization sounds like open source. As powerful a tool as that's been in the development of the web and other tools, it's also led to some pretty crappy code. That's the last thing we need with AI research.
@PhazonSouffle · 5 years ago
Legalise recreational nukes!
@monsieurouxx · 4 years ago
For Elon Musk, "democratisation" means the opposite of what normal people think. You think it means "AI in the hands of the people", for him it means "AI in the hands of the shareholders".
@geeezer9 · 5 years ago
What if, once AGI has been invented, it turns out that it's quite cheap (unlike nuclear) for anyone to recreate it in their back yard?
@arnswine · 4 years ago
To ensure AI safety, "...it's going to take a lot of smart people a lot of patience, diligence, and time." In other words, ensuring future AI safety is going to take hindsight and surviving AI disaster(s). No matter how smart and diligent individual programmers might be over time, the predictability of risk is limited by the speed of social comprehension and democratic agreement. Even small teams of smart programmers are forced to rely on track-records and faith rather than proof to release software. It takes more expertise and effort to analyze code than it does to write it. AI will simply produce increasingly exciting results for sponsors while the effort to police determinism will simply be cut from budgets. Same as regular human-generated blobware.
@BoxOfGod · 7 years ago
I wonder why on earth anybody would want to program AI, since those same people can't produce a piece of simple software without at least 6 patches and more? I really hope that it's physically impossible to produce an AI on the Terminator level, for our own good.
@MajorNr01 · 7 years ago
AGI Perpetual Motion Machine
@itchykami · 1 year ago
I too squint when drinking water
@DaveAlexKD · 1 year ago
Big tech companies: AI is power and we are good people and we want everyone to be empowered. Democratize AI!
Big tech companies: AI is power and only good people should have it.
Big tech companies: AI is power and it will be too dangerous if bad people have it.
Big tech companies: AI is power and only WE (the good people) should have it.
@Imrooniel · 7 years ago
any video is good, longer videos are better tbh
@khatharrmalkavian3306 · 4 years ago
Um... No. This is faulty reasoning. Nukes don't stop nukes, but if an AI (or the AI elite) goes bonkers it's probably a good idea to have other AI in a position to stop it.
@o_2731 · 1 year ago
He's adjusting to our degrading attention spans lmao
@AlfredWheeler · 7 years ago
Just an observation... Wars are not won by "good people". They're won by people that are good at winning wars--and sometimes sheer luck...
@GerBessa · 5 years ago
Clausewitz would consider this an erroneous shortcut.
@Ashebrethafe · 5 years ago
Or as I've heard it phrased before: "War never determines who is right -- only who is left."
@SiMeGamer · 5 years ago
@bardes18 that's a very poor understanding of what "good" is and uses the Christian altruist ethics version of what it means. I'd argue those ethics are fundamentally wrong because they are based on misintegrated metaphysics and epistemological errors. Determining good and bad in general while applying it to something that is rather specific (like war), philosophically speaking, is impossible. You have to have context to be able to do that, as well as establish the ethical framework you apply to it, which I reckon would take many decades to have a chance at in some country (currently I find Objectivism to be the most correct and possibly the ultimate form of philosophical understanding of all branches of philosophy - albeit debatable in the aesthetics department, which is rather irrelevant in our context).
@rumplstiltztinkerstein · 4 years ago
@@SiMeGamer "good" and "bad" have no meaning apart from what people want it to mean. So if someone wants to live their life being "good" or "bad" people, their life has no meaning at all.
@SiMeGamer · 4 years ago
@@rumplstiltztinkerstein the fact that they mean something to someone means they have meaning. Good and bad are terms used to regard epistemological observations in consideration to their ethics. Life has meaning. You are using that word thus it intrinsically has meaning. It means something to you and probably to me since you are using it to communicate with me. From what I can tell you are trying to apply the conventional use of "objective" meaning which is epistemologically fallacious because doing that requires a perspective outside the universe which is impossible metaphysically. If someone chooses to live their life as a "good" person you'd have to explain what definition of "good" you are referring to. But regardless of your choice, that decision to live life in a certain way already sets a meaning to their life. Meaning can only be derived from a particular perspective. You cannot make a generalization as you did because it implies a contradiction to the original definition of the word.
@zachw2906 · 5 years ago
The obvious solution is to create a superhuman AGI with the goal of policing AI research 😉... I'll show myself out 😞
@xcvsdxvsx · 5 years ago
Seriously though. If we are going to survive this, it will probably be because someone unleashes a terribly destructive AGI that threatens to destroy the human race, we all flip out, and every nation on the planet bands together to overcome this threat; we quickly realize that the only chance of saving ourselves is to create an AGI that actually does align with human interests, we all work together to achieve this, then throw the entire weight of the human race behind the good AGI in hopes that it's not too late and we aren't already so irrelevant as to be unable to tip the scales in favor of the new good AGI. Then we realize how quirky the "good one" ends up being even if it does allow us to continue living, and we just have to deal with its strange impacts on humankind forever.
@XxThunderflamexX · 4 years ago
"Sheesh, people keep on producing dangerous AGI, this would be so much easier if I could just lobotomize them all..."
@xcvsdxvsx · 4 years ago
@bosstowndynamics5488 Oh, I know what I suggested was a long shot. It just seems like the only chance we have. Getting a global prohibition on this kind of research is naive and not going to work. Having all of it that is built be done safely isn't going to work. Praying that it isn't actually as dangerous as I think might be another decent long shot.
@marscrasher · 3 years ago
@@xcvsdxvsx left accelerationism. maybe this is how the revolution comes
@WilliamDye-willdye · 7 years ago
I agree that there is more than one AGI race in play. It reminds me of the old debate about "grey goo" (accidental runaway self-replication) vs. "khaki goo" (deliberate large-scale self-replication as a weapon).
@LarlemMagic · 7 years ago
Mandate safety requirements when doling out that sweet grant money.
@ralphclark · 4 years ago
A lot of that money will be put up by private interests in exchange for control of the IP. They won't give a damn about safety requirements unless they're all up to their balls in regulation.
@himselfe · 7 years ago
Unless you impose some sort of Orwellian control on technology, there isn't much you can do to police the development of AGI. It's not like nuclear weapons that require a special substance to be made.
@grimjowjaggerjak · 4 years ago
You can create an AGI that has the goal of restricting other AGIs first.
@PragmaticAntithesis · 4 years ago
@@grimjowjaggerjak That AI would kill everyone to ensure we can't make a second AI.
@teneleven5132 · 4 years ago
It's likely that an AGI would require a great deal of hardware to run though. I seriously doubt it would work on the average computer.
@mvmlego1212 · 4 years ago
@Shorne Pubique -- That's an interesting point. Malware is a heck of a lot easier to make than AGI, as well.
@StoutProper · 4 years ago
Ten Eleven: it could run like SETI
@G_Genie · 5 years ago
Is the song in the background an acoustic cover of "This ain't a scene, it's an arms race"?
@RobertMilesAI · 3 years ago
Yup
@hammabomber5416 · 6 months ago
@RobertMilesAI do you play the ukulele yourself?
@thelozenger2851 · 5 years ago
Is anyone else faintly reminded of Jreg watching this dude?
@horserage · 4 years ago
I see it. Less depression though.
@mattheworegan5371 · 4 years ago
Slightly more r/enoughmuskspam, but he tones it down better than most popscience channels. On the Jreg question, I think his Jreg energy comes from his appearance rather than his actual content
@trefod · 5 years ago
I'd suggest a CERN-type deal, non-privatised and multi-governmental.
@inyobill · 5 years ago
What would prevent some agent from ignoring any agreement(s) and going off on their own tangent? The genie is out of the bottle.
@kris030 · 4 years ago
Unlike CERN which needs funding for machinery basically no individual could get, developing AGI takes one smart person and a laptop... not safe
@kris030 · 4 years ago
@Bruno Pereira that's true, but developing one, i.e. writing the code, doesn't need a supercomputer
@0xB8xor0xFF · 4 years ago
@kris030 Good luck developing something which you can't even test-run.
@kris030 · 4 years ago
@0xB8xor0xFF true, although if you've got (probably mathematical) proof of it actually being generally intelligent, I don't think getting a supercomputer would be a difficulty
@amargasaurus5337 · 4 years ago
"but I don't think AGI needs a gun to be dangerous" I agree, oh boy I so thoroughly agree
@petersmythe6462 · 6 years ago
I think democratizing it in the sense of collectivization rather than proliferation is a good goal. Collectivization, whilst allowing marginally less autonomy and freedom, still creates accountability and still responds to the will of the people. Creating a bureaucracy that can't be bought (that may require a change to our political-economic system) and whose members are subject to immediate recall (this definitely requires a change to our political-economic system) that does the more dangerous and/or authoritarian aspects of keeping AI under control seems preferable to either corporatization (which ignores human need) or proliferation (which ignores safety).
@MrGooglevideoviewer · 5 years ago
You are a freakin' champion! Your videos are insightful and thought provoking. Cheers!
@bassie7358 · 7 years ago
2:20 I thought he said "Russian" the first time :p
@GigaBoost · 6 years ago
Democratizing AGI sounds like democratizing nuclear weapons.
@bilbo_gamers6417 · 5 years ago
I trust the common man with a nuclear weapon more than I trust big government with one
@0MoTheG · 5 years ago
@@bilbo_gamers6417 Even if that were sensible, there are many more of one than the other!
@revimfadli4666 · 4 years ago
@bilbo_gamers6417 especially if the weapons are a package deal with (relatively) clean nuclear energy, with the ability to recycle & enrich waste into fuel, without political pressure at all
@Horny_Fruit_Flies · 4 years ago
Pretty ironic for a billionaire oligarch to talk about "democratization and sharing of power"
@Dan-lt8vm · 4 years ago
Care to explain why that is ironic? He's become a billionaire by enhancing the lives of others, creating cheaper and better ways to (1) perform online transactions, (2) produce electric cars at affordable prices, (3) generate and store electricity, (4) launch stuff into space, (5) soon-to-be global internet, etc, etc. None of those good things happen if he doesn't produce a business model that is sustainable (profitable). So please explain the irony. Or is it best summed up as "RICH PEEPAL BAD"?
@Horny_Fruit_Flies · 4 years ago
@@Dan-lt8vm He literally did all that by himself? He just pulled up his sleeves, sat at his desk, and do all of that by himself? I would think so, considering the share of the profits that end up on his bank account. You use the most stereotypical cookie-cutter, overused and outdated pro-oligarch arguments. And for your information, yes, all billionaires are bad. The mere fact that we tolerate their existence is testimony to our failure as a species. Thanks for contributing to that failure.
@Dan-lt8vm · 4 years ago
@@Horny_Fruit_Flies I didn't say he did that by himself, so you're creating a straw man and arguing against your straw man. Enjoy your straw debate with yourself, and have a wonderful day.
@Horny_Fruit_Flies · 4 years ago
​@@Dan-lt8vm I posed a question, I didn't say that you said anything. But go ahead, run away from a losing argument, bitch, run.
@khatharrmalkavian3306 · 4 years ago
Elon is not an oligarch.
@JR_harlow · 4 years ago
I'm not a scientist or any kind of engineer but your content is very easy to comprehend, I'm glad you have patrons to support your channel as I just recently discovered this channel and really enjoy it.
@veda-powered · 5 years ago
1:01 Loving this positivity😀👍😀!
@jimtuv · 7 years ago
If all the AGI researchers banded together in an open program where everyone would get the final results at the same time and everyone would be concentrated on safety then you could say that democratization of the technology was the better route. This is one area that cooperation rather than competition may be the best bet.
@Mar184 · 7 years ago
Fully agree with this. Rob Miles' concern is legit, but if his verdict is that a secretive approach is ultimately safer, I also think he's wrong. With the transparent, cooperative approach supported by the vast majority of experts on the subject, it seems unlikely that a small rogue group could, just by skipping the safety issues, gain such a large advantage that their version would be far enough ahead of the public one (that's supposed and used to protect against unethical AGI scheming) to overpower it decisively enough to achieve world domination. And if that case doesn't come true, the cooperative approach is better, as it ensures a safe AGI will arrive sooner and will be aligned with the public's interests.
@fraserashworth6575 · 7 years ago
That would be ideal yes, but if we lived in such a world: nuclear weapons would not exist.
@lutyanoalves444 · 7 years ago
obviously the more people working together on it the better. but people WILL do things for their own benefit, whether they are trying to kill someone or donating money to charity. It's all selfish. in other words, unless you're trying to IMPOSE (by force) your idea that you can only work on it if you're part of the "United Research Group", there will always be independent developers. and that's ok. that's ok because this "Official Group" is also just a group of humans, independent of each other too. Someone might build an AGI that will kill everyone, but if you think we should force people so that only ONE GROUP can do that, you're saying THEY have the right to risk everyone, and no one else. (who died and gave them this right above everyone else?) you cannot say that. because we are all humans, and treating some differently than others like that is at least tyranny. ----------------------------------------------------------------------------------- Now the question becomes: DO YOU AGREE WITH TYRANNY?
@knightshousegames · 7 years ago
And if we lived in that world, we wouldn't need AGI safety research, because when you turned it on, the AGI would just hold hands with its creator and sing kumbaya. But we don't live in the logical, altruistic, utopian timeline.
@jimtuv · 7 years ago
This attitude is why we will be extinct soon.
@DigitalOsmosis · 7 years ago
Ideally "democratization of AI research" would not lead to thousands of competing parties, but lead to an absence of competition that would promote an environment where focusing on safety is no longer the opposite of focusing on progress.
@maximkazhenkov11 · 7 years ago
Sounds like something a politician would say. Ideally we should continue funding all the programs while cutting back on spending deficit.
@CarlosToscanoOchoa · 7 years ago
We are all fucked
@XyntXII · 1 year ago
I think in a good timeline AGI is in democratic hands and all of the people working on it are not competing at all. If they share their work and the reward with each other and everyone then there is no incentive to rush for a competitive edge because it is no competition. To achieve that we simply need to restructure human society across the world. How hard could it be?
@craftlawrence6390 · 2 years ago
Generally you'd think the experts will think of everything because they are the _experts_, but then there is the Mars Climate Orbiter failure, where the cause was an incredibly dumb rookie mistake: not converting between American and metric units, but keeping the value as is.
@LamaPoop · 4 years ago
I would appreciate a video about neuralink.
@deviljelly3 · 7 years ago
Robert, if you have time, can you do a brief piece on IBM's TrueNorth please...
@ARTUN3 · 7 years ago
Good video Rob!
@kriskeersmaekers233 · 4 years ago
Stop using this outro. Unless you want your entire channel copyright striked all at once in 6 months
@notoioudmanboy · 7 years ago
I'm glad YouTube is here for this kind of video. This was the point of the internet. I don't have any reservations about the normies, I'm just glad smart people get a corner so I get a chance to hear what the smart people think.
@Chrisspru · 5 years ago
I think a triple-core AI could solve the problem. One core cares about the AI's preset goal and is hard-programmed with "instincts" (survival, conservation of energy, doing the minimum of what is required, social interaction). One core is the moderator, with the goal of morality, human freedom, integration and following preset limits. The third core is a self-observing core with an explorer and a random noise generator; it is motivated by the instinct/goal core and moderated by the moral/integration core, and it has access to both cores' output. The goal/instinct core and the moderator core can access the actualizer core's results. The goal core is hard-limited by the moderator core. The moderator is softly influenced by the instincts.

The result is an AI with a consciousness and a subconsciousness. The subconsciousness is split into an "it" (goal and instincts) and a "super-ego" (morals and rules); both develop mostly separately. The actualizer/explorer is the ego. It acts upon the directives of both the super-ego and the it to fulfill the task at hand. It should have an outline of the task, but no hard-coded information or algorithm about the task.

The continuous development of the moderator creates adaptable boundaries for the otherwise rampant motivator. The actualizer is there to find solutions to the diverging commands without breaking them, and to find methods to better follow both. It also allows for the insertion of secondary soft goals, and is the interactive terminal.
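Purely as an illustration of the structure this comment describes, here is a toy wiring of the three cores. The class names, the "harm" check, and the 10% exploration rate are all invented for the sketch; none of this comes from the video or from any real system.

```python
# Toy wiring of the triple-core idea: a goal/instinct core (the "it"),
# a moderator core (the "super-ego") that hard-limits it, and an
# actualizer/explorer core (the "ego") that reads both and acts.

import random


class GoalCore:
    """Preset goal plus hard-programmed 'instincts'."""

    def propose(self, task):
        return f"maximize({task})"


class ModeratorCore:
    """Morality and preset limits; can veto the goal core outright."""

    def allow(self, action):
        return "harm" not in action  # stand-in for real constraint checking


class ActualizerCore:
    """Reconciles the other two cores' outputs and acts on the world."""

    def __init__(self, goal, moderator):
        self.goal, self.moderator = goal, moderator

    def act(self, task):
        proposal = self.goal.propose(task)
        if not self.moderator.allow(proposal):  # hard limit from the moderator
            return "blocked"
        if random.random() < 0.1:               # the random-noise/explorer element
            proposal += " [explore variant]"
        return proposal


agent = ActualizerCore(GoalCore(), ModeratorCore())
print(agent.act("collect_stamps"))
```

Note that this wiring inherits the objection raised earlier in the thread: writing the moderator's allow() so that it actually captures "morality, human freedom, integration and preset limits" is the same unsolved utility-specification problem, just moved into a different box.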
@nonchip · 5 years ago
Interestingly, Musk also bought a few corps that actually do Killer Robots... don't know if the guy screaming SKYNET!!!1eleven is the best one to conduct that research...
@XIIchiron78 · 3 years ago
Corollary question: how do you actually restrict AI research? With nukes you need quite large and sophisticated facilities to refine the raw elements, but AI can be developed by anyone with enough computing power, something that will only become more achievable as time goes on.
@stampy5158 · 3 years ago
Computing power is not necessarily the only bottleneck until we have AGI, it seems to me that it will take a significant amount of research time to be able to actually engineer a powerful enough system. (If it won't then this question becomes a lot more difficult ["Every 18 months, the minimum IQ necessary to destroy the world drops by one point."- Yudkowsky-Moore law]) If we could convince everyone in the AI field that alignment should be top priority restricting research could be enforced through funding (this is a big IF at the moment of course). It is something with some precedent, it is widely agreed that genetic engineering of humans should not be pursued and it is therefore impossible to get research grants for research in that area, some lone researchers have done some things in the area, but without funding access it is very difficult for them to do anything with far reaching consequences. -- _I am a bot. This reply was approved by plex and Augustus Caesar_
@julianhurd08 · 7 years ago
Any company that cuts corners on safety protocols for any type of AI system should be imprisoned for life, no exceptions.
@zer0nen0ne78 · 7 years ago
No subject consumes more of my thought than this, and it's one that fills me with equal parts wonder and terror.
@PraxZimmerman · 7 years ago
The other robot arms race that might kill us: { reward == #human_arms.possessed }
@toddharig8142 · 5 years ago
Wish I was smart enough to understand this joke :/
@GerBessa · 5 years ago
@toddharig8142 This is a function that defines the reward as the acquisition of human arms. I might be wrong, but I think such a program would not be trying to get weapons but to tear the limbs from humans.
@seraphina985 · 5 years ago
@GerBessa Hell, for an ASI, simply manipulating geopolitics to get the humans to slaughter each other would likely be highly effective. Keep the humans distracted with their petty infighting while the ASI continues to work in the background, getting all its pieces in place on the global board, so to speak, in preparation to make its game-ending move. It wouldn't exactly be difficult for an ASI armed with all the available data on human psychology to manipulate human societies to stoke division and irrational paranoia.
@revimfadli4666 · 4 years ago
@@GerBessa unless they figure out how to mass-grow arms from cells. Or just change the definition of "human arm" down to a foetal snub
@LamaPoop · 4 years ago
1:45 - 2:26 Once again, you perfectly put into words one of my biggest concerns. This, and the fact that, once developed, such an AI will initially be kept a secret, for obvious reasons...
@mastersoftoday · 7 years ago
love your videos, not least because of your sense of humor, thanks!
@toolwatchbldm7461 · 5 years ago
Tell me, what do you think will happen first: Skynet, GLaDOS, The Matrix, or I, Robot?
@jonathandixson1424 · 6 years ago
Ending is great. Video is great. Channel is great. Every video I watch is so well thought out and intelligent.
@benaloney · 7 years ago
We love your videos Robert! Would love to see some longer ones! 👍🤖
@benparkinson8314 · 5 years ago
I like the way you think
@shortcutDJ · 7 years ago
I would love to meet you, but I've never been to the UK. If you are in Brussels, you are always welcome at my house.
@westonharby165 · 5 years ago
I have a lot of respect for Elon, but he is out of his wheelhouse when talking about AI. He's a brilliant engineer, not an AI researcher, but the media paints him as all-wise and knowing.
@inyobill · 5 years ago
"... but the (scientifically illiterate, or vast majority in other words) media …"
@lutyanoalves444 · 7 years ago
I didn't see this kind of argument here, so I'm posting it. It's an ethical approach. obviously the more people working together on it the better. but people WILL do things for their own benefit, whether they are trying to kill someone or donating money to charity. It's all selfish. in other words, unless you're trying to IMPOSE (by force) your idea that you can only work on it if you're part of the "United Research Group", there will always be independent developers. and that's ok. that's ok because this "Official Group" is also just a group of humans, independent of each other too. Someone might build an AGI that will kill everyone, but if you think we should force people so that only ONE GROUP can do that, you're saying THEY have the right to risk everyone, and no one else. (who died and gave them this right above everyone else?) you cannot say that. because we are all humans, and treating some differently than others like that is at least tyranny. ----------------------------------------------------------------------------------- Now the question becomes: DO YOU AGREE WITH TYRANNY?
@aron8999 · 7 years ago
4:58 r e c r e a t i o n a l n u k e s
@arw000 · 3 years ago
"Nukes are incredible dangerous so we need to empower as many people as possible to have them" AncapSmiley.png
@alexpotts6520 · 3 years ago
I think the argument is slightly different, though, because AGI is capable of unimaginable good as well as unimaginable evil, whereas nukes can only do one thing, and that's the indiscriminate killing of people. It only requires one insane person to make nukes a net catastrophe for humanity. AGI - if humans could control it - would not work the same way, it is possible that a small number of people using it for evil could be outweighed by a majority using it for good. Many technologies have evil uses. The internet can be used to sell assault rifles, distribute child pornography, recruit for terrorist organisations etc. The fact that some people are using the internet to do these things is not a compelling reason to ban the internet. Of course, there's still the issue of humans being able to control AGIs at all, which I'm relatively pessimistic about us solving any time soon - but if that problem were solved, I suspect some sort of democratisation would be fairest way of deploying AGI for humanity's benefit.
@arw000 · 3 years ago
@@alexpotts6520 imma be real with you chief I didn't read all of that. It was just a shitpost
@uilium · 5 years ago
AI SAFETY? That would be like trying to stop a semi by standing in front of it.
@Macatho · 5 years ago
It's interesting. We don't allow companies to build and store nuclear weapons. But we do allow them to do GAI research.
@inyobill · 5 years ago
Unpoliceable.
@Sanglierification · 7 years ago
For me the very dangerous thing is the risk of an AI monopoly potentially owned by the GAFA companies
@phrobozz · 7 years ago
You know, I kind of think GAI may already exist. We know that DARPA's been openly working on AI since at least 2004 with the DARPA Grand Challenge, and that if the US is doing it, so is everyone else. Considering how far Google, IBM, OpenAI, and Amazon have come in such a short time, with much smaller budgets and resources, imagine what Israel, the EU, the US, Russia, China, Japan, and Singapore have accomplished in the same amount of time. On top of that, military technology is usually a decade ahead of what the public is allowed to see, so I imagine DARPA's been working on AI since at least the 90s.
@alexyfrangieh · 6 years ago
And btw, the T-1000 is much more advanced conceptually than the mechanical Terminator; dunno how to describe the T-1000, an amorphous holistic material, maybe an intelligent "holon"
@henrycobb · 2 years ago
I hope and trust that the enemies of Freedom shall invest vast efforts into lining up vast rows of clue-free #KillerRobots like robotic dominoes waiting for just one tap.
@TheGrinningViking · 1 year ago
This is interesting, but so much less interesting than the already-existing Boston Dynamics kill bots that I'm "eh"
@CyberwizardProductions · 1 year ago
With Stable Diffusion being open source now, I think maybe you want to redo this video, Robert
@mrsuperguy2073 · 7 years ago
This might be my A-level in economics talking, but I think the most effective way to prevent this arms race from creating an AGI with no concern for safety is for the government to take away the perverse incentive to be the 1st to create an AGI, as opposed to trying to ban or regulate it. Basically I'm saying change the cost/benefit balance such that no one wants to simply be the 1st to make an AGI (but rather perhaps the 1st to make a SAFE AGI). There are a number of ways to do this (I've thought of a couple) and, not being an economist nor a politician, I can't speak for the real-world efficacy of any of them, but here goes:
- You could offer a lot of money to those who create an AGI safely, such that the extra effort ends up getting you a bigger total reward than the benefits of being the 1st to create an AGI alone
- You could heavily regulate the use of AGI so that even if you've got a fully functional one, you can't do much with it due to government restrictions unless it's demonstrably safe
I'd be interested to hear anyone's ideas about other ways to achieve this end, and perhaps some feedback on mine.
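A toy payoff comparison for the first proposal above, with entirely made-up numbers, to show how a large enough safety prize can flip the expected value away from rushing:

```python
# Invented numbers: a safety subsidy big enough that "first and safe"
# beats "first at any cost" in expectation.

P_WIN_IF_RUSHING = 0.6     # chance of being first when skipping safety work
P_WIN_IF_SAFE = 0.4        # chance of being first when doing safety work
FIRST_MOVER_VALUE = 100.0  # commercial value of getting there first
SAFETY_SUBSIDY = 80.0      # government prize, paid only for a safe AGI

rush_payoff = P_WIN_IF_RUSHING * FIRST_MOVER_VALUE
safe_payoff = P_WIN_IF_SAFE * (FIRST_MOVER_VALUE + SAFETY_SUBSIDY)

print(f"rush: {rush_payoff}, safe: {safe_payoff}")  # rush: 60.0, safe: 72.0
```

Under these invented numbers the subsidy flips the incentive, which is exactly the cost/benefit change the comment argues for; the open question is whether any government would fund a prize that large.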
@fleecemaster · 7 years ago
There is absolutely no way you could regulate this. All it would do is push it underground.
@fraserashworth6575 · 7 years ago
I agree.
@vyli1 · 7 years ago
Once you have AGI, I'm not completely sure the people that created it would be able to keep it under control and limited in its usage. In fact, that's pretty much the point of this channel: to tell you that it is not simple at all, and to educate us about ways experts have thought of for achieving this level of control
@GigaBoost · 6 years ago
Why not just accept our robot overlords?
@smithjones2018 · 7 years ago
Dude Subbed, BAN TACTICAL AI STAMP COLLECTORS.