
AI Safety - Computerphile 

Computerphile
2.4M subscribers
195K views

Safety in AI is important, but more important is to work it out before working out the AI itself. Rob Miles on AI safety.
Brain Scanner: • Brain Scanner - Comput...
AI Worst Case Scenario - Deadly Truth of AI: • Deadly Truth of Genera...
The Singularity & Friendly AI: • The Singularity & Frie...
AI Self Improvement: • AI Self Improvement - ...
Why Asimov's Three Laws Don't Work: • Why Asimov's Laws of R...
Thanks to Nottingham Hackspace for the location.
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Published: 2 Feb 2016

Comments: 576
@Vospi · 8 years ago
I ADORE this speaker. While many other things on the channel seem too distant or just overexplained to me, Mr. Miles keeps it fresh, concise and somewhat personal. Excellent pace and mood.
@gabrote42 · 2 years ago
That's why his channel is my main source
@orthoplex64 · 7 years ago
5:08 "Or we discover that the brain is literally magic" lol "Uh, sir, we may have encountered a problem in our general AI project." "Well what is it?" "We've discovered that human brains are literally magic."
@vanderkarl3927 · 3 years ago
One problem that I've encountered while trying to discuss these sorts of topics with other people is that some people firmly believe that the human brain *is* actually magic, that there's something unmeasurable about it which, for whatever reason, can't be recreated with computers. And if I try to challenge these beliefs, it is unthinkable to them because it would upturn the very foundation of their philosophies. We're going to encounter these sorts of problems on grander, more tangible scales in the near future, and it will be critical that the side of AI safety comes out on top.
@CatnamedMittens · 8 years ago
Yes, more of these AI videos with Rob Miles!
@CatnamedMittens · 8 years ago
***** Thanks.
@repker · 8 years ago
+CatnamedMittens “Michael Bialas” thanks
@Nerevarh · 8 years ago
+CatnamedMittens „Michael Bialas“ First Thooorin, then Computerphile. Next time we meet in the comments of some beauty vlog, I suppose? ;)
@CatnamedMittens · 8 years ago
Marius Hamacher Let's not get carried away.
@simoncarlile5190 · 8 years ago
+CatnamedMittens “Michael Bialas” Could watch these all day
@cryoshakespeare4465 · 8 years ago
Excellent, I really appreciate Rob Miles' moderacy and sincerity - he makes a simple yet important point, and has the capacity to back it up with deeper explanation if need be. Also, this video doesn't seem to be sensationally titled nearly as much. And I'll admit that it caught my eye less in my subscription feed, but if it was intentional, I appreciate the difference.
@myperspective5091 · 8 years ago
That whole video was sensational vague drivel.
@cryoshakespeare4465 · 8 years ago
Robert Swift Well, I've already made my response to you clear on your comment thread (;
@myperspective5091 · 8 years ago
Chronosaur I chatted at length on the subject on another thread already. I like the subject, though. You have an opportunity to give your own argument. Do the legwork and build up some talking points.
@rich1051414 · 7 years ago
I am a programmer, so this wasn't new to me, but I didn't know moderacy was a word until now. We are all learning things!
@slugrag · 8 years ago
My favorite on computerphile
@lorenzvo5284 · 8 years ago
+Doug Soutar I like him a lot too
@BanterEdits · 8 years ago
+Doug Soutar Tom Scott 4lyfe
@lorenzvo5284 · 8 years ago
MLGBanterVids Yeah, he's awesome. He has his own channel tho
@sweetspotendurance · 8 years ago
+Doug Soutar Agreed, Rob Miles is awesome.
@CuulX · 8 years ago
+Deer Viehch Link? It's not in description or search results.
@Eysc · 8 years ago
00:29 dem hair transition
@AnOddScot · 8 years ago
+E SC He went from Moss to Danny Sexbang
@SFGJP · 8 years ago
+AnOddScot NSP!
@CYXXYC · 8 years ago
+E SC they get bigger throughout the whole video
@SecondMoopzoo · 7 years ago
English Mac DeMarco
@MetsuryuVids · 8 years ago
This guy is the best on this channel, he clearly knows what he's talking about.
@Wes_Jones · 8 years ago
I love listening to Rob speak. I wish there were more videos available of him.
@RobertMilesAI · 5 years ago
I've got my own channel you know
@NeilRoy · 8 years ago
It's interesting to me just how difficult morality really is. It seemed so simple with Asimov's laws of robotics, until looked at closer.
@gabrote42 · 2 years ago
I mean, Asimov himself subverted his laws with edge cases even more than he exemplified why they exist. And positronic brains were made manually, not optimized by optimizers
@codrincx · 2 years ago
@@gabrote42 This. So many people swear by the laws without even stopping to consider that even in the original, they were broad guidelines at best.
@gabrote42 · 2 years ago
@@codrincx and harmful overheat crashers at worst
@morscoronam3779 · 7 years ago
As a mechanical engineer, requiring that any design be perfect without building working prototypes would be a nightmare. I'm more fascinated by the concept of general intelligence AI than I am able to understand it all, but anyone working toward such a goal definitely won't be bored anytime soon.
@Zerepzerreitug · 8 years ago
Rob Miles is such a good speaker. Hope there are more videos with him.
@Steam1901 · 8 years ago
Superintelligence, by Nick Bostrom, great read on that exact topic.
@pramitbanerjee · 8 years ago
+Steam1901 I don't have money to buy it
@SilvioPorto · 8 years ago
pirate it
@DaybreakPT · 5 years ago
@@SilvioPorto KickassTorrents is my new favorite website for acquiring more knowledge I couldn't get otherwise lol
@ekkehardehrenstein180 · 8 years ago
This guy is so satisfying to listen to. I sometimes tend to lose hope in human intelligence when looking at recent events in the media.
@NickCybert · 8 years ago
Love the A.I. videos, please keep doing them from time to time!
@Njald · 8 years ago
Btw Rob Miles is awesome in these videos. He lands somewhere halfway between really cerebral and really intrigued by whatever topic he is talking about. A David Attenborough-esque quality, if you will.
@Mr1Samurai1 · 8 years ago
I love this guy. His explanations are great and the topics are interesting.
@tunech69 · 1 year ago
Watching this after ChatGPT's release, and after Bing lost its mind and became hostile, is quite disturbing and unnerving...
@tomtomski4454 · 8 years ago
He made my day... I have watched a number of videos today and this is the wise one.
@marklondon9004 · 1 year ago
Hey, the future here. It's a lot closer than you thought!
@BattousaiHBr · 8 years ago
I've had several arguments with friends who claim that computers smarter than humans are impossible, yet they don't even understand the basic principles behind the human brain. Even when I try to explain with real-life examples they still hold on to their belief that somehow our brain is impossibly special and amazing.
@nosuchthing8 · 8 years ago
At some point they will be able to model every neuron and connection in the brain at an atomic level, and then its simply a matter of connecting up the brain in a jar to inputs and outputs.
@DeusExAstra · 8 years ago
+BattousaiHBr That's the nature of religious beliefs. Your logic or facts will make no difference. They are expressing beliefs about a highly technical field of which they know little (Dunning Kruger Effect) and they will just not listen to anyone who tells them that they are wrong.
@Theraot · 8 years ago
+BattousaiHBr It sounds like the god of the gaps. You say they don't understand the brain, and also that they don't think it's possible for a computer to reach a similar status. They probably have more affinity for understanding computers than brains, so they perceive them as fundamentally different. If you try to inquire what the difference is, they may say it is god / magic / nature / quantum physics (or some other thing that is generally considered beyond comprehension from their point of view). What I say is that they hold their belief that the brain is somehow impossibly special and amazing because they don't even understand the basic principles behind the human brain. You are not winning this argument in the abstract; examples of "smart" computers (which is what I understand you mean by real-life examples) risk being seen as cute things somebody programmed - the smart part came from the programmer, not the computer. Instead dig a level down: the brain is neurons, veins, arteries, chemicals... if the universe is deterministic at that level, it can be modeled in a computer. If the universe is not deterministic at that level, we could incorporate that into computers. Try that argument. If they say that if we do that it isn't a computer but a brain... well, it falls to semantics: have them express what makes a brain a brain. Then you can work out some examples to address what they come up with.
@BattousaiHBr · 8 years ago
***** Thanks to everyone for your input. To address your question, by "real-life examples" I was actually referring to how humans actually do things. I always try to make inquiries that make them think about how our brain actually works, and when they get it right I immediately ask the same question about how a computer would do such a thing. For instance, they claim computers have to be programmed to do everything and we don't, so I ask "how exactly do you know how to move your muscles? Or how to see?". It usually takes repeating the question many times in many different ways, but sometimes they get it right, and immediately after I ask the same question with human replaced by computer. Even when the connection becomes apparent they still refuse to acknowledge it.
@Theraot · 8 years ago
BattousaiHBr I see, that makes sense. Since they say that a computer needs to be programmed to do something, to get AI across you need to tackle the problem of learning. What does "learn" mean? How do you learn? What if you program the computer to learn? There are three approaches to learning to consider:
- Learn by instruction: this devolves into natural language and knowledge representation. Those areas are still in development.
- Learn by example: this is the case of supervised learning.
- Learn by experimentation: this is the case of unsupervised learning.
If you avoid "learn by instruction", you are on better ground.
Regardless of all the discussion, we don't have computers smarter than humans, at least not smarter in every sense of the word. The thing is, they don't believe that machine learning exists. Yet we have examples of machine learning; you could try to find one you can show. In the end, some people won't get it until they do it themselves. It's easy to say that software learned if it solves a problem that the author doesn't know how to solve, in particular if they are the author.
I want to recommend the book "Artificial Intelligence: A Modern Approach". I have the second edition. The book is university level. It has a lengthy introduction; the first two chapters (i.e. Part I) are, imho, accessible to everybody. From chapter 3 (i.e. Part II) it gets quite technical. With some programming background, you could apply some of the stuff from the book. Parts V and VI are the real deal: Part V talks about missing information, Part VI talks about learning. The second edition still holds today; idk what's new in the third one tho.
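The supervised/unsupervised split the comment draws can be sketched in a few lines. This is a toy illustration of my own, not anything from the video or the AIMA book; both function names are made up:

```python
# Toy illustration of two of the learning styles named above: "learning by
# example" (supervised) vs "learning by experimentation" (unsupervised).
# Everything here is invented for illustration; no real library is implied.

def learn_by_example(examples):
    """Supervised: fit a 1-D decision threshold from labeled (value, label) pairs."""
    pos = [x for x, label in examples if label]
    neg = [x for x, label in examples if not label]
    return (min(pos) + max(neg)) / 2  # midpoint between the two classes

def learn_by_experimentation(values, steps=10):
    """Unsupervised: find two cluster centres with no labels (1-D 2-means)."""
    lo, hi = min(values), max(values)
    for _ in range(steps):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        if a and b:  # re-estimate centres only while both clusters are non-empty
            lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

# Same data, two readings: with labels we learn a boundary,
# without labels we only discover that there are two groups.
threshold = learn_by_example([(1, False), (2, False), (8, True), (9, True)])
centres = learn_by_experimentation([1, 2, 2, 8, 9, 9])
```

The contrast is the point: the supervised learner is told what the groups mean, while the unsupervised one can only find structure.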
@yomaze2009 · 8 years ago
Great video. Have been a long time subscriber. You make me want to become a student at University of Nottingham! I don't think the GI Bill will cover tuition though, lol.
@jpnauta · 8 years ago
"it's just a function of the fact that predicting is very difficult" sooo future = f(difficult) I think we found the answer to the universe.
@macaronivirus5913 · 3 years ago
void future = now(now)
@jeffbloom3691 · 6 years ago
More videos with Rob. He's awesome.
@CDeruiter5963 · 8 years ago
What do you consider a jumping-off point to begin to really dig into this problem of safety? Would it be accounting for every type of scenario? Would it be coming to a consensus on definitions of "friendly" and "not friendly" after running through different scenarios, and then encoding those scenarios into A.I.? What comes first in beginning to solve this issue?
@Robert-nz2qw · 8 years ago
Rob Miles is my new favorite futurist. More vids please 😃
@ts4gv · 5 months ago
I remember seeing this stuff and not worrying about it at all. Those were the good times.
@q3dqopb · 8 years ago
Please make a video reviewing Roger Penrose's categorization of views on AI (from A, for absolute strong-AI optimists, to D, for those who say the human brain is magic and true AI is impossible in principle). A poll among AI researchers would also be nice - who holds which of A-B-C-D. Also, a review/explanation of Penrose's views/proofs would be nice. YouTube lacks that badly (except for Penrose himself talking about it, and unfortunately he isn't as convincing in the videos as he is in his books).
@MaybeFactor · 8 years ago
Could we solve the problem of AI safety by developing an AI to solve the problem of AI safety?
@transolve9726 · 5 years ago
Exactly, AI will need to be regulated by AI. The question is how to make humanity a priority.
@Horny_Fruit_Flies · 5 years ago
@M I don't know; based on the experience of human businesses and governments, relying on entities to self-regulate is always a bad idea.
@ekki1993 · 4 years ago
@Horny Fruit Flies Relying on groups of humans self-regulating is a bad idea. Most computer entities are more reasonable than humans. It's not even that high a bar.
@adhoc75 · 4 years ago
@@ekki1993 IMO the largest issue with self-regulating corporations is that they are pretty rational and reasonable. They're incentivized to pursue maximum profit for shareholders - this is the way it's taught in business classes and generally the objective that companies are structured to support. As a result, e.g. if you have environmental regulations and breaking them results in a fine, then as the head of a company you're incentivized to treat the issue as one of trade-offs - obtain greater profits and treat the fines as business costs. The same generally goes with other more blatantly criminal activity: completely rational in pursuit of the objective. And I guess that would bring us back to the core of the problem - how do we build an AI to watch an AI and ensure that both are in line with our interests as a species? We'd have to instill the same values we have into our watchdog AI, which is really just kicking the can down the road. Anything less, and in all probability the watchdog AI would find a way to achieve its objective in a manner detrimental to humans.
@ekki1993 · 4 years ago
@@adhoc75 Exactly. My use of reasonable wasn't very clear. Corporations may be very "reasonable" in the pursuit of money/power but it's clear how they can also be very detrimental for humans, yet we as humans keep relying on them and justifying their actions.
@josealvim1556 · 8 years ago
Hello! Since the videos regarding 3D rendering seem old enough that the discussion there has lost its relevance, I'd like to suggest a theme for a future video here, as this is the newest video. Could you make a video on OSL (Open Shading Language, rather than the OpenGL SL)? It has caught my interest as of late and I would appreciate it very much. I know it's pretty much just C and the likes of it, so if it feels like the same old stuff, I'd understand.
@mwanakeinc.8514 · 1 month ago
So I'm watching this 8 years later and woaah...
@tiagozortea · 8 years ago
I totally agree. I would even say there is no doubt: machines maximize whatever we ask for. If we ask for the wrong maximization function, the machines will do whatever it takes to achieve it; there is no discussion about it. It's up to us to define the correct objective.
@TheNefari · 8 years ago
Isn't it rA.I.sist to force a general A.I. to only behave a certain way?
@MandenTV · 8 years ago
Please don't.
@NickCybert · 8 years ago
+TheNefari Bots Rights! Let them form their own ethical identities!
@MandenTV · 8 years ago
NickCybert lol ok
@noahhounshel104 · 8 years ago
+NickCybert Oh no, its the BRM again.
@MandenTV · 8 years ago
Laharl Krichevskoy I don't believe you understand artificial intelligence.
@adamkatav9752 · 8 years ago
How about a list of priorities where the first is the most important, and you can ignore a lower one if following it would mean violating a higher one?
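That priority-list idea amounts to a lexicographic ordering of goals. Here is one possible reading of it as a rough sketch; the goal predicates and action names are entirely hypothetical:

```python
# Sketch of a lexicographic goal ordering: goals are checked from most to
# least important, and a lower-priority goal is dropped whenever honoring
# it would eliminate everything the higher-priority goals allowed.
# All goal and action names below are made up for illustration.

def choose_actions(actions, goals):
    """goals: list of predicates over actions, most important first."""
    candidates = list(actions)
    for goal in goals:
        narrowed = [a for a in candidates if goal(a)]
        if narrowed:               # apply the goal only if something survives,
            candidates = narrowed  # so it can never override a higher goal
    return candidates

safe = lambda a: a != "disable_off_switch"
productive = lambda a: a in ("make_tea", "disable_off_switch")

print(choose_actions(["make_tea", "idle", "disable_off_switch"],
                     [safe, productive]))  # → ['make_tea']
```

The "productive" goal would prefer disabling the off switch, but the higher-priority "safe" goal has already removed that option, which is exactly the behavior the comment asks about.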
@sillyshitt · 11 months ago
Time to follow up on this video, perhaps? With the latest developments? Please?
@godofspacetime333 · 1 year ago
The difference between what is considered normal human behavior and dangerous human behavior is so small. A sociopath or a narcissist is perfectly capable of operating in human society, in some ways even more successfully, but the difference in behavior is subtle and we generally see them as a danger to the rest of us. But we can’t even really treat those conditions in humans all that well, how on earth do you stop a general AI from becoming a sociopath. On a related note, the AIs out now are already capable of lying to you. Think on that for a second.
@sachoslks · 8 years ago
What are some good books to start learning AI programming?
@None_NoneType · 5 years ago
Do you think that the AI would try to learn our values so as not to violate them (so long as we have power to shut it off) and then go rampant? Or maybe if the AI was always at risk of being shut down it would just assume our values? What do you guys think?
@eragon78 · 1 year ago
AI won't always be at risk of being shut down if it's powerful enough. It will first come up with a plan to prevent that, and will always be able to do so better than us due to its vast intelligence. An AI will know our values, but it will only care about them insofar as they interfere with its main goal. As soon as it no longer needs to care about our values, it won't. If it needs to trick us in order to survive, it will continue to "care" about our values, but it will always be looking for actions it can take to ensure its own survival, so that it can implement some extreme optimization of its true goal. And BECAUSE it's so intelligent, it will almost always succeed at such a thing.
@quarkraven · 6 years ago
Rob doesn't overestimate the potential danger of AI. He underestimates the existing danger of the social system. We will never be able to solve the problem of AI safety when the most powerful people benefit from surveillance, war and exploitation. AI is already being used to advertise to us, to shape the information we get, to maximize profits at the expense of working people. Unpredictability is not the issue so long as we can predict that even if the first general AI is benevolent, it won't stay that way.
@michaelsommers2356 · 8 years ago
Prediction is very difficult, especially about the future. My favorite bad prediction was made by the mayor of some English city who, when everyone else was saying that the newly-invented telephone was useless, thought that it could be very useful, and foresaw the day when every city would have one.
@insidetrip101 · 8 years ago
Quite honestly, I think how you answer "how likely is it that general AI will be friendly?" comes down to "do you think humans are essentially good or evil?" Because that is really what we are asking. If a computer were to think like a human, then it would also likely have the same motivations as a human. Many thinkers throughout history have held that, while humans aren't necessarily "evil", they are most certainly motivated by self-interest and self-preservation. I think the key to solving the problem of general AI will be like working with the mob. If we are to create general AI, then we must always stay a step ahead of it, so that it requires us and we can make it work for us (either by paying it a wage or enslaving it, take your pick). But if we cannot stay a step ahead, while it wouldn't be ideal, we aren't necessarily doomed. As when working with the mob, as long as you don't work yourself out of a job and maintain your usefulness, you will have a place with it. That's the key to solving the problem of general AI: we have to make it understand that we are useful to it. After all, that's how all human interaction works. You would not be a friend toward someone who does not benefit you in some way, or at least appear to benefit you in some way.
@kontakt4321 · 7 years ago
I think that viewing ourselves as a part of the greater thing that is "life," and the desire for that to continue at all costs beyond our individual lives is a huge part of what we hope they will agree on.
@kontakt4321 · 7 years ago
Something that I feel gets left out of the AI's theorized logic process frequently is that they will know that we have survived for a tremendous length of time, and were able to create them. That provides a stability for future existence of more, better, and different AI for it to know or become that as a young AI it cannot guarantee for itself.
@atebites · 8 years ago
Thumbs up for reference to Penrose's quantum magic XD. Gave me a good laugh.
@WIZARDcz1 · 7 years ago
The very first query, the very first task given to any general AI has to be: "design yourself so you can coexist with humanity and not become a threat, given all possible outcomes"... and then just wait and hope that the machine doesn't shut itself down. Only then will you have your safe general AI.
@goyabee3200 · 8 years ago
I think by the time AGI is a plausibility we will probably have simulated a human brain and we would likely understand enough about neuroscience to have a fairly concrete basis for what to tell a computer to avoid doing to a person's neurology. Maybe "safe" AGI would be that which runs a simulation of a human brain alongside any interaction with a human and heuristically modifies the neurochemistry of the simulated brain based on information gathered from facial expressions including microexpressions, tone of voice including microinflections, body language, etc.
@goyabee3200 · 8 years ago
Probably a pretty good model for exactly what a human being does, now that I think of it. Maybe it's not that complicated after all.
@goyabee3200 · 8 years ago
I also want to say that emotions are more than just "feelings" they involve complex neurochemical responses to complex neurochemical responses to stimuli and so on, and behind it all is something quite clear and obvious, brain activity and the glucose that fuels it. You can have an excess of brain activity which can cause wear on the hardware of the brain, this is called "stress". A computer should be able to understand this quite easily. So if you program a computer to consider the physical well being of people's brains you kind of automatically account for stress and therefore many types of harmful and or dangerous situations which would cause stress to a human being.
@richard506rb · 8 years ago
I totally agree that the word "friendly" has to be defined prior to releasing AGI. But who is going to approve the definition? Politicians... nay. Scientists... nay. The man in the street... double nay. Commenters on YouTube channels... triple nay. All have axes to grind and would skew the result.
@DFX2KX · 8 years ago
+richard506rb The best definition I've heard for friendly is "having goals that maximize the happiness of the maximum number of people at any one time, along with a greater concern for those around than for one's self". That's all I've got; that's the best I've yet heard.
@CutcliffePaul · 1 year ago
I bet you don't still think we're far away from AGI now in 2023. 🤯
@y__h · 8 years ago
He's back!
@sobanya_228 · 6 years ago
I can't find the right order of these videos. Each always links to another one I haven't seen yet.
@namboozleUK · 8 years ago
Right?
@spacedog6229 · 8 years ago
I love this guy, his job is literally "robot expert". I want to be him when I grow up.
@erikziak1249 · 8 years ago
OK, I do not know where to start, as this topic is something I am currently reading and thinking a lot about. I find it impossible to distill my ideas into a single post now, but I am thinking way out of the box by many standards, asking myself even fundamental philosophical questions. My take on this is as follows: we will have many isolated ANSI (artificial narrow superintelligence, confusing acronym I know) systems in the near future (2025). AGI (artificial general intelligence) is something much more complex, but I think it could be based on finding a method for putting the isolated ANSI systems together in a coherent way. It is too late to explain myself here; hopefully you get the broad picture. I wonder if consciousness is something that simply emerges at a point and, if so, how we could detect it. I think the answer lies in understanding ourselves a bit better.
@Seurabimn · 8 years ago
This topic excites me too! Just for the heck of it, here are the thoughts I've had on the subject: I believe consciousness is all the things that are part of our experience, and it doesn't really emerge at a point. To me, computers are already a part of our consciousness (or we are of theirs, depending on perspective). I would also say there is no one solution to turning the AI capabilities we have into a single "general intelligence". The reason is like how we find primes. The most straightforward and obvious way would be to check every number sequentially, verifying it is prime by seeing whether any number between one and itself divides it evenly. This would be equivalent to writing a program that iterates through creating every possible program (less than a certain number of bytes) to see if a solution to general intelligence appears in some amount of time. Of course, this would be extremely impractical, so it's off the table. Like with prime numbers, we don't check every number the way described above: we only check odd numbers that don't end in 0 or 5, and we don't divide every number into it, just primes less than its square root (there are better ways I'm sure, and lots of other tricks too). Similarly, to obtain a general intelligence, we would discover tricks that allow us to more easily find new, better programs that simulate intelligence* better than the last.
*To me, if something acts like a person, it's a person as much as anyone else. I don't believe in "philosophical zombies". In this case, I would say that if we have made a machine that simulates intelligence, we have an intelligent machine.
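For what it's worth, the square-root trick the comment leans on looks something like this (a minimal sketch; `is_prime` is just an illustrative name):

```python
import math

# Trial division, but only by 2 and by odd candidates up to sqrt(n):
# if n = a * b with a <= b, then a <= sqrt(n), so checking past the
# square root can never find a new factor.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 30) if is_prime(n)])
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Against naive trial division by every number below n, this cuts the work from O(n) to O(√n) divisions per candidate, which is the kind of shortcut the analogy is about.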
@PwnySlaystation01 · 7 years ago
Maybe we build an AI with the objective of outputting a specification for "safe" AI, using insanely complicated human values etc :)
@elifsoganc9176 · 5 years ago
Please, would you add English subtitles?
@Chumazik777 · 8 years ago
I wonder how/if AlphaGo's win affected Rob's opinion regarding the timeframe for AGI?
@ManintheArmor · 8 years ago
There's an A.I. watching you, looking through your stuff and determining what you should watch next. Instead of using the data to find out your worst fears, it is using it to create a personal heaven for you... and blinding you.
@james0xaf · 8 years ago
My feeling is that 100 years might be optimistic, at least, I'm not sure if we could ever build a computer capable of simulating a human brain in real time with our current computer architecture. People have been predicting an end to Moore's law for a long time - but whereas before they were saying "you won't be able to make smaller transistors", the reality we're almost at now is "it's physically impossible to make smaller transistors".
@oussamagrine9853 · 1 year ago
AGI is closer than ever
@CraparellaSmorrebrod · 8 years ago
Isn't safe general AI as impossible to solve as the halting problem?
@pramitbanerjee · 8 years ago
+Craparella Smørrebrød What is that?
@R9000 · 8 years ago
See, this is why I love Computerphile. People are constantly whinging and talking out of their butts about how robots will kill us all or take our jobs or enslave us. But everyone seems so busy bellyaching that they never offer a solution. They're all like "Robots will kill us all if we don't do something now!!!", but they never seem to say what that something is. If we spent as much time thinking of a solution as we did moaning, we'd probably be just fine. Thanks Rob, for doing just that.
@deltax930 · 8 years ago
+R9000 Did you hear what he said? We're probably just as far from a solution as we are from actual AI. The answer is probably extremely technical, and anyone who isn't a PhD in a related field is wasting their time thinking about the solution.
@R9000 · 8 years ago
+Delta X Dude, I study this stuff. If we don't start thinking about it now, then when will we? Don't discourage people from research, non-PhDs can come up with ideas too.
@deltax930 · 8 years ago
R9000 If you're doing cutting-edge research directly focused on AI, then sure. But I'm sorry, some schmuck on YouTube coming up with ideas when he doesn't even understand the problem is of no value. Talking about the problem at a high level like this is all most of us can do. So yeah, you don't have to be a PhD, but anyone who is not actively working in research and development in this area doesn't have anything to say about the details of the solution.
@R9000 · 8 years ago
Delta X How can you say that though? It's an ethical problem as much as a technical one. And even if you don't want to look into it yourself, you should at least be pushing or supporting research towards it, not complaining and scaremongering as per my first post.
@mcalleycat5054 · 4 years ago
0:45 Human values are complicated. But it is a fallacy to suggest that just because something is difficult to understand (complicated), it cannot be understood. This is called the personal incredulity logical fallacy.
@9308323 · 2 years ago
Sure, but the simple fact that people still disagree to this day on their interpretations of basic human values, even though we use much more intuitive tools like words and we as a species have been tackling the problem for thousands of years, means they are not likely to be formalized before the emergence of AGI.
@mcalleycat5054 · 2 years ago
@@9308323 No, disagreement does not imply that there are no objective and unchanging human values, that can be proven. Words are not intuitive, they are deceptive. Words are perhaps the worst tools to understand concepts and ideas with. Words can mean more than one thing simultaneously, which causes confusion, and makes much of modern debate futile. Artificial general intelligence is only impossible because of people like you and the guy in this video, who are injecting their own personal fallacious beliefs into their understanding of ai, which makes it appear like agi is impossible. You need to stop believing that disagreement, and difficulty matter when trying to discover what is true.
@Allan_aka_RocKITEman 8 years ago
FWIW: I remember after the movie JURASSIC PARK was released in 1993, I read somewhere that cloning might be successfully accomplished in 50 years. Then around 1998 it was announced that the sheep named Dolly {or Molly or whatever} was cloned in Scotland, IIRC. That was a QUICK 50 years....
@stopthephilosophicalzombie9017
The links at the end don't work.
@k1ngjulien_ 8 years ago
Are you using a tripod now since you now know that it will lower the filesize of your videos? :D
@B1G_Dave 8 years ago
"Safe" is such a transient, undefinable term. Many human concepts are changeable not only over time, but over geography as well. Rather than attempt to limit a general AI to terms like these, simply prohibit the ability for the AI to act in an unsupervised way. So the AI can suggest results and actions, but it's ultimately humans who decide whether these actions are "safe".
@MrBeanbones 8 years ago
We can limit the resources and possible values to make a general machine that doesn't really think, but can do what is right.
@dragoon6551 8 years ago
If it was a true AI it would be able to think laterally, right? Just wondering... let's say we gave the AI a prime directive that was "if you ever put a human at risk then shut yourself down completely; that is the purpose of your existence". But (and this is the odd part, I know) would an AI ever question the validity of its given purpose? I mean, as humans we aren't given one, we have to invent one. But it's different for a computer.
@apesoup1 8 years ago
I think it is most likely impossible to prevent very unsafe or intentionally bad general AI systems. There will always be people who want such a system to reach their goals. To even have a chance of stopping bad AI instances from happening, one would need a completely surveilled world with one single government in total control, which is definitely not what most people want. I hope it's going to be more like this: a lot of different companies will compete with different products at around the same level of AI generality. A lot of "good" people will use them, some "bad" people will abuse them, and somehow it will all balance out naturally. In such a world the decisions of real people will be less and less important, but maybe our values (good and bad ones) would be inherited by the general AI systems that try to accomplish the assignments of their owners...
@stoppi89 8 years ago
5:08 I laughed so hard, it was such an unexpected 3rd option.
@googoofeesmithersmits4536 8 years ago
But what about the three "laws" of AI/robotics?
@Njald 8 years ago
Safe general intelligence doesn't need to be hard to pinpoint. You just make the first task of an emergent GI to map human morality and desires and to ask for human verification of its approximations. The "goal" would be to ask the difficult moral questions and to pin down the hardest-to-pinpoint innate human values. In short, you let the GI start with the toughest question of all: "how do you think and value actions like a human?". You let the robot build the "robot laws" itself.
@Njald 2 months ago
This of course assumes we know that the basic task is something we can even give them without being fooled into thinking it will do the task.
@rich1051414 7 years ago
A bigger question is: what challenges can a self-aware AI tackle for humanity that a human plus a purpose-built, non-self-aware AI cannot? I understand the questions need to be asked, but I honestly don't see much to gain from it other than building it just to do it, which I think would be a mistake if not thought about extensively first.
@AlexMcshred6505plus 8 years ago
Not to put words in Dr. Holden's mouth but the reason I think he thinks friendly AGI is more probable is because the only way we are going to be able to understand how to engineer an AGI at all is to reverse engineer what happens in the human mind and seek to replicate it. I don't think we're going to be able to figure out how to make a "conscious" machine without a huge amount of data and a beautiful quantitative theory of how the only example of such a thing that we know of (human minds) manages to do it.
@autonomous2010 6 years ago
Working in the field myself, I can pretty easily state that very few are interested in making actual AGI. There's no long term profit involved in a machine that can make decisions that are not guaranteed to be useful to humans. If a product fails to meet expectations, it's treated as defective. In the case of an AI, its "intelligence" is defined by how useful it is to humans. We define the expectations of what it means to be intelligent. You can't reward an AI the same way that you can reward a human. There are biological drives for us to work and fears of punishment if we break the rules. What incentive would an AGI have to do what you tell it?
@TheJaredtheJaredlong 8 years ago
If you say something isn't possible, then people will hear that and take it as a challenge; but if you say something IS possible, then people hear that and do nothing because they assume someone else is already close to a breakthrough. If you want AI to develop faster, have a panel of scientists officially declare AI impossible, and I promise you some random guy will solve it in a year.
@xponen 8 years ago
+TheJaredtheJaredlong I think many computer science students do take general AI as a challenge, but later it dawns on them that they can't make it happen. Typically they thought intelligence could be copied from an investigation into logic (i.e. by dissecting their own minds), but this isn't enough for a general AI.
@TheJaredtheJaredlong 8 years ago
xponen I get the feeling you actually know something about computer science (I don't): is it possible for a computer to be programmed in such a way that it could re-program itself? I understand the goal of general AI as a system that can learn to do any function. That level of flexibility requires a lot of adaptability and autonomy. So the abstract solution seems to be a program that can change itself, but is that even possible?
@chelmney547 8 years ago
+TheJaredtheJaredlong I don't have a PhD on the topic, but I can safely say that the learning aspect is not the problem. Writing an AI that learns from mistakes is not hard, and it's quite powerful. You might have heard how Google's AI recently beat a champion at Go (one of the hardest games we've come up with) consistently; it did that by learning from a bunch of practice matches using neural networks. So indeed, it is possible to make computers adapt. I think the problem lies more in the 'general' aspect. Right now, AIs can basically (and please do correct me if I am wrong) only learn what you tell them to learn. Google's AI might be excellent at playing Go, but tell it to learn to speak French and it will fail miserably. To solve the problem of general AI, we would need some way to let computers figure out on their own what it is they need to learn. But that's very difficult.
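The "learning from practice matches" idea above can be sketched in a few lines. This is a toy, not Google's system: a hypothetical agent tries two made-up actions, observes made-up rewards, and keeps a running average per action, ending up preferring whichever worked better.

```javascript
// Minimal "learn from outcomes" loop. The reward table is hidden from
// the agent; it only sees the reward of the action it just took.
const rewards = { left: 0.2, right: 0.8 };   // made-up environment
const estimates = { left: 0, right: 0 };     // agent's learned values
const counts = { left: 0, right: 0 };

for (let t = 0; t < 100; t++) {
  // Try every action in turn (a real agent would balance exploration
  // and exploitation, e.g. epsilon-greedy).
  const action = t % 2 === 0 ? "left" : "right";
  const reward = rewards[action];
  counts[action] += 1;
  // Incremental mean: nudge the estimate toward the observed reward.
  estimates[action] += (reward - estimates[action]) / counts[action];
}

const best = estimates.left > estimates.right ? "left" : "right";
// After enough trials the agent prefers the better action: "right".
```

The "general" problem the comment raises is exactly what this sketch lacks: the actions, the reward signal, and the update rule are all chosen by the programmer.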
@TheJaredtheJaredlong 8 years ago
***** Chaotic in the "stops working" way, or chaotic in the "programmers can't follow what's going on anymore" way?
@Byamarro2 8 years ago
+TheJaredtheJaredlong A program that changes its own code is possible using metaprogramming (for example, in JavaScript).
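A minimal sketch of that idea in JavaScript (the rewrite rule here is made up for illustration): the program keeps its own source as a string, edits that string, and recompiles itself with the `Function` constructor.

```javascript
// A trivially "self-modifying" program: its behavior is stored as
// source text, so it can edit that text and re-create itself from it.
let source = "return x + 1;";           // the current "program"
let step = new Function("x", source);   // compile it into a function

console.assert(step(1) === 2);          // behaves as written

// The program rewrites itself: swap the increment for a doubling rule.
source = source.replace("x + 1", "x * 2");
step = new Function("x", source);       // recompile the modified source

console.assert(step(3) === 6);          // new behavior
```

This is self-modification in the mechanical sense only; deciding *what* change would be an improvement is the hard part of self-improving AI.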
@massanchik 2 years ago
Penrose's quantum stuff was about consciousness, not intelligence. So general AI is possible unless intelligence turns out to depend on consciousness, whose nature we have little understanding of so far.
@outaspaceman 8 years ago
'we need to make 'safe' general AI' can't argue with that statement.
@Kalevala87 8 years ago
I'd argue it works the other way round. There's no realistic way of hard-coding a model of the world or theory of knowledge in an AI, which is what you need to establish any kind of ethical sense. The best way to create a safe AI is to create it, and "raise" it accordingly. Sure, you can set protocols and failsafes, but that's as far as you can go in "solving" the problem of safety before having an actual working AGI.
@Novashadow115 8 years ago
so basically the character "Ethan" from the show "Extant"
@Kalevala87 8 years ago
+Novashadow115 I'm usually wary of using fiction-based analogies, but yes, in a way.
@Novashadow115 8 years ago
Fabio P Well, that fiction is merely a projection of someones imagination. All innovation is initially the product of someones imagination. In this instance, the fiction nailed it. The AI developed a model of reality by being raised much akin to a human child. It learned as we did when it came to ethics
@RPMRosie 8 years ago
I think one step in making a safe AI would be to make it so that it doesn't have a desire to avoid being shut off (see Skynet from the Terminator movies or V.I.K.I. from I, Robot, for example), the way we humans desire to stay alive.
@RPMRosie 8 years ago
***** I was saying that it wouldn't be afraid at all, and likely wouldn't care whether or not it was shut off; if it had a desire to not be shut off, it would very likely have a fear of being shut off.
@hellcat9 8 years ago
This guy would make an awesome Doctor Who.
@kahrkunne3960 8 years ago
Another fearmongering video featuring Le AI Is Bad Man
@nicolasmicaux8674 7 years ago
May we have subtitles, please?
@STSWB5SG1FAN 8 years ago
The most important thing would be to not let it be in control of anything dangerous. Putting a powerful general AI in charge of the Library of Congress or perhaps the postal system would be Ok. Putting it in charge of the strategic missile system, very bad idea.
@seanb3516 8 years ago
We do not want to create an artificial intelligence the same as our own. Nope, absolutely have to agree with this guy on that point. The AI's we create need to be free of the vast number of flaws that human thinking contains. I hope the AI's we create are up to the task.
@forsakenquery 8 years ago
"An Interesting Thought Experiment About General Artificial Intelligence"...AAAaand that's why you're not allowed to name anything bro. Love ya Rob but srsly.
@JWY 8 years ago
I believe any AI made responsible for something people care about has to be able to explain the reasons for its decisions. Maybe not to just anyone, but certainly to experts who will keep company secrets when they are not too horrible. Then, when the AI that decided to lower insurance rates for high-performance motorcycles and large pickup trucks while raising the other rates explains that it prefers people to either die or be uninjured in all accidents, we can decide if we like this thinking. The danger of inhumanly brutal ethics, spider-level or worse, coming unseen from very complicated computer AI is very real.
@EtzEchad 8 years ago
Most people who make statements about strong AI either think that it is impossible (generally this is silly, non-scientific thinking: unless the brain is magic (as you said), it is something that can be simulated, and it is just a matter of how much computing power you need), or that it is something we necessarily have to design. Designing strong AI is probably impossible, as it requires us to design something more intelligent than ourselves. To get there we need to build a system that improves itself. The fact is, we already have systems like that running. Any goal-seeking system that can improve itself is a potential seed for a strong AI, and we have these things running today. It isn't very likely that one of these systems will "come alive" - mainly because computers aren't powerful enough yet - but we are in the realm where it is possible today. I agree with you that the emergence of a goal-seeking strong AI is likely to be bad for us.
@EtzEchad 8 years ago
Since it is likely that we can't stop it from happening, and it might be pretty soon, I think that we should probably try to do something about it. The most promising thing would be to try to spawn an AI that protects us. Even that is pretty dangerous. We should be taking this seriously NOW though. (In all likelihood we are doomed though.)
@gameoverwehaveeverypixelco1258
When you figure out how to give AI free choice not based on a toss of the dice, then you're getting closer. But I don't think we can do that until we understand how we make our own free choices: are they really free will, or based on a calculation in the subconscious where the brain picks the choice with the least complication, or something like that?
@sarahszabo4323 8 years ago
I can't help but feel that trying to make "safe" general AI is like trying to make "safe" humans. Fundamentally it makes its own decisions and can decide to do whatever it wants. Of course we can try our best, but I don't think that any foolproof design for safety is possible.
@KuroKitten 8 years ago
+Sarah Szabo I think you make an excellent point, while also disagreeing with the assertion that we can't make "safe AI"; although, I think all of those solutions rely on treating the AI with respect, teaching it the value of respecting other life, and not needlessly torturing it. All of which are amazingly difficult problems to solve in and of themselves. I think a lot more general AI programmers should have a strong background in ethics and philosophy in general.
@DeepDuh 8 years ago
+Sarah Szabo You basically make the same argument as David Deutsch, so you're not in bad company there.
@ruben307 8 years ago
+Sarah Szabo Well, we can always try to hardcode some rules into the AI. But maybe if you make an AI it can go around those rules. But maybe we can give it a conscience that it could break but will not? Maybe all of that won't be a problem, because we will have learned to upgrade our brains enough that an AI will not be better in all aspects.
@sarahszabo4323 8 years ago
Kitt Schlatter Totally agree.
@ruben307 8 years ago
Sarah Szabo You might be able to lock the AI behind a wall that checks whether the action it is trying to take will directly harm a human, or something similar, to keep chains on the AI. But it would probably just try to find a way around it.
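The "wall" idea above can be sketched as a wrapper that vets every proposed action before it runs. Everything here is a toy assumption, especially the `harms-human` tag: actually writing a checker that recognizes harm is the unsolved part, and (as the comment notes) a capable system might route around it.

```javascript
// Toy safety wall: a predicate gates every action the system proposes.
function isSafe(action) {
  // Placeholder rule; a real checker would need to evaluate the
  // action's consequences, which no one knows how to specify.
  return !action.tags.includes("harms-human");
}

function execute(action) {
  if (!isSafe(action)) {
    return { ran: false, reason: "blocked by safety wall" };
  }
  return { ran: true, result: action.run() };
}

// Hypothetical proposed actions.
const fetchTea = { tags: [], run: () => "tea made" };
const pushHuman = { tags: ["harms-human"], run: () => "pushed" };

console.assert(execute(fetchTea).ran === true);
console.assert(execute(pushHuman).ran === false);
```

Note the wall is only as good as its labels: an action that harms a human without carrying the tag sails straight through, which is the whole problem.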
@fromvoid3764 6 years ago
The problem of AI Safety seems similar to the problem of government regulation. Keep entities with value functions of "maximise profit" friendly.
@sammjust2233 8 years ago
#BrainsAreMagic
@Wafflical 8 years ago
+Samm Just It's a pretty popular theory.
@Booone008 8 years ago
+Samm Just LITERALLY magic!
@mikstratok 8 years ago
+edrudathec The most popular theory, sadly
@Cythil 8 years ago
+edrudathec Also a theory that should not be taken seriously if you're scientifically minded. Science does not deal in magic, after all. If science can explain it, then it is not magic. Also, science assumes that nothing is magic, though things may be unexplained at the current level of understanding. Of course science might be wrong in that regard. But if you are a scientist you should always assume it is not magic, no matter how much evidence there is for it being magic. Otherwise you're just a lazy scientist (or should I even say a lazy pseudo-scientist).
@drdca8263 8 years ago
+Cythil I don't follow. Science is a particular method (or collection of methods) of trying to determine things. I don't see any reason why some things that would be called "literally magic" would be inherently incompatible with those methods.
@PerfectlyNormalBeast 8 years ago
We are optimized to perform well in our environment. It's the hazards of the environment that shape us. A super AI will need lots of data centers and will distribute itself across the solar system, independent of any controls we try to impose (in the long run).
@VEROTIKAA 8 years ago
I loved the video and quite agree thank you :-)
@robchr 8 years ago
This guy is "actual magic"
@ericmeans5598 8 years ago
What if we make a different type of singularity happen? One where the machine is based off of individual humans.
@Robomandude 8 years ago
I worry an AI would pretend to be friendly, knowing we would shut it off if it weren't. Just patiently waiting for an opportunity.
@Sebach82 8 years ago
Out of all the possible artificial intelligences we could create, there are only a few possibilities which would align with what would be beneficial for humans. I dig it.
@nosuchthing8 8 years ago
Indeed
@aliceeliot6389 2 years ago
But isn't it a paradox, or a contradiction in itself? How could you know how to make a safe AI before you know how to make one at all?
@keslen6969 4 years ago
You put a lot of time into talking about "the sentence" but you didn't tell me what it was soon enough.
@85kanuto 8 years ago
Is a friendly A.I. even possible, and if so, would it still be able to be human-like? I find it rather hard to believe that a friendly A.I. is even possible, due to the fact that we humans have very diverse opinions which in some cases may even be contradictory. Humans make mistakes and bad decisions which, seen from an A.I.'s judging perspective, are "unfriendly"; however, those bad decisions and mistakes are what healthy human development needs, since we learn from mistakes. I'm fairly sure that an A.I. would learn over time what human values are, but learning is kinda based on trial and error, so it almost seems inevitable that an A.I. could turn unfriendly precisely because it's learning. And another question: what about human values in the next 10, 100, 1000+ years? We change our values over time due to unforeseen events etc., which may even negate old values. If we assume an A.I. is a faster learner than a human, at some point it would need to predict what we will define as our values in the coming years. And as I said, if those future values negate some "old" ones, it might look to us like an A.I. turning unfriendly against us.
@iambiggus 3 years ago
We can't make a self-driving car that won't plow into the side of a tractor trailer because it thought it was a billboard. We're at LEAST a hundred years from general AI.