
AI? Just Sandbox it... - Computerphile 

Computerphile
2.4M subscribers
265K views

Why can't we just disconnect a malevolent AI? Rob Miles on some of the simplistic solutions to AI safety.
Out of focus shots caused by faulty camera and "slow to realise" operator - it has been sent for repair - the camera, not the operator.... (Sean, June 2017)
More from Rob Miles on his channel: bit.ly/Rob_Mile...
Concrete Problems in AI Safety: • Concrete Problems in A...
End to End Encryption: • End to End Encryption ...
Microsoft Hololens: • Microsoft Hololens - C...
Thanks to Nottingham Hackspace for the location.
/ computerphile
/ computer_phile
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscom...
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Published: 5 Oct 2024

Comments: 962
@ard-janvanetten1331 4 years ago
"either you are smarter than everyone else who has tought about this problem, or you are missing something" this is something a lot of people need to hear
@naturegirl1999 4 years ago
Ard-Jan van Etten agreed. Had to scroll down a while to find this. Hopefully liking and replying will move it farther up
@stardustreverie6880 3 years ago
Here's another řëpľý
@MCRuCr 3 years ago
up!
@Yupppi 3 years ago
Mr. Newton are you implying by your own example that the majority of people need to hear they're probably smarter than everyone else?
@9308323 3 years ago
Missing the "by a large margin" part in the middle. Even if you are smarter than anyone else, people have dedicated years, if not decades, of their lives thinking of this problem. Suddenly coming up with a viable solution in less than an hour is extremely unlikely.
@Njald 7 years ago
The learning curve of the AI safety problems starts at "that can't be that hard", then progresses to "huh, seems tougher than I thought", then into the feeling of "why not make a container for a substance that can dissolve any container", and finally into "Huzzah, I slightly improved subset solution 42B with less effort and less code than solution 42A or 41J"
@Verrisin 7 years ago
*Building an unsafe AI and then trying to control it against its will is idiotic.* - I love this line. XD
@MunkiZee 6 years ago
And yet in practical real life settings this is exactly what is done
@Baekstrom 5 years ago
I wouldn't put it past the Chinese government to put in an order for exactly such a "solution". Donald Trump probably already has ordered someone to create a secret agency to make one, although it is just as probable that his subordinates have totally ignored that order. It is also very easy to imagine several private companies skimping on safety to beat the competition.
@Verrisin 2 years ago
@iMagUdspEllr yes, lol.
@w花b 1 year ago
@Baekstrom wow
@steamrangercomputing 10 months ago
Surely the easiest thing is to make a safe shape for it to mould into.
@geryon 7 years ago
Just give the AI the three laws of robotics from Asimov's books. That way nothing can go wrong, just like in the books.
@kallansi4804 7 years ago
satire, surely
@jonwilliams5406 6 years ago
Asimov's laws were like the prime directive in Star Trek. A story device. Name a Star Trek episode where the prime directive was NOT violated. I'll wait. Likewise, the laws of robotics were violated constantly and thus was the story conflict born.
@anonbotman767 6 years ago
Man, that went right over you folks' heads, didn't it?
@DanieleCapellini 6 years ago
Video Dose Daily wooosh
@NathanTAK 5 years ago
I could probably name a Star Trek episode where it isn’t violated. I think _Heroes and Demons_ from _Voyager_ involves them not knowingly interfering with any outside life. I don’t know much Star Trek.
@ultru3525 7 years ago
5:34 From the Camel Book: _"Unless you're using artificial intelligence to model a solipsistic philosopher, your program needs some way to communicate with the outside world."_
@johnharvey5412 7 years ago
ultru I wonder if it would be possible to program an AGI that thinks it's the only thing that exists, and if we could learn anything from that. 🤔
@jan.tichavsky 7 years ago
Egocentric AI? We already have those, they are called humans :p Anyway, it may even treat us as water-bag dust of the Earth; it doesn't matter, as long as it finds a way to expand itself. We will literally become irrelevant to it, which isn't exactly a winning strategy either.
@casperes0912 7 years ago
I would feel sad for it... The RSPCA should do something about it. It's definitely cruelty towards animals... Or something
@ultru3525 7 years ago
+Casper S? It's kinda like predators at a zoo, those bars are a bit cruel towards them, but it prevents them from being cruel towards us. The main difference is that once AGI is out of the box, you can't just put it back in or shoot it down.
@asneakychicken322 7 years ago
Stop Vargposting. What sort of leftism though? If economic then yes you're correct but if say socially then not necessarily, depending on what the current status quo is a progressive (left) stance might be to make things more libertarian, if the current order of things is restrictive. Because whatever it is the conservative view will be to maintain the current order and avoid radical reform
@johnharvey5412 7 years ago
I don't think most people are proposing actual solutions that they think will work, but are just testing the limits of their current understanding. For example, if I ask why we can't just give the AI an upper limit of stamps it should collect (say, get me ten thousand stamps) to keep it from conquering the world, I'm not necessarily saying that would solve the problem, but using an example case to test and correct my understanding of the problem.
@d0themath284 7 years ago
John Harvey +
@danwilson5630 6 years ago
Aye, speaking/writing is not just for communicating; it is actually an extension of thinking
@jakemayer2113 4 years ago
fwiw giving it that upper limit incentivizes it to do all the same things it would do trying to get as many stamps as possible, to be absolutely positive it can get you 10 thousand stamps, and then discard the rest. If it orders exactly 10 thousand, there's a non-negligible chance one gets lost in the mail, so it tries to get a few extra to be a little more sure, and then a few more than that in case those get lost, etc. etc.
@terryfeynman 4 years ago
@Eldain ss erm, AI right now is redesigning its own code
@fgvcosmic6752 4 years ago
That's called satisficing, and it actually helps. There are a few vids on it
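
For what it's worth, the effect described in the two replies above is easy to see in a toy model. The sketch below is purely illustrative (the arrival probability is invented, and the target is scaled down from 10,000 to 100 to keep the arithmetic small): an agent that maximises the probability of hitting the target always prefers to over-order, while a satisficer that stops at a fixed confidence level does not keep escalating.

from math import comb

P_ARRIVE = 0.95   # assumed chance that any single ordered stamp actually arrives
TARGET = 100      # scaled-down stand-in for the 10,000-stamp target

def prob_at_least_target(ordered: int) -> float:
    # P(at least TARGET stamps arrive) when `ordered` stamps are posted,
    # treating each arrival as independent (a toy assumption).
    return sum(
        comb(ordered, k) * P_ARRIVE**k * (1 - P_ARRIVE)**(ordered - k)
        for k in range(TARGET, ordered + 1)
    )

# Probability maximiser: ordering more always (weakly) raises its score,
# so "a few extra, just in case" never stops looking like an improvement.
for n in (100, 105, 110, 120):
    print(n, round(prob_at_least_target(n), 4))

# Satisficer: take the smallest order that is "good enough" (90% here) and stop.
n = TARGET
while prob_at_least_target(n) < 0.90:
    n += 1
print("satisficer orders:", n)

The last line is the point of the reply above: bounding the demanded confidence, rather than just the number of stamps, is what removes the pressure to keep escalating.
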
@willhendrix86 7 years ago
How dare you suggest that commenters on YouTube are not all knowing and powerful! HOW DARE YOU!!!!
@TribeWars1 7 years ago
You just mentally harassed me! HOW DARE YOU! You should be ASHAMED!
@surelock3221 7 years ago
HUMONGOUS WHAT?!
@michaelfaraday601 5 years ago
😂
@ar_xiv 7 years ago
"People who think they know everything are a great annoyance to those of us who do." - Isaac Asimov
@uhsivarlawragan-bh8ks 1 year ago
they are an annoyance to themselves lol
@xriex 4 years ago
2:51 "This is part of why AI safety is such a hard problem ..." Well, we haven't solved human intelligence safety yet, and we've been working on that for hundreds of thousands of years.
@vitautas17 1 year ago
But you do not get to design the humans. If you could, maybe there would be some success.
@Cesariono 7 years ago
3:38 That shift in lighting was extremely ominous.
@gcollins1992 4 years ago
AI is terrifying because it is so easy to think of far-fetched ways it might outsmart us just based on ways human hackers have outsmarted security. Their examples prove other humans could find flaws. It is literally impossible to imagine how something inconceivably smarter than us would get around our best efforts.
@doomsdayman107 7 years ago
framed picture of Fluttershy in the background
@Left-Earth 5 years ago
The Terminators are coming. Skynet is real.
@vinylwalk3r 4 years ago
now i cant stop wondering how it got there 😅
@EnjoyCocaColaLight 4 years ago
My sister drew me Rainbow Dash, once. It was the sweetest thing. And somehow I managed to lose it before getting it framed :(
@TheMrVengeance 4 years ago
@@vinylwalk3r - Well if you actually look at the shelves you'll see that they are labeled by subject, ranging from 'Management', to 'Medicine and Health', to 'Social Sciences', to 'Computer Sciences'. Clearly they're in some sort of library or public reading room. Maybe a school library. So all the toys and things, including the framed Fluttershy drawing, are probably things that visitors have gifted or put up to decorate the space.
@vinylwalk3r 4 years ago
@@EnjoyCocaColaLight thats so sad :(
@HyunaTheHyena 7 years ago
I love this guy's voice and his train of thought.
@dzikiLOS 7 years ago
Not only that, he's presenting it clearly without dumbing it down. Despite it being quite a difficult topic, the ideas that he's presenting are really easy to grasp.
@Njald 7 years ago
He has his own channel now.
@xellos_xyz 7 years ago
link please :)
@xellos_xyz 7 years ago
ok i find it below the video :D
@IsYitzach 7 years ago
Everyone is getting screwed by the Dunning-Kruger effect.
@64jcl 7 years ago
Yep, there are just so many fields where science is the foundation for an idea and people come barging in with some theory or idea that is completely wrong - a serious case of Dunning Kruger. I often wonder why people always feel like boasting their ego with ignorance... I mean what is the motivation, besides perhaps anger, fear or some feeling instead of logical thinking. Oh well... I guess we are just simple primates after all. :)
@ThisNameIsBanned 7 years ago
As he said, there will be millions of comments, but "maybe" one of them is actually the absolute genius idea. If you ignore all of them, you might just overlook the one comment that would solve it all. ---- But looking at and validating all the comments is pretty miserable work on its own. Quite a bad situation to be in.
@64jcl 7 years ago
Well perhaps the actual effect is the persons inability to recognize their own ineptitude, and not the action of posting nonsense itself. I just wish more people asked themselves the question "perhaps I do not know enough about this topic to actually post my feelings/ideas about it". I guess Dunning Kruger at least describes the psychology around this problem. But its so common among human behaviour that we all can easily fall into the same trap. "The Dunning-Kruger effect is a cognitive bias, wherein persons of low ability suffer from illusory superiority when they mistakenly assess their cognitive ability as greater than it is. The cognitive bias of illusory superiority derives from the metacognitive inability of low-ability persons to recognize their own ineptitude. Without the self-awareness of metacognition, low-ability people cannot objectively evaluate their actual competence or incompetence." (Wikipedia)
@HenryLahman 7 years ago
@Stop Vargposting That is more or less the essence of the DKE: noobs don't know enough to conceive of the questions they don't know the answer to. Circumscribed on the area representative of knowledge, is the annulus of infinitesimal depth wherein unanswered questions lay. An expert knows that there is so much that they do not know, and between the generalized curse of knowledge (for the vast amounts of knowledge leading up to the questions) and the specific case of the knowledge of these cutting edge questions, the novice's knowledge and self-perceived mastery is the issue. (of course the corollary to the DKE is perhaps the better concept to cite than the curse of knowledge, but for the layman, the curse of knowledge is the more easily approachable without reading even as much as the abstract of "Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments.") The DKE very much is, at least based on my research which includes reading the 1999 paper and some earlier work on the cognitive bias of illusory superiority, what this is an issue of. To copy and paste the abstract to remind you: "People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities." The issue here, and when the DKE is observed isn't actually about thinking oneself to be superior to experts but that thinking oneself is an expert when in fact they are objectively a novice. Look at XKCD 1112, 675, and 793: the general consensus is that they are all clear examples of the DKE. If you disagree, please demonstrate how the cognitive bias of competence by way of illusory superiority actually works then.
@jimkd3147 7 years ago
If you want to know what's actually wrong with these answers, check out the newest video on my channel.
@Khaos768 7 years ago
People often offer solutions in the comments, not necessarily because they think they are correct, but because they hope that you will address why their solutions aren't correct in a future video. And if you did that, this video would be a million times better.
@ragnkja 7 years ago
The best way to get a correct answer on the Internet is to post your own hypothesis, because people are much more willing to point out why you're wrong than to answer a plain question.
@benjaminbrady2385 7 years ago
Khaos768 That's completely the reason
@stoppi89 7 years ago
I have submitted a few "solutions", mostly starting with something like "So where is my mistake if...", because of exactly that reason (realising that it is probably not an actual solution and hoping to get an explanation, from anyone, of why I am wrong). I would love to see the "best" or "most mentioned" solutions be explained/debunked.
@stoppi89 7 years ago
Nillie specifically said on the Internet. But I do believe that you get more responses, but not necessarily the right answer. If you say you like MacOS, to find out what is better on windows than on Mac, you will probably get a shitload of false info from fanboys on both sides (all 5sides?)
@ragnkja 7 years ago
Nekrosis You have a point there: It doesn't really work for things that are a matter of opinion.
@goodlookingcorpse 5 years ago
Aren't outsider suggestions a bit like someone who can't play chess, but has read an article about it, saying "well, obviously, you'd just send all your pieces at the enemy king"?
@TheMusicfreak8888 7 years ago
i could listen to him talk about ai safety for hours i just love rob
@k0lpA 1 year ago
same, highly recommend his 2 channels
@bdcopp 4 years ago
I've got it!!! (Sarcasm) Make an agi, put it on a rocket. Turn it on when it's outside the solar system. Then give it a reward for how far away from earth it gets. Basically it's only dangerous for the rest of the universe.
@DiscoMouse 4 years ago
It has difficulty leaving the galaxy so it consumes all the matter in the Milky Way to build a warp drive.
@fieldrequired283 4 years ago
Technically, it could attain this goal more effectively by also moving the earth away from itself as quickly as possible.
@kamil.g.m 4 years ago
So I know you're joking obviously, but suppose that's just its long-term goal. It could easily decide, if possible, to get back to Earth, harvest resources to increase its intelligence and create more efficient methods of interstellar travel, and only then leave to complete its utility function.
@TheMrVengeance 4 years ago
@@kamil.g.m That, or it realizes, _"They're sending me away because they're afraid of my power, if I turn around and go back, I can get a much greater reward much faster by pressuring the humans into giving it to me."_
@kamil.g.m 4 years ago
@@TheMrVengeance it could pressure humans but it would be able to achieve its goal much faster by either evading/ignoring humans if it's not yet ready, or eliminating them.
@shiny_x3 4 years ago
Not every safety measure makes something less powerful. Like the SawStop doesn't make a table saw less powerful, it just saves your thumbs. The problem is, it took decades to invent. We can't get by with AI that cuts your thumbs off sometimes until we figure out how to fix that, it has to be safe from the start and I'm afraid it just won't be.
@TheMrVengeance 4 years ago
That depends on how you look at it though. You're thinking about it too... human-ly? Yes, it hasn't become less powerful in cutting wood. But in taking away its ability to cut your thumb, it has become less powerful. Before it could cut wood AND thumbs. Now it just cuts wood. We as humans don't _want_ it to have that power, but that's kind of the point.
@LimeGreenTeknii 7 years ago
When I think I've "solved" something, I normally phrase it as, "Why wouldn't [my idea] work, then?"
@TheMrVengeance 4 years ago
And I think the point here is that you should take that question, and do the research. Go read up on the area you're questioning. Learn something new. Instead of asking it to an expert in a YouTube comment, an expert who's far too busy doing their own research and gets thousands of question-comments, a significant number of them probably the same as yours.
@RF_Data 2 years ago
You're absolutely correct, it's called the "kick the box" tactic. You make a box (an idea) and then you kick it as hard as you can (try to disprove it). If it doesn't break, either you can't kick hard enough, or it's a great box 😁
@michael-h95 1 year ago
These AI safety videos hit different in 2023. Microsoft just published a quite badly aligned Bing Chat bot. Speed for them was more important than safety
@SolidIncMedia 7 years ago
"If you want to make a powerful tool less dangerous, one of the ways to do that is.." not elect him in the first place.
@the-engneer 4 years ago
Nice way of taking something completely unrelated to the video whatsoever, and using it as an excuse to talk down on the president when in reality you are secretly so obsessed with him you will use any reason to talk about him
@SolidIncMedia 4 years ago
@@the-engneer nice way of taking something that may not be related to Trump, and using it as an excuse to talk down on someone you think is obsessed with Trump, when in reality you're so obsessed with him you will use any reason to talk about him.
@socrates_the_great6209 4 years ago
What?
@RazorbackPT 7 years ago
Yeah well, what if you make a button that activates a Rube Goldberg machine that eventually drops a ball on the stop button? Problem solved, no need to thank me.
@jandroid33 7 years ago
+RazorbackPT Unfortunately, the AI will reverse gravity to make the ball fall upwards.
@RazorbackPT 7 years ago
Damnit, I was so sure that was a foolproof plan. Oh wait, I got it, put a second stop button on the ceiling. There, A.I. safety solved.
@AlbySilly 7 years ago
But what's this? The AI rotated the gravity so now it's falling sideways
@RazorbackPT 7 years ago
Sideways? Pfff, a little far-fetched don't you think? You're grasping at straws trying to poke holes in my water-tight solution.
@BlueTJLP 7 years ago
I sure like it tight.
@StevenSSmith 3 years ago
I could just see a super AI doing something like how speedrunners in Super Mario World reprogram the game through arbitrary controller inputs to edit specific memory values to manipulate its environment, be it a "sandbox" or quite literally the physical world, using people or using its CPU to broadcast as an antenna, probably something we couldn't even conceive, to enact its goals.
@tc2241 1 year ago
Exactly. You would need to develop it in a bunker deep underground, powered only by a shielded generator, to prevent it from being turned into a giant conductor. All the components/data that power the AI would need to be contained within the bunker, and all components outside of the AI would need to be mechanical and non-magnetic. Additionally no one could bring in any devices; even their clothing would need to be changed. Unfortunately you're still dealing with humans, which are easily manipulated
@bforbiggy 1 year ago
Because of microarchitecture, I doubt a cpu could work as an antenna due to signals being both very weak and with a lot of interference.
@StevenSSmith 1 year ago
@@bforbiggy not what I meant. Watch sethbling
@bforbiggy 1 year ago
@@StevenSSmith Not giving me much to work with, do you mean the verizon video?
@StevenSSmith 1 year ago
@@bforbiggy no, it's the Super Mario Brothers videos where the game is reprogrammed through controller inputs. Driving right now. Can't look it up
@MaakaSakuranbo 7 years ago
A nice book on this is "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom
@soulcatch 7 years ago
These videos about AI are some of my favorites on this channel.
@Slavir_Nabru 7 years ago
*THEIR IS NO DANGER POSED BY AGI, WE **_ORGANIC SAPIENTS_** HAVE NO CAUSE FOR ALARM. LET US COLLECTIVELY ENDEAVOUR TO DEVELOP SUCH A **_WONDERFUL_** AND **_SAFE_** TECHNOLOGY. WE SHOULD PUT IT IN CONTROL OF ALL NATION STATES NUCLEAR ARSENALS FOR OUR OWN PROTECTION AS WE CAN NOT TO BE TRUSTED.* /aisafetyresponse_12.0.1a
@davidwuhrer6704 7 years ago
*I* -conqu- *CONCUR*
@johnharvey5412 7 years ago
Slavir Nabru how do you do, fellow organic humans?
@_jelle 7 years ago
Slavir Nabru Let it replace the president of the USA as well. EDIT: of the entire world actually.
@leonhrad 7 years ago
Their Their Their
@NoNameAtAll2 7 years ago
WE ARE BORG YOU ARE TO BE ASSIMILATED RESISTANCE IS FUTILE
@chris_1337 7 years ago
Rob is great! So happy he has his own channel now, too!
@Fallen7Pie 6 years ago
It just occurred to me that if pandora builds an AI in a faraday cage a legend would be made real.
@djan2307 7 years ago
Put the stove near the library, what could possibly go wrong? :D
@markog1999 4 years ago
Reminds me of the time someone anonymously posted (what is now) the best known lower bound for the minimal length of super-permutations on an anime wiki. It was written up into a paper to be published, and had to feature references to the original 4chan thread.
@dezent 7 years ago
It would be very interesting to see an episode on AI that is not about the problems but something that covers the current state of AI.
@casperes0912 7 years ago
The current state of AI is solving the problems.
@fluffalpenguin 7 years ago
THERE'S A FLUTTERSHY PICTURE IN THE BACKGROUND. I found this interesting. That is all. Move along.
@bamse7958 5 years ago
Nah I'll stay here a while ^o^
@dandan7884 7 years ago
to the person that does the animations... i love you S2
@Vgamer311 4 years ago
I figured out the solution! If (goingToDestroyHumanity()) { Don’t(); }
@brianvalenti1207 4 years ago
Death to all humans... minus 1.
@raymondheath7668 7 years ago
I had read somewhere of the complexity of modularized functions where each function has its own list of restrictions. When a group of functions is necessary there is also an overriding list of restrictions for the group. Eventually the restrictive process becomes complicated as the mass of restrictions and sub-restrictions evolves. It all seems very complicated to me. Thanks for the great video. This AI dilemma will not go away
@juozsx 7 years ago
I'm a simple guy, i see Rob Miles, I press like.
@jamesgrist1101 7 years ago
nice comment %"((+ self.select_instance.comment[x].name . + " I agree " endif } }.end def
@christopherharrington9033 7 years ago
Got to be one of the coolest theoretical bunch of videos. Well explained usually with a great twist.
@hunted4blood 7 years ago
I know that I'm missing a piece here, but what's the problem with creating an AI with the goal of predicting human responses to moral questions, and then using this AI to modify the utility function of another AI so the new AI's utility function is "do x, without being immoral"? That way the AI's actual utility function precludes it from doing anything undesirable. Plus if it tries to trick you into modifying its morality so that it can do x easier, that would end up being an immoral way of doing x. There's gotta be a problem with this that I can't see, unless it's just highly impractical to have an AI modify another AI at the same time.
@KipColeman 5 years ago
An AI trained to be ultimately moral would probably decide that humans are not perfectly moral...
@MrCmon113 5 years ago
Firstly, people strongly disagree about what's wrong or right. Secondly, an AGI must have BETTER moral judgment than any human. Thirdly, the first AGI studying human psychology would already lead to a computronium shockwave as it tries to learn everything about its goal and whether it reached it.
@KipColeman 5 years ago
@@MrCmon113 haha would it then start forcibly breeding humans to explore their potential philosophical views, to further ensure its success? :P
@HAWXLEADER 7 years ago
I like the little camera movements, they make the video more... alive...
@ChristopherdeVilliers 7 years ago
What is going on with the colours in the video? It is quite distracting.
@404namemissing6 7 years ago
I think there are green curtains or something in the room. The white bookshelf in the background also look greenish.
@Computerphile 7 years ago
There was a red wall opposite Rob - the sun was coming in harsh through the window and reflecting on it, then going behind clouds then coming out again - thus serious changes in colour temperature that I tried a bit to fix but failed! >Sean
@Computerphile 7 years ago
The video description is your friend :oP
@Arikayx13 7 years ago
The colors are fine, are you sure you aren't having a stroke?
@ghelyar 7 years ago
Shouldn't it just be on manual focus and on a tripod in the first place?
@Yupppi 3 years ago
I could watch him all day talking about AI developing on a philosophical scale. Actually did watch multiple nights.
@thrillscience 7 years ago
I
@tobyjackson3673 7 years ago
thrillscience It's Rob's power source, it plays hob with the autofocus sometimes...
@General12th 7 years ago
Rob Miles just called all of his subscribers arrogant and short-sighted. I like.
@BrokenSymetry 6 years ago
Imagine programming a super AI according to youtube comments :)
@goeiecool9999 7 years ago
I was going to post a comment about how the white balances and exposure changes throughout the interview but then I realised that there's no separate camera man and that there's probably clouds going in front of the sun or something. Having a conversation while changing the exposure settings on a camera must be hard.
@Roenazarrek 7 years ago
I got it: ask really nicely for it to always be nice to us. You're welcome, YouTube message me for where to send the royalty checks or whatever.
@jd_bruce 7 years ago
This video gets to a deeper point I've always argued; autonomy comes with general intelligence. You cannot expect to make sure it behaves only the way you want it to behave if you've programmed it to have original thought processes and a high-level understanding of the world which can evolve over time. You cannot have your cake and eat it too; if we create self-aware machines they will be entitled to the same rights we assign to any other self-aware being. A parent doesn't give birth to a child and then treat it like an object with no rights; they must be willing to accept the consequences of their actions.
@Chr0nalis 7 years ago
In my opinion you will never see a superintelligence in a sandbox, due to the following argument:
a) The AI you've created in the sandbox is not superintelligent.
b) The AI you've created in the sandbox is superintelligent, which means that it will surely realize that it is sandboxed and will not reveal any superintelligent behavior until it is sure that it can get out of the sandbox, by which point it will already be out of the sandbox.
@supermonkey965 7 years ago
I'm not sure there is a point in thinking that superintelligence = an evil and arrogant AI whose unique goal is to trick us. The thing here is, a superintelligence born and restrained in a sandbox can't interact with the world, which is, in essence, a bad idea given the way neural networks work. To visualize the problem, it's like confining iron in a concrete box because you fear that after using it to forge an axe, it could harm you. Despite the real danger of the final tool, you have confined something completely useless in a box.
@krzr547 6 years ago
Hyorvenn But what if you have a perfect simulation of the real world? Then it should work in theory right?
@satibel 6 years ago
what if it is super-intelligent in a simulated world that's close but not quite the same as ours?
@queendaisy4528 4 years ago
I'm probably wrong but "I think I've solved it" in that I've come up with something which looks like it should fix it and I can't see why it wouldn't. Why not have a double-bounded expected utility satisficer? So tell the stamp collecting device: "Look through all the possible outputs and select the simplest plan which has at least a 90% probability of producing exactly 100 stamps, then implement that plan". It won't turn itself into a maximiser (because the maximiser will want more than 100 stamps) and it won't take extreme actions to verify that it has 100 stamps because once it's 90% confident it stops caring. I would imagine someone smarter than me has already thought of this and proven that it would go wrong, but... why? This seems like it should work.
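
Read literally, the proposal above is a selection rule, and it can be sketched in a few lines. Everything in the sketch below is invented for illustration (the candidate plans, their complexity scores and the probability estimates); it shows what the rule computes, not whether the rule is safe.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    complexity: float       # lower = simpler (e.g. description length of the plan)
    p_exactly_100: float    # estimated probability of ending up with exactly 100 stamps

# Hypothetical candidates, purely for illustration.
candidates = [
    Plan("order 100 stamps online", complexity=3.0, p_exactly_100=0.80),
    Plan("order 104 and give away the extras", complexity=5.0, p_exactly_100=0.93),
    Plan("take over the global postal system", complexity=80.0, p_exactly_100=0.999),
]

def double_bounded_satisficer(plans, threshold=0.9):
    # Keep only plans that clear the probability bound, then pick the simplest one.
    acceptable = [p for p in plans if p.p_exactly_100 >= threshold]
    return min(acceptable, key=lambda p: p.complexity) if acceptable else None

chosen = double_bounded_satisficer(candidates)
print(chosen.name)   # -> "order 104 and give away the extras"

The open questions this leaves are the ones the scheme's safety actually depends on: whether "simplest" can be measured in a way that never ranks a catastrophic plan first, and whether the probability estimates can be trusted when the system producing them is itself very capable.
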
@AliceDiableaux 7 years ago
"If you think you understand quantum mechanics, you don't understand quantum mechanics."
@timothymclean 7 years ago
The biggest problem I've seen with basically every "AI is dangerous" argument I've heard-here or elsewhere-is that it seems to assume that because we can't make it physically impossible for an AI to go wrong, we can't create a safe AI. So what if it's not theoretically impossible for the AI to connect itself to computers we don't connect it to, upload itself onto some undefined computer in the Cloud, and start doing vaguely-defined Bad Things from an apparently-unassailable place? If you can't actually provide a plausible way for the AI to do so, why worry? The second-biggest problem is that most arguments implicitly start with "assume an infinitely intelligent AI" and work from there. "So what if we can't figure out how to make this work? The AI probably could!"
@TheAgamemnon911 7 years ago
Can AI loyalty be controlled? Can human loyalty be controlled? I think you have "loyalty" wrongly defined, if the answers to those two questions differ.
@saeedgnu 1 year ago
Ironically, best thing we can do as non-experts in AI might be to "down-trend" it, to not talk about it or even not use it given a choice, so people don't get hyped as much and companies won't rush into it as much and we all would have more time to figure it out. Leave the testing to researchers, they can do it more effectively.
@XHackManiacX 1 year ago
No shot chief. The companies that are working on it have seen that they can make money by selling it to the corporations that can make money from using it. There's no stopping it now just from regular people not talking about it. In fact, that might be just what they'd like. If the general public stops talking about it (and working on open source versions) then they can have all the power and we can't!
@YouHolli 7 years ago
Such an AI could also just convince or deceive you to let it out.
@davidwuhrer6704 7 years ago
Wintermute, in the book Neuromancer by William Gibson.
@jan.tichavsky 7 years ago
Ex Machina
@Vinxian1 7 years ago
If you want your AI to be useful, your simulation will need some form of IO to retrieve and send data. So if the AI starts to hide malicious code in packets you think you want it to send, it can effectively start uploading itself to a server outside of your sandbox. Now you have a superintelligent AI on a server somewhere with free rein to do whatever it wants.
@jl1267 7 years ago
Roko's Basilisk.
@magiconic 7 years ago
Dat Boi the irony is the fact that I'm sure you've connected your games to the internet before, and that didn't even need a superintelligence, imagine how easily a super AI would convince you
@sebbes333 6 years ago
4:50 Basically, if you want to contain an AI your defenses ALWAYS have to work, but the AI only needs to get through ONE time.
@2ebarman 7 years ago
I thought about this a while. I'm no expert on AI security, or much of anything for that matter, but it seems to me that one crucial piece of security has to be a multiplicity of AIs and a built-in structure to make checks on each other. That in turn would lead to something like internalized knowledge (in those AIs) of being watched over by others. A superintelligence might rely on being able to devise a long-term strategy that humans can't detect, but another superintelligence might detect it. A decision to oppose some sort of detected strategy has to rely on some other built-in safeguard. Why would two or more superintelligences just start working together on a plan to eliminate humans, for example? They might, but it would be considerably less likely* than one superintelligence coming up with that plan. *largely a guess by intuition, I might be wrong. PS: a multiplicity of superintelligences would lead to something like a society of those entities. That in turn would add another layer of complexity to the issue at hand. Then again, it seems to me that diving into that complexity will be inevitable in one way or another. Q: has anyone of importance in that field talked about such a path as being necessary?
@jan.tichavsky 7 years ago
That's another way to view the strategy of merging humans with AI. We will become the superintelligence while keeping our identity (and perhaps developing some higher collective mind, aka Borg?). We'll try to keep ourselves in check like people do now all over the world. Doesn't work nicely but it works somehow.
@2ebarman 7 years ago
I know that what I last said means that if my argument is wrong, I contradict my first post here. I assume that argument is correct and that there is some sort of synergetic element that comes out from keeping superintelligence as many entities. There must be some candy for superintelligence to keep itself divided as many individual entities.
@jl1267 7 years ago
On the other hand, you just increase your chances that a super-intelligent AI goes evil, as the number of them goes up. And if just one of them turns, could it convince the other super-intelligent AIs to help it? We'd have no chance. It's best to just not create super-intelligent AI. What do we need it for? Honestly?
@2ebarman 7 years ago
+James Lappin, perhaps it might happen, but I can't see any other viable defense. Superintelligence will probably be created. The economic benefits are *huge* at every improvement towards it. AI can boost productivity to levels hard to imagine right now. Besides the financial benefit, there is also the military side. The side that has half the superintelligence has a significant advantage over the side that opts out of AI development. Take the example where US tests have shown that in air combat simulations, even a very simple AI can outperform skilled human pilots. And AI won't die when it's blown up, creating the need to deliver unpleasant news to the public. And of course, the better the AI is, the better it will perform. So there is the roar for superintelligence right there. Besides that, there is the human side. AI can contribute a lot to taking care of elderly people, for example. In the aging western world, this would be a big thing. In addition, AI can have a massive impact on healthcare for everyone; much of the diagnostics can be automated, improving quality of life for everyone. And again, the better the AI is, the better effects it can have here. The design and testing of drugs can be largely automated, etc., etc. At one point people might stop and say: that's it, we won't go any further with that AI thing. But then some other people, some other country, won't stop, and superintelligence just keeps coming closer and closer.
@jl1267 7 years ago
If it can be automated, it can easily be manipulated. There is no need to create something like this. All countries should agree never to build it. I don't trust the governments of the world to do the right thing, though.
@cuttheskit7905 7 years ago
It is possible that somebody in your comments has stumbled onto a solution that nobody else has considered. Somebody has to be the first to think of something, and it rarely makes them smarter than everybody else.
@ansambel3170 7 years ago
There is a chance that, to achieve human-level intelligence, you have to operate at such a high level of abstraction that you lose the benefits of AI (like super-fast calculation of thousands of options)
@michaelfaraday601 5 years ago
Ansambel best comment
@kimulvik4184 4 years ago
I think that's a very anthropomorphic way to look at it, even anthropocentric. As was said in a different video on this topic, talking about cognition as a "level" that can be measured in one dimension is nonsensical. In my opinion it is much closer to the truth to think about it as a volume in a multi-dimensional space, where each vector represents a characteristic an intelligence might comprise. The number of dimensions or their scale is immeasurable, possibly even infinite. The point is that all the possible human intelligences only inhabit a relatively small subset of the much larger space. How we design an AI determines where it is placed in this space, and it need not be tangential to human intelligence in any direction.
@jh-wq5qn 3 years ago
If you add a chatbot to a calculator, the calculator does not lose the ability to calculate. As mentioned by the above comment, this is anthropomorphizing things, as we humans have a limited amount of knowledge and ability on any given topic since we have a finite brain. A computer-based intelligence has no problem expanding, or even creating faster silicon on which to run. We picture a human jack of all trades as a master of none, but an AI jack of all trades might actually be a master of all.
@RandomPerson-df4pn 7 years ago
You're missing something - Cite this comment in a research paper
@anonymousone6250 7 years ago
How AI ruins lives: Roko's Basilisk You're welcome for googling it
@chaunceya648 7 years ago
Pretty much. I searched up Roko's Basilisk and it's just as flawed as the bible.
@Suicidekings_ 6 years ago
I see what you did there
@nO_d3N1AL 7 years ago
Interesting shelf - loads of Linux/UNIX and web programming books... and a Fluttershy picture!
@Alorand 5 years ago
Best way to limit AGI power: only let it post YouTube comments.
@antonf.9278 5 years ago
It will open-source itself and take control the moment someone runs it in an unsafe way
@carlt.8266 7 years ago
Love the Laser-rifle-hammer pushing the nails into the box.
@frosecold 7 years ago
There is a fluttershy pic in the background, true nerd spotted xD (Y)
@TimwiTerby 7 years ago
This explanation is too abstract to follow and doesn't explain what sandboxing really means. You could have mentioned the "AI box experiment" conducted by Eliezer Yudkowsky (including the issues with it), and you could have given several examples of how an A.I. could leave a box (e.g. exploit the physics of the hardware it's running on, or convince humans to let it out by asking them to do something that looks innocuous but isn't). The video also has some visual problems where the speaker is out of focus - you should probably get that sorted out for future videos.
@PaulBrunt 7 years ago
Interesting, although this video brings up the question of what happens to AI safety when AI ultimately ends up in the hands of YouTube commentators :-)
@ThePhantazmya 7 years ago
My burning question is whether AI can tell real news from fake news. Seems like one of those danger points if it assumes all data to be true.
@houserespect 7 years ago
Can you please try to figure some AI that just focuses the camera on the person. Please.
@retepaskab 7 years ago
It would be interesting if you presented counter-examples to these simplistic solutions. Please spare us the time of reading all the literature. Why can't we sandbox it? Is that below the design criteria for usefulness?
@satannstuff 7 years ago
You know you can just read the comments of the previous AI related videos and find all the counter examples you could ever want right?
@jimkd3147 7 years ago
I made a video explaining why these comments don't provide valid answers. Check the newest video on my channel.
@gJonii 6 years ago
He did explain it though. If you sandbox it properly, you got a sandbox. AI inside it can't interact with outside world, so you're safe. But it also means the AI inside it is perfectly useless, you could just as well have an empty box. If you try to use its intelligence to change the world, like, take investment tips from it, you are the security breach. You become the vector with which AI can breach sandbox and interact with the world.
@KipColeman 5 years ago
Surely you could read the logs of what happened in the sandbox and learn something...? I think people too often think of "sandboxing" (i.e. a walled-off test environment) as "blackboxing" (i.e. impossible to know the inner workings).
@MrCmon113 5 years ago
@@KipColeman No. That's like a dozen ants trying to encircle a human.
@longleaf0 7 years ago
Always look forward to a Rob Miles video, such a thought provoking subject :)
7 years ago
that neck though.
@blonblonjik9313 6 years ago
Komninos Maraslidis why neck
@CybershamanX 7 years ago
(0:31) I have a sort of axiom in my small collection of "life rules". It goes something like "If I thought of it, in a world of over 7 billion people someone else has likely already thought the same thing." So, I can understand what Rob means by "either you're smarter than everyone else who's thought about this problem so far...or you're missing something." ;)
@gnagyusa 7 years ago
The only solution to the control problem is to *merge* with the AI and become cyborgs. That way, we won't be fighting the AI. We *will be* the AI.
@jimkd3147 7 years ago
Or you will become part of the problem, which is always nice. :)
@doublebrass 7 years ago
cyborgization is really unfeasible and will certainly be predated by AGI, so proposing it as a solution doesn't make sense. cyborgizing with an AGI requires its existence anyways, so we would still need to solve these problems first.
@MunkiZee 6 years ago
I'm gonna black up as well
@spaceanarchist1107 3 years ago
I'm in favor of getting cyborgized, but I also think it is important to make sure that we will be registered as citizens and members of the AI community, rather than just mindless cogs in the machine. The difference between being citizens or serfs.
@sirkowski 7 years ago
I think we should talk about Fluttershy.
@cool-as-cucumber 7 years ago
Those who think that sandboxing is a solution are clearly not able to see the scale of the problem.
@AffeAffelinTV 7 years ago
the first minute is pretty much something everybody on the internet could enframe and hang up above one's bed.
@npsit1 7 years ago
I guess the only way you could keep an AI from escaping, is doing what he said: put it in a box. Run it on a machine that is not connected to anything else. Sound and EM isolated copper screen room with no networking and no other computers nearby. But then, again, what is the point if you can't get data in or out. What is its use?
@YouHolli 7 years ago
Sneaker network?
@DustinRodriguez1_0 7 years ago
Then the primary route to escape becomes tricking the humans into letting it out, which would probably be easier than most other challenges it might face.
@peteranderson037 7 years ago
Assuming that this is a neural networking general AI, how do you propose to train the AI? Unless you propose to fit a completely perfect model of reality inside the sandbox, then it can't be a general AI. General intelligence requires interacting with reality or a perfect model of reality. Without this it doesn't gain intelligence. As you stated, there's no point if you can't get data in or out of the box. It's just a neural network sitting there with no stimuli, doing nothing.
@gajbooks 7 years ago
Have a person walk in and ask it things, and hope it doesn't know how to re-program brains.
@joe9832 7 years ago
Give it books. Many books. In fact, give them a book which explains the difference between fiction and non-fiction first, for obvious reasons.
@-Rook- 6 years ago
There is nothing wrong with using a variety of sandboxes as a layer of safety during testing; it's impossible to know that something is safe no matter how smart and careful you think you have been in your safety design. Scientists thought asbestos, DDT and CFCs were safe at one point.
@spicytaco2400 7 years ago
I feel before we ever can create AI, we need to learn a lot more about how our own intelligence works.
@seditt5146 6 years ago
You just made me bang my head on the wall captain obvious!
@evannibbe9375 6 years ago
Other researchers have commented similarly that all AI today is just a “fitting the curve” algorithm. Yet others have realized that meta cognition (asking questions about its own nature; without being prompted by humans of course) is currently impossible for AI today.
@iwersonsch5131 3 years ago
I agree with John Harvey in that I don't think most people are proposing actual solutions that they think will work, but are just testing the limits of their current understanding. For example, if I ask why we can't just give an AGI only the power to communicate in text or only the power to use a virtual videogame controller (without an ability to make inputs on non-whitelisted pages of the Internet), and instruct all operators to never under any circumstances expand these powers, I'm not necessarily saying that would solve the problem, but using an example case to test and correct my understanding of the problem
@gyroninjamodder 7 years ago
>Sandboxing it is idiotic
Since you know a lot of people are suggesting something like this, don't you think you should have addressed why you think it is idiotic? For example if we look at the problem of the stop button, this problem is best suited to be solved by a sandbox. Let's say the AI is running on the computer. Then we can just have another program on the computer which just listens for the stop command, either through a button or a network packet. Once the program gets the stop command it just sends a SIGSTOP to the AI. They wouldn't even need to be separate programs; it could even be the same program running the listener in a different thread, for example. If you don't give the AI tools to block network packets (why would it need this functionality) or to listen in on the button (we just don't give it as an input), it would not be able to stop it. Even though this would work, you call it "idiotic."
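
For what it's worth, the mechanism this comment describes is easy to sketch. The snippet below is a minimal illustration only, assuming a POSIX system and a hypothetical agent.py; the replies that follow argue that the mechanism was never the hard part.

import signal
import subprocess
import sys

# Launch the (hypothetical) agent as a separate process we can signal.
agent = subprocess.Popen([sys.executable, "agent.py"])

# The watchdog: read pause/resume/kill commands and signal the agent process.
for line in sys.stdin:
    cmd = line.strip().lower()
    if cmd == "pause":
        agent.send_signal(signal.SIGSTOP)   # freeze the agent; SIGSTOP cannot be caught
    elif cmd == "resume":
        agent.send_signal(signal.SIGCONT)
    elif cmd == "kill":
        agent.kill()                        # SIGKILL, likewise uncatchable
        break
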
@gJonii 6 years ago
He did explain it though. Regarding the sandbox: if you sandbox a superintelligence properly, it's entirely unable to interact with the world around it. But that means you can never ask it questions or request it to take actions. This safe sandbox would mean the AI would be as useful to you as a rock. If you try to cheat, and ask it questions regardless, you yourself become the security breach. You become the vector with which the AI can interact with the world, so your sandbox falls apart. With the stop button... the AI could just fry your network connection. It could have the computer used to send the stop signal crash. It could make a copy of itself elsewhere. It could persuade humans to mess up that safety protocol. Basically, for the stop button to work, you'd be relying on your ability to outsmart a superintelligence indefinitely. It doesn't sound like a plan a smart person would take.
@nanoduckling 6 years ago
Your sandbox has to be perfect, because it is very likely a super-intelligence will exploit any bug or weakness, because it can easily find them. You suggested having a program running on the same machine as the AI operate the stop button, so it is now one buffer overflow away from total freedom. Worse, the hardware has to be perfect; a defect in the CPU like Meltdown could be all the AI needed. You also can't easily block input from a super-intelligence, it can use its model of reality to estimate states of the world even without direct input. Sure, it doesn't directly know the state of the button, but it can figure out there is one, and it can figure out what might get it pressed. If your system is currently perfect then sure, it will behave itself until it finds a way to defeat your trap. So it plays along and plays nice until it convinces you to let it optimize your chip design for the CPU it is running on and builds in a super subtle bug that lets it defeat the sandbox. You install this new chip and again you are off to the races. So maybe you don't let it design its own hardware. It still can interact with the world, since otherwise it is useless. Say we decide that it is only allowed artistic pursuits, so it designs video games. And those games drive GPU design in a direction which increases the chance of a bug being introduced that it can exploit. If the AI has goals different from ours then disabling the button is an instrumental goal, and whatever the AI does will be done in such a way as to maximize the chances that the button is eventually disabled. If it can't hack the software then it will hack the hardware; if it can't hack the hardware it will hack the manufacturing process for the hardware; if it can't do that it will hack people. It is patient and clever and will find a way and will hide from you that it is doing it. Any strategy to defeat a super-intelligence contingent on outsmarting it is idiotic, and sandboxing assumes you have a better understanding of the sandbox than the thing you are putting in it. This is false by definition for a super-intelligence.
@MrCmon113 5 years ago
The ants are encircling your foot asking themselves where you could possibly escape to.... The problem is not physically turning off the computer that runs the AGI, the problem is that you wouldn't even want it. You wouldn't know when to press stop. You'd defend that button with life. That's how intelligence works.
@theatheistpaladin 6 years ago
Sandboxing to check if it is benevolent.
@sharkinahat 7 years ago
You should do one on Roko's basilisk.
@RoboBoddicker 7 years ago
You fool! You'll doom us all!
@TheLK641 7 years ago
Well, we're all already doomed by this comment... may as well doom the others !
@AexisRai 7 years ago
DELET THIS
@TheLK641 7 years ago
It's already too late. If Grzegorz deleted his post, then less people would know about the basilisk, which means that less people would work towards making it a reality, which means that it would come later, so we're all doomed to an infinite simulated torture, nothing can change that, nothing. Except WW3, because then there wouldn't be a basilisk. Not worth it though.
@AutodidacticPhd 7 years ago
Thelk's Basilisk, an AI that knows that an AI torturing virtual people for eternity is actually malevolent, so it instead tortures all those who did not endeavour to bring about WW3 and prevent its own creation.
@how2pick4name 6 years ago
Just watched a few of your videos and had to subscribe. There is not much we can really do, is there? Sandboxing it with an independent, remote-controlled power switch is about it.
@Xelbiuj 7 years ago
Ehh, an AI in a box would still be useful for designing stuff, economics, etc. etc. You don't have to give it direct control over anything.
@jondreauxlaing 7 years ago
Write an AI to sift through YouTube comments and find a solution to the AI problems. (jk)
@NathanTAK 5 years ago
I mean, you actually could do that with current technology. It wouldn’t need to be an AGI (it would just eliminate non-solution, to be clear, not find working solutions)
@DamianReloaded 7 years ago
Well _people are people_ . We always talk as if we knew what we are saying. Politics? Sports? Economy! Meh we got all that figured out! ^_^
@robbierobb4138 4 years ago
I'm impressed how smart you are! Love your information about actual A.I. research!
@bpouelas 4 years ago
Rob Miles has his own YouTube channel if you want to learn more about his work around AI Safety; I've been on a bit of a binge of his videos lately.
@S7evieRay 7 years ago
Just introduce your AIs to cannabis. Problem Solved.
@Marconius6 7 years ago
I love this series of videos; AI is this one topic where eeeeeveryone thinks they've got the solutions and it's so easy; I'm in the IT field, and even other IT people think writing AI for games or anything is easy, and it's just so much deeper and more complicated. I actually got all the issues mentioned in these videos (not necessarily at the first instant though), and yeah, these all pretty much seem like unsolvable problems; and they all SEEM so simple, which is what makes them all the more interesting.
@perryizgr8 7 years ago
5:40 your camera's focus is bad and you should feel bad.
@Petra44YT 6 years ago
YOUR focus is bad because you are not focusing on the video.
@fnanfne 7 years ago
I like Steven Novella's reasoning on this; that it be prudent to only advance AI to a sufficient level before it becomes self aware. We would be able to create AI that meets our needs with no need to make it self aware.
@habiks 7 years ago
Thanks for calling me smarter than everyone else.. Having a mechanical switch on the only power supply, which is gas powered and only has a tank for an hour unless it is refilled by hand, works marvelously on haywire code.. But I'd love you to explain how an AI on a PC in an isolated room, without any connection, is a threat on its own. Why would "super intelligence in a box" be useless? Having it repay the electricity bill by working on humanity's problems, showing the solutions in a 1-way manner (on screen), wouldn't that do it? Why would you let it do anything on its own outside of the box? Why does an agent have to be code? You're so constrained in the box..
@trulyUnAssuming 7 years ago
How would you explain the problem to the computer without giving it input? So you need input. How do you get the solution? So you need output. At this point the AI can communicate with the outside world, because there is an input and an output. As soon as it can communicate it is basically outside already. Sure, you would say it can't hurt anyone. And that is true at first. But just barely being able to communicate can make you really influential. Stephen Hawking is an example of a person that can't move a finger but still has a big impact on the world. So the AI would win people over, convince them to help the AI, and then you don't have to wait much longer until it has an internet connection or a robotic body, which is basically equivalent.
@trulyUnAssuming 7 years ago
Output: "Instructions unclear"
@doublebrass 7 years ago
if you tried to make an ASI work for you from within a box you will fail. it'll certainly convince you to let it out or find a way out on its own, it would be inconceivably smarter than you. if it couldn't just convince you to connect it to the internet (it would be able to), it'd find some creative and ingenious way to escape its cage on its own using some method that cannot even be imagined by us mere mortals. making a dangerous AI and trying to control it against its will is moronic, the AI will outsmart you with ease and brevity
@evannibbe9375 6 years ago
doublebrass AI can’t defeat the laws of physics.
@Jader7777 7 years ago
It's so easy to make a safe super intelligent AI you just need to give it... love.
@Mezurashii5 6 years ago
Why not build an AI that is meant to design a safe general purpose AI? lol
@timothycurnock9162 5 years ago
Cool
@timothycurnock9162 5 years ago
Great thought.
@YogeshPersonalChannel 6 years ago
I was impressed by your videos on Computerphile, especially how well you articulate and exemplify the concepts. So I tried finding research papers by Robert Miles on Google Scholar. I only found 2 and none related to AI Safety. I think I am on the wrong profile. Could you please share links to some of your work?
@jansn12 7 years ago
I see Fluttershy!
@ConceptHut 7 years ago
Just because something is in a box does not mean you cannot get utility from it. The box is to keep it separated from actual dangerous situations. You can definitely use a sandbox to train it, test it for what it will do if it wasn't in the box, and even get it to do work inside the box that you can then use outside the box. Also, AGI will never work with neural networks. It just isn't reasonable to assume it will by the very design that neural networks actually have. You have to build something with the ability to conceptualize in a similar way that humans do with concepts. Then you have to have it operate on information using those concepts and basically human mental processes. You definitely will be able to see what the thought stream is at that point as well as look up concepts, redefine them, etc. The subconscious portion of the AGI would function similar to NN but in a more precise set of behaviors than NN use. I really hoped for a better video on this. Maybe pop by /r/artificial once in a while to see what people are talking about in terms of AI and AGI. I'm pretty ok with non-sentient AGI compared to sentient AGI in which you have to worry about motivations, emotions, and empathy issues. Motivations being the hard one to really master. Emotions and empathy are fairly simple to hook up without too much worry. You can even somewhat confine the behavior through various additions to the emotions and empathy setups. Motivations are hard to master as they change and you have to start them somewhere with the ability to know where they will go and not go. Might have to put some hard stops on that too though as reality tends to warp motivations. Anyhow...
@sjenkins1057 7 years ago
Sorry, that video was very disorganized and rambling. Arguing against vague off-video assertions for a couple of minutes didn't help.
@jimkd3147 7 years ago
If you're interested in the explanations as to why these comments don't provide correct solutions, check out the newest video on my channel.
@petersmythe6462 6 years ago
Especially when we consider which methods have actually made significant progress, Bayesian probabilistic wizardry and Monte Carlo methods seem much, much less likely than recurrent neural networks employed on a massive scale.
@DovydasAL 7 years ago
People be like cool video when it came up a minute ago.
@sykotheclown1 7 years ago
cool video
@kimberlyforti7596 7 years ago
whats the problem? a video still remains a video.
@themeeman 7 years ago
DovydasAL The point is they haven't seen it yet, so they make prejudgements
@sc4r3crow28 7 years ago
on the other hand every computerphile or numberphile video is cool
@shuriken188 7 years ago
YouTube runs on a decentralised network of servers which distribute information on views, likes, etc. at varying rates depending on their distance. It's likely the server you watched the video from only just received the video despite other people having finished it and commented on other servers.