AI Safety Gym - Computerphile

Computerphile
2.4M subscribers
121K views

Published: 28 Sep 2024

Comments: 191
@AcornElectron 4 years ago
Awesome. I've watched literally everything Rob has recorded on AI. He's very relatable, knowledgeable and informative.
@RasperHelpdesk 4 years ago
"A ship in harbor is safe - but that is not what ships are built for."
@imveryangryitsnotbutter 4 years ago
"The Earth is the cradle of the mind, but one cannot eternally live in a cradle." - *Konstantin Tsiolkovsky,* _from a letter written in 1911_
@Sonny_McMacsson 4 years ago
Except in Peal Harbor.
@Catcrumbs 4 years ago
The ships attacked in Pearl Harbour were safer there than if they had been attacked in open water. Almost all the ships sunk there were raised to fight again.
@Blox117 4 years ago
@@Sonny_McMacsson Never heard of any ships sunk at this "Peal Harbor".
@iam3377 4 years ago
@Blox117 TENNO HEIKA BANZAI
@hattrickster33 4 years ago
Looks like they're taking security very seriously. This guy is always kept inside a prison to stop his rogue AI pets from escaping.
@letsgobrandon416 4 years ago
I really love listening to Rob's explanations.
@intron9 4 years ago
Enable subtitles, please.
@7333-e3k 4 years ago
That is an absolutely miserable classroom!
@zwz.zdenek 4 years ago
I thought it had to be used by soldiers or something.
@ashurean 4 years ago
5:19 Would it be possible to mix VR and test simulations to have real humans interact with the simulated machine? Just open it to the public and you'd have all the "real human reactions" you'll ever need.
@oxybrightdark8765 2 years ago
When real, unselected humans mess with machines, they will invariably try to teach the machine bad things. For instance, look up what happened to Microsoft Tay.
@Theoddert 4 years ago
*clears throat* I am a simple robot. I see a Rob Miles AI video, I like it.
@markhall3323 4 years ago
I liked the content but not the adverts - too intrusive.
@ragnkja 4 years ago
Mark Hall And, in this case, too loud.
@Speed001 4 years ago
@@ragnkja Other than that, I was okay with it.
@RedByte1608 1 year ago
Hello, I have a question about this topic: is it possible to confine these robots to an environment where they can't harm any humans, but can still do all the tasks given to them? For example, a warehouse with no way out for the robots, where they can do all the warehouse work, or a commercial kitchen where they can only interact with the kitchen and nothing else. I think the best solution is to separate these robots from humans as much as possible. I believe it is impossible to develop an algorithm that can cover every hazard and avoid harming a human being.
@lHenry97 4 years ago
What exactly is the difference between reinforcement learning penalties and these constraints?
@danieljensen2626 4 years ago
Seems like the penalty basically becomes infinite after a set number of negative outcomes, and you program that limit in yourself. There are probably other differences, but I don't know enough to understand them.
@HalcyonSerenade 4 years ago
From what I can tell, penalties are negative events that are _responded_ to, whereas constraints are considered _before_ they're violated. Assigning a _penalty_ to hurting a human wouldn't be ideal, because then the AI would only learn not to do that _after_ it had already hurt someone. That's the high-level concept as I understand it... I'm actually pretty interested in getting into machine learning and might do some research into the topics discussed in this video, so maybe I'll make a follow-up comment (or edit this one) with a more robust answer if someone else doesn't give one in the meantime :P
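One concrete way to see the penalty-vs-constraint distinction this thread is circling: in constrained RL (the formulation used in OpenAI's Safety Gym baselines), the trade-off between reward and cost is not a fixed, hand-tuned penalty but a Lagrange multiplier that adapts to keep expected cost under a budget. A minimal sketch, with all numbers invented for illustration:

```python
def penalty_return(reward, cost, c=10.0):
    # Penalty approach: one fixed trade-off, tuned by hand once.
    return reward - c * cost

class LagrangianConstraint:
    # Constraint approach: the multiplier lam is adapted online so that
    # the agent's average cost is driven toward a budget d, rather than
    # punishing each incident with a fixed weight.
    def __init__(self, d=0.1, lr=0.05):
        self.d, self.lr, self.lam = d, lr, 0.0

    def shaped_return(self, reward, cost):
        return reward - self.lam * cost

    def update(self, avg_cost):
        # Over budget: the effective penalty grows.
        # Safely under budget: it shrinks (clamped at zero).
        self.lam = max(0.0, self.lam + self.lr * (avg_cost - self.d))
```

The practical difference: with a fixed penalty you must guess the right weight in advance; the Lagrangian version discovers it during training from the cost budget you actually care about.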
@JulianDanzerHAL9001 4 years ago
Does everything have to be controlled by learning? I get that it's a nice theoretical exercise which might become relevant eventually, and these are just examples - and this is partially already done - but for a robotic arm, for instance, I'd use learning only to output a desired hand location, then use (comparatively) simple inverse kinematics to figure out how to move the arm to get the hand there, while also checking that the arm cannot get anywhere near a human. The learning part has no direct control of the arm, and if it tries to move the arm through the human, the kinematics won't let it and it will have to find a way around.
@BurningApple 4 years ago
The robot arm is a toy problem - it doesn't map to all cases, e.g. a robot learning to walk.
@JulianDanzerHAL9001 4 years ago
@@BurningApple Yeah, but in many applications a similar, though more complex, solution might be doable. If you have a walking robot controlled by a learning algorithm, you could instead have two learning algorithms and a set of simple geometry equations: the first learner tries to solve a problem and tells the robot where to go, the geometry limits where that goal location CAN be (not near humans, cars, fragile objects, etc.), and the second learner moves the robot, but its goal is not to solve a problem, only to reach the (previously limited) location. It can't work everywhere, which is why this kind of research is important, but I think it's sometimes an overlooked solution in practice.
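The "geometry limits where the goal can be" layer described in this thread is easy to prototype. A hypothetical sketch: project a learner-proposed 2-D target out of circular keep-out zones (e.g. around humans) before any motion planner ever sees it. The zone layout and margin are invented for illustration:

```python
import math

def filter_target(target, keepout_zones, margin=0.5):
    """Project a proposed (x, y) target out of circular keep-out zones.

    keepout_zones is a list of (cx, cy, r) circles; the filtered target
    is guaranteed to sit at least r + margin from each zone centre.
    """
    x, y = target
    for (cx, cy, r) in keepout_zones:
        d = math.hypot(x - cx, y - cy)
        safe_r = r + margin
        if d < safe_r:
            if d == 0:
                # Target exactly at the centre: pick an arbitrary direction.
                x, y = cx + safe_r, cy
            else:
                # Push the point radially out to the safe boundary.
                x = cx + (x - cx) * safe_r / d
                y = cy + (y - cy) * safe_r / d
    return (x, y)
```

The learner never controls the arm directly; whatever it proposes is filtered first, which is exactly the separation of concerns the comment argues for.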
@theMifyoo 4 years ago
Here's an idea: baby robots. You make a smaller-scale, perhaps squishier, body for learning and train the robot in that. That way the "baby" can flail about while learning what is and isn't safe without harming anything.
@witeshade 4 years ago
I think I use the internet too much. I read "fasthosts" as "fast thots".
@ciarfah 4 years ago
Daniel G Begone
@PopeGoliath 4 years ago
I read "fast thots" as "fast tots" and really wanted me some drive-through taters.
@qeithwreid7745 3 years ago
What would be a typical task for the first generation of AI?
@SkarbowkaZokopane 4 years ago
Dude looks like skinny Ethan from H3H3.
@pb-vj1qs 4 years ago
What is your channel?
@RobertMilesAI 4 years ago
It's just "Robert Miles AI".
@pb-vj1qs 4 years ago
@@RobertMilesAI Thanks!
@mohammedmohammed519 4 years ago
Robert Miles ok
@mikescott7530 4 years ago
Bamzooki with extra steps.
@gowikipedia 4 years ago
Rob Miles legitimately looks jaundiced, and has done for ages. Someone tell him to eat better.
@gowikipedia 4 years ago
It's not OK to have waxy, yellow skin.
@denisschulz3814 4 years ago
He looks normal 😅
@gowikipedia 4 years ago
@@denisschulz3814 False - look again.
@THEPHILOSOPHYIS 4 years ago
Hey! I really like your videos. I'm learning JSP right now, after completing the basics of Java. Could you please make a video on why scriptlets in JSP are discouraged? Thanks.
@Jojoxxr 4 years ago
Griswold
@R.Daneel 2 years ago
So if you want to create a self-driving car, you release it half-finished and tell people they need to keep their hands on the wheel. Then you pay very close attention to when the driver makes corrections to what the autopilot is doing. Or, if you find people are voting videos down only because lots of other people have, polluting the data, you hide the downvotes. The motivations of all modern companies suddenly look very different from the old-school "maximize profit".
@kasuntharaka8040 1 year ago
Gym???
@deanvangreunen6457 4 years ago
200 points = don't touch baby; 100 points = make coffee; 50 points = push power buttons. AI = greedy reward function.
@blackmage-89 4 years ago
Common sense seems to be the most difficult thing for AIs to learn.
@sevrjukov 4 years ago
The blue car at 7:42 has a "1337C" licence plate. A Rick & Morty reference? :-)
@MmKayUltra1 4 years ago
Leet reference
@Pehr81 4 years ago
1337
@AlexandreGurchumelia 4 years ago
Safety gym? That sounds so cringe.
@DrumsKylePlays 4 years ago
Miles is an excellent teacher. He always does a great job fielding questions from a layperson.
@sandwich2473 4 years ago
His channel has a bunch of videos that are great to play on a second monitor, or in the background, to learn stuff. He's pretty cool.
@ragnkja 4 years ago
The sponsor intro is too loud. Edit: as is the sponsor segment at the end.
@AustinSpafford 4 years ago
Indeed - the video content's volume was comparable to other videos I had been watching, but that sponsor callout at the beginning was so loud that I found myself swearing and scrambling for the volume control. Understandably, mistakes happen, and it's unfortunate that only YouTube themselves have access to editing published videos.
@Petertronic 4 years ago
It made my cat jump.
@philrod1 4 years ago
It nearly woke my child! 😱
@SproutyPottedPlant 4 years ago
Hint: you can use the volume control to adjust the volume.
@philrod1 4 years ago
@@SproutyPottedPlant After the fact? It's not as if there was a warning.
@danielm9753 4 years ago
I used to know this guy. Glad he's still at it. Easily one of the smartest dudes I've met in person.
@janzacharias3680 4 years ago
I first read "prison", and was like, what?
@null-bd7xo 4 years ago
@@janzacharias3680 bruhhhhhh xddddddd
@moritzschmidt6791 4 years ago
Still no PhD yet, and I guess he's not as smart as some people here think.
@lullah85 4 years ago
@@moritzschmidt6791 Is getting a PhD a benchmark for smartness?
@moritzschmidt6791 4 years ago
@@lullah85 Well, I'm sure that if someone tries hard to get a PhD and doesn't get it, he's not as smart as someone who got one under the same conditions. Right?
@DIECARS1 4 years ago
Never knew Notts uni had a prison to film in.
@Ceelvain 4 years ago
For, you know, reenacting the Stanford prison experiment. :D
@johnhudson9167 4 years ago
It's a safety gym for academics.
@zacmg 4 years ago
Pretty sure it's at the Nottingham Hackspace.
@LeoStaley 4 years ago
Rob's video on the 3 laws of robotics is what really demonstrated to me how serious AI safety really is.
@TheStarBlack 4 years ago
7:58 That artificial camera movement is both trippy and impressive!
@esquilax5563 4 years ago
I like the fact that young Robert uses the same Simpsons references I remember from 20-odd years ago.
@silaspoulson9935 4 years ago
Could you link the paper?
@joshie228 4 years ago
I'm a man of simple tastes - I see Rob Miles, I press the like button.
@gasdive 4 years ago
It tickles my reward function.
@arthurcheek5634 4 years ago
gasdive hahahaha
@doodlebobascending8505 4 years ago
I initially read this as "AI Sentry Gun" and thought Rob was having a crisis.
@moon_bandage 4 years ago
He never ended up explaining what this "gym" thing is :(
@giampaolomannucci8281 4 years ago
I think he did. He first said these are places where you train AI, then moved on to explaining what "training AI" means.
@Hexanitrobenzene 4 years ago
? At 12:43, the entities which the AI can control in a "gym" are presented. Then, at 13:26, the obstacles are presented. The whole video presents a framework that helps develop safer algorithms, which can then be benchmarked in the "gym" for their safety.
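For anyone in this thread still unsure what a "gym" is concretely: it's an environment object with a reset/step interface that a learning agent interacts with, and a safety gym additionally reports a per-step cost alongside the reward (in OpenAI's Safety Gym this arrives in the step's info dict as a cost signal). A self-contained toy version, with the layout and numbers invented for illustration:

```python
import random

class ToyConstrainedEnv:
    """A 1-D corridor: walk right to the goal at +5; a hazard sits at +3.

    Reward comes only from reaching the goal; stepping on the hazard does
    not end the episode but reports cost 1, mimicking the reward/cost
    split a safety gym exposes.
    """
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: -1 or +1
        self.pos = max(-2, min(5, self.pos + action))
        reward = 1.0 if self.pos == 5 else 0.0
        cost = 1.0 if self.pos == 3 else 0.0  # hazard, not failure
        done = self.pos == 5
        return self.pos, reward, done, {"cost": cost}

# A random agent accumulates reward and, separately, cost:
env = ToyConstrainedEnv()
obs, total_cost, done = env.reset(), 0.0, False
for _ in range(10_000):
    obs, r, done, info = env.step(random.choice([-1, 1]))
    total_cost += info["cost"]
    if done:
        break
```

A safe-RL algorithm is then benchmarked on both totals at once: maximise the reward it collects while keeping the accumulated cost under a budget.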
@abcdemnopq3583 4 years ago
Fab and super interesting video. Also very much appreciated your [Rob's] EA talk yesterday - will definitely be checking out the AI safety field in more depth.
@PanicProvisions 4 years ago
If he stays at it, in 20-30 years this man will be in the position that people like Neil deGrasse Tyson, Bill Nye or Lawrence Krauss are in today, once AI really starts taking off and people go looking for public educators who have been tackling this for decades.
@reedl9452 4 years ago
"You can't train self-driving cars safely in the real world." Tesla fanboy has entered the chat.
@zwz.zdenek 4 years ago
More like - Tesla: hold my electrolyte!
@Speed001 4 years ago
Ehh, controlled environment.
@U014B 4 years ago
10:59 Well, pens and mugs are both toruses, so you really wouldn't need to change anything.
@Hexanitrobenzene 4 years ago
Mugs - yeah, but pens?
@007filko 4 years ago
If we look at how human babies tend to learn, it's usually also by doing random things, which often happen to be quite dangerous, even if only to the baby itself. And it's not that a baby crawling around can't do anyone harm. The difference, I believe, is that a human baby is under constant supervision by its parent(s). We know perfectly well that it's impossible for any human to constantly observe and analyse an AI's learning process, even with reward modelling. If something dangerous could happen, we would have to sit with a power-off button in a virtual world, predicting when the agent is going to crash into or destroy something, and then manually give negative feedback. But maybe a solution worth considering would be a kind of "parenting agent", trained specifically to predict the learning agent's actions and to switch it off when it detects a possible disaster? To put it another way - to have this constraint in the form of another trained AI?
@jimijenkins2548 2 years ago
Okay, now train the parent AI.
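The "parenting agent" idea has a simpler, non-learned cousin in the safe-RL literature, usually called shielding: a supervisor wraps the learner and overrides any proposed action it predicts leads to disaster. A minimal sketch, where the unsafe-action predicate and the fallback action are hand-written placeholders (the comment's version would replace the predicate with a trained model, which just moves the trust problem into the predictor):

```python
class Shield:
    """Wrap a learner's policy; veto actions predicted to be unsafe."""
    def __init__(self, policy, is_unsafe, fallback_action):
        self.policy = policy          # the learning agent's action chooser
        self.is_unsafe = is_unsafe    # predicate: (state, action) -> bool
        self.fallback = fallback_action
        self.interventions = 0        # how often the "parent" stepped in

    def act(self, state):
        action = self.policy(state)
        if self.is_unsafe(state, action):
            self.interventions += 1
            return self.fallback      # override with a known-safe action
        return action

# Example: never let a 1-D walker step below position 0.
shield = Shield(policy=lambda s: -1,               # reckless learner
                is_unsafe=lambda s, a: s + a < 0,  # would fall off the edge
                fallback_action=0)                  # stand still instead
```

The reply's objection still applies: the shield is only as good as its predictor, whether that predictor is hand-written or trained.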
@locarno24 4 years ago
Completely agree. Big safety failures - in organisational structure, in real-world industry, or whatever - usually occur because of either unknown elements in the environment or unexpected interactions by known elements. Because, at a stupidly obvious level, if you could predict it, you would (you'd hope) have done something about it. Thanks for the description of constraint learning. Keeping constraints and goals as modular elements is one of those things that makes obvious sense *once* someone explains it to me.
@Danicker 4 years ago
Sneaky Hitchhiker's reference ;) Love it!
@mohamedhabas7391 2 years ago
Miles is an excellent teacher. 👨‍🏫
@elephantwalkersmith1533 4 years ago
Nonlinear optimization methods like SQP often include constraints. This is very common in fields other than machine learning. The problem with constraints is that their formulation is actually very difficult, and infeasible-path optimization is necessary to solve the learning problem.
@cheaterman49 4 years ago
The path optimization thing - is it like hitting a local minimum because of constraint boundaries, preventing the exploration of a better solution?
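For the curious, the kind of constrained nonlinear optimization the SQP comment describes is readily available in SciPy's SLSQP solver (SciPy assumed installed): the constraint is handled natively by the solver rather than folded into the objective as a penalty. A small worked example, with a made-up objective:

```python
from scipy.optimize import minimize

# Minimize (x - 2)^2 + (y - 1)^2 subject to x + y <= 1.
# The unconstrained optimum (2, 1) violates the constraint, so the
# solver lands on the constraint boundary instead.
res = minimize(
    fun=lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2,
    x0=[0.0, 0.0],
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda x: 1 - x[0] - x[1]}],
)
```

By hand, projecting onto the boundary x + y = 1 gives the optimum (1, 0), and that is where the solver converges.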
@raleighcockerill 4 years ago
Engagement
@TheBinaryHappiness 4 years ago
1337 plate number, aww yeahh!
@ri-gor 4 years ago
The license plates are 1337 XD
@95reide 3 years ago
I consider myself to be quite knowledgeable about The Hitchhiker's Guide to the Galaxy, and as best I can tell, he butchered whatever he was trying to reference. If I'm correctly inferring what he was going for, it's the 42 bit. Engineers: *build a super-powerful computer system* "What's the answer to the ultimate question of life, the universe, and everything?" Computer system: ... *7.5 million years later* "42. You asked for the answer to the ultimate question, but you'll need an even more sophisticated system to figure out what the right question is."
@Qkano 1 year ago
Great talk! 10:15 Rename it: just imagine the "Biden Robot" trying to "avoid a recession"... and discovering it can "completely redefine what a recession is" instead of actually changing the economy.
@rtg5881 2 years ago
I don't know, it seems fairly straightforward to me. Work out how often humans crash - say, every 500 trips or every 10,000 kilometres - and then whatever reward the car gets for 500 trips or 10,000 kilometres becomes the negative reward for a crash. Sure, you should probably refine it by severity of impact, and of course there are some things I'm happy to take a greater risk on than others. Maybe I have a medical emergency and need the AI to get me to the hospital quickly. Or maybe I need the AI to flee from the police for me. Maybe we can have a dial for that. Certainly it should be entirely open to modification by the owner of the car, or he's not really the owner.
@BlenderDumbass 4 years ago
Can we make the sponsor segment just sit somewhere at the end of the description?
@jetjazz05 4 years ago
So the problem with AI is that it's like a moving target, where the target can move in an almost infinite number of ways. Nice.
@drawapretzel6003 4 years ago
They just need to make an AI that simulates the target, and then simulates how it would get the target.
@temptemp563 3 years ago
...like watching an AI learn how to program an AI...
@ExOster-ys9sj 3 years ago
Where can I find the paper? I looked it up on Google Scholar and can't find it!
@Marina-nt6my 1 year ago
13:39 😂 I love how they named all these things.
@Shabazza84 11 months ago
Number 5 needs more input...
@konradw360 4 years ago
Sponsor? A sponsor!
@smithwilliams5637 3 years ago
License plate "1337 C"? L33t, don't mind if I do.
@WillToWinvlog 4 years ago
This is one of Ben Schwartz's characters!
@AndreRhineDavis 3 years ago
OMG, I never realised before how much Rob Miles actually looks like Ben Schwartz!
@TheArchsage74 4 years ago
Damn, didn't know Ben Schwartz knew so much about AI.
@HenrikoMagnifico 2 years ago
I want more videos with Miles.
@littlebigphil 4 years ago
This is an AI safety paper that has potential for immediate positive consequences in the real world.
@russiaprivjet 4 years ago
A little vague, aren't we?
@littlebigphil 4 years ago
@@russiaprivjet Well, most AI safety research that I hear about is more concerned with artificial general intelligence. It's refreshing to see work being done on problems we are currently facing.
@AA-qi4ez 4 years ago
Oooof... "Doggo." Some top-quality memes, AI researchers.
@the1exnay 4 years ago
I was thinking about how I explore safely. A simplified, AI-friendly version of it could be: I assess the likelihood of a negative outcome, then apply a penalty equal to that probability multiplied by the value of the negative outcome. So if there's a 0.1% chance of me dying, and dying is -1,000,000, then I'd apply -1,000 to the action. But I also account for uncertainty in a way that increases the likelihood I'll explore something while also increasing the care taken exploring it: a reward for learning, plus an increase to the penalty proportional to how uncertain it is, which encourages finding the safest, surest way to get the answer, even if the safe, sure way takes longer. I'm not sure how easy that would be to turn into an actual program, or how effective it would be, but it seems reasonable to try copying humans. It doesn't really solve how to get started, though, because flailing like a baby with an arm that weighs a ton is a horrible idea. Maybe it's possible to give the AIs neutered bodies to learn with before transferring them to a more dangerous body?
@subschallenge-nh4xp 4 years ago
Did life make experience, or does experience make life? Seriously.
@the1exnay 4 years ago
william polo valerio I don't understand what you're asking.
@gasdive 4 years ago
The other thing I've noticed is that they're seemingly not programming in boredom. I get bored doing the same thing all the time. This seems to prevent me getting stuck in a local optimum. For example, I'll drive the same route to work every day, but then get bored and try a quite different route, expecting it to be slower - but occasionally it's faster, or less stressful, or smoother. In other words, I intentionally reduce expected reward in the hope of getting something unexpected.
@LochyP 4 years ago
@@gasdive I understand and half agree with your point, but making robots get 'bored' sort of defeats the entire point of using them over humans for automation.
@trucid2 4 years ago
How do you assess the likelihood that your action is unsafe if you've never performed it before?
@lesslesser6849 4 years ago
Speed limits give a data point from which the collision penalty could be deduced. I see an absence of a penalty function in the exploring-not-getting-a-haircut space - aside from personal comments on YouTube inferring one.
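The scheme in the first comment of this thread (expected harm = probability × severity, plus an uncertainty-scaled learning bonus and extra caution when the harm estimate itself is uncertain) is straightforward to write down. A hypothetical scoring function, with every weight invented for illustration:

```python
def risk_adjusted_score(reward, p_bad, bad_value, uncertainty,
                        curiosity=1.0, caution=2.0):
    """Score an action as the comment describes.

    expected_harm is probability times severity (0.001 * 1_000_000 gives
    the comment's -1,000 example); uncertainty both encourages exploring
    (the curiosity bonus) and demands more care when potential harm is
    large (the caution term).
    """
    expected_harm = p_bad * bad_value
    return (reward
            - expected_harm
            + curiosity * uncertainty
            - caution * uncertainty * expected_harm)
```

Note this answers the later reply only partially: p_bad and uncertainty still have to come from somewhere before the action is ever tried, which is exactly the hard part.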
@joshuahillerup4290 4 years ago
I wonder if you can get complicated multidimensional shapes, like optimization problems have, for reward functions.
@marflfx 4 years ago
Have you seen what people are doing with AI in StarCraft and StarCraft II?
@cabbageman 4 years ago
I have not - is there a video you can link?
@bldcaveman2001 3 years ago
Just noticed you're a slapper (a.k.a. bassist) - love it!
@springboard9642 4 years ago
Are there theorists or programmers building AIs that they can watch learn?
@glocksupremo 4 years ago
Where are the subtitles, though?
@shledzguohn 4 years ago
None of the Computerphile videos even have auto-generated subtitles you can enable; it makes me sad! Ideally, they'd caption them for maximum accessibility, but I don't see the benefit of disabling the auto-captions... it sure makes them harder to follow 😔
@Computerphile 4 years ago
The automatic subtitles are all enabled. There was a bug in YT where they didn't show because of Community Subtitles. I have switched Community Subtitles off in an attempt to get auto subs to appear again - not sure why they aren't there. >Sean
@WilliamDye-willdye 4 years ago
@@Computerphile As of this writing, the option to show subtitles does not appear for me.
@Computerphile 4 years ago
@@WilliamDye-willdye Still don't understand this - photos.app.goo.gl/sqT3j7r81AgKDtM58
@SirWilliamKidney 4 years ago
+100 points for the THHGTTG reference!
@GFmanaic 4 years ago
I didn't come here to get roasted, thank you very much.
@hermask815 4 years ago
What if A.I. starts to think outside the box?
@charstringetje 4 years ago
@14:53 Is that the UK bass in the background?
@FuZZbaLLbee 4 years ago
I was waiting for Robert to make this video. 😀
@MyMusics101 4 years ago
I haven't looked at the paper yet, and perhaps it's a silly idea, but couldn't you make a time-dependent reward function which gives very negative rewards for the things you're supposed to stay away from, in proportion to your distance from them (e.g. close to bad things --> -10,000)? As training progresses, you reduce the penalty to a more reasonable value, so the agent starts caring more about its actual goal. The idea is that it would first quickly learn to avoid the bad stuff, and *then* learn the actual task without forgetting that touching the bad things is bad.
@soumilshah1007 4 years ago
With current reinforcement learning systems, once the agent has learned not to do something, it won't do it. There's no way for it to know that you've reduced the punishment. That's the exploration-vs-exploitation problem: the most common way I've seen to make an agent explore actions whose reward might have changed is to occasionally take actions at random, which in this case would be a really bad idea. You gave your self-driving car a large negative reward for a reason; you can't then deliberately program it to randomly crash and ignore its reward.
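The annealing schedule proposed in this thread is a one-liner. A hypothetical sketch, with all magnitudes invented, and the reply's caveat noted in the comments:

```python
def annealed_penalty(step, total_steps, start=-10_000.0, end=-100.0):
    """Linearly anneal a hazard penalty from a very large magnitude
    early in training to a milder value later.

    Caveat (per the reply above): a standard RL agent may never revisit
    actions it learned to avoid under the early, harsh penalty, so it
    may never notice the penalty has softened.
    """
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)
```

So the schedule itself is trivial; the open problem is making the agent's exploration respond to it safely.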
@spicybaguette7706 4 years ago
5:10 Don't drop anything near it.
@mare4602 4 years ago
Awesome video
@Nagria2112 4 years ago
Goodbye
@iugoeswest 4 years ago
Cool
@billykotsos4642 4 years ago
Yeaaaah boy
@mvmlego1212 4 years ago
Robert was sounding like Jordan Peterson around 6:30-6:45, LOL.
@iAmTheSquidThing 4 years ago
Peterson (and also John Vervaeke) gets quite a lot of his lingo from cybernetics. A fair bit of the theory of artificial intelligence was formulated decades ago and influenced psychology, but it's only recently that we've had computers powerful enough to actually execute it in a useful way.
@jetjazz05 4 years ago
...the first iteration of the Matrix.
@AgentM124 4 years ago
Faster than their sponsor.
@vasiliigulevich9202 4 years ago
I feel love for the viewer behind those rotations of the article page. Awesome job!
@justusstamm1485 4 years ago
Never have I clicked faster.
@amrmoneer5881 4 years ago
More real-world examples would be appreciated.
@MrRobket 4 years ago
7:10 1337
@StacyDubC 4 years ago
Blue car == 1337
@goethe528 4 years ago
Did you lose your good camera, with the tripod?
@TechyBen 4 years ago
This. Why did we spend 50 years making "robots" tested in real life, wasting time on broken designs and materials, when we could test tens or hundreds in virtual spaces and then build one or two working prototypes? Yeah, I know computation was limited for a long time, but if building a robot plus its computer takes time, how is building just the computer and using existing servers to simulate any more expensive?
@timconlin7692 4 years ago
In the video it's mentioned that simulation can only get you so far, as some things are too complex to simulate with any meaningful accuracy - the driving habits of humans, for example. Less computational power back then also meant less accurate simulations.
@TechyBen 4 years ago
@@timconlin7692 I agree. But when I was a kid, it was all about robots driving around a room/box - the kind of thing we could simulate, and the kind of thing we could see was never going to become "self-aware" on a tiny 8-bit chip. :P
@declup 4 years ago
This video has clear themes, but what is its message? What's its point? Could a link to the paper have sufficed? Is this video itself helpful?
@grill-surf-bust 4 years ago
Look up the concept of science communicators.
@zaprowsdower9471 4 years ago
DOWNVOTED - unannounced advertising.
@LikelyToBeEatenByAGrue 4 years ago
Didn't catch the first few seconds, huh?
@zaprowsdower9471 4 years ago
You lost me - I'm not following what you're saying.
@LikelyToBeEatenByAGrue 4 years ago
That's when they announced the advertising.
@uniquename6925 4 years ago
This isn't Reddit; your downvotes mean nothing here.
@zaprowsdower9471 4 years ago
@@LikelyToBeEatenByAGrue As a courtesy to the subscriber/viewer, I'm suggesting a channel include the text _"Includes Paid Promotion"_ prior to the advertising. Merely announcing the name of the advertiser - that the channel has advertising - can hardly be considered any kind of prior notice.
@Faladrin 4 years ago
And none of this is AI. These are just really complex human-written programs.
@christopherdasenbrock2683 4 years ago
First
@AgentM124 4 years ago
Sorry, I beat you to it.
@UmaiKayu 4 years ago
@@AgentM124 You were the zeroth; he was the first :-)