
This Image Is Fine. Completely Fine. 🤖 

Two Minute Papers
1.6M subscribers · 141K views

❤️ Check out Lambda here and sign up for their GPU Cloud: lambdalabs.com/papers
📝 The paper "The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning" is available here:
attentionneuron.github.io/
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: / twominutepapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: discordapp.com/invite/hbcTJu2
Károly Zsolnai-Fehér's links:
Instagram: / twominutepapers
Twitter: / twominutepapers
Web: cg.tuwien.ac.at/~zsolnai/

Science

Published: 1 Aug 2024

Comments: 352
@csoest24
@csoest24 2 года назад
I love the reference to the reverse steering bike example. That is a very clear comparison to shuffles being thrown at the neural networks.
@sebastianjost
@sebastianjost 2 года назад
But the network was trained with shuffled inputs right? If humans practice like that they might also be able to cope with the shuffled inputs.
@TheMarcusrobbins
@TheMarcusrobbins 2 года назад
@@sebastianjost I agree a human could learn to play the game with these inputs. They may never realise they are playing pong - but they will be able to work out the rules. You won't get the benefit of using the priors that the visual cortex has, but so what.
@MrFEARFLASH
@MrFEARFLASH 2 года назад
In fact, the neural network handles these problems poorly: in Pong the ball always arrives at the same point, and the car is not driving along the road, albeit heading in the right direction. It seems to me that this is not a complete victory!
@sunboy4224
@sunboy4224 2 года назад
I would argue that the reverse steering bike is more complicated. Turning the handlebars involves moving your center of mass, and on typical bikes it moves in a way that is conducive to steering in that direction. To ride the reverse bike, it's not just relearning that clockwise = left and counter-clockwise = right, it's entirely relearning how to posture yourself on the bike. Humans are actually pretty good (albeit slow) at purely switching inputs (see, well, many examples of video games in which this happens as a "status effect").
@nadiaplaysgames2550
@nadiaplaysgames2550 2 года назад
In the case of the backwards bike, our internal brain networks were only trained on bikes that turn one way; if someone rode both backwards and forwards bikes, their internal model would accommodate that. For it to be 100% fair, the neural network should be trained the same way. Likewise, a human could learn a scrambled version of the game, but it would take longer, because our brains leverage older models to build new ones.
@antiHUMANDesigns
@antiHUMANDesigns 2 года назад
One very important difference is that humans are always trying to save energy, to do as little work as possible. For a human to play a game the way the AI does, they would have to be pounding the keys constantly, but humans instead try to move in straight lines and only adjust their heading when needed. And re-learning things takes energy, which humans need to care about but AIs don't. So, make a lazier AI and see if it behaves more like a human.
@ViRiXDreamcore
@ViRiXDreamcore 2 года назад
That’s a really good point. Also there are no physical limitations on an AI other than the circuitry it runs on.
@toby3927
@toby3927 2 года назад
Yeah, that makes sense. Also, the human brain is entirely optimized for the Earth environment, where these strange things never happen, but AIs are trained from the ground up to handle them.
@anywallsocket
@anywallsocket 2 года назад
Yes we are hunter gatherers not kaleidoscopic pong players 😂
@SemperFine
@SemperFine 2 года назад
But sometimes we get bursts of energy too
@pvic6959
@pvic6959 2 года назад
this is really funny but imagine an ai that is efficient.. it would become so much better! we have evolved over millions of years to be efficient meat machines lol
@saudfata6236
@saudfata6236 2 года назад
(right at the end) - car drives completely off the road - "no issues whatsoever!"
@yash1152
@yash1152 2 года назад
7:05
@whiterottenrabbit
@whiterottenrabbit 2 года назад
Reminds me of Boosting Stop-Motion to 60 fps using AI (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-sFN9dzw0qH8.html)
@Ajay_Pathak_
@Ajay_Pathak_ 2 года назад
Lol
@clray123
@clray123 2 года назад
Now imagine this is a robotic dog shooting guns at people - vendor's ad: "no issues whatsoever!"
@WilliamDye-willdye
@WilliamDye-willdye 2 года назад
I'm not convinced that shuffling the image reveals differences in how we think so much as differences in how we see. Still, it is a clever and worthwhile idea. Well done.
@warrenarnold
@warrenarnold 2 года назад
Wow Just like me with my broken screen
@spencereaston8292
@spencereaston8292 2 года назад
My question would be how many failure-resets happen before a success. Sure, it took Destin a week before he figured the bike out, but how many failure-resets was that? Human vs AI in this realm seems to be a matter of time scale, not capacity. Which in itself makes it a wonderful time to be alive!
@realmetatron
@realmetatron 2 года назад
It took Destin 8 months or so, practicing 5 minutes every day. His son did it much faster because a child's brain learns easier.
@getsideways7257
@getsideways7257 2 года назад
@@realmetatron And that's what we should be comparing it to. An adult's brain is basically solidified and almost unable to progress with these kinds of things (not to mention being riddled with all the useless data and conventions not helpful for the task at hand), but a clean-slate kiddo brain is much closer to an artificial neural network in the way it learns.
@baumkuchen6543
@baumkuchen6543 2 года назад
The question as well is what counts as a failure-reset. A normal human will usually 'hit a reset' before failure, which in the case of a bike ride is falling off and getting injured.
@tim40gabby25
@tim40gabby25 2 года назад
I noticed that learning how to tightrope walk (a foot off the ground, it's ok) at age 55 was made easier by simply ignoring what I was trying to do, chatting with friends and family etc. Concentrating on the task was worse than useless. Curious, that.
@rayujohnson1302
@rayujohnson1302 2 года назад
@@tim40gabby25 The unconscious mind is processing 500,000x more information per second than your puny frontal cortex can muster. By not directly focusing on a task you are no longer slowing down the unconscious mind. This is also why taking a break from solving a hard problem actually helps you solve said problem.
@Logqnty
@Logqnty 2 года назад
For the Pong example, the AI was just moving up slightly, and every time the ball would spawn in the same direction. Because of how it spawned, the AI could just move up slightly and still win every time, regardless of the shuffled input.
@anywallsocket
@anywallsocket 2 года назад
Watch more closely, that’s only somewhat true.
@Nulley0
@Nulley0 2 года назад
5:08 I agree, the ball starts at the center, and it is the same every time. The other simulations are fine; Pong is broken
@perschistence2651
@perschistence2651 2 года назад
I think this simply means that it does not matter how WE see the input as humans. The reason we need to adapt, for example with the bike, is probably that we have already created a model in our head (an abstraction layer, you could say), and if you change how the bike works, our whole abstraction layer needs to be rewritten. The AI is simply more primitive: it has no abstraction layer but processes the information directly. Our thinking/abstraction makes us slow, but it also enables us to solve far more general problems.
@AirNeat
@AirNeat 2 года назад
We're also slow because of biology and chemistry. Silicon is much much faster at calculating things.
@franticsledder
@franticsledder 2 года назад
Car drifting off road and in the ditch. Tesla AI: "No issues whatsoever!"
@khiemgom
@khiemgom 2 года назад
Well, in racing that's just a grass patch, so I mean a little off-road isn't really a big deal
@Andytlp
@Andytlp 2 года назад
AI would never do that if all road traffic were AI. The conclusion you came up with is a human idea, human error. A human would drive into a ditch and complain about something irrelevant to avoid the responsibility of facing reality.
@brightblackhole2442
@brightblackhole2442 2 года назад
@@Andytlp 7:04 is an example of what they were actually trying to say
@LanceThumping
@LanceThumping 2 года назад
I mean, doesn't the bike example and such leave out the training time that the AI received? I've had various games that reshuffle controls as a penalty or a challenge, and after doing it long enough you can adapt rather quickly. The fact that the brain can adapt to its visual input completely inverting at all in adults means that testing could be done to see how far it can be pushed. Maybe a child that grows up with glasses that randomly shuffle their vision every second would adapt. I feel we don't know enough about the human brain at these extremes to say for sure that the AI is acting in an entirely non-human manner, outside of the fact that it has computer speed to allow it to switch up faster than a physical network of neurons can.
@swe223
@swe223 2 года назад
I remember that video on YT where a guy puts on glasses with a mirror to see the world upside down. The beginning was awful, but after a few days he adapted and was able to do things like pouring milk into his cup just fine. Then the inverse happened when he took them off^^ (although adaptation was much faster)
@mattp1337
@mattp1337 2 года назад
"A child that grows up with glasses that randomly shuffle their vision every second". Good writing prompt for a sci-fi story, given basic understanding of the topic. Would a human brain that adapted to these conditions perceive more reliably than everyday humans? Conversely, can any common optical illusions in human vision be traced to our vision NOT being trained to deal with perturbation? I'm a migraine sufferer, and that messes with my vision quite often: can it explain my moderately-above-average artistic and mathematical aptitudes? What a time to be alive.
@ToyKeeper
@ToyKeeper 2 года назад
Yeah... Humans can definitely adapt to stuff like this. It just doesn't happen as fast. I don't think the shuffling bit shows a qualitative difference between human and AI processing methods... it just shows that AI can adapt to changes faster.
@user-sl6gn1ss8p
@user-sl6gn1ss8p 2 года назад
I think just maybe your experiment isn't going to pass the ethics committee : /
@user255
@user255 2 года назад
If human neurons were that quick to adapt, they might adapt too much to some random nonsense and be in trouble 99% of the time.
@MrrVlad
@MrrVlad 2 года назад
For driving, it may use wiggling to map the squares. The constant left-right motion and the corresponding translation-rotation within the squares tell it exactly how far a certain square is.
@anywallsocket
@anywallsocket 2 года назад
That is true, and actually a general trend for how neural networks fit a function: they exploit differential motion (tweaking knobs slightly) to see how they affect one another, gradually mapping the function space.
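As a toy illustration of that "tweak the knobs slightly" idea, here is a minimal finite-difference probe. It is only a sketch: the function, target, and step size below are invented, and real network training uses analytic backpropagation rather than explicit perturbation, but the intuition of mapping local sensitivity is the same.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-5):
    """Estimate how f changes when each input 'knob' is nudged slightly."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)  # central difference
    return grad

# Toy example: f measures squared distance to a (made-up) target point.
target = np.array([1.0, -2.0, 0.5])
f = lambda x: float(np.sum((x - target) ** 2))

x = np.zeros(3)
print(numerical_gradient(f, x))  # each entry says how sensitive f is to that knob
```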
@ReynaSingh
@ReynaSingh 2 года назад
I don’t know if we can ever mirror human thinking perfectly but this is impressive
@tiefensucht
@tiefensucht 2 года назад
But we could already simulate insects to 100%, and the human brain for sure some day. The greater problem is that if you want a human-like being, you would have to raise it like a human.
@TheMostGreedyAlgorithm
@TheMostGreedyAlgorithm 2 года назад
I don't know if we will ever need to do this. The goal is not to create a virtual human. The goal is to solve a task in the best way. I think the idea of this video is: "Machines don't think like us, and they shouldn't". The ability to solve a task in cases where humans cannot is not human-like, but this is what we want from a machine.
@juraganposter
@juraganposter 2 года назад
Yes, but there is still a long way to go. I predict 250-500 years.
@dgollas
@dgollas 2 года назад
You don’t FEEL we can mirror human thinking.
@NanoMan737400
@NanoMan737400 2 года назад
Why mirror a human brain when you can make algorithms like these, that do things humans never could? I think that's actually much more helpful to us than just digitized versions of ourselves.
@Paulo_Dirac
@Paulo_Dirac 2 года назад
Isn't the fact that it was looking to the side of the "road" for curvature indications a human-like feature?
@1xdreykex1
@1xdreykex1 2 года назад
You can compare it to, or even mimic, human behavior, but it can never be identical because of how deeply rooted our behavior is in biology; we'd have to understand psychology and neurology a lot more as a species.
@sychuan3729
@sychuan3729 2 года назад
Yeah, I also thought about that. If you were given this task, you'd try to find the meaningful parts, which were several blocks, while the others were a distraction.
@quantumblauthor7300
@quantumblauthor7300 2 года назад
@@1xdreykex1 I mean, I don't think our biology naturally accounts for driving behaviors
@anywallsocket
@anywallsocket 2 года назад
Sure, but a human could not navigate the shuffled road: Our understanding of the game of ‘steering an object’ demands a continuous environment, simply because we have no experience with counter examples. Things tend not to teleport spatially in our daily lives! But to this AI it is business as usual.
@sychuan3729
@sychuan3729 2 года назад
@@anywallsocket I think a human could adapt to this after training. Blind people or people with visual disparities can navigate their environment, and some people have hallucinations but still can. AI is more adaptable, but it isn't so different.
@DouglasThompson
@DouglasThompson 2 года назад
Your mic setup sounds excellent! Quality overall is excellent, great job with another one!
@AbhishekThakur-wl1pl
@AbhishekThakur-wl1pl 2 года назад
Now do it with multiple artifacts flying around.
@ConceptsMadeEasyByAli
@ConceptsMadeEasyByAli 2 года назад
*Sweats nervously* This thing is amazing. It means that if we can replicate the functionality in robotics, it will make the hardware work with faulty cameras, rotors, angles, etc.
@SwervingLemon
@SwervingLemon 2 года назад
If we're smart enough to use some sort of conditional warnings, so they don't just continue performing until they're in complete ruins, it could allow some flexibility in maintenance. AI in production line bots... I just have a scene in my head of somebody trying to take one offline and it throwing a tantrum. "But I wanna solder! Nooooooooo!"
@firecatflameking
@firecatflameking 2 года назад
@@SwervingLemon 😂
@blinded6502
@blinded6502 2 года назад
Well, it's not like the spinal cord thinks like the human brain either. It figures out the map of what our limbs feel, and then this preprocessed info is fed into the brain for easier comprehension. So this human-likeness test is just silly.
@annaclarafenyo8185
@annaclarafenyo8185 2 года назад
This is not an appropriate comparison, as human visual networks have continuity, location markers, straight line detection, and 3-d geometry inference built in, which means they aren't going to be any good in a permutation invariant challenge. If you trained an AI to do the human things, lines, geometry inference, and so on, it would be equally bad at permutation challenges.
@ds920
@ds920 2 года назад
Thank you, sir for all your amazing videos!
@Veptis
@Veptis 2 года назад
We usually flatten images into a single vector, meaning the neural network has to learn the weights correctly so it can easily understand that locality makes a difference. And even in convolutional models the flattening is usually row by row, just stacked on top of each other. Maybe a spiral or Hamiltonian path could do it better. Dropout is used for regularization, so of course only giving half the input will easily work.
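A small sketch of what row-major flattening and a fixed shuffle look like in code (illustrative only; the array sizes and the permutation are invented, not taken from the paper). It also shows why a plain fully-connected layer is indifferent to a shuffle that stays fixed: the permutation can simply be absorbed into the weight columns.

```python
import numpy as np

rng = np.random.default_rng(0)

img = rng.random((64, 64))          # a toy 64x64 grayscale "frame"
flat = img.reshape(-1)              # row-major flattening: rows stacked one after another

perm = rng.permutation(flat.size)   # a fixed shuffle of the input positions
shuffled = flat[perm]

# A fully-connected layer has one weight per input position, so a *fixed*
# permutation of the input can be absorbed by permuting the weight columns:
W = rng.standard_normal((16, flat.size))
assert np.allclose(W @ flat, W[:, perm] @ shuffled)
```

Note this only covers a shuffle that is fixed once and for all; the setting in the video, where the shuffle can change, is what the permutation-invariant architecture is designed to handle.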
@jacquesbroquard
@jacquesbroquard 2 года назад
Every single time. My mind is blown. Thanks for putting these together and sharing with the world.
@catcatcatcatcatcatcatcatcatca
@catcatcatcatcatcatcatcatcatca 2 года назад
I notice that in both Pong and the racing game, much of the important data is about angles relative to the player view, which is unaffected by the shuffle and resistant to randomly blocking some data. I wonder if altering the orientation would cause more problems for the AI?
@noutram1000
@noutram1000 2 года назад
This is a good example of how AI is already 'beyond human' in its ability to adapt. The fact that it doesn't attribute too much to the stream of information coming in (like a lifetime of relying on your eyes to consider structures, lines, angles, etc.) is actually a plus: to the AI it is just a huge sequence of 1s and 0s that the neural network provides the optimal answer for. It would be almost impossible for a human to take the raw 'computer stream' and interpret it. We cannot just see constructs in the binary stream; we have to apply years of human learning to make any sense of it...
@abdullahalqwain3490
@abdullahalqwain3490 2 года назад
If quantum computers are integrated, that is an answer to the problems facing artificial intelligence: very fast, super fast, handling very complex and very large amounts of information in a fraction of a second. It is a future with artificial intelligence, the solution to many very complex human problems.
@manzell
@manzell 2 года назад
I think what this reveals is that humans have a built in reward function for throwing information down strongly-established neural pathways, even if the data doesn't fit. Since the learning algo doesn't have this - its reward function is tied strictly to whatever it's programmed for - it can adapt much more rapidly. This is what's behind anthropomorphization, CBT-style "thought distortions" and so on.
@rocksfire4390
@rocksfire4390 2 года назад
It's great to see the leaps humans have made in AI, but AI itself isn't magic. It's all predetermined, based on the inputs and limitations given to the program; it's only as good as we make it. A purpose-built AI is going to iterate much faster on its sole task than a human ever could, simply because an AI doesn't need to think about anything else but what it's programmed for. However, because of this it cannot actually ever compete with a human brain. Computers are (attempted) recreations of our brains, but they too are far more focused on a single task, and that task is computation. Computation can be used to simulate other areas of our brains, but those simulations are still nowhere near our level, as they are not purpose-built the way a computer is. You would be quite surprised by how much information our brains process on a daily basis; the amount of data our brains sort through every second is quite hard to grasp. Most of that data you don't even notice, because it's just not something people think about, but it is required in order to stay alive. Take all of your organs, for instance: the brain keeps all of that stuff running, but you never have any idea of the data involved in doing such a thing. Walking, talking, hearing, seeing, eating, sleeping: they all sound simple, but they are not. Ask any robotics AI expert how hard it is to get something to walk around without falling over. Computation is great, but it's only a part of what the human brain can do. Computation is pointless if you can't form a thought about why you would even want/need such a thing. Lastly, our brains are very power-efficient compared to a computer. It's kinda amazing.
@HoloDaWisewolf
@HoloDaWisewolf 2 года назад
Is a bat's brain beyond a human brain just because it can process sound through echolocation? Just as it'd be impossible for a human to make sense of trillions of 0s and 1s, our brain is simply not wired for being a sonar. Just because computers are better than humans at some specific tasks, I wouldn't say they're "beyond human" in their ability to adapt. Not to mention that the definition of "interpreting a raw computer stream" is somewhat loose.
@anywallsocket
@anywallsocket 2 года назад
@@manzell literally every cognitive bias
@Abyss-Will
@Abyss-Will 2 года назад
The shuffle thing reminded me of that scene in The Matrix where they look at the green letters on the screen and can make sense of them and see the world they represent
@TheMazyProduction
@TheMazyProduction 2 года назад
Finally some David Ha appreciation. ❤️
@mattp1337
@mattp1337 2 года назад
Sort of reinforces Donald Hoffman's thesis that our adaptive understanding of reality--the way we think the world is--likely bears no resemblance to reality. All evolution cared about is whether our cognition kept us alive and playing the game reasonably successfully, which we do.
@getsideways7257
@getsideways7257 2 года назад
Not to mention that our optical sensor suite is way far from ideal.
@anywallsocket
@anywallsocket 2 года назад
You should be clear about what you’re suggesting here. Minimizing the difference between our model of the world and the world is baked into natural selection. If you’re suggesting however that our model for an apple doesn’t itself resemble an apple, then quite clearly yes, of course, this isomorphism is unnecessary. On the other hand, these AIs are deep learning, meaning they are playing with hyper parameters within their response to given information, which is about relations between components, regardless of overall structure. I’m sure there are examples where human minds can do similar things, e.g. abstracting the notion of ‘something coming at you’ no matter the angle, environment, or thing, but it is in no way obvious that’s generally the case.
@mattp1337
@mattp1337 2 года назад
@@anywallsocket Your assumptions seem reasonable, obvious even...right up until Hoffman demolishes them ¯\_(ツ)_/¯
@laykefindley6604
@laykefindley6604 2 года назад
If our sense of reality were widely different from the way it really is, we wouldn't be able to predict it and thus would likely die off, as other species have. I would say we are actually the best so far at seeing reality for the way it truly is, at least at the scale of humans.
@mattp1337
@mattp1337 2 года назад
@@laykefindley6604 By all means, go tell Hoffman he's wrong without hearing his argument.
@piotrarturklos
@piotrarturklos 2 года назад
This is quite eye opening, because it gives some more intuition about the nature of things that should be possible using a neural network, even though they are impossible for a human.
@falnesioghander6929
@falnesioghander6929 2 года назад
So this means that the AI treats all small sections of the input homogeneously and later pieces them together, while we are more trained, or overfit, on cohesion in a bigger picture, based on past experience interacting with the world (aside from the task at hand)?
@thomasr1051
@thomasr1051 2 года назад
Amazing as always. This is my source of staying up to date on ever evolving AI technology
@bzikarius
@bzikarius 2 года назад
Incredible things are going on!
@jl6723
@jl6723 2 года назад
I think this kind of thing is impressive in that shuffling up the data, and having some of it missing, can still allow an AI to act within a system. I could imagine that sort of decision making and algorithm design being useful for, say, dealing with dirty lenses on self-driving cars, or for analyzing large groups of interconnected statistics on a subject to generate some hypothesis on a related topic.
@gorkemvids4839
@gorkemvids4839 2 года назад
So there is a layer which inverts the reshuffling, so the top of the network always gets the unchanged, correct information, right?
@1.4142
@1.4142 2 года назад
Some people even temporarily forget how to ride a normal bike after learning the backwards steering bike.
@CandidDate
@CandidDate 2 года назад
I held my papers so tight, they turned back into a tree.
@evilkidm93b
@evilkidm93b 2 года назад
Would have been interesting to see a control experiment, where the background has the same color as the racing track.
@TKZprod
@TKZprod 2 года назад
Yes. For pong also, I'd like to see a control experiment where the model has no input at all. If the model fails, it would confirm that it's using "vision" to play, and not only blindly moves to the same spot each game.
@xxiemeciel
@xxiemeciel 2 года назад
Very interesting. Did the algorithm have to redo its training each time the complexity was increased (shuffling and removing parts)?
@neelmehta9092
@neelmehta9092 2 года назад
I think this also has to do with how images are being perceived. We humans see them as a summation of all the pixels simulating a moving object, but for a machine it's only matrices, so just shuffling the inner data values won't make much difference to a machine.
@dva_kompota
@dva_kompota 2 года назад
+1 for SmarterEveryDay's reversed bicycle :)
@weaseloption
@weaseloption 2 года назад
Wow, what a time to be alive
@RicardoNapoli
@RicardoNapoli 2 года назад
Woooooooow !!!!! That's insane !!!!!
@mogarbobac1472
@mogarbobac1472 2 года назад
Can someone help me? I don't understand how this would even work. For the first part, where it's switching up the streams of data, my guess is it tries a movement and then *snaps that control to the correct data stream (from experience). If you did that, of course it would be able to quickly fix itself. This is unlike a human, where our structures are permanent and we are literally forcefully training to work against a remembered system. But the second permutations thing makes absolutely no sense to me. Even if it did the same thing as before after finding similar locations, it would be extremely difficult to determine not only where the ball is but also the paddle, unless you straight up remembered where it was heading and could literally predict the entirety of the game ahead of time. ANY help with an explanation would be appreciated
@wrOngplan3t
@wrOngplan3t 2 года назад
I suppose every pixel (not just blocks) in view could be at a randomized position and it would still adapt with sufficient training. It's just another layer of translating to the real pixel x,y position. Or maybe that wouldn't even be necessary. But my guess is it would be more efficient for generalizing (I'm no expert in this field at all, just speculating). Afaik this was slightly different with on-the-fly adjustment though. Looks impressive from a human standpoint!
@pw1169
@pw1169 2 года назад
"No issues whatsoever" - except for the fact the car isn't being driven on the road :D
@MiguelAngel-fw4sk
@MiguelAngel-fw4sk 2 года назад
At least it can maintain the car near the road; that's more than what a human can do
@codetech5598
@codetech5598 2 года назад
But is there a penalty for driving on the grass?
@mello7992
@mello7992 2 года назад
what a time to be alive
@venjsystems
@venjsystems 2 года назад
amazing paper
@tristenrouse8596
@tristenrouse8596 2 года назад
Great thought experiments here. I'm curious what this means for failsafe features on, let's say, a self-driving car. If, for example, a video feed is corrupted, would the AI still be able to process it the same?
@yimingqu2403
@yimingqu2403 2 года назад
A small question: in the ping-pong game, why does the agent always succeed by aiming at the corner?
@maxwibert
@maxwibert 2 года назад
The input shuffling recovery reminds me of a scene from Naruto: Tsunade permutes the terminals of Kabuto's nervous system and he has to relearn how to walk mid-battle
@deadpianist7494
@deadpianist7494 2 года назад
Hi, i hope yall doing good, have a good day.
@cherubin7th
@cherubin7th 2 года назад
Example 1 shows that it is very different. A human looks at the big picture and doesn't just analyse tile by tile. So in the later stages, because the AI thinks very simply and doesn't care about the relations between tiles, it isn't confused. But humans see the picture holistically and use prior knowledge to make sense of it, so humans get confused, because we are not used to using tiles independently of their context.
@MrTomyCJ
@MrTomyCJ 2 года назад
I watched these videos to get a glimpse on how these amazing results are achieved, not only to see the incredible results. I think part of it is being lost lately...
@yugecheng8941
@yugecheng8941 2 года назад
Why was that samurai cutting fruit video deleted?
@wibiyoutube6173
@wibiyoutube6173 2 года назад
"What a scary time to be alive."
@FungIsSquish
@FungIsSquish 2 года назад
Can’t wait for these things to be able to play Mario kaizo
@JMPDev
@JMPDev 2 года назад
@Károly Zsolnai-Fehér: The graphics from 3:27 to 6:26 in this video are surprisingly poor quality. It looks like it was bilinearly upscaled from a really low resolution source, and seems to have additional compression artifacts too. If the source data was really that low res, nearest neighbor or integer upscaling would really have been preferable. Was this a compositing/editing error? It looks awful, even at at the max 2160p60 :/
@MatthiasPitscher
@MatthiasPitscher 2 года назад
Are they actually using the same model? Or just the same architecture trained on the randomized input? Is there some kind of transfer learning happening?
@HD-Grand-Scheme-Unfolds
@HD-Grand-Scheme-Unfolds 2 года назад
Do they use convolutional nets or capsules? Or should I instead ask how they achieve pixel permutation invariance. It's blowing my mind to know that they retrain the networks.
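The paper's write-up has the actual architecture; as a hedged sketch of the general mechanism such permutation-invariant models rely on, here is cross-attention from a small set of learned queries onto per-element features. The softmax-weighted sum over elements does not depend on their order. All shapes, projections, and names below are invented for illustration and are not claimed to match AttentionNeuron exactly.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_inputs, d_feat, d_out, n_queries = 100, 8, 16, 4

x = rng.random((n_inputs, d_feat))            # one feature vector per observation element
Wk = rng.standard_normal((d_feat, d_out))     # key projection, shared by all elements
Wv = rng.standard_normal((d_feat, d_out))     # value projection, shared by all elements
Q = rng.standard_normal((n_queries, d_out))   # learned queries, independent of input order

def pool(x):
    K, V = x @ Wk, x @ Wv
    A = softmax(Q @ K.T / np.sqrt(d_out))     # attention over the *set* of elements
    return A @ V                              # weighted sum: element order doesn't matter

perm = rng.permutation(n_inputs)
assert np.allclose(pool(x), pool(x[perm]))    # identical output for a shuffled input
```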
@dieselguitar1440
@dieselguitar1440 2 года назад
You can still tell what angle the ball is moving at and what angle the track is at (and when there's a turn) when it's reshuffled. I'd bet there are some humans out there who could manage. A platformer game, on the other hand, would probably be a lot harder to manage while it is reshuffled. I wonder how an AI would do playing a reshuffled platformer.
@MrProfizmus
@MrProfizmus 2 года назад
It doesn't think like a human does if you remove it far enough from what you'd consider a human-accessible example. Makes sense if you ask me. This is literally demonstrating what happens if you approach UX with opposing intentions. Another commenter also addressed the cost of re-arranging your thoughts, which is a great point. For the AI, it comes free, as it's part of its training lifecycle. I guess it is a good reinforcement of the idea of how unspecific it really is when we say "human thinking". I remember seeing a small Q&A where Feynman also addressed this, using planes vs birds as an example. It's only really mind-blowing if you look at it from a human perspective, less so from a theoretical one.
@Paruthi.618
@Paruthi.618 2 года назад
Yea .. what a time to be alive
@trainjumper
@trainjumper 2 года назад
I'd argue the road example is still human-playable since angles are well-preserved - it seems that you can quite easily see the angle of the upcoming road by looking at changes in slopes of the road edges in the jumbled version. Still a very impressive performance from an AI
@LKDesign
@LKDesign 2 года назад
This is fine.🔥
@amortalbeing
@amortalbeing 2 года назад
Gorgeous papers, thanks Doc. By the way, why don't you always talk the same way you talk in the advertisements? Sometimes it seems you are whispering!
@Antonio_Vizcarra
@Antonio_Vizcarra 2 года назад
Ok now i'm scared i've seen sci-fi movies and i know how this ends
@justinwhite2725
@justinwhite2725 2 года назад
Reminds me of the experiment where people wore goggles that made the world upside down. Wore them for a week. Confused them at first then they adjusted. When they were taken off they were confused again but then readjusted.
@chounoki
@chounoki 2 года назад
This means the AI has actually been trained to work on a higher level of abstraction, dumping everything that is not directly related to the judgement of action, which is basically a trait of born geniuses if it were a person.
@Craxin01
@Craxin01 2 года назад
I learned about the laxative effect of sorbitol a long time ago. I don't chew gum anymore.
@OlivioSarikas
@OlivioSarikas 2 года назад
1:42 - how did it get behind the blocks, before it has opened the tunnel? Did the AI find a dirty little cheat? ;)
@SwervingLemon
@SwervingLemon 2 года назад
You can actually do that with a bit of practice, depending on the game. Some of them allow the ball to squirt through if you hit the exact corner between a block and the wall.
@OlivioSarikas
@OlivioSarikas 2 года назад
@@SwervingLemon Sure, but that would be a glitch, not how it actually works. ;)
@DanFrederiksen
@DanFrederiksen 2 года назад
I assume training starts over, otherwise it could of course not handle a reshuffling.
@codetech5598
@codetech5598 2 года назад
No the training does not start over.
@AirNeat
@AirNeat 2 года назад
The reason the AI can deal with the shuffling is that it can read the entire screen at once and easily identify which tiles are the edges. A human has to read each tile and process it slowly. It's simply a processing-speed issue. It does think like a human, only faster.
@RandomGuy-hi2jm
@RandomGuy-hi2jm 2 года назад
4:43 seems impossible, but those who know how deep learning works know how it was done, and that it is a simple task for neural networks.
@Lttlemoi
@Lttlemoi 2 года назад
I disagree with how you compare AI training time with human training time. When you initially learn to ride a bike, it takes some time until you're able to steer precisely and perfectly as well. It seems to me that the slowness with which the human brain adapts to changing situations is only slow compared to the computer, but not that slow when compared to the human brain itself.
@Rybz
@Rybz 2 года назад
did you just say a snail is fast compared to snails. what does that mean 🤣
@Lttlemoi
@Lttlemoi 2 года назад
@@Rybz I meant that the human brain adapting is slow compared to the AI adapting, but not necessarily slow when comparing to the initial training time of each. It's like comparing acceleration and top speed of a human with that of a car. The car can accelerate faster and has a higher top speed than a human, but that doesn't mean a human has to take more time before he can accelerate to his top speed than a car requires to accelerate to its top speed.
@Rybz
@Rybz 2 года назад
@@Lttlemoi But what does that matter if the AI is better than the human brain in this task?
@juhotuho10
@juhotuho10 2 года назад
Machine learning algorithms are amazing for specialized environments and tasks, but they lack things like the ability to do game theory. People are very good at handling generalized work, and people can be amazing at game theory and theorizing solutions to problems.
@edoardoschnell
@edoardoschnell 2 года назад
All this has utility, indeed.
@jawadmansoor6064
@jawadmansoor6064 2 года назад
computation is fundamentally different from thinking. And COMPUTEr can COMPUTE faster than we can so ... no competition there.
@phpn99
@phpn99 2 года назад
Let's be clear : There is no "thinking" in machines. There is gradient descent. Period.
@AfonsodelCB
@AfonsodelCB 2 года назад
It's not that AI is not like humans; it's that humans are like AI. Humans have a specific setup, limitations, and probably a starting state, while being chained to the natural passage of time, unable to do a full reset, and carrying a lot of useless training data. It's unfair to compare an AI specifically designed for these things to a creature so much more restricted, developed purely by evolution and natural selection.
@cliffthecrafter
@cliffthecrafter 2 года назад
I don't think you can really compare the time it takes the AI to adapt to reshuffled inputs to the time it took Dustin to learn how to ride the bike. The AI was trained with the inputs constantly reshuffling. Dustin trained with the inputs one way his whole life and had to re-train himself when the inputs switched. When he switched back to a normal bike it took him much less time, and if he trained himself to constantly switch between the two he could probably learn to switch as fast as the AI.
@BHFJohnny
@BHFJohnny 2 года назад
Wow, that's amazing. And also scary
@languagew9577
@languagew9577 2 года назад
Can you tell me about any AI that can duplicate the clothes of a character from an image? And can it make them in 3D?
@coolbanda5446
@coolbanda5446 2 года назад
It's easier for machines to do this because this was what machines were made for xD Not surprised. I'm more interested in its applications, what else can it optimize? That would be beneficial to us
@thejswaroop5230
@thejswaroop5230 2 года назад
impressive
@dietrevich
@dietrevich 2 года назад
I think people are missing the point that the computer abstracts the data differently than we do. It can do away with lots of things that are necessary to us, things without which it wouldn't make sense to us, but to the program it still mathematically pans out to a solution.
@benjabkn12
@benjabkn12 2 года назад
6:18 the minus ones just keep on coming and they don't stop coming
@DimiShimi
@DimiShimi 2 года назад
I think this rather shows that the neural network is functioning like a brain, but unlike a human brain it's a purpose-trained brain that solves one particular kind of problem, not many. We know that in some rare cases people are capable of superhuman mental feats (sometimes while being deficient in some other common human ability). My theory is that in these cases brain resources are allocated in an unusual way, so we get results more comparable to a purpose-built neural network. - I might be wildly off. This is pure speculation.
@andytroo
@andytroo 2 года назад
Is it possible that the steering AI is simply going off 'if I see a left curb, turn left; if I see a right curb, turn right'? 6:54 shows the car driving off the road without any attempt to return.
@DonVitoCS2workshop
@DonVitoCS2workshop 2 года назад
Would it be possible for this AI to play(learn) pong at the same time as driving(learning) the racecar on the new background? That would be amazing and clear evidence it's better/different than the human brain
@zaneg
@zaneg 2 года назад
I am not sure this means they don't think like us. When we look at a screen, we see it completely differently than a neural network does. We can only see one part of the screen at a time, while the neural network is focused on the entire screen all at once.
@davidm.johnston8994
@davidm.johnston8994 2 года назад
But how does it do it?
@yoinkthatscotum5145
@yoinkthatscotum5145 2 года назад
cool!
@1xdreykex1
@1xdreykex1 2 года назад
The way humans think is more or less flawed so maybe we can design ai’s that think deeper than we can
@geli95us
@geli95us 2 года назад
I wouldn't call it flawed, more like "greedy" (in the computer sense of the word). Humans make a lot of assumptions and optimizations to be able to quickly process the world around them, and that's usually helpful. Computers are, in a sense, more polyvalent in that they can be completely reprogrammed very easily, and that makes me hopeful about what we'll be able to achieve with them once they become more powerful. But right now, and until computers have as much computing power as a human brain or more (we're a ways off that), AI will have to make assumptions and optimizations too, just not the same ones as we do
@Fabelaz
@Fabelaz 2 года назад
isn't it how our eyes work automatically?
@DustinRodriguez1_0
@DustinRodriguez1_0 2 года назад
The fact you can remove large amounts of the input and still have the system know what to produce as output makes me wonder whether concepts close to this could be used to accomplish lossy data compression. Like could you have a player application that is loaded with such a network, pre-trained on general video/audio, which would only require a very small amount of data to be able to reconstitute a good enough approximation of the source material? I imagine the size of the network and its weights would need to be known, I don't have any familiarity with just how big these networks are... are we talking megabytes of weight data, or gigabytes or more?
@erlendpowell7446
@erlendpowell7446 2 года назад
I think this nicely illustrates why artificial networks such as this tend to generalise poorly. The human brain tends to preserve the spatial locality of information, while the typical fully connected ANN throws most of this locality away. The problem with throwing this locality away, though, is that a ton of information is lost as a result. This also illustrates why ANNs are prone to adversarial attacks: without this spatial information, features can easily be hidden in what looks like noise to a brain.
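A toy way to see the locality point: a feature that depends on neighbouring pixels is destroyed by shuffling, while a purely set-based statistic is untouched. These are made-up toy features rather than real network layers, and the image below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy image with smooth spatial structure: a horizontal gradient.
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))

def local_roughness(a):
    """Locality-sensitive feature: mean jump between horizontally adjacent pixels."""
    return np.abs(np.diff(a, axis=1)).mean()

def global_mean(a):
    """Permutation-invariant feature: ignores where pixels are."""
    return a.mean()

shuffled = rng.permutation(img.reshape(-1)).reshape(img.shape)

print(local_roughness(img), local_roughness(shuffled))  # small vs. large: locality destroyed
print(global_mean(img), global_mean(shuffled))          # identical: position never mattered
```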
@mishafinadorin8049
@mishafinadorin8049 2 года назад
If I remember correctly, infants can recognize the face of their mother even if the picture is shuffled, but this ability disappears early during development. In this sense these neural networks are similar to neural networks of newborn children.
@tctrainconstruct2592
@tctrainconstruct2592 2 года назад
Instead of just shuffling the blocks, it should rotate them too, because a human player trained for this could technically keep the ball's position and velocity in mind, as well as the paddle's, really easily. Btw, for the racing game example, humans can play games BLINDFOLDED: no sight whatsoever! So it is normal that the bot realigns itself with the information it sees
@serta5727
@serta5727 2 года назад
Wow that is interesting
@antivanti
@antivanti 2 года назад
The Pong game seemed to play out exactly the same every point, so it's possible the AI didn't even need to see the game to play; it could just repeat the exact same input every time
@SuperXzm
@SuperXzm 2 года назад
Does machine think the way we think? Or do we think the way machine works?
@donaldduck830
@donaldduck830 2 года назад
And I recently saw a report on how machine learning failed: they tried to identify wolves, and the pictures of wolves in the training data were all in snow. The machine learning algorithm looked only at a small patch of pixels in the corner: if snow, then wolf, else dog. It ended in failure with different pictures. Be careful with your enthusiasm. In the end a computer algorithm will "see" your body after a crash on the road and say to itself "trash".
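That failure mode, a spurious background cue doing all the work, is easy to reproduce in a toy setting. This is a hedged sketch with made-up numbers, not data from any real wolf/dog study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: label 1 = "wolf", 0 = "dog". The background brightness
# (a stand-in for snow) happens to correlate perfectly with the label.
n = 200
label = rng.integers(0, 2, n)
animal_feature = rng.normal(loc=0.3 * label, scale=1.0, size=n)   # weak genuine signal
background = label + rng.normal(scale=0.05, size=n)               # strong spurious signal
X = np.column_stack([animal_feature, background])

# A least-squares linear "classifier" happily leans on the spurious column.
w, *_ = np.linalg.lstsq(X, label, rcond=None)
print(w)  # the background weight dwarfs the animal-feature weight

# New photos: wolves on grass, so the background no longer correlates.
test_label = rng.integers(0, 2, n)
test_X = np.column_stack([rng.normal(loc=0.3 * test_label, scale=1.0, size=n),
                          rng.normal(scale=0.05, size=n)])
pred = (test_X @ w) > 0.5
print((pred == test_label).mean())  # accuracy collapses toward chance
```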
@geli95us
@geli95us 2 года назад
In the end, it's all about complexity: the more information, the more processing power, the more memory it has, the more complex the patterns it will be able to look at. It's not like humans are any different in that sense; in fact, if you grab a human who doesn't know what snow/dogs/wolves are and do this to them, I'm pretty sure they will reach the same conclusion as the computer. The way I see it, getting computers to think like humans, once we have computers that are as powerful as a human brain, will be pretty much trivial, seeing the things we are doing with them at fractions of that
@donaldduck830
@donaldduck830 2 года назад
@@geli95us In the end I had a perfectly functional spreadsheet program (better than almost all versions of Excel) and a word processor (better than all versions of Word) on the very first PC I had access to (it was my Dad's, around 1990 or the late '80s). And even then the adage "garbage in, garbage out" held true. And if you program a woke AI, it will put garbage out no matter what the input. Like Michael Mann's hockey stick graph: no matter the input, his algo always spit out the same graph. Thus my "less enthusiasm, more care", to guard against the Terminator-Matrix event timelines. Oh, and even if the machines do not become self-aware, somebody who controls the machines might be evil. Or the person listening to the machine might be stupid: like the countless people who got stranded in the wilderness after listening to their nav that said "turn right at the next" and misunderstood what the next right turn was. Btw, I saw it happen, heard about it happening, and it happened to me myself, so I'm not dissing anybody, just saying be careful and think for yourself.
@SwervingLemon
@SwervingLemon 2 года назад
@@geli95us I think their greatest value is in how they DON'T think like us. They've arrived at novel solutions to challenges that I don't think would have occurred to a human.
@MushookieMan
@MushookieMan 2 года назад
This channel is always overhyping "AI", or more accurately, unintelligent machine learning. It is very useful, but not 'artificial intelligence'.
@ddjoray1042
@ddjoray1042 2 года назад
If a person with no general knowledge was only given training photos of wolves in snow, they would also think that the only difference between dogs and wolves was that wolves are dogs in snow. Your example seems to be a failure in giving the algorithm adequate training data.
@vel7280
@vel7280 2 года назад
Can you do a video on generative art?