
The Thousand Brains Theory 

Microsoft Research
45K views

The Thousand Brains Theory: A Framework for Understanding the Neocortex and Building Intelligent Machines
Recent advances in reverse engineering the neocortex reveal that it is a highly-distributed sensory-motor modeling system. Each cortical column learns complete models of observed objects through movement and sensation. The columns use long-range connections to vote on what objects are currently being observed. In this talk we introduce the key elements of this theory and describe how these elements can be introduced into current machine learning techniques to improve their capabilities, robustness, and power requirements.
See more at www.microsoft....
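The column "voting" described in the abstract can be pictured with a toy sketch. This is my own illustration, not Numenta's code, and the object names and hypothesis sets are invented for the example: each column keeps a set of objects consistent with its own sensed input, and long-range voting amounts to keeping only the candidates every column agrees on.

```python
# Toy illustration of column voting (invented example, not Numenta's code).
# Each cortical column, sensing a different part of an object, maintains
# its own set of candidate objects; "voting" narrows the hypotheses to
# the intersection shared by all columns.
column_hypotheses = [
    {"cup", "bowl", "can"},   # column sensing the rim
    {"cup", "can"},           # column sensing the curved side
    {"cup", "bowl"},          # column sensing the handle region
]

consensus = set.intersection(*column_hypotheses)
print(consensus)  # {'cup'}
```

With three ambiguous local views, the only object consistent with all of them survives the vote.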

Published: 17 Aug 2024

Comments: 100
@cafeliu5401 · 5 years ago
Followed him here after the Lex interview :P
@koningsbruggen · 5 years ago
same
@OttoFazzl · 4 years ago
I followed him even before the Lex interview :D
@JohnPretto · 4 years ago
As did I. Great stuff.
@tosvarsan5727 · 3 years ago
me too
@markjohnson7510 · 3 years ago
Same
@simonhughes1284 · 5 years ago
This is a very interesting idea. I feel there are a lot of follow-up questions: 1. How is information integrated across the different columns? Is voting sufficient for this? What is the purpose of the voting? Is it to decide which motor outputs to take? It blows my mind to think that we basically consist of lots of small computers, which all coordinate a set of motor outputs, but it does certainly seem both feasible and a somewhat novel idea. 2. How does memory work? The hippocampus is critical for the formation of long-term memory, and it is not part of the neocortex. Memory is critical for our sense of self, and thus you could argue consciousness (although I think people place too much importance on the idea of consciousness in cognition, but I digress).
@NumentaTheory · 5 years ago
See our papers at numenta.com/papers for answers to these questions
@eduardocobian3238 · 1 year ago
AI and robotics are not separable. Genius, Jeff.
@Marcos10PT · 4 years ago
I've noticed a pattern in Microsoft Research videos: they don't like to show the slides on screen for some reason 😁
@ramblinevilmushroom · 3 years ago
They want you to go to their website; that's where the slides are.
@saturdaysequalsyouth · 3 years ago
@@ramblinevilmushroom I thought he was referring to the fact that Hawkins is using a Mac.
@matthieuthiboust8869 · 5 years ago
Fascinating and very promising biologically-oriented approach to overcoming current deep learning limits. Our neocortex is an invaluable source of inspiration for building AI models with strong noise robustness and continuous learning by design. I'm eager to see the next steps involving "voting" between cortical columns and cortical-subcortical interactions (thalamus, basal ganglia, hippocampus, ...)
@jeff_holmes · 2 years ago
I'm currently reading (listening to) Hawkins's book, "A Thousand Brains". Fascinating stuff. I'm surprised that this video doesn't have more views.
@armandooliveira3712 · 4 years ago
This is the most important AI research to date...
@w1ndache · 5 years ago
Crazy that the RU-vid algorithm recommended this after I watched the Lex interview
@AirSandFire · 4 years ago
How is that crazy?
@TomAtkinson · 5 years ago
1:00:41 My understanding is that the centrally terminating synapses are excitatory, and the dendritic synapses inhibitory. Really interesting to hear the way you described this part! Your words were that they prime the cell to fire; this makes a lot of sense. Calcium bridges at the synapse are then formed in the rare case that a dendritic action potential leads to a main axial firing.
@MegaNexus77 · 3 years ago
Very, very cool theory! I tested the HTM technology using the Etaler framework, which uses OpenCL to speed up the simulation
@nothinhappened · 3 years ago
How many found their way here after seeing his interview on Lex Fridman's Artificial Intelligence podcast?
@Pencilbrush1 · 5 years ago
Finally, the next stage... It's happening... I've been following the development of Jeff and Numenta since On Intelligence. 15 years, can't believe it!
@diy-bunny · 4 years ago
Yeah, 15 years... Time flies, and Jeff is old now.
@swanknightscapt113 · 5 years ago
Please show the presentation when the speaker is explaining its content. I don't need to see the speaker's face when he is looking at and pointing at the slides, in case you don't already know.
@OttoFazzl · 4 years ago
Don't thank me: numenta.com/resources/videos/thousand-brains-theory-of-intelligence-microsoft/
@badhumanus · 5 years ago
Jeff Hawkins is one of the most brilliant minds in neuroscience and AGI research. He's light years ahead of the deep learning crowd. He's mistaken about one thing, though, which is the notion that the brain creates a complex model of the world. There are neither enough neurons nor enough energy in the brain to maintain such a model. If we had a model of the world in memory, we would have no trouble navigating familiar places with our eyes closed. Fact is, we need to keep our eyes open. There's no need for a complex model: the world is its own model, and we can perceive it directly. We do retain a relatively small number of high-level bits of things we have experienced, but only if they are recalled repeatedly. Low-level details are either forgotten within seconds or written over by new experiences. Unlike deep neural nets, the brain can instantly perceive any complex object or pattern even if it has never seen it before, i.e., without a prior representation in memory. Perception without representations is the biggest pressing problem in neuroscience, in my opinion. Good luck in your research.
@roseburgpowell · 5 years ago
In your non-representational model: 1) Do you agree that each column performs the same function for one or (many) more sensorimotor units (habituated/habituating networks) of possible movement/behavior, and 2) what function do the grid cells at the bottom of each column perform?
@poprockssuck87 · 5 years ago
I posted this earlier on the Lex Fridman podcast with Jeff Hawkins: How much of the brain is redundancy? I ask this because Hawkins talks about robustness, and there is the apparent benefit that ANNs don't really need this. Even if ANNs can't ever be as dense or efficient as brains, they may be able to compensate because of this. It appears that current neurons in ANNs are actually just some vague representation of neuronal connectivity and/or action potential. Furthermore, current ANNs seem to be just complicated versions of a SINGLE neuron, or several neurons in series, not any real representation of a brain or even a neuronal cluster.

Firstly, ANNs need to be able to construct themselves relative to the complexity of the problem at hand. They should be able to create layers and nodes as new data is introduced and new patterns are discovered. Also, layers need to be more dynamic, as does connectivity among nodes. In recent years, too much effort has been put into making ANNs deeper, which is like stringing neurons in series. Yes, this allows for something approaching "memory" but neglects constructing a more natural form of pattern recognition through weights existing across connections and not the nodes themselves. As Hawkins mentions, sparse connectivity is the goal if we are going to try to mimic the brain, and this can only be done if layers aren't treated as some fixed block. Currently, there are only heuristics about how many layers and nodes should be involved, and this can't be right. Being able to construct an ANN relative to the problem at hand is another apparent potential advantage of ANNs. You could potentially have ANNs with a size or complexity that is accurately proportional to the question, as scaling or uniting these for something approaching AGI would take fewer resources.
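The "sparse connectivity" mentioned above can be made concrete with a small sketch of the sparse binary vectors Numenta's talks describe. The sizes and the overlap measure here are illustrative assumptions, not their implementation: only a small fraction of bits are active, and similarity is just the count of shared active bits.

```python
import random

# Sketch of a sparse binary vector: 40 active bits out of 2048 (~2%).
# The sizes are illustrative, not taken from the talk.
def make_sparse(size=2048, active=40, seed=None):
    """Return the set of active bit indices of a random sparse vector."""
    rng = random.Random(seed)
    return set(rng.sample(range(size), active))

def overlap(a, b):
    """Similarity between two sparse vectors = number of shared active bits."""
    return len(a & b)

a = make_sparse(seed=1)
b = make_sparse(seed=2)
print(overlap(a, a))  # identical vectors overlap fully: 40
print(overlap(a, b))  # unrelated random vectors share almost no bits
```

Because active bits are so rare, two unrelated vectors almost never collide, which is one intuition behind the noise robustness claimed in the talk.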
@dakrontu · 5 years ago
If a cortical column has most of its connections internal, with far fewer coming from outside, that has advantageous implications for building electronic equivalents.
@dakrontu · 5 years ago
Andrea Cvečić Eh???
@alexiscao8749 · 2 years ago
Why can't the camera follow the content of the presentation?
@BradCaldwellAuburn · 3 years ago
As everybody has already said, please stay on the pertinent slide while he's still talking about it. Also, Jeff is so easy to understand, and I love that he talks fast, because it makes it fun and you know exactly what he is trying to convey. Subutai, however, I wish would slow down enough to make the connection between what Jeff was talking about and whatever this sparse vector stuff he is talking about is. I know what sparse means, but not in this new context, so please define that term. Also, I know what a vector generally is (force plus direction), but he must mean something else, so please define your new meaning of vector. Also, I have a gist of what "binary" means (something to do with 2), but please clarify what your meaning of binary is in this context. For example, at 1:02:35, Subutai says, "Each learning event is like creating a sub-sample consisting of a very sparse vector." What the what?? Not knowing Subutai's specialized meanings, my brain gets an image of lots of forces and directions, and all of a sudden when learning happens the brain groups together a set (implies multiple objects) of one very rare force-and-direction vector (that's one thing, not a set). If he would just define the fancy words first, I could understand.
@egor.okhterov · 2 years ago
A vector in mathematics is not force plus direction. A vector is an ordered list of numbers. For example, (1, 2) is a vector. (2, 1) is also a vector. The vector (1, 2) is different from the vector (2, 1). You can represent many things with vectors. For example, the vector (1, 2, 7) could represent the location of my head.
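The point about ordering can be shown in a few lines of Python, with tuples standing in for vectors:

```python
# A vector as an ordered collection of numbers: order matters.
v1 = (1, 2)
v2 = (2, 1)
assert v1 != v2  # same numbers, different vectors

# A 3-component vector can encode a position, e.g. a head location.
head = (1, 2, 7)
assert len(head) == 3
```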
@afterthesmash · 4 years ago
26:00 His quick answer to this question should be part of his standard presentation order. A bit mind-blowing that Jeff thinks this particular detail can be left for another time.
@jasonsebring3983 · 5 years ago
Maybe consensus is part of the secret sauce behind being aware or "conscious". This is something that wouldn't just be invented by people without some inspiration from biology, so understanding more about our brain can't hurt.
@dakrontu · 5 years ago
It occurs to me that towards the end of puberty there is a massive culling of neuronal connections that leads from the adolescent brain to the adult brain. That culling makes things more sparse, and given the emphasis on the importance of sparseness in this video, I wonder if it has a similar relevance in improving the brain overall by culling connections. Also, does any of this shed light on sleep? Is sleep just random free cycling within the brain, or does it serve some purpose that can now be more precisely defined than the usual assumption of reinforcing wanted connections from the preceding wake period and culling the unwanted ones?
@egor.okhterov · 2 years ago
As far as I remember, sleep is needed to transfer new information from short-term memory to long-term memory. This information is also processed and repackaged along the way.
@AngelOne11 · 3 years ago
I think this is definitely helpful for OO programming and AI in the long term
@MatthiasKlees · 5 years ago
Brain jogging from one of the greatest geniuses of our times
@DeepLearningTV · 4 years ago
"Grid cells in the neocortex" is a core hypothesis behind this theory; this should be relatively easy to verify, right? I am assuming that the different types of neurons in the different parts of the brain are known. If that's the case, is this a new discovery, and if so, why was it missed so far? I realize I sound skeptical, but I am really trying to understand. Thanks Jeff and Subutai for your work :-)
@vast634 · 3 years ago
The neurons and morphology are certainly known. But how they process information, and for what purpose within the context of the whole brain, seems to be still only vaguely known.
@rickharold69 · 5 years ago
Super awesome!!
@MenGrowingTOWin · 5 years ago
It may be that AI systems will be nothing like the human brain, in the same way that jet planes are nothing like birds.
@erwinmoreno23 · 5 years ago
But they are bound by the same rules to a certain extent. So it's good to understand the principles by which they operate
@MenGrowingTOWin · 5 years ago
@@erwinmoreno23 Of course it's helpful, you are right.
@michelechaussabel732 · 5 years ago
To me, comparing the brain to a computer is like comparing life on Earth to possible life on another planet. No reason to think it's anywhere near the same.
@TheReferrer72 · 5 years ago
@@michelechaussabel732 Alien life will still be an entropy-increasing mechanism; the human brain and computers are computation substrates.
@michelechaussabel732 · 5 years ago
I'm afraid you lost me with the substrates, but thank you for the reply
@fredxu9826 · 3 years ago
Great talk. Also, the name "Subutai" is so savage
@kp2718 · 5 years ago
4:20 Then is it 75% of the human brain by volume (as on the slide) or by area? Maybe 75% of the brain's outside area, if you ask me.
@Jamie-my7lb · 4 years ago
Konrad Yeah, he meant volume.
@akompsupport · 5 years ago
Does the second speaker have a GitHub repo? Thanks in advance.
@Kaikuns · 4 years ago
You could take a look at Numenta's repo, which includes all their research work and source code for the papers that he mentions: github.com/numenta/
@manfredadams3252 · 5 years ago
Would have been nice to see more of the presentation and less of the presenter. Micro$oft must have had a middle-school intern as their media guy this day.
@johnappleseed7601 · 5 years ago
Amazing content, but the production is an embarrassment. For such a spatial topic, why isn't the camera focused on the presentation? Microsoft Research, you flunked this video; redo it with sight and sound included, thank you.
@dakrontu · 5 years ago
Yes, and the audio level could have been more consistent too.
@Theodorus5 · 3 years ago
Love Hawkins :)
@richardnunziata3221 · 5 years ago
How about some working code that people can play with, expand, and learn from, say for a game engine, simulator, or classification/segmentation...
@AleksandarKamburov · 5 years ago
For code and other information, go to numenta.org. There is a list of shortcuts on the right, under Community Resources.
@sgaseretto · 5 years ago
You can try NuPIC: github.com/numenta/nupic
@i-never-look-at-replies-lol · 6 months ago
Shit. I only have 999 brains. No wonder I'm so dumb.
@ericmanuel3201 · 5 years ago
Thinking people can know what will happen in the future is foolish! Time and time again, everything has its own time! And this is all vanity!!! All things must pass! So let us not be foolish! Thank you
@dakrontu · 5 years ago
What they were talking about was the brain predicting the future on a statistical basis, not foretelling it. Let's say you are driving a car. As your experience grows, you learn to anticipate dangers such as people stepping in front of the vehicle or other vehicles going through red lights. This anticipatory capability has great survival value for any species, for example while being chased by predators. Humans are not chased much by predators these days, but they do plenty of other things requiring anticipation of likely events.
@naptastic · 3 years ago
OK, yes, they're both very handsome men, but can you please show the slides for more than five seconds each? It would be much easier to understand the graph they're talking about if it were visible while they were talking about it.
@scottunique · 5 years ago
Someone get this man access to supercomputing, please.
@OrthodoxDAO · 5 years ago
You know he is not exactly poor, antisocial, or marginalized, don't you?
@scottunique · 5 years ago
I was actually slightly sarcastically highlighting the fact that he came there to beg for supercomputing. At least that's what I saw.
@OrthodoxDAO · 5 years ago
@@scottunique I have not watched it, but my comment was straight-faced. He can probably get the "academic packages" from the big cloud providers anytime he wants, if he is not already a recipient. And his circles are not exactly broke; even the clowns at Invacio managed to buy some xyzFLOPS of compute. The latter was an ICO of course, but if he really gets desperate...
@vast634 · 3 years ago
Large computing resources only make sense once the algorithms are nailed down. They are still in the process of researching the proper architecture. It's a misconception that just throwing more resources at the computation will magically make the AI have true intelligence. That currently works only on specific problem sets in deep learning (classifying things based on huge datasets). But they argue that a change in the architecture of neural nets is necessary.
@smyk1975 · 5 years ago
The first part of the presentation is like Hinton's Capsule Networks, except for the lack of a mathematical model and rigorous thinking.
@arunavaghatak8614 · 5 years ago
That is because both this model and Capsule Networks were inspired by the human brain. It lacks the mathematical modelling because our brains don't do the math that Capsule Networks use, either.
@hegerwalter · 5 years ago
Hinton's Capsules talk: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-6S1_WqE55UQ.html
@nickeaton1255 · 5 years ago
Thank god for 0.75x playback speed for the first guy lol
@monkyyy0 · 5 years ago
Thank god for 2x playback for everything
@OttoFazzl · 4 years ago
Usually I watch lectures at 1.5x to 2x, but not for Jeff Hawkins.
@vast634 · 3 years ago
@@monkyyy0 For an information-dense presentation such as this, 2x speed would be way too fast, unless you know the topic or don't want to catch all the details.
@koningsbruggen · 5 years ago
Is it just me, or is this the same theory as Kurzweil's Pattern Recognition Theory of Mind?
@aaronturner6760 · 3 years ago
Sales pitch to Microsoft
@somnathsikdar6657 · 3 years ago
Poor camera work.
@Turbo_Tastic · 5 years ago
Great video, but he's missing something huge... there are radio-frequency connections too: nothing physical, just parts of the brain's neurons communicating with each other wirelessly. A team of researchers studying the brain have discovered a brand new and previously unidentified form of "wireless" neural communication that self-propagates across brain tissue and is capable of leaping from neurons in one part of the brain to another, even if the connection between them has been severed. The discovery by biomedical engineering researchers at Case Western Reserve University in Cleveland, Ohio could prove key to understanding the activity surrounding neural communication, as well as specific processes and disorders in the central nervous system.
@AkashSwamyBazinga · 5 years ago
Can you please post some papers or articles about this discovery?
@Addoagrucu · 4 years ago
This is misinformation. The "wireless" communication is very local and only happens when the minute electrical current carried by the dendrites creates a minute magnetic field around the direction of the current, very minimally affecting the action potentials of nearby neurons. This process is not implicated in any major functions of the brain, and consequently all the claims you've stated are bogus.
@peterrandall9381 · 5 years ago
The vibe in this room is killer. Maybe because the speakers are a little nervous, but it makes me uncomfortable.
@deric18roshan18 · 5 years ago
Somebody tell Jeff to slow down while talking.
@afterthesmash · 4 years ago
Eventually I had to slow my playback from 1.5x to 1.25x, because he really talks fast.
@rgibbs421 · 3 years ago
and um
@kennethgarcia25 · 3 years ago
Am I the only one who finds his rapid, high-pitched voice really annoying, along with his claim to have "discovered" work that many others were primarily responsible for?