
What the Brain says about Machine Intelligence 

Numenta
18K views

"What the Brain says about Machine Intelligence"
Jeff Hawkins
Co-founder, Numenta
21 Nov 2014

Science

Published: 14 Jul 2024

Comments: 29
@diy-bunny · 4 years ago
After 5 years, it is still refreshing to watch again. Great work, Jeff and the Numenta team.
@michelstronguin6974 · 9 years ago
This is obviously the right approach to true machine intelligence. How sad that it is only getting a few thousand views. Great video Jeff!
@RaduOleniuc · 9 years ago
This is so interesting. I guess the key problem is training the database using a sound hierarchical structure of concepts in the first place. Learning all the ups and downs of philosophy. This is like the archaeology of knowledge (or concepts), but already knowing what to find, how the object will look in the end, what shape it will have and so on. But seeing how the machine tries to dig for it, and helping along the way. Very cool stuff. When he mentioned that "fox - rodent" analogy it blew my mind, as that, for me, was clearly a creative process.
@SuperHeroINTJ · 9 years ago
Thank you for sharing.
@dragolov · 1 year ago
Bravo! Thank you so much!
@DrPaulPhipps · 9 years ago
Years ago, I loved reading Hawkins' book "On Intelligence". Its refutation of Searle's Chinese Room argument is still the best I've heard (Dennett can take a back seat). Questions for Jeff:
NEUROSCIENCE PERSPECTIVE: D1 dopamine receptors activate before "novelty seeking" behavior. D2 dopamine receptors activate before "familiar-reward seeking" behavior. Without modeling this, wouldn't your system have severe Parkinson's? Concern about "the future" is restricted to a single time-step! (Tracking of episodic context may be maintained by the hippocampus, but that alone can't give direction to behavior.) If the basal ganglia intimidate you, then why not look at cocaine or morphine studies?
MACHINE LEARNING PERSPECTIVE: Your system is clearly doing lossy compression of a data stream. Mathematically, how do you characterize the signals kept in memory? The EEs who made the JPEG, MP3, etc. standards assumed that the brain is doing wavelet analysis. Is your system doing wavelet analysis (generally, or as a special case)? If not, how (and why) do your signals differ from wavelets?
ACADEMIC PERSPECTIVE: Are you actually skimming hundreds of abstracts? Isn't it more efficient to invite professors to lunch? Or simply barge in during their office hours! You guys know your work is important and interesting, so don't be shy.
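[As an aside on the wavelet question in the comment above (the thread's reply follows), here is a minimal sketch of what a single level of a Haar wavelet decomposition looks like. It uses plain NumPy, is not taken from NuPIC or any Numenta code, and the signal values are invented for illustration.]

import numpy as np

def haar_step(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """One level of a Haar wavelet transform: pairwise averages
    (coarse approximation) and pairwise differences (detail)."""
    x = signal.reshape(-1, 2)
    approx = x.mean(axis=1)            # low-frequency content
    detail = (x[:, 0] - x[:, 1]) / 2   # high-frequency content
    return approx, detail

# Keeping only the approximation (and discarding small detail
# coefficients) is the classic lossy-compression move the comment alludes to.
signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
approx, detail = haar_step(signal)
print(approx)  # [ 5. 11.  8.  1.]
print(detail)  # [-1. -1.  0. -1.]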
@Numenta · 9 years ago
Hi Paul, you have some great questions that I will address briefly here. Feel free to repost to our mailing lists for more detailed discussion.
Neuroscience perspective: You make some very good points about behavior. We don't currently model behavior and instead rely on classification of the internal state of models that do not generate behavior to make predictions, classifications, or compute an anomaly score. We are actively looking into how the neocortex interacts with subcortical components to generate behavior. Any progress we make or discussions with the community will be posted on the NuPIC theory mailing list (numenta.org/lists/).
Machine learning perspective: We don't currently have any sort of rigorous mathematical characterization of the current theory.
Academic perspective: The theory work happens in a bunch of different ways. A lot of the work is based on and constrained by papers and neuroscience knowledge while we try to find mechanisms that meet our functional goals. Ultimately our theory advances as we are able to reconcile functional mechanisms that accomplish specific goals with biological constraints.
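[The reply above mentions computing an anomaly score from a model's internal predictions. As a rough, minimal sketch only, not Numenta's NuPIC implementation, an HTM-style anomaly score can be taken as the fraction of currently active columns that the model had not predicted on the previous step; the column sets and numbers below are invented for illustration.]

def anomaly_score(predicted_columns: set[int], active_columns: set[int]) -> float:
    """Fraction of currently active columns that were NOT predicted.
    0.0 = input fully anticipated, 1.0 = completely unexpected."""
    if not active_columns:
        return 0.0
    unexpected = active_columns - predicted_columns
    return len(unexpected) / len(active_columns)

# Example: 40 active columns, 30 of which were predicted -> score 0.25
predicted = set(range(30)) | {100, 101}
active = set(range(40))
print(anomaly_score(predicted, active))  # 0.25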
@IanAtkinson555 · 9 years ago
Dennett's ideas of how the brain works agree very closely with this model, not in the fine detail of HTM, but in the general way that representations of sensory input or things in the world are not fundamentally different for different senses.
@DrPaulPhipps · 9 years ago
Ian Atkinson Yeah, I agree that they agree. So, specifically, I liked the way Hawkins modified the Chinese Room experiment to include a story, not just a short isolated question. That way, the system's anticipation of what comes next in the story would be a good hint that the system as a whole does "get" the very answer it gives to the eventual question.
@scientious · 7 years ago
+Paul Phipps > "Years ago, I loved reading Hawkins' book 'On Intelligence'. Its refutation of Searle's Chinese Room argument is still the best I've heard"
It doesn't seem that you actually remember what you read.
Page 16: "As for me, I think Searle had it right. When I thought through the Chinese Room argument and when I thought about how computers worked, I didn't see understanding happening anywhere."
Page 28: "As Searle showed with the Chinese Room, behavioral equivalence is not enough."
So, Hawkins doesn't refute Searle; he actually agrees with him. BTW, Searle is correct on this point. However, to show you that Hawkins does not actually grasp the argument, we have this part:
Page 16: "The ultimate defensive argument of AI is that computers could, in theory, simulate the entire brain. A computer could model all the neurons and their connections, and if it did there would be nothing to distinguish the 'intelligence' of the brain from the 'intelligence' of the computer simulation. Although this may be impossible in practice, I agree with it."
What Hawkins doesn't understand is that the brain cannot be simulated within the bounds of language theory. In other words, a Turing machine cannot simulate the brain. This isn't a matter of computational power or memory, even if we use the correct numbers rather than Kurzweil's estimates. This is the difference between a neural-based model (which he is pretty good at) and an actual brain model (which he is woefully inadequate at).
Early in the book he says: "The agenda for this book is ambitious. It describes a comprehensive theory of how the brain works. It describes what intelligence is and how your brain creates it." Really? Let's check the date on that book: 2004. That was thirteen years ago. Where is his engineering model of the brain?
In reality, his theory is fractional. It only covers a little bit of what you need to actually explain brain data modeling and processing. It is by no means a comprehensive theory. Some of Hawkins' insights are correct, but others aren't. Regardless of how highly you think of Hawkins' work, he is going to have to deal with reality when the general theory is published. In other words, a general theory already exists, but Hawkins doesn't have it and isn't close to it.
@DrPaulPhipps · 7 years ago
Certainly, Hawkins did not have it all figured out, but the idea that the cortex is especially focused on sequence prediction still persuades me, even if I am pretty ignorant, and even if most neuroscientists wouldn't consider the idea as revolutionary as Hawkins tries to make out.
I don't follow all of your reasoning, exactly. You said, "...the brain cannot be simulated within the bounds of language theory. In other words, a Turing Machine cannot simulate the brain." Can you explain what conception of "the brain" you're using that is beyond computability? Are you invoking quantum computations? Or the feeling that the brain is tapping into some amazing "uncountable" power of continuous mathematics that will never be adequately approximated with numerical methods? Or (sorry if this seems rude) are you unintentionally clinging to an essentialist notion of matter being "beyond" information, and thereby in a sacred category?
Unless we are philosophically going all the way back to subjective-objective duality (we can go there if you'd like), then all that scientists have ever uncovered is data (of finite size), and stories to organize that data. In other words, "energy", "matter", and "electromagnetic forces" are just characters in an awesome assemblage of stories we have made to organize data into patterns that seem to remain over time, as we look to acquire data that matches or does not match. Looking for the "real stuff of the Universe" is just a symbolic holdover of naive realism, the "mud" that everything must be made of.
@NicholasOsto · 9 years ago
The book On Intelligence is one of my favorite books of all time. I believe the concepts and frameworks discussed hold the most potential for AI development and animal psychology. This video is a great supplement to On Intelligence for diving deeper into the theory.
@Numenta · 9 years ago
Check out a new talk from Jeff Hawkins on What the Brain says about Machine Intelligence. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-izO2_mCvFaw.html
@profyeah25 · 4 years ago
like #319 been looking 4 this mana from heaven
@JustNow42 · 2 years ago
Copying the brain may not be a very good strategy. The brain is way more complicated than we thought at the beginning. New brain cells are being discovered and we have no idea what they do. Modeling even a single neuron requires a deep network of maybe 8 layers. And we are talking about maybe 10^20 bits necessary to match the brain's capacity. Right now we do not even have AIs that can learn from their mistakes.
@RelatedGiraffe · 8 years ago
Interesting. What are the benefits of this type of network compared with more conventional deep architectures, like for example deep belief networks or deep convolutional networks?
@Numenta · 8 years ago
+RelatedGiraffe Hello! Check out our blog post here for more info on our approach vs. others: numenta.com/blog/machine-intelligence-machine-learning-deep-learning-artificial-intelligence.html
@RelatedGiraffe · 8 years ago
Numenta Thanks for the link, I will read the blog post.
@walteralter9061 · 8 years ago
This is some smack me shit pregnant with beauty and the promise of Renaissance. My first question is how does psychological architecture derive from brain physiology? How are categories created? How are priorities created? How is inhibition mobilized? Can the brain be described as a feedback engine? I think a dandy app would be the detection of camouflaged sociopaths and sociopath networks.
@billyheng4824 · 8 years ago
Brain M still has a long way to go... but what could be missing? HARDWARE? ANALOG? Numenta is just a SW layer on top of existing HW. Should such an approach require new HW?
@IanAtkinson555 · 9 years ago
What the Brain says about Machine Intelligence.
@XxXVideoVeiwerXxX · 9 years ago
Ok, explain this to me. He says near the beginning that they want to essentially copy the neocortex and how it functions for machine intelligence. THEN, he immediately says, WE DO NOT WANT TO COPY HUMAN INTELLIGENCE. What? They want the foundation of machine intelligence to be the same as humans (neocortex) but don't want them to be the same? What?
@bandpractice · 9 years ago
The neocortex is one (very important) component of the human brain. There are hundreds of different regions in the brain, ones regulating sleep/wake cycles, breathing, hunger, fear, etc. He only wants the part that he sees as central to our higher intelligence. Having human emotions is, at least for most applications, not useful.
@XxXVideoVeiwerXxX · 9 years ago
Uh huh, so people want to make sociopathic robots still?
@walteralter9061 · 8 years ago
It's coming; get used to it. And try to suss out the part where he makes it clear that this is an OPEN SOURCE project, which at some point just might experience attempts to commandeer it for nefarious purposes, but not before the project has learned to track commandeering patterns and to find and detect their source, thus creating a nice new target-acquisition map for revelation to the general public. Think of whistleblowing carried into a proactive phase.