
A Fruitful Reciprocity: The Neuroscience-AI Connection 

MITCBMM

Dan Yamins, Stanford University
Abstract: The emerging field of NeuroAI has leveraged techniques from artificial intelligence to model brain data. In this talk, I will show that the connection between neuroscience and AI can be fruitful in both directions. Towards "AI driving neuroscience", I will discuss a new candidate universal principle for functional organization in the brain, based on recent advances in self-supervised learning, that explains both fine details and large-scale organizational structure in the visual system, and perhaps beyond. In the direction of "neuroscience guiding AI", I will present a novel cognitively grounded computational theory of perception that generates robust new learning algorithms for real-world scene understanding. Taken together, these ideas illustrate how neural networks optimized to solve cognitively informed tasks provide a unified framework for both understanding the brain and improving AI.
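One family of the self-supervised objectives mentioned in the abstract is contrastive: instead of predicting category labels, the network is trained so that two augmented views of the same image agree in embedding space while different images are pushed apart. The sketch below is a generic InfoNCE-style loss given purely for illustration; it is an assumed example and not necessarily the specific objective used in the work presented in the talk.

```python
# Minimal sketch of a contrastive self-supervised loss (illustrative only,
# not the talk's actual objective): two augmented "views" of the same image
# are pulled together in embedding space, other images in the batch pushed apart.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise similarities
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for an encoder's output.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```

Note that no category labels appear anywhere in the objective; the "labels" passed to the cross-entropy are just indices identifying which pair of views came from the same image.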
Bio: Dr. Yamins is a cognitive computational neuroscientist at Stanford University, an assistant professor of Psychology and Computer Science, a faculty scholar at the Wu Tsai Neurosciences Institute, and an affiliate of the Stanford Artificial Intelligence Laboratory. His research group focuses on reverse engineering the algorithms of the human brain to learn how our minds work and build more effective artificial intelligence systems. He is especially interested in how brain circuits for sensory information processing and decision-making arise by optimizing high-performing cortical algorithms for key behavioral tasks. He received his AB and PhD degrees from Harvard University, was a postdoctoral researcher at MIT, and has been a visiting researcher at Princeton University and Los Alamos National Laboratory. He is a recipient of an NSF Career Award, the James S. McDonnell Foundation award in Understanding Human Cognition, and the Sloan Research Fellowship. Additionally, he is a Simons Foundation Investigator.

Published: 16 Sep 2024

Comments: 11
@GiorgiGigauri-d4o · 19 days ago
AI videos' moment-to-moment switching looks exactly like how DMT entities change their forms and shapes.
@willd1mindmind639 · 1 year ago
I think this way of looking at the brain to model computer neural networks omits the key difference between brains and computers. Brains have discreteness built in, which makes learning to identify patterns and shapes, along with the relationships between them, much easier. Computers have no intrinsic means of generating discrete elements to distinguish one element from another, such as in a collection of pixels. Therefore a computer can never match the way the brain learns, because it lacks discrete data encoding based on biomolecular values. (To see this best, look at the cells in the skin of a camouflaging octopus.)

So the fundamental behavior of a computer neural network is to build a model that approximates the base classifier or set of classifiers (dog, cat, human) that you want to use for identification, because without that base classifier there is no way to identify anything in a computer imaging pipeline. That is why unsupervised learning doesn't work: there are no base models to compare against. This is where the contrastive approach seems to work, but even there it doesn't have the fidelity and flexibility of the way human brains work. Local aggregation is a mathematical approximation totally different from how biological neural networks work. A child will still be able to distinguish two dogs based on the type of fur, the color of fur, and other discrete characteristics that a computer neural network has no way of understanding innately, because these unsupervised models are still generalizing a high-level classifier, such as "dog", rather than really understanding all the characteristics and elements that make up a dog: legs, tail, fur, ears, snout, tongue, etc.

Ultimately, all computer neural networks operate on a mathematical model that tries to generate discreteness through classifiers based on computational processing. That imposes a cost that doesn't exist in biology, at a far lower degree of fidelity and detail. Brains don't have built-in, previously trained classifiers for things.
@doudouban · 10 months ago
A child can see, touch, and smell with live feedback, while AI is facing cold image data. I think if we gave it a fully functioning body and improved algorithms, machines might learn much faster and could be improved much faster.
@willd1mindmind639 · 10 months ago
@doudouban It is a difference in how data is encoded. In the brain, for example, each color captured by the retina has a specific, discrete molecular encoding separating it from other colors, which means the visual image in the brain is a collection of multiple networks of these discrete low-level molecular values. No "work" is required to distinguish one color from another, or one "feature" from another, based on those values. In computer neural networks, by contrast, everything is a number, so you have to do work to convert those collections of numeric values into some kind of "distinct" features. Most of the reason current neural network frameworks still use pre-existing encoding formats for imagery is that they are designed to be portable and to operate on existing data formats; the other reason is that algorithms like convolutions need pixels in order to work.
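As a concrete, purely illustrative example of the point above (an assumed sketch, not something shown in the video): to a computer an image arrives as an undifferentiated array of numbers, and a "feature" such as an edge only exists after an explicit computation like a pixel-level convolution.

```python
# Illustrative sketch: pixels are just numbers, and "features" only appear
# after explicit computation such as a convolution over those pixels.
import torch
import torch.nn.functional as F

image = torch.rand(1, 1, 8, 8)                # a tiny grayscale image: raw numbers
edge_kernel = torch.tensor([[[[-1., 0., 1.],  # a hand-written vertical-edge filter
                              [-1., 0., 1.],
                              [-1., 0., 1.]]]])
features = F.conv2d(image, edge_kernel, padding=1)
print(image[0, 0, :3, :3])     # pixel values: undifferentiated numbers
print(features[0, 0, :3, :3])  # "edges" exist only because we computed them
```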
@narenmanikandan4261 · 5 months ago
@willd1mindmind639 I see what @doudouban is saying in relation to your earlier comment. If we gave the model the capability to sense things physically, that would greatly increase the learning speed, since a large share of AI's use cases rely on physical interaction (such as the five senses). Then, I guess, the real challenge lies in data that isn't inherently physical (such as code and text).
@AlgoNudger · 1 year ago
Thanks.
@hyphenpointhyphen · 1 year ago
I like the parsimony approach. Not sure if I get this right, but couldn't a working type of memory then selectively grant access to lower-level *-topic maps in parallel, for feedback in so-called higher brain functions? The foundational model would deliver the mappings and basic functionality for higher brain functions to access and optimize (learn) target functions, whichever are useful in a social context, and thus, in light of evolution, stabilize genetics. A few more months and CAPTCHAs won't work anymore. If those evolutionary parameters are hard-coded, shouldn't there be genes, markable/knockable during development, that determine the connection strength?
@jerryzhang3506 · 1 year ago
👏👏👏
@MaudWinston-t8n · 4 days ago
Jones Daniel Thomas Frank Thompson George
@richardnunziata3221 · 1 year ago
Not unlike the evolution of eye saccades.
@hyphenpointhyphen · 1 year ago
Care to explain? You mean as error correction of flow?