
Evelina Fedorenko - Neural Net language models as models of language processing in the human brain 

Conference on Language Modeling
I seek to understand how our brains understand and produce language. Patient investigations and neuroimaging studies have delineated a network of left frontal and temporal brain areas that support language processing, and work in my group has established that this “language network” is robustly dissociated both from lower-level speech perception and articulation mechanisms and from systems of knowledge and reasoning (Fedorenko et al. 2024a Nat Rev Neurosci; Fedorenko et al. 2024b Nature). The areas of the language network appear to support computations related to lexical access, syntactic structure building, and semantic composition, and the processing of individual word meanings and combinatorial linguistic processing are not spatially segregated: every language area is sensitive to both (e.g., Shain, Kean et al. 2024 JOCN). In spite of substantial progress in our understanding of the human language system, the precise computations that underlie our ability to extract meaning from word sequences have remained out of reach, in large part due to the limitations of human neuroscience approaches. But a real revolution happened a few years ago: a candidate model organism, albeit not a biological one, emerged for the study of language: neural network language models (LMs), such as GPT-2 and its successors. These models exhibit human-level performance on diverse language tasks, including ones long argued to be solvable only by humans, and often produce human-like output. Inspired by the LMs’ linguistic prowess, we tested whether the internal representations of these models are similar to the representations in the human brain when processing the same linguistic inputs, and found that LM representations do indeed predict neural responses in the human language areas (Schrimpf et al. 2021 PNAS).
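Comparisons of this kind are typically framed as an encoding model: a linear map is fit from LM-layer activations to neural responses, and predictivity is scored on held-out data. The sketch below illustrates that general idea with synthetic stand-ins for the LM embeddings and neural recordings; ridge regression and per-voxel Pearson correlation are common choices for such analyses, not necessarily the exact pipeline of Schrimpf et al.

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    # Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def brain_score(X_train, Y_train, X_test, Y_test, alpha=1.0):
    # Fit a linear map from model activations to neural responses,
    # then score held-out predictions by per-voxel Pearson r.
    W = ridge_fit(X_train, Y_train, alpha)
    pred = X_test @ W
    rs = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1]
          for v in range(Y_test.shape[1])]
    return float(np.mean(rs))

# Synthetic demo: 200 "sentences", 32-dim "LM embeddings", 50 "voxels"
# whose responses are a noisy linear function of the embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))
W_true = rng.standard_normal((32, 50))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 50))

score = brain_score(X[:150], Y[:150], X[150:], Y[150:])
print(round(score, 2))  # high held-out predictivity on this synthetic data
```

In practice the regressors would be layer activations from a pretrained LM over the same stimuli presented to participants, and the score would be noise-ceiling-corrected.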
This model-to-brain representational similarity opens many exciting doors for investigating human language processing mechanisms (for a review, see Tuckute et al. 2024 Ann Rev Neurosci). I will discuss several lines of recent and ongoing work, including a demonstration that LMs align to brains even after a relatively small amount of training (Hosseini et al. 2024 Neurobio of Lang), a closed-loop neuromodulation approach to identify the linguistic features that most strongly drive the language system (Tuckute et al. 2024 Nat Hum Beh), and work on the universality of representations across LMs and between LMs and brains (Hosseini et al., in prep.).
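One common way to quantify how "universal" representations are across models is a similarity index that is invariant to rotations of the feature space, such as linear CKA. The sketch below illustrates that family of metrics on synthetic activations; it is an illustration of the general approach, not necessarily the measure used in the in-prep. work.

```python
import numpy as np

def linear_cka(X, Y):
    # Linear centered kernel alignment between two representations
    # (rows = stimuli, columns = features):
    #   CKA = ||X'Y||_F^2 / (||X'X||_F * ||Y'Y||_F)
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))          # "model 1" activations
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
B = A @ Q                                   # rotated copy of A
C = rng.standard_normal((100, 20))          # unrelated activations

print(round(linear_cka(A, B), 2))  # 1.0 (CKA is invariant to rotation)
print(round(linear_cka(A, C), 2))  # low similarity for unrelated features
```

The rotation-invariance matters because two networks (or a network and a brain) can encode the same information in differently oriented coordinate systems.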

Published: 20 Oct 2024
