
LLM UNDERSTANDING: 29. Gary LUPYAN "What counts as understanding?" 

Stevan Harnad

WHAT COUNTS AS UNDERSTANDING?
Gary Lupyan
University of Wisconsin-Madison
ISC Summer School on Large Language Models: Science and Stakes, June 3-14, 2024
Wed, June 12, 9am-10:30am EDT
ABSTRACT: The question of what it means to understand has taken on added urgency with the recent leaps in capabilities of generative AI such as large language models (LLMs). Can we really tell from observing the behavior of LLMs whether underlying the behavior is some notion of understanding? What kinds of successes are most indicative of understanding and what kinds of failures are most indicative of a failure to understand? If we applied the same standards to our own behavior, what might we conclude about the relationship between understanding, knowing and doing?
GARY LUPYAN is Professor of Psychology at the University of Wisconsin-Madison. His work has focused on how natural language scaffolds and augments human cognition, and attempts to answer the question of what the human mind would be like without language. He also studies the evolution of language, and the ways that language adapts to the needs of its learners and users.
Hu, J., Mahowald, K., Lupyan, G., Ivanova, A., & Levy, R. (2024). Language models align with human judgments on key grammatical constructions (arXiv:2402.01676). arXiv.
Titus, L. M. (2024). Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy. Cognitive Systems Research, 83.
Pezzulo, G., Parr, T., Cisek, P., Clark, A., & Friston, K. (2024). Generating meaning: Active inference and the scope and limits of passive AI. Trends in Cognitive Sciences, 28(2), 97-112.
van Dijk, B. M. A., Kouwenhoven, T., Spruit, M. R., & van Duijn, M. J. (2023). Large Language Models: The Need for Nuance in Current Debates and a Pragmatic Perspective on Understanding (arXiv:2310.19671). arXiv.
Agüera y Arcas, B. (2022). Do large language models understand us? Medium.
Lupyan, G. (2013). The difficulties of executing simple algorithms: Why brains make mistakes computers don’t. Cognition, 129(3), 615-636.

Published: 28 Sep 2024

Comments: 4
@wwkk4964, 2 months ago
Best presentation
@GerardSans, 1 month ago
Using examples of a bicycle to understand a car just because both use the same road makes no sense. LLMs are not human and use different processes, not human cognition. These examples just don't translate to anything meaningful. Focus on how the Transformer and data-driven processes work and forget human cognition. There are no brains anywhere in an LLM.
@GerardSans, 1 month ago
Anthropomorphism and anthropocentrism are well documented, and any researcher serious about AI should be familiar with these cognitive biases, which introduce subtle but important distortions into our perception and interpretation of available information.