
LLM UNDERSTANDING: 30. Jackie CHEUNG "How Do We Know What LLMs Can Do? Benchmarking and Evaluation" 

Stevan Harnad

HOW DO WE KNOW WHAT LLMS CAN DO? BENCHMARKING AND EVALUATION
Jackie Chi Kit Cheung
Computer Science, McGill University
ISC Summer School on Large Language Models: Science and Stakes, June 3-14, 2024
Wed, June 12, 11am-12:30pm EDT
ABSTRACT: Conflicting claims that large language models (LLMs) “can do X”, “have property Y”, or even “know Z” have been made in the recent literature in natural language processing (NLP) and related fields, as well as in the popular media. However, unclear and often inconsistent standards for how to infer these conclusions from experimental results bring the validity of such claims into question. In this lecture, I focus on the crucial role that benchmarking and evaluation methodology in NLP plays in assessing LLMs’ capabilities. I review common practices in the evaluation of NLP systems, including types of evaluation metrics, assumptions underlying these evaluations, and the contexts in which they are applied. I then present case studies showing how less-than-careful application of current practices may result in invalid claims about model capabilities. Finally, I present our current efforts to encourage more structured reflection during the process of benchmark design and creation by introducing a novel framework, Evidence-Centred Benchmark Design, inspired by work in educational assessment.
JACKIE CHI KIT CHEUNG is an associate professor in McGill University’s School of Computer Science, where he co-directs the Reasoning and Learning Lab. He is a Canada CIFAR AI Chair and an Associate Scientific Co-Director at the Mila Quebec AI Institute. His research focuses on topics in natural language generation such as automatic summarization, and on integrating diverse knowledge sources into NLP systems for pragmatic and common-sense reasoning. He also works on applications of NLP to domains such as education, health, and language revitalization. He is motivated in particular by how the structure of the world can be reflected in the structure of language processing systems.
Porada, I., Zou, X., & Cheung, J. C. K. (2024). A Controlled Reevaluation of Coreference Resolution Models. arXiv preprint arXiv:2404.00727.
Liu, Y. L., Cao, M., Blodgett, S. L., Cheung, J. C. K., Olteanu, A., & Trischler, A. (2023). Responsible AI Considerations in Text Summarization Research: A Review of Current Practices. arXiv preprint arXiv:2311.11103.
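To make the abstract’s point about evaluation metrics and their hidden assumptions concrete, here is a minimal sketch (not from the lecture; the function names and the normalization policy are illustrative assumptions) of an exact-match accuracy scorer of the kind commonly used in NLP benchmarks:

```python
# Illustrative sketch only: a minimal exact-match accuracy metric.
# The normalization choices below are assumptions for illustration,
# not anything specified in the lecture or its readings.

def normalize(text: str) -> str:
    """Lowercase and strip surrounding whitespace and a trailing period."""
    return text.strip().strip(".").lower()

def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that match their reference after normalization."""
    assert len(predictions) == len(references)
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["Paris.", "4", "the Pacific Ocean"]
refs  = ["paris", "four", "Pacific Ocean"]
print(exact_match_accuracy(preds, refs))  # 0.333... under this normalization
```

Note how the normalization step silently encodes an assumption about what counts as correct: “Paris.” matches “paris” here but would fail under raw string equality, and “4” never matches “four”. Such seemingly minor scoring choices change the reported number, and hence the capability claim a benchmark appears to support, which is exactly the kind of methodological gap the lecture examines.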

Published: September 28, 2024
