
Hallucination in Language Models is NEVER Going Away 

Tunadorable
10K subscribers
485 views

Read the paper here
arxiv.org/pdf/...
Support my learning journey on patreon!
patreon.com/Tu...
Discuss this stuff with other Tunadorks on Discord
/ discord
All my other links
linktr.ee/tuna...

Published: 1 Oct 2024

Comments: 2
@pdcx · 6 months ago
what a paper. full of useful info.
@timmygilbert4102 · 6 months ago
A simpler interpretation is that recall is overfitting the data while hallucination is overgeneralization, i.e. it's not a bug but a feature; it's working as intended. Separating recall from generalization is probably the way to solve the issue, which is what RAG does. Overgeneralization is key for creativity and reasoning, because you apply known patterns and extrapolate, i.e. hallucinate. Categorizing types of hallucination is probably better: distinguishing the contexts where it's useful from literal failures, like hallucinations caused by collisions between homonymous concepts, such as "type safety" in programming vs. in society. These are the equivalent of the mangled hands ✋ in generative art. I call them continuity knots.
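The commenter's point that RAG separates recall from generalization can be illustrated with a minimal sketch: recall is handled by verbatim retrieval over a fixed corpus, and only the generation step is left free to generalize over the retrieved context. The corpus, the bag-of-words scorer, and the `generate` stub below are illustrative assumptions for demonstration, not any specific system's API.

```python
# Minimal sketch: retrieval handles recall (exact passages from a corpus),
# while generation handles generalization, conditioned on what was recalled.
from collections import Counter
import math

# Toy in-memory corpus standing in for a real document store (assumption).
CORPUS = [
    "Type safety in programming restricts operations to compatible data types.",
    "Retrieval-augmented generation grounds model outputs in retrieved documents.",
    "Overgeneralization applies a learned pattern beyond the data that supports it.",
]

def bow(text: str) -> Counter:
    """Bag-of-words term counts; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Recall step: return the k most similar corpus passages verbatim."""
    q = bow(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, bow(doc)), reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Generalization step: placeholder for an LLM call that is conditioned
    on the retrieved context, so factual recall stays grounded."""
    return f"Answer to {query!r} using context: {context[0]}"

if __name__ == "__main__":
    question = "What does type safety mean in programming?"
    ctx = retrieve(question)
    print(generate(question, ctx))
```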