
AI models don't hallucinate 

David Rostcheck
2.6K views

Everyone knows AI models hallucinate - except they don't. We lifted the word from cognitive science - but we lifted the wrong word. What AI models do is something different... they confabulate.

Science

Published: 11 Jul 2024

Comments: 28
@cosmoproletarian8905 12 days ago
I will use this to impress my fellow newly minted AI experts
@luszczi 12 days ago
YES. God it's one of my pet peeves. This is such a fundamental mistake: it confuses an efferent process with an afferent one.
@jasonshere 1 day ago
Good point. There are so many words and ideas catching on which originate from a faulty understanding of definitions and language.
@WaynesStrangeBrain 12 days ago
I agreed with enough of this, but can't comment on the last part, about what we can do about it. If it is a parallel to autism or memory problems, what would be the analogous cure for humans?
@davidrostcheck 12 days ago
That's an interesting question. Many autistic people suffer from sensory overload, so in that aspect it's a little like hallucination; I believe they benefit from a highly structured routine and environment (but I'm not an expert there). With memory problems, that's confabulation, so the key to combating it is increasing information density. LLMs do this via Retrieval Augmented Generation (RAG), supplying pertinent information to the model. For humans, the equivalent would be a caregiver helping to prompt with relevant memory cues. In the Amazon series Humans, there's an aging robotics scientist who suffers from memory issues; his wife has passed on, and he keeps his synthetic (android) Odie, who is well past its recycle date, because Odie remembers all the shared experiences with his wife and acts as an artificial memory system, prompting him by telling him stories so he doesn't forget her.
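
To make the RAG idea above concrete, here is a minimal Python sketch under assumed names (notes, retrieve, and build_prompt are all made up for illustration). It uses a toy keyword-overlap retriever where real systems would typically use vector embeddings, and the assembled prompt would then be sent to an LLM.

    # Minimal sketch of Retrieval Augmented Generation (RAG): supply pertinent
    # information to the model so it has less need to fill gaps by confabulating.
    # The retriever here is a toy keyword-overlap scorer; real systems usually
    # use vector embeddings. All names below are hypothetical.
    notes = [
        "Our anniversary trip to Lisbon was in May 2003.",
        "Maria's favorite flowers were white tulips.",
        "The old lab notebook is in the blue filing cabinet.",
    ]

    def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
        """Return the k documents sharing the most words with the query."""
        q = set(query.lower().split())
        ranked = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
        return ranked[:k]

    def build_prompt(query: str, documents: list[str]) -> str:
        """Assemble a prompt that grounds the model in the retrieved context."""
        context = "\n".join(f"- {d}" for d in retrieve(query, documents))
        return f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {query}"

    # The assembled prompt would then be sent to an LLM.
    print(build_prompt("What flowers did Maria like?", notes))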
@CheapSushi 12 days ago
@@davidrostcheck Great comment. That's what I was, or am, hoping self-hosted AI would be for me: a way to help me remember my experiences and understand the world around me, and to be a kind of truth North Star, an analogue to a benevolent caregiver/guard/parent/friend/etc. To me, truth (or the accuracy of my internal world model) becomes more and more important as I grow older. I also wanted to say that your point about "hallucinate" versus "confabulate" is one of those truths that really should matter, but that society will "lie" or "misinform" about. I wish ultimate truth or accuracy were more valued.
@sathishkannan6600 11 days ago
What tool did you use for captions?
@davidrostcheck 11 days ago
I used riverside.fm
@lLenn2 12 days ago
Still going to use hallucinate in my papers, mate
@davidrostcheck 12 days ago
Yes, the ship has sailed. But I think, like electrical engineers defining current as flowing opposite to the actual electron flow, or computer scientists and cognitive scientists using 'schema' differently, it's going to remain a point of friction for students in future years. ¯\_(ツ)_/¯
@lLenn2 12 days ago
@@davidrostcheck They'll get over it. What does log mean to you, eh?
@CheapSushi 12 days ago
What papers did you write?
@lLenn2 12 days ago
@@CheapSushi lol, I'm not going to dox myself
@huntermunts9660 12 days ago
For a neural network to visualize, aka hallucinate, is a dedicated visualization layer required in the design of the NN?
@davidrostcheck 12 days ago
If I understand the question correctly (let me know if not), for an actual visual hallucination, yes, you'd need a visual layer, a source of random noise merged with the visual layer, and a way to control the thresholding. Interestingly enough: that's the basic architecture of a diffusion model, the AI models we use for creating visual imagery. And if the noise vs. information density gets high, you get some trippy and often creepy hallucinations from them.
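
A rough numerical sketch of the "noise merged with a visual layer, plus thresholding" idea described above (NumPy only, not an actual diffusion model; the array size, object, and threshold are arbitrary): as the noise level rises relative to the information in the scene, a fixed perception threshold starts picking out "objects" that were never there.

    # Toy sketch: a "visual layer" with one real object, merged with random
    # noise and then thresholded. As the noise-to-information ratio rises, the
    # threshold starts detecting structure that isn't there. Not a real
    # diffusion model; sizes and threshold are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    visual_layer = np.zeros((32, 32))
    visual_layer[12:20, 12:20] = 1.0          # one genuine "object" in the scene

    def pixels_perceived(noise_level: float, threshold: float = 0.8) -> int:
        """Merge noise into the visual layer and count pixels above threshold."""
        merged = visual_layer + noise_level * rng.standard_normal(visual_layer.shape)
        return int((merged > threshold).sum())

    for noise_level in (0.1, 0.5, 1.0, 2.0):
        print(noise_level, pixels_perceived(noise_level))
    # Low noise: roughly the 64 pixels of the real object cross the threshold.
    # High noise: many spurious pixels cross it too -- the "trippy" artifacts.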
@Maluson 12 days ago
I totally agree with you. Great point.
@joshmeyer8172 12 days ago
🤔
@steen_is_adrift 12 days ago
I will never use the word confabulate. Nobody knows what that is. Everyone knows what a hallucination is. Calling it a hallucination, while not technically the correct word, will convey the correct meaning. Using the correct word is pointless if the other party doesn't understand the meaning.
@davidrostcheck 12 days ago
The problem is for practitioners in the AI field. As we build cognitive entities, the ways that we interact with them come more and more from cognitive science. If we understand that a model is confabulating, we can do things about it. All those techniques come from cognitive science, so there's a limit to how good you can be at AI without learning cog-sci, and the misaligned terms cause confusion there.
@steen_is_adrift 12 days ago
@@davidrostcheck it's a good point, but I get the feeling that anyone this would apply to would not be confused by calling it a hallucination either.
@gwills9337 11 days ago
AI researchers have been extremely cavalier and flippant in their stolen valor, stolen words, and misrepresentation of "consciousness" lmao, and the output isn't consistent or reliable.
@davidrostcheck 11 days ago
I think it's common across many sciences that researchers use terms from another field without a full understanding. Many AI researchers don't know cognitive science well. This is a handicap since many techniques, such as model prompting, are now directly taken from their human cog-sci equivalents. But LLMs do produce consistent output, provided you set their temperature (creativity) to 0.
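
A small sketch of why temperature 0 yields consistent output, using toy logits rather than a real model: sampling from softmax(logits / T) collapses to a deterministic argmax as T goes to 0, so repeated runs always pick the same token. The token names and logit values here are purely illustrative.

    # Toy sketch of sampling temperature: softmax(logits / T) sharpens as T
    # falls, and at T = 0 the choice collapses to a deterministic argmax,
    # which is why temperature 0 gives repeatable output. Toy logits only.
    import numpy as np

    rng = np.random.default_rng(0)
    tokens = ["cat", "dog", "fish"]
    logits = np.array([2.0, 1.5, 0.5])        # the model's next-token preferences

    def sample(temperature: float) -> str:
        if temperature == 0:                  # greedy decoding: always the top token
            return tokens[int(np.argmax(logits))]
        p = np.exp(logits / temperature)
        p /= p.sum()
        return tokens[int(rng.choice(len(tokens), p=p))]

    print([sample(0.0) for _ in range(5)])    # always ['cat', 'cat', 'cat', 'cat', 'cat']
    print([sample(1.0) for _ in range(5)])    # varies from run to run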
@jahyegor 13 days ago
What about visual hallucinations?
@davidrostcheck 13 days ago
They're again a sensory thresholding problem. For example, if you put someone in a sensory deprivation tank, they get no visual stimuli, so their visual system will progressively lower the noise threshold, trying to increase sensitivity, until it starts perceiving the noise in the visual system (from random firings, pulse pressure causing pressure waves through the eye, etc.) as objects.
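
A toy simulation of the threshold-lowering dynamic described in that reply (the constants are illustrative, not physiological): with no external stimulus, a detector that keeps lowering its threshold to regain sensitivity eventually starts reporting its own internal noise as perceived objects.

    # Toy sketch of the mechanism above: with no external stimulus, a detector
    # keeps lowering its threshold to regain sensitivity until internal noise
    # alone crosses it and is reported as an "object". Constants are
    # illustrative, not physiological.
    import numpy as np

    rng = np.random.default_rng(0)
    threshold = 5.0                 # initial detection threshold
    stimulus = 0.0                  # sensory deprivation: no external input

    for step in range(30):
        signal = stimulus + rng.normal(0.0, 1.0)      # only internal noise arrives
        if signal > threshold:
            print(f"step {step}: 'object' perceived (noise {signal:.2f} > threshold {threshold:.2f})")
        else:
            threshold *= 0.85       # nothing detected, so increase sensitivity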
@chepushila1 12 days ago
@@davidrostcheck What about those caused by mental disorders?