Hi, I'm David Rostcheck, American technologist from Dallas, Texas. Originally trained in physics, currently working in IT, cross-trained in biology and cognitive science. I talk about AI alignment and solving complex problems. My views are my own. See davidrostcheck.com.
Nahh, I call it a hangover... you know, when he's waking up from months of being processed and there's too much light. It uses words, but they don't really match reality.
It's always tomorrow, it's always in the future... 😅😅😅 It's the same hype again and again. What they don't say is that they're burning tons of money, and they're losing a ton of money because they can't make reliable software 😅
Well, to be fair, OpenAI is collecting quite a lot of revenue ($2B in Dec 2023, estimated $3.4B as of July 2024). It's just that training their models, per Aschenbrenner's paper, currently also costs about that much, so... 😅
Good to know. I don’t have a background in cognitive science or ML/DL, but from the little bit of research I have done, I felt that these LLMs didn’t “hallucinate”; I just didn’t know what to call it. I’ll research more on what you’ve provided, sir. Thanks for this.
AI researchers have been extremely cavalier and flippant in their stolen valor, stolen words, and misrepresentation of “consciousness” lmao, and the output isn’t consistent or reliable.
I think it's common across many sciences that researchers borrow terms from another field without fully understanding them. Many AI researchers don't know cognitive science well. This is a handicap, since many techniques, such as model prompting, are now taken directly from their human cog-sci equivalents. But LLMs do produce consistent output, provided you set their temperature (creativity) to 0.
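The temperature point can be illustrated with a minimal sketch of token sampling (the logit values here are hypothetical, not from any real model): at temperature 0, sampling degenerates to greedy argmax, so repeated runs always pick the same token.

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits.

    temperature == 0 degenerates to greedy argmax (deterministic);
    higher temperatures flatten the distribution ("more creative").
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling, then sample from the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy logits for four candidate tokens (hypothetical values).
logits = [2.0, 1.0, 0.5, 0.1]

# At temperature 0, every call returns the same index.
greedy = {sample_token(logits, 0) for _ in range(100)}
print(greedy)  # {0}
```

With a nonzero temperature, repeated calls can return different indices, which is why "creative" settings produce inconsistent output.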
Yes, the ship has sailed. But I think, like electrical engineering defining current as the opposite of the actual flow, or computer scientists and cognitive scientists using 'schema' differently, it's going to remain a point of friction for students in future years. ¯\_(ツ)_/¯
I will never use the word confabulate. Nobody knows what that is. Everyone knows what a hallucination is. Calling it a hallucination, while not technically the correct word, conveys the intended meaning. Using the correct word is pointless if the other party doesn't understand it.
The problem is for practitioners in the AI field. As we build cognitive entities, the ways we interact with them come more and more from cognitive science. If we understand that a model is confabulating, we can do things about it. All those techniques come from cognitive science, so there's a limit to how good you can be at AI without learning cog-sci, and the misaligned terms cause confusion there.
I agree with most of this, but I can't comment on the last part, about what we can do about it. If it is a parallel to autism or memory problems, what would be the analogous cure for humans?
That's an interesting question. Many autistic people suffer from sensory overload, so in that aspect it's a little like hallucination; I believe they benefit from a highly structured routine and environment (but I'm not an expert there). Memory problems are the confabulation parallel, so the key to combating it is increasing information density. LLMs do this via Retrieval Augmented Generation (RAG), supplying pertinent information to the model. For humans, the equivalent would be a caregiver helping to prompt with relevant memory cues. In the Amazon series Humans, there's an aging robotics scientist who suffers from memory issues; his wife has passed on, and he keeps his synthetic (android) Odie, who is well past his recycle date, because Odie remembers all the shared experiences with his wife and acts as an artificial memory system, prompting him with stories so he doesn't forget her.
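The RAG idea can be sketched in a few lines, assuming a toy corpus and a placeholder retrieval scheme (keyword overlap here, where real systems use embedding similarity): retrieve the most relevant note and prepend it to the prompt, raising the information density the model works from.

```python
import re

# Hypothetical "memory notes" standing in for a real document store.
notes = [
    "Anniversary trip to Lisbon in 1998, dinner by the harbor.",
    "Grandson's graduation, June 2015, it rained all afternoon.",
    "She loved gardening; the roses by the gate were her favorite.",
]

def words(text):
    """Lowercase the text and split it into bare words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query, documents):
    """Return the document sharing the most words with the query.
    A real RAG pipeline would use embedding similarity instead."""
    q = words(query)
    return max(documents, key=lambda d: len(q & words(d)))

def augmented_prompt(question):
    """Prepend the retrieved context to the question, RAG-style."""
    context = retrieve(question, notes)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

print(augmented_prompt("Did she enjoy gardening roses?"))
```

The augmented prompt grounds the model in retrieved facts, the same way a caregiver's memory cues ground a person, reducing the pressure to confabulate.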
If I understand the question correctly (let me know if not), for an actual visual hallucination, yes, you'd need a visual layer, a source of random noise merged with the visual layer, and a way to control the thresholding. Interestingly enough: that's the basic architecture of a diffusion model, the AI models we use for creating visual imagery. And if the noise vs. information density gets high, you get some trippy and often creepy hallucinations from them.
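The noise-merged-with-a-visual-layer idea can be sketched as the forward (noising) half of a diffusion process, using a hypothetical 1-D "image" in place of real pixels: each step blends the signal with Gaussian noise, and as the noise level rises, the original structure washes out.

```python
import random

random.seed(0)

# A toy 1-D "image": a bright bar on a dark background (hypothetical data).
image = [0.0] * 8 + [1.0] * 4 + [0.0] * 8

def add_noise(pixels, noise_level):
    """Blend signal with Gaussian noise: x = sqrt(1-b)*x0 + sqrt(b)*eps.
    This mirrors the forward (noising) step of a diffusion model."""
    keep = (1 - noise_level) ** 0.5
    mix = noise_level ** 0.5
    return [keep * p + mix * random.gauss(0, 1) for p in pixels]

for level in (0.0, 0.5, 0.99):
    noisy = add_noise(image, level)
    # Crude signal measure: mean inside the bright bar minus mean outside.
    bar = sum(noisy[8:12]) / 4 - sum(noisy[:8] + noisy[12:]) / 16
    print(f"noise={level:.2f}  bar contrast={bar:+.2f}")
```

A diffusion model's generator runs this in reverse, denoising step by step; when the noise-to-information ratio stays high, the "objects" it recovers are essentially structured noise, which is where the trippy imagery comes from.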
They're again a sensory thresholding problem. For example, if you put someone in a sensory deprivation tank, they get no visual stimuli so their visual system will progressively lower the noise threshold, trying to increase sensitivity, until it starts perceiving the noise in the visual system (from random firings, pulse pressure causing pressure waves through the eye, etc) as objects.
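The threshold-lowering dynamic can be simulated with a toy adaptive detector (all the numbers here are hypothetical, chosen only for illustration): with no external stimulus arriving, it keeps lowering its detection threshold to regain sensitivity, until baseline internal noise starts crossing it and registering as "objects".

```python
import random

random.seed(1)

def deprivation_run(steps=60, start_threshold=5.0):
    """Simulate a sensory-deprived detector. No external stimulus ever
    arrives, so each quiet step it lowers its threshold to regain
    sensitivity; eventually internal noise (random firings, pulse
    pressure through the eye) crosses it and gets 'perceived'."""
    threshold = start_threshold
    percepts = []
    for step in range(steps):
        internal_noise = abs(random.gauss(0, 1))  # baseline neural noise
        if internal_noise > threshold:
            percepts.append((step, internal_noise))  # phantom percept
        else:
            threshold *= 0.9  # nothing seen: crank up the sensitivity
    return percepts

events = deprivation_run()
print(f"first phantom percept at step {events[0][0]}, {len(events)} total")
```

The detector never hallucinates early, while the threshold is still high; the phantom percepts only appear once deprivation has driven the threshold down into the noise floor.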
Admittedly I'm not the most active YouTuber, heh. I work in technology, mostly AI/ML, but I have a lot of cognitive science background too. I got into personality analysis when working on university admissions systems that used AI to draw insight from student applications. I have been publishing in other media (see davidrostcheck.com/, most recently in AI alignment and aggregate intelligence), but I'm getting back into video now.
Thank you so much for this, David. I'm a psychology undergraduate trying to get into data science. The content is really comprehensive. I'm really intrigued by your in-depth knowledge of both NLP and personality research. On a side note, your take on personality research is remarkably similar to what Jordan Peterson has been talking about in his lectures! :D
Thanks, Lai. The personality research is definitely very informed by Dr. Peterson's "Personality and its Transformations" class; I used that as a roadmap/overview to get started when I was learning it, augmented with some other sources (my wife's background is in psychology, so that was also very helpful!)
Thanks for the presentation! I was wondering if the personality results would be different if someone wrote in another language. For example, if someone learned a second language, and so has a more limited vocabulary in that second language, and used only that second language as input to the IBM Watson Personality Insights test, would that affect the outcome compared to using their native language?
Thanks! That's a great question. It might, although you'd hope that your writings in different languages would produce similar output if you were sufficiently fluent. I happen to be bilingual, so I'll try this and let you know.