David Rostcheck
Hi, I'm David Rostcheck, American technologist from Dallas, Texas. Originally trained in physics, currently working in IT, cross-trained in biology and cognitive science. I talk about AI alignment and solving complex problems. My views are my own. See davidrostcheck.com.
Comments
@pwreset 5 days ago
Nahh, I call it a hangover... you know when he's waking up from months of being processed and there's too much light. It uses words but they don't really match reality.
@noway8233 7 days ago
It's always tomorrow, it's always in the future... 😅😅😅 It's the same hype again and again. What they don't say is that they're burning tons of money, and losing a ton of it, because they can't make reliable software 😅
@davidrostcheck 7 days ago
Well, to be fair, OpenAI is collecting quite a lot of revenue ($2B in Dec 2023, an estimated $3.4B as of July 2024). It's just that training their models, per Aschenbrenner's paper, currently also costs about that much, so... 😅
@JohnathanDHill 13 days ago
Good to know. I don’t have a background in Cognitive Science or ML/DL, but from the little bit of research I have done I felt that these LLMs didn’t “hallucinate,” though I didn’t know what to call it. I’ll research more on what you’ve provided, sir. Thanks for this.
@jasonshere 24 days ago
Good point. There are so many words and ideas catching on which originate from a faulty understanding of definitions and language.
@sathishkannan6600 1 month ago
What tool did you use for captions?
@davidrostcheck 1 month ago
I used riverside.fm
@MrBratkenSolov 20 days ago
It's called diarization. You can also run it locally with whisperx, for example.
@gwills9337 1 month ago
AI researchers have been extremely cavalier and flippant in their stolen valor, stolen words, and misrepresentation of “consciousness” lmao, and the output isn’t consistent or reliable.
@davidrostcheck 1 month ago
I think it's common across many sciences that researchers use terms from another field without fully understanding them. Many AI researchers don't know cognitive science well. This is a handicap, since many techniques, such as model prompting, now come directly from their human cog-sci equivalents. But LLMs do produce consistent output, provided you set their temperature (creativity) to 0.
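The temperature point can be sketched with a toy sampler (purely illustrative code, not any particular vendor's API; the function name is mine): temperature rescales the logits before softmax, and temperature 0 degenerates to greedy argmax, which is exactly what makes the output repeatable from run to run.

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    """Sample a token index from logits with temperature scaling.

    temperature == 0 is treated as greedy decoding (argmax), which
    makes the choice deterministic and repeatable.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (numerically stabilized).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]
# At temperature 0, every call picks the highest-logit token.
assert all(sample_token(logits, 0) == 0 for _ in range(10))
```

At higher temperatures the flattened distribution lets lower-logit tokens win some draws, which is where run-to-run variation comes from.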
@lLenn2 1 month ago
Still going to use hallucinate in my papers, mate
@davidrostcheck 1 month ago
Yes, the ship has sailed. But I think, like electrical engineering defining conventional current as the opposite of the actual electron flow, or computer scientists and cognitive scientists using 'schema' differently, it's going to remain a point of friction for students in future years. ¯\_(ツ)_/¯
@lLenn2 1 month ago
@@davidrostcheck They'll get over it. What does log mean to you, eh?
@lLenn2 1 month ago
@@CheapSushi lol, I'm not going to dox myself
@steen_is_adrift 1 month ago
I will never use the word confabulate. Nobody knows what that is. Everyone knows what a hallucination is. Calling it a hallucination while not technically the correct word will convey the correct meaning. Using the correct word is pointless if the other party doesn't understand the meaning.
@davidrostcheck 1 month ago
The problem is for practitioners in the AI field. As we build cognitive entities, the ways that we interact with them come more and more from cognitive science. If we understand that a model is confabulating, we can do things about it. All those techniques come from cognitive science, so there's a limit to how good you can be at AI without learning cog-sci, and the misaligned terms cause confusion there.
@steen_is_adrift 1 month ago
@@davidrostcheck It's a good point, but I get the feeling that anyone this would apply to would not be confused by calling it a hallucination either.
@joshmeyer8172 1 month ago
🤔
@luszczi 1 month ago
YES. God it's one of my pet peeves. This is such a fundamental mistake: it confuses an efferent process with an afferent one.
@lumipakkanen3510 1 month ago
AI models are children with dementia, gotcha, thanks!
@davidrostcheck 1 month ago
😆
@Maluson 1 month ago
I totally agree with you. Great point.
@cosmoproletarian8905 1 month ago
I will use this to impress my fellow newly minted AI experts.
@bigutubefan2738 1 month ago
Pedantic much?
@WaynesStrangeBrain 1 month ago
I agreed with enough of this, but I can't comment on the last part, about what we can do about it. If it's a parallel to autism or memory problems, what would be the analogous cure for humans?
@davidrostcheck 1 month ago
That's an interesting question. Many autistic people suffer from sensory overload, so in that aspect it's a little like hallucination; I believe they benefit from a highly structured routine and environment (but I'm not an expert there). With memory problems, that's confabulation, so the key to combating it is increasing information density. LLMs do this via Retrieval-Augmented Generation (RAG), supplying pertinent information to the model. For humans, the equivalent would be a caregiver helping to prompt with relevant memory cues. In the Amazon series Humans, there's an aging robotics scientist who suffers from memory issues; his wife has passed on, and he keeps his synthetic (android) Odie, who is well past his recycle date, because Odie remembers all the shared experiences with his wife and helps act as an artificial memory system, prompting him by telling him stories so he doesn't forget her.
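The RAG idea described above can be sketched in a few lines (all names and the bag-of-words retriever here are illustrative; production systems typically retrieve by embedding similarity, not word overlap): find the stored note most relevant to the question and prepend it to the prompt as context.

```python
import re

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(query, notes):
    """Return the stored note sharing the most words with the query."""
    q = tokens(query)
    return max(notes, key=lambda n: len(q & tokens(n)))

def build_prompt(query, notes):
    """Prepend the best-matching note as context, RAG-style."""
    return f"Context: {retrieve(query, notes)}\nQuestion: {query}"

notes = [
    "We met your wife Mary at the lake house in 1974.",
    "The garden roses bloom in June.",
]
print(build_prompt("Where did I meet my wife?", notes))
```

The model then answers from the supplied context instead of confabulating a plausible-sounding memory, which is the same job the caregiver's memory cues do for a person.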
@huntermunts9660 1 month ago
For a neural network to visualize, aka hallucinate, is a dedicated visualization layer required in the design of the NN?
@davidrostcheck 1 month ago
If I understand the question correctly (let me know if not), for an actual visual hallucination, yes, you'd need a visual layer, a source of random noise merged with the visual layer, and a way to control the thresholding. Interestingly enough: that's the basic architecture of a diffusion model, the AI models we use for creating visual imagery. And if the noise vs. information density gets high, you get some trippy and often creepy hallucinations from them.
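That three-part architecture can be caricatured in a few lines (purely illustrative; real diffusion models learn to remove the noise with a trained network, which is omitted here): a visual layer, a noise source blended into it, and a threshold that decides what counts as "seen."

```python
import random

def noisy_layer(image, noise_level, rng=None):
    """Blend a 'visual layer' with Gaussian noise, diffusion-style.

    noise_level=0 returns the image untouched; noise_level=1 returns
    pure noise. In between, structure fades as noise dominates.
    """
    rng = rng or random.Random(0)
    return [
        (1 - noise_level) * pixel + noise_level * rng.gauss(0, 1)
        for pixel in image
    ]

def threshold(layer, cutoff):
    """Binarize the layer: 'perceived' pixels are those above cutoff."""
    return [1 if v > cutoff else 0 for v in layer]

image = [0.0, 0.0, 1.0, 1.0]  # a tiny 'edge' pattern
print(threshold(noisy_layer(image, 0.0), 0.5))  # → [0, 0, 1, 1]
```

Raise `noise_level` toward 1 and the thresholded output stops tracking the original pattern: that is the toy-model analogue of the trippy artifacts a diffusion model produces when noise overwhelms the information.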
@jahyegor 1 month ago
What about visual hallucinations?
@davidrostcheck 1 month ago
They're again a sensory thresholding problem. For example, if you put someone in a sensory deprivation tank, they get no visual stimuli so their visual system will progressively lower the noise threshold, trying to increase sensitivity, until it starts perceiving the noise in the visual system (from random firings, pulse pressure causing pressure waves through the eye, etc) as objects.
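That threshold-lowering story simulates nicely in a few lines (a toy model; every parameter name and value is mine, not from any literature): a detector that grows more sensitive whenever nothing is detected will eventually "perceive" its own internal noise.

```python
import random

def steps_until_false_positive(noise_std=0.05, start_threshold=1.0,
                               decay=0.9, seed=0, max_steps=1000):
    """Count steps of sensory deprivation until internal noise alone
    crosses the detection threshold.

    With no external stimuli, the threshold decays each step (the
    system 'turns up the gain') until random firings register as a
    perceived object -- the false positive.
    """
    rng = random.Random(seed)
    threshold = start_threshold
    for step in range(1, max_steps + 1):
        if abs(rng.gauss(0, noise_std)) > threshold:
            return step  # noise perceived as a signal
        threshold *= decay  # no input: increase sensitivity
    return None

print(steps_until_false_positive())
```

With these illustrative numbers, the threshold (decaying by 10% per step) drops into the noise band within a few dozen steps, so a false perception is essentially guaranteed; only its timing varies with the seed.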
@chepushila1 1 month ago
@@davidrostcheck What about those caused by mental disorders?
@werthersoriginal 1 month ago
Welcome back after 6 years! Do you still do work in Personality Analysis?
@davidrostcheck 1 month ago
Admittedly I'm not the most active YouTuber, heh. I work in technology, mostly AI/ML, but I have a lot of cognitive science background too. I got into personality analysis while working on university admissions systems that used AI to draw insight from student applications. I have been publishing in other media (see davidrostcheck.com/, most recently on AI alignment and aggregate intelligence), but I'm getting back into video now.
@MansSuperPower 4 years ago
Thank you.
@laimeilin6708 5 years ago
Thank you so much for this David. I'm a psychology undergraduate trying to get into data science. The content is really comprehensive. I'm really intrigued by your in-depth knowledge on both NLP and personality research. On a side note, your take on personality research is remarkably similar to what Jordan Peterson has been talking about during his lecture! :D
@davidrostcheck 5 years ago
Thanks, Lai. The personality research is definitely very informed by Dr. Peterson's "Personality and its Transformations" class; I used that as a roadmap/overview to get started when I was learning it, augmented with some other sources (my wife's background is in Psychology, so that was also very helpful!)
@laimeilin6708 5 years ago
@@davidrostcheck No wonder, haha! Plsss keep up the good work. :P
@davidrostcheck 6 years ago
Presentation deck is available here: www.slideshare.net/DavidRostcheck/nlp-and-personality-analysis
@davidrostcheck 6 years ago
Version with improved audio available here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-MJw1AgfbB_4.html
@michaelwarner1215 6 years ago
Thanks for the presentation! I was wondering whether the personality results would differ if someone wrote in another language. For example, if someone learned a second language, and so has a more limited vocabulary in it, and only used that second language as input to the IBM Watson personality insight test, would that affect the outcome compared to using their native language?
@davidrostcheck 6 years ago
Thanks! That's a great question. It might, although you'd hope that your writings in different languages would produce similar output if you were sufficiently fluent. I happen to be bilingual, so I'll try this and let you know.
@b3armonk 6 years ago
I watched many first-principles presentations and this is the only one I understood. A bit more energy might be great.
@user-gn1ms9et5c 6 years ago
Useful!