Stevan Harnad
Humanity's greatest crime against humanity is the Shoah (so far). But the Eternal Treblinka we inflict, gratuitously, on nonhuman animals, every day, everywhere, is humanity's greatest crime. It has to stop. It has to stop.
LLM UNDERSTANDING: 27. Michael LEVIN
1:28:42
3 months ago
LLM Understanding: 24. Panel4
1:17:28
3 months ago
LLM Understanding: 23. David CHALMERS
1:28:51
3 months ago
Comments
@lionhuang9209 4 days ago
Very informative!
@lionhuang9209 4 days ago
Where can we download the slides?
@LOGICALGUY-jm5fu 12 days ago
Sir Stevan Harnad, can you do a podcast on consciousness with another YouTuber called Skeptico?
@brendawilliams8062 13 days ago
Thank you
@JustNow42 28 days ago
So this stepwise progress is akin to steps of time, except for the size of the steps and their effect. We actually do not know much about time: what is the step size (quantum, but surely random?), or maybe it is continuous? And above all, what is doing the stepping? It is connected to gravity, perhaps even such that space is the emergent part and time creates causality?
@faster-than-light-memes 29 days ago
This is remarkable
@GerardSans 1 month ago
Using examples of a bicycle to understand a car just because both use the same road makes no sense. LLMs are not human and use different processes, not human cognition. These examples just don't translate to anything meaningful. Focus on how the Transformer and data-driven processes work and forget human cognition. There are no brains anywhere in an LLM.
@GerardSans 1 month ago
Anthropomorphism and anthropocentrism are well documented, and any researcher serious about AI should be familiar with these cognitive biases, which introduce subtle but important distortions into our perception and interpretation of available information.
@GerardSans 1 month ago
I disagree that science doesn't know whether AI has consciousness. It's well established that AI has no "intelligence", "reasoning" or even "understanding" beyond its very well defined architecture. Pretending otherwise is a fallacy and falls right under anthropomorphic biases. AI is data driven, and its capabilities and failure modes are well documented in the scientific literature. None of its underlying building blocks or algorithms point to any similarity with human cognition, even for language as defined by NLP or word embeddings, which go through the Transformer architecture in predefined steps, all well known and described down to individual operations. The claims made by the guests are completely out of place, out of touch with current understanding, and lean heavily on unnecessary speculation. We do know, and the answer is no: AI systems don't "think", "understand" or "reason" in any meaningful way comparable to human cognition. This is 100% understood and verifiable.
@HoriaCristescu 2 months ago
I think the best counter to "Stochastic Parrots" is to remember that LLMs perform search. In pre-training they learn from past human experience and searches, where humans try to solve various problems. But after deployment, LLMs interact with and search in the real world. AlphaZero was a model that showed the unbounded capability of search to go beyond human level. Search is the underlying activity that creates novel discoveries, but it is necessarily an environment-based activity.
@lyeln 2 months ago
Please someone present him with the concept of agents, the basics of the transformer and why it's RADICALLY different from word2vec, and the new findings from Anthropic.
@VikrantSingh-se2zb 2 months ago
Thanks for the wonderful presentation.
@lyeln 2 months ago
I think you need to do the tests again on Anthropic's models (Opus and Sonnet 3.5)
@wwkk4964 2 months ago
Best presentation
@GerardSans 2 months ago
It's surprising that these findings are not common knowledge to the author and the linguistics community. They lack a clear understanding of Transformers and their inner workings.
@forheuristiclifeksh7836 3 months ago
1:00
@wwkk4964 3 months ago
Just riddled with out-of-date and fallacious reasoning.
@asafh04 3 months ago
Terrific talk
@psyofinfo1307 3 months ago
The presentation was delivered with great fluency and skilful storytelling. Nick is a wonderful speaker. Greatly appreciated!
@GerardSans 3 months ago
I really liked how you connected language models to broader information-theoretic principles. Just a couple of thoughts on using perplexity to probe grammar in LLMs. It's a cool idea, but I'm not sure it gets at the heart of the issue. Remember, LLMs are amazing at mimicking patterns, but that doesn't mean they actually "understand" grammar the way we do. Their fluency is kind of superficial: they're basically just repeating what they've seen in the training data. They don't have a consistent internal "world model" to build a coherent grammar on. So the "grammar" they seem to have is probably fragmented and inconsistent across their latent space, which makes it really hard to generalize and compare across models.
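For readers wondering what "using perplexity to probe grammar" amounts to in practice, here is a minimal sketch. The toy log-probabilities are made up for illustration; in a real probe they would come from a language model's per-token scores:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that treats a sentence as grammatical assigns its tokens
# higher probability (less negative log-probs), hence lower perplexity.
grammatical = [math.log(p) for p in (0.5, 0.4, 0.6)]
ungrammatical = [math.log(p) for p in (0.1, 0.05, 0.2)]
print(perplexity(grammatical) < perplexity(ungrammatical))  # True
```

A grammar probe then compares perplexity across minimal pairs (e.g. "the keys are" vs. "the keys is") and asks whether the model consistently prefers the grammatical variant.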
@fhsp17 3 months ago
The brain operates as a hierarchical Bayesian inference system, continuously minimizing free energy across nested Markov blankets to perceive and interact with a branching universe. The complexity and dynamism of this process lead to computational irreducibility, where the exact state of the system cannot be simplified without performing the entire computation.
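The variational free energy the comment invokes has a standard textbook form (this is the general formulation, not something stated in the talk):

```latex
F \;=\; \mathbb{E}_{q(s)}\!\big[\ln q(s) - \ln p(s, o)\big]
  \;=\; D_{\mathrm{KL}}\!\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

Since the KL term is non-negative, minimizing $F$ over the approximate posterior $q(s)$ simultaneously drives $q$ toward the true posterior $p(s \mid o)$ and tightens an upper bound on surprise, $-\ln p(o)$.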
@fhsp17 3 months ago
He's definitely pointing the right way
@ideami 3 months ago
Excellent talk. How would you connect these probabilistic languages of thought with the computational language used in Wolfram Alpha, for example? Is the probabilistic nature part of the difference?
@gustavnilsson6597 3 months ago
Perhaps free will is made possible as an experience by the nature of computational irreducibility and the bounds of the observer's computational capacity. How is program sophistication defined? How can we conclude that a program is equivalent in sophistication to the observing brain? Is it when the observer is no longer able to formalize generalizations or abstractions about the program? "Time goes more slowly (when moving) because you have used up more of the computational budget recreating yourself moving across space." Is time a function of computation? From a neuroscience perspective, time-slowing is considered a function of recollection rather than perception; are these ideas compatible? I find what Stephen Wolfram is doing truly fascinating. The inclination to map out the concepts with semantic significance is brilliant. I wonder if it is possible to generate embeddings of the theorems that capture this rather than the superficial details. Could a cellular automaton be used as a benchmark to test the intelligence of a program, by how well it is able to predict patterns? Like an inductive-reasoning ARC puzzle.
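The cellular-automaton benchmark suggested at the end is easy to set up concretely. A minimal sketch of an elementary CA using Rule 110 (the rule Wolfram proved computation-universal), from which next-row prediction tasks could be generated:

```python
RULE = 110  # Wolfram's Rule 110, proved computation-universal

def step(cells, rule=RULE):
    """One synchronous update of an elementary cellular automaton with
    periodic boundaries: each cell's next state is the bit of `rule`
    indexed by its 3-cell neighborhood read as a binary number."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Evolve a single live cell and print the space-time diagram.
row = [0] * 20 + [1] + [0] * 20
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

A benchmark in the spirit of the comment would show a program some rows of the diagram and score it on predicting the next row, analogous to an ARC-style induction puzzle.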
@thecomputingbrain2663 3 months ago
Explainability assumption 4:58
@thecomputingbrain2663 3 months ago
Pockets of reducibility 1:11:28
@thecomputingbrain2663 3 months ago
Lesion experiments on LLMs 1:04:45
@Real-HumanBeing 3 months ago
In simple words: LLM "understanding" is the dataset, and it's only as good as how accurately it contains the dataset. Forgetting and abstracting away from the dataset isn't happening.
@uninteressant2196 3 months ago
1:05 ish. I think humans also rarely solve a new problem without learning how to first, and the times they do, they are randomly writing down formulas and using known recipes in hopes of a clean solution. We learn so much from our peers when we grow up, but we tend to forget that. Maybe pure self-learning is just trying out random things until something works. If I encounter a new problem now, I try the (now less) random things that worked in the past, and only then start exploring.
@raxneff 3 months ago
I never knew what cognitive scientists do all day and asked myself whether it's even scientific in a Popperian sense. Seeing this as a computer scientist has given me much more confidence in the subject. Well done!
@therobotocracy 3 months ago
Kinda sounds like Lisp could be good at this probabilistic programming.
@restrollar8548 3 months ago
Bro thinks he's unified physics! Delusional
@synthclub 3 months ago
"We cannot recognize the best among us, because we simply do not have the competency to be able to recognize how competent those people are."
@brendawilliams8062 13 days ago
@synthclub You can recognize the best, or they wouldn't have your attention. Spending time among them is addictive, and you learn a few things.
@dr.mikeybee 3 months ago
Stephen has a truly modern mind.
@jzsfvss 3 months ago
He is a physicist.
@MatterandMind 3 months ago
Thank you. This is one of the most important and insightful videos on AI in recent years.
@anywallsocket 3 months ago
does he not tire of giving the same lecture over and over?
@dr.mikeybee 3 months ago
He has integrated new ideas into it. For example, the idea that hobbling AIs limits them to computationally reducible algorithms is new and profound.
@MAMware 3 months ago
Through the years I have seen similar talks, but I always hear something interesting, something to think about.
@joemejia2246 3 months ago
The interruptions of Wolfram before he wraps up his arguments, based on totally derailed misunderstandings, were cringe. Especially by the Asian guy.
@NightmareCourtPictures 3 months ago
To answer the first question in the QA related to Church-Turing (1:09:00) in a way that might be more colloquial: the Church-Turing thesis is a statement that all systems can be computed by a Turing machine, whereas computational equivalence is a statement that all systems are Turing machines. He spent the NKS book mostly proving this by showing that these CAs can emulate each other, and thus that one can in principle create a chain of emulations to Rule 110, thereby proving the universality of the whole rule class and, by extension, that all systems following rules (even simple ones) are Turing machines. As a result, if all systems are Turing machines according to computational equivalence, then they must also sit in the same problem class as the halting problem, which means one can't construct an algorithm that determines whether the system will halt. Therefore all systems, even finite simple ones, are irreducibly difficult to predict.
@forheuristiclifeksh7836 3 months ago
33:35 wolframephysicsproject
@NightmareCourtPictures 3 months ago
Reducing weights in a neural network = smoking a crack pipe
@NightmareCourtPictures 3 months ago
It's interesting that certain things "just work", just like how the cellular automata just "do what they do", and there's no scientific description of how they work or why... it just is. Mind-blowing stuff.
@PetraKann 3 months ago
Coupling fancy algorithms to large databases is not Artificial Intelligence.
@RalphDratman 3 months ago
It's a little bit scary to imagine that ML models might someday become better at math than humans, just as chess software became able to win more games than any human.
@RalphDratman 3 months ago
I'm enthusiastic about an AI mathematician, especially if the AI will help me understand more math.
@BrianPeiris 3 months ago
Thanks for uploading this. Prof. Mitchell's talk was a particularly helpful summary of the landscape, especially given Chollet's renewed ARC competition.
@jonathanmckinney4056 3 months ago
This talk and the subsequent discussion completely misunderstand the stochastic parrots position. Wow...
@jonathanmckinney4056 3 months ago
Christian Lebiere nailed it. The assumption problem is huge.
@SinergiasHolisticas 3 months ago
Love it!!!!!!!!
@dukeallen432 3 months ago
Thanks much for the Dan tribute.