Neuroscience and artificial intelligence work better together. I interview experts about their work at the interface of neuroscience and artificial intelligence: the symbiosis of the two fields, how they overlap, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, consciousness, general AI, spiking neural networks, data science, and a lot more.
FWIW: it would be helpful to have a tiny bit more explicit discussion of the difference (as you two see it) between spikes and rate/population coding to frame the rest of the discussion. I’m still wondering whether SNNs and spike dynamics can’t be faithfully represented (idealised, abstracted) by rate (etc.) coding ANNs in some fundamental, in-principle sense, in your views. Or if the core dispute is just about solving realism/resource/efficiency problems.
If you are seriously asking this, it means you feel some restlessness or concern about your skepticism. If this is the case, it means that skepticism is created by your ego (by beliefs, education, etc.) and not aligned with your ultimate essence. But how are you going to believe what I'm telling you if you are recognizing exactly your skepticism? The only way is experimenting and experiencing for yourself (your brain, mind, and body are your laboratories) and seeing if all those things some people are explaining (starting with meditation, for example) really do have an effect on you and open new doors. And yes, they fucking work; at least that has been my experience as an engineer who still applies scientific-technical methods in my job daily, which are not incompatible with what is explained here, for example. They are different PERSPECTIVES of the same reality.
My name is Rex JAMES JR Butterfield. It was only recently that God activated me to give a message to the entire world. This message involves the entire world at the exact moment the world needs to hear it, to help save mankind from destroying itself. Please listen to me; I have all the proof you will need to verify my claims. I have stigmata, and my name has been on the cross for 2,000 yrs as a sign as to who's up. You may not like or understand why God chose Rex. That's for me to explain in person, along with my mission. I can dazzle your minds about human behavior and the universe and how everything interacts with everything. Please, I'm ready to come out, because my message is so important and can unite the world as one instead of dividing it. Last week, I predicted a major shift in suffering, and look at Bangladesh. And I feel more is to come. Please help me stop the war. And I know where Jesus is buried. And I'll inform you of things that only the church knows about the intimate mortal life of Jesus Christ. Rex may be a man of the times, but his message is UNIVERSAL. LITERALLY. HOW WOULD YOU LIKE TO SEE THE RING THAT REX IS WEARING? It has two crosses indicating the second coming and arrows pointing in every direction, and he got it two yrs ago when his belief in God was nil. Help me now. I am in a Queens NYC homeless shelter. God bless, let's unite the world now 🙏❤️ I have very intimate knowledge about the real universe, and I know what happens after death and what everyone's mission in life is.
Now if you two could do this again for about 5 hours next time, 🙏🙏🙏💁♀️. Just… wow. I’m ordering the book, but just wanted to thank you both for this conversation. Some of it was a bit over my intellectual/academic pay grade, but it’s a topic I’ve been hungry for. (Really ANY attempt at reconciling aspects of Eco Psych and Neuroscience.) My use-case for caring is that I’m one of the growing number of movement/skill coaches applying “Ecological Dynamics” (aka non-linear pedagogy, Constraints Led Approach, etc.). Plot twist: my athletes are all horses. It’s been a fascinating decade-long journey for me to work from the principles of context design and “constrain to afford”, and for me, the biggest influence was Eleanor Gibson, not as much JJ. I’m not a “Gibsonian”, just a practitioner with 100000% commitment to movement as emergent, and self-organization as my foundation. (I call it “body buy-in”, as in: the body parts can be instructed, manipulated, placed, etc., and the coach can have us repeat/rehearse forever, but if it wasn’t the body’s own solution to a movement problem (task/environment constraints), it’s at best inefficient and at worst dangerous.) But I’m tossed out of the Gibsonian club for still using words like “unpredictable” 🤔. In the AI world, I’m super enthused by the work of Ken Stanley, and the use of exploration as a way to expose the “organism” to a large set of diverse experiences in movement problems, trusting their system to find the key invariants. In my world, with horse rehab, it is literally saving lives, as horses are often euthanized when movement ability fails. Even horses diagnosed “neurological” or with severe arthritis can make astonishing progress when we design contexts that let self-organization do the heavy lifting of training. Ofc ecological dynamics is now being applied to human athletes at virtually all levels of sport, including professional football, baseball, martial arts, cricket, basketball, etc.
(I also used some of E.Gibson’s work to teach programming (in my former life); I created the Head First book series for O’Reilly that’s sold I think 3 million copies. Even my rough and crappy implementation of perceptual learning principles still led to surprising results!)
Maybe I skipped the section that clearly described prospective learning, but by the 57-minute mark I just paused to go read part of the paper. My thoughts: First, this just sounds like branch prediction. Second, it's still bound by the curve-fitting world of xNNs, which means it has no future path in AI. Third, brains might do something like this, but only in that we have a limit to the number of concepts we can keep active at one time (and not having the "correct" idea at the "right" time means taking the "wrong" action). It doesn't work for more complex problems, so it likely isn't what our brains are doing.
Cognitive science should inform AI research, not the other way around (at least, ML/DL should not be treated as an insight into how minds/brains work).
Being narcoleptic with cataplexy, I would like to see more research done. I've had dreams that later happened in real life. I've memorized dreams during déjà vu. I'm 45 years old, and this has been a part of my life since I was a child.
Thank you for this. This book seems really important and extremely timely. There clearly are more and more people sharing Mazviita's concerns (her diagnosis, not necessarily her prescription). And you are in many ways the ideal interlocutor for these ideas. You are both getting very close to questioning the dogma of "no place for the subjective in science". This is an interesting angle of attack indeed. But the issue is less relativism, or perspectivalism (i.e., the postmodern notion that the scientist has presuppositions, or a lens through which the world is interpreted). The problem is that the axiom in itself is ad hoc, only supported by tradition (Cartesianism), and logically questionable. A careful reading of the measurement problem of physics shows that it is the information, or measurement, not the apparatus, that determines the collapse of the wave function. It's not a matter of opinion. It's the measurement, information, or "knowledge" that affects the outcome. That observation alone is not sufficient evidence to topple the dogma, of course. But if you combine it with the fact that we have run into a wall in neuroscience in aiming to explain subjectivity (e.g., perception) AFTER excluding it categorically from the domain of science, the evidence starts to tilt in a Bayesian sense. And when we carefully start exploring putting the subjective / the observer back into the center, such as in first-person physics or in IIT, things start to progress again. Anyways, this is just a long-winded way of expressing that it is awesome to witness that even though this podcast seemingly started in embrace of computationalism / functionalism, you host these open and deep debates about whether this is indeed the correct direction to put all our efforts in.
Dear Mr. Middlebrooks! Hello! My name is Sergey Zaitsev. I recently came across your interview with Gary Lupyan on BRAIN INSPIRED and am writing to express my admiration. Perhaps I did not understand everything in your conversation; I am not a specialist in neurobiology or artificial intelligence. But it seemed to me that you are interested in the origin of language, the formation of conceptual units, and speech coding. I have studied this topic on my own for about ten years, and I would be very grateful for the opportunity to talk with you and, to some extent, continue your conversation with Lupyan. As a preliminary introduction, I would suggest reading my paper SOME ASPECTS OF THE PROTO-LINGUISTIC THEORY WITH THE EYES OF AN AMATEUR, published on the website of the journal «Наука и Мир» (No. 2 (114), February), Vol. 1. Sincerely yours, Sergey Zaitsev.
Wow, what an incredible talk! I can't believe this has so few comments; Alicia Juarrero's insights into complexity theory and the role of constraints are mind-blowing. Her ability to navigate the intricacies of complex systems, from brains to societies, is truly impressive. Juarrero's emphasis on the importance of constraints in understanding these systems is groundbreaking (although not necessarily "new" if one delves into Aristotelian hylomorphism). It's a refreshing perspective that challenges the traditional focus on linear causation, highlighting instead the dynamic interplay of constraints that shape the behavior and organization of complex phenomena. "Context Changes Everything: How Constraints Create Coherence" seems like a must-read for anyone interested in delving deeper into the fascinating world of complex systems. I appreciate how she not only introduces these concepts but also provides a roadmap for navigating them, acknowledging that thinking across levels of organization can be challenging, and gives some real examples, not just hypothetical speculations (e.g., what one might get in Hegel). I'd love to see how these ideas integrate practically with the work of Michael Levin and biomorphology.
I don't have a problem with AGI as we can certainly spend useful time discussing it, as this talk demonstrates. I would certainly call the coffee-making robot AGI. It doesn't matter so much if it doesn't do everything that a human can do. It is perhaps worth noting that when we discuss such a robot, we implicitly are discussing the set of all robots that we imagine could be made with such AGI technology.
@@braininspired I did but which paper has this list? I guess I could look at all of them, and perhaps I will, but I thought it would be an easy question. Sorry to belabor it.
@@braininspired I was looking for a list formatted as a list but it was buried in a paragraph so I missed it during my first pass through the paper. Cheers!
Why bother with spikes: 1. It's the simplest language, with only one word in it. It is as fundamental as a ping command in computer networks. 2. Cells are the simplest living things, so they developed this language on top of chemical signaling for a reason: to be able to talk p2p (peer to peer) instead of broadcasting chemically, paracrine-style. 3. A multicellular organism can be viewed as a set of cells implementing some policy (in RL terms) to survive for longer. A policy is a set of rules which every cell in this "community" has to follow for the benefit of their genes. So communication between individual cells drastically expands the variety of rules which may be included in this policy (and the policy's size and efficiency). Thus, by discarding spikes, we ignore the reason the brain exists at all, lose the connection to nature, and turn an epic fundamental study into some weird software engineering business.
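A minimal sketch of what that "one-word language" looks like in code: a leaky integrate-and-fire neuron turns a continuous input current into a binary spike train, and a rate code is just the average of that train (discarding exact spike timing). All parameter values (tau, threshold, input strengths) are illustrative assumptions, not anything from the episode.

```python
def lif_spike_train(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (toy parameters).

    At each time step the neuron either spikes (1) or stays silent (0):
    the single "word" of the spiking language."""
    v = 0.0
    spikes = []
    for i_t in current:
        v += (dt / tau) * (-v + i_t)  # leaky integration of the input
        if v >= v_thresh:
            spikes.append(1)          # spike
            v = v_reset               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A stronger input drives a higher firing rate; the rate code is just
# the mean of the binary spike train.
weak = lif_spike_train([1.2] * 1000)
strong = lif_spike_train([3.0] * 1000)
rate_weak = sum(weak) / len(weak)
rate_strong = sum(strong) / len(strong)
```

Reading off only `rate_weak` and `rate_strong` is exactly the rate/population-coding abstraction; the full `spikes` lists also carry timing, which the averages throw away.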
Although an unheralded theory, I believe it is still well worth knowing, and it offers the craved-for solution to resolving the dichotomy between sensorimotor paradigms of consciousness and the representation sought by the anti-affordance crowd. In fact, the acquisition of ventral and dorsal stream algorithms through a component motor efference copy, as an elaboration of sensorimotor theory (attached below), has something to say about the effectiveness of such approaches: www.nmr.mgh.harvard.edu/mkozhevnlab/wp-content/uploads/pdfs/courses/literature/Multimedia%20learning,%20collaborative%20activities%20and%20team%20work%20involving%20visual-spatial%20imagery/Vakalopoulos.pdf
I find it extraordinary that John suggests that all embodied theory denies representation. In fact, this work from way back in 2005 (below) resolves the dichotomy between sensorimotor theory and representation using the concept of a component motor efference copy through extrapyramidal pathways. The loop categorises perceptual information through feedback during developmental periods into dorsal and ventral streams, such that the resultant algorithms are the basis of sensorimotor-based cognition. It behooves John to address the mechanisms entailed in the paper before cursorily dismissing such theories. It is also the responsibility of Brain Inspired to raise these potential solutions, given that rejection of sensorimotor theory is a major platform for John: pubmed.ncbi.nlm.nih.gov/16006052/
48:56 but there's a basis for having the insight of inferring. Between humans, that basis is shared (though people can be naive in a way where superficial differences deceive them).
At 37:36, when it is discussed how monkeys will play rock after they played paper and an opponent played scissors, the conclusion drawn, that this is incompatible with model-free RL, is wrong. A model-free method could easily switch to playing rock for many reasons. For example, if the action is taken to be the best response to the last choice of the opponent, a model-free method will do exactly what is observed. There is no need for models, "imagination", or planning to achieve this behavior.
Hi there! (Max here) Appreciate the thought - can you explain your idea here further? I'm curious to learn. If the only information an agent has received is a negative reward from playing one of those actions (rock), why would such an agent (without any mental simulation of counterfactuals) assign a different predicted reward value to scissors from that of paper? When you say "for example, if the action is thought to be the best response to the last choice of an opponent, a model-free method will do exactly what is observed", I think this is a contradiction (unless I am misunderstanding you); a model-free system cannot "think" about what the best response is to the last choice of an opponent. It is given a state, selects an action, and then updates its policy/value function on the basis of temporal differences (in TD learning at least). By definition, the act of considering the prior choice and using counterfactual outcomes to modify predicted rewards is invoking model-based RL.
@@maxsbennett Doesn't it depend on the training? In something like Q-learning, it is easy to imagine how the algorithm will weight the Q-values of various actions differently based on its experience. Imagine a scenario where the training opponent tends to play rock often after the agent plays rock. The Q-value of playing paper would then increase, because paper wins against the likely scenario of the opponent playing rock, while rock only draws and scissors lose.
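For concreteness, here is a minimal tabular Q-learning sketch of the kind of scenario described in this thread. The state is just the agent's own previous move, the opponent is a hypothetical one who tends to repeat rock after the agent plays rock, and the agent is purely model-free: it never simulates the opponent or considers counterfactuals, it only nudges action values toward observed rewards. (Note that paper, not scissors, is the move that beats rock.) The learning rate, epsilon, opponent bias, and episode count are all made up for illustration.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def reward(agent_move, opp_move):
    """+1 win, 0 draw, -1 loss."""
    if agent_move == opp_move:
        return 0
    return 1 if BEATS[agent_move] == opp_move else -1

def opponent(agent_last_move):
    # Hypothetical biased opponent: usually plays rock after we play rock.
    if agent_last_move == "rock" and random.random() < 0.8:
        return "rock"
    return random.choice(ACTIONS)

random.seed(0)
q = {(s, a): 0.0 for s in ACTIONS for a in ACTIONS}  # state = our last move
alpha, epsilon = 0.1, 0.1
state = "rock"
for _ in range(20000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                     # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
    r = reward(action, opponent(state))
    # Model-free update: only the observed reward moves the value estimate;
    # no planning, no simulation of the opponent.
    q[(state, action)] += alpha * (r - q[(state, action)])
    state = action

best_after_rock = max(ACTIONS, key=lambda a: q[("rock", a)])
```

After training, `best_after_rock` is "paper": the agent ends up countering the opponent's bias purely from reward statistics, with no internal model, which is the behavioral point at issue.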
Bit of a long comment- TL;DR- part of what it means to be human is to continually resupply our brain cells with oxygen, water, and nutrition, and becoming functionally immortal by merging with AI would change all that…. Regarding the possibilities of merging humanity with A.I., at around 1:22:06- I’m curious how the human brain metabolizes essentials (oxygen, water, and nutrients from our blood, and how quickly brain death occurs when the oxygen supply is cut off), but also how these needs for essentials define our goals and our priorities as humans- and basically define everything that it means to be human- and how these would change if we ever could merge with AI. And all this to say: is it possible, even in theory, to replicate what the human brain does, but not in a way in which it will die if it loses its supply of O2? But then it wouldn’t really be human anymore, would it? It’s almost like part of being human and carbon-based life is to constantly get oxygen and other materials as efficiently as we can until we reproduce. It’s not like ChatGPT has any fear of ‘death’ when the server is shut down, or any other kind of self-preservation instincts. I wonder what a functionally immortal being or race would do for pleasure, or how its values would change. Guess we’ll see! I’m 100% in favor of finding out. My thoughts are that we’re more than likely to meet our demise as a species at some point, given the laws of thermodynamics and large probabilities and all that- so this warrants exploring! -Related note (forgive me jumping topics here), but DNA carries with it a desire to replicate. Are there any good papers out there on an artificial DNA or a digital DNA that carries instructions to do something as well as to replicate itself so that it could evolve? Again, how it does this and what resources it consumes would define its purpose- and it may become our competition. (This is kinda scary.)
With theoretical immortality (from AI or from somehow merging with machines) comes a need to redefine our purpose. What would an immortal life form do for simple pleasures or ultimate meaning?
1:21:51 At the very end of this interview, there is a discussion on the future of humans and A.I., and Max brings up the point that humans may simply stop having biological children because they have so many flaws (like illnesses and mortality). I think this can be expounded upon- just hearing this discussed in the context of this interview, a part of me finds it absurd- but this feeling is based on unfamiliarity rather than any well-thought-out notion of probability. To expound on this idea, I’d love to explore some examples of how human tastes and desires have evolved as we’ve acclimated to radical changes in the past that became commonplace. Everything- including our beliefs about the world and our values- is a product of our brain and how it interprets data from our senses- so I find myself in full agreement that this is possible. The desire, however, to see our actual DNA replicated would somehow have to be accomplished (even symbolically) through the creation and/or adoption of artificial intelligence children. This is wild!
Oh my gosh I’m so jealous! I’ve been wanting to get @MaxSBennet on our podcast for a long time. (Don’t judge our YouTube- it’s brand new. We have an audience of 6,000 on audio podcast players.) Good for you guys!!
This was a wonderful episode! As an engineering outsider myself, I find Max's research into this field inspiring. Max is not trying to carve a way out of his field into neuroscience; he is trying to understand neuroscience on its own terms. Only when we democratize such information through the lens of someone who, as Max said, "doesn't want to prove himself right but is just trying to find what's right", can we truly make this information mesh with other disciplines in science. This is a great cross-disciplinary talk, and exemplary of what other disciplines must follow. Thank you for sharing this, Paul.
Wow, Paul. I have been following your podcast since its inception. It started strong, yet kept getting better and better. But this one blew my mind. I had no idea of any of these breakthroughs in anatomy, the satellite at SfN, and the challenge. This is all extremely inspiring! It seems to me that we have largely moved on from "decoding" in the functional neuroscience realm. The task now seems to be more to understand what exactly gets decoded - or even do better. We arguably found new fruitful grounds in examining manifolds and topology ("representational geometry"). And even that is still limited (e.g., pairwise analysis only). Imagine all of that getting scaled up and functional findings merged with these structural approaches! I am so happy that they recruited you to moderate the session and help spread the word. They could not have done better. For anyone else interested in learning more about the challenge, go to: aspirationalneuroscience DOT org
I really liked how Laura approached the issue of "silencing" pseudoscience, and how it's a matter of interest, not the validity of your peers. I wish you guys had explored more deeply the topic of "silencing", with what happened between those who wrote the IIT letter and those who retorted that this "warning" has affected their chances to obtain funding or be taken seriously by those who control funding. How are those who are silenced supposed to deal with people whose definition of "progress" seems like pointing at other people and saying "my scientific diligence (not their theory) says your approach is a waste of our time, and we should not take it seriously as a community"? Was Anil Seth right in saying that we should not tell people what to think in these kinds of open letters? If anything, the response Zoom call the pro-IIT people had was just a bunch of pluralists throwing their theories at each other; no clear-cut path forward to defend themselves against this canceling of their science was taken.