Montreal AI and Neuroscience (MAIN) is an international conference organized by the UNIQUE Centre in collaboration with the Centre de Recherches Mathématiques (CRM) at the University of Montreal since 2017. For more information regarding MAIN 2020: www.main2020.org
Great talk! Overall, super fascinated by the idea that neuron connectivity houses a combinatorial explosion of possible sequences, and that over the lifetime of the organism these pre-existing sequences are mapped to sensory input sequences. The entropy associated with these patterns is so high that the brain can probably find a pattern that closely maps to a given input, like a new location or a new kind of tree. A fruit fly has lower mapping entropy than a human, so it makes sense that a fruit fly brain may be unable to find a pattern from the same input that a human could, which manifests as shorter-term sequence prediction (planning) and recall (memory), and lower intelligence.
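To make the matching idea concrete, here is a minimal toy sketch (my own illustration, not from the talk, with made-up sizes): a large repertoire of random pre-existing sequences, and a novel input mapped to its nearest stored sequence by Hamming distance.

```python
import random

random.seed(0)

SEQ_LEN = 8           # length of each activity sequence (hypothetical)
REPERTOIRE = 10_000   # number of pre-existing sequences (hypothetical)

# Pre-existing sequences, standing in for what connectivity provides.
repertoire = [tuple(random.randint(0, 1) for _ in range(SEQ_LEN))
              for _ in range(REPERTOIRE)]

def best_match(stimulus):
    """Return the stored sequence closest to the input (Hamming distance)."""
    return min(repertoire,
               key=lambda s: sum(a != b for a, b in zip(s, stimulus)))

novel_input = tuple(random.randint(0, 1) for _ in range(SEQ_LEN))
match = best_match(novel_input)
dist = sum(a != b for a, b in zip(match, novel_input))
# With far more stored sequences than distinct 8-bit patterns, a close
# (here, exact) match to any new input is virtually guaranteed.
print(dist)
```

In this picture, a fruit fly's smaller repertoire corresponds to shrinking `REPERTOIRE`, which makes close matches to novel inputs rarer.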
Question -- I love the notion of flipping Hebb's law upside-down. During the trials, we observe place cells firing (modulated by theta phase), and follow-up SPW-Rs that "consolidate" or "reward" successful place cell sequences. It makes sense that we'd observe a higher probability of SPW-Rs as the number of trials increases. Wouldn't this indicate that Hebb's law is at least direction-agnostic? The SPW-Rs fire and strengthen the trial's place cell sequence, and during sleep it makes sense that these sequences have the highest probability of replay (as they are the place cell sequences that resulted in a reward), if you subscribe to reinforcement learning. How can we say one way or another whether the "wiring" came before the "firing" in Hebb's law? It seems like a positive feedback loop, or a chicken-and-egg type situation.
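The feedback-loop reading above can be phrased as a toy model (my own numbers, not from the talk): an SPW-R event multiplies the strength of that trial's sequence, and replay during sleep is sampled in proportion to strength.

```python
# Toy sketch: three hypothetical place-cell sequences, equal at the start.
strengths = {"seq_A": 1.0, "seq_B": 1.0, "seq_C": 1.0}

def spwr_boost(seq, gain=2.0):
    """An SPW-R after a successful trial strengthens that trial's sequence."""
    strengths[seq] *= gain

# Suppose seq_B was the rewarded path on three trials.
for _ in range(3):
    spwr_boost("seq_B")

def replay_probability(seq):
    """Replay during sleep, sampled in proportion to strength."""
    return strengths[seq] / sum(strengths.values())

print(replay_probability("seq_B"))  # 8 / (1 + 8 + 1) = 0.8
```

The chicken-and-egg point survives in the sketch: once `seq_B` dominates replay, replay itself can further strengthen it, so observing the loop mid-flight doesn't tell you which came first.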
@@steveflorida5849 (I'm posting this again, because this reply seems not to appear when I log out.) My initial thoughts are that these mental concepts arise from a dynamic organization and structuring of stimuli and input data in human brains (and AI systems), and are not linked to or stemming from a single local neuron or gene. The dynamic organization is variable because it depends on context: it's an interaction between many different factors, influences from the outside world and the characteristics of the receiving entity, that determines how those influences are processed and stored. It's also a dynamic that changes over time through accumulated knowledge and experiences. I think the experience of truth and morality is subjective; it also very much depends on how adults interact with children. I guess fear and empathy are very important human characteristics in developing a sense of truth and morality.
Super interesting! As a student of linguistics currently transitioning into a focus on neurolinguistics and computational linguistics, the intersection of human language processing and AI language processing is something I find fascinating.
In regards to the unfolding argument against IIT: IIT would say that a theory of consciousness must satisfy the a priori axioms of what consciousness must be, and simple observable empirical analysis of behaviorally equivalent systems is not enough. So a computer that, in theory, behaves the same as a brain simply does not satisfy the philosophical a priori conditions that any conscious system must satisfy (like being causally unified).
Interesting that you have a visual schema and an attention schema. It's tricky to categorize, but so far my best result is a real-world schema (consisting of body schema and environment schema) and an imagination (attention) schema. Either of these can be understood via a 3D isomorphic model (provided by a bank schema of stored shapes) or via a 2D perspective viewpoint (visual field-like). For the latter, there is the avatar's cyclopean 'real' eye, and the mind's eye. The 'real' eye can view either schema; the mind's eye seems to view only the imagination schema. This is my summary: Rings of Fire: 2023 Science of Consciousness Conference, Taormina, Italy ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Me4hxip7tU0.html
Please introduce a free online database for downloading calcium imaging data. Also, what are the hot topics in calcium imaging research today? I want to take an effective step toward predicting the causes of brain diseases using calcium imaging data. I have good knowledge in the field of computer vision and digital image processing, but I have no idea where to start the research.
Does anyone read both Uddin and Karl Friston/Andrea Luppi/Robin Carhart-Harris? (Higher entropy, functional connectivity, psychedelics, etc.) Some people say psychedelics induce cognitive flexibility in SOME people... wow, "fewer brain state transitions"... What about David Badre's cognitive control vs. cognitive flexibility?
What an enlightening talk by Ev Fedorenko, based on direct brain observations (EEG, fMRI, etc.). I agree with the idea that language and thought only weakly overlap. I was not aware of the MAIN (Montréal AI and Neuroscience) conference in my hometown. I should say that I am more a practitioner than a theorist of NLP.
I swear, if something happens to this guy, there won't be anyone left to believe in "attention" in this world, because this strong artificial pressure about its existence will be gone.
@@onqtam People pay attention to the error between their prediction of the soul's behavior and the simplified model of awareness (AST), which suggests a model of attention, with localization.
8:20 Buildable does not mean true. Here: (defun sqrt (x) (* x x)). There, I've built you a square root of x. It is conscious now, more conscious than a mere (defun identity (x) x).
Watching Graziano's presentation as a cautionary tale, I now agree with Schurger that talking about System 2 for a while would be more productive, but it must be about questions like: how can these small receptive fields globally organize into a picture, how can binocular rivalry actually come to dominate there, how can all the senses be 'solved' for so that they end up in the same integrated 'world'? And most importantly, those terms are paradoxical in their original source, because certainty-seeking and probabilistic approaches are equally fast. Probabilistic integration doesn't take longer than any certainty-seeking decision. If one feeds the other, and if the final thing is certain, it would be paradoxical to think that the thing that comes before it is slow. However, they already have names: semantic engine (System 2), syntactic engine (System 1).
38:15 The brain doesn't have to choose because of a bottleneck; it hosts many alternative ways of explaining the same data at the same time, and switches between those explanations when necessary for its probabilistic approach. The MAP wins while the others stay unused and unconscious. "System 2" takes only a small part of that "System 1", because its contents are the winners of some global and local competition.
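The "MAP wins, the others stay unconscious" picture can be sketched in a few lines; the hypotheses, priors, and likelihoods here are made-up numbers purely for illustration.

```python
# Hypothetical alternative explanations of the same data.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.2, "H2": 0.6, "H3": 0.1}   # P(data | H)

# All hypotheses are kept around (unnormalized posteriors), but only
# the MAP "wins"; the rest stay available yet unused.
posterior = {h: priors[h] * likelihoods[h] for h in priors}
map_hypothesis = max(posterior, key=posterior.get)
print(map_hypothesis)  # H2 (0.18 vs 0.10 and 0.02)
```

When new data shifts the likelihoods, a previously "unconscious" hypothesis can overtake the current MAP without anything being relearned, which is the switching the comment describes.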
32:55 Then System 2 becomes feelings and consciousness. Your "System 2" solves for certainty. Your conscious feelings are all you are aware of, and only they exist: not some heavy tail of probability, only a special equation's solution that spits out its own MAP best at that moment and paints that solution with feelings.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at arxiv.org/abs/2105.10461
I have a buddy doing breakthrough genomics work; he is using advanced math not even taught at universities, and out of reach of the NIH. He cites this book:
Since Dennett, an illusionist, doesn't believe that phenomenal consciousness (e.g., the feeling of pain) exists, he won't worry about the problem of machine suffering in the way Seth does, only that machines might become our rivals for resources. Two different conceptions of consciousness are in play, one tied to functional capacities (Dennett), one to phenomenology (Seth), so there's some talking past each other going on when referring to consciousness. As Gopnik suggests at the end, we have the reward function on the one hand, and the conscious phenomenology of reward (fun, pleasure) on the other.