Simons Institute
62K subscribers
The Simons Institute for the Theory of Computing is the world's leading venue for collaborative research in theoretical computer science. Established on July 1, 2012, the Institute is housed in Calvin Lab, a dedicated building on the UC Berkeley campus. The Simons Institute brings together the world's top researchers in theoretical computer science and related fields, as well as the next generation of outstanding young scholars, to explore deep unsolved problems about the nature and limits of computation.
These presentations were supported in part by an award from the Simons Foundation.
CONNECT:
Newsletter | simons.berkeley.edu/newsletters/sign-up
Twitter | twitter.com/SimonsInstitute
Facebook | facebook.com/Simons-Institute-for-the-Theory-of-Computing-1121330171258592/
LinkedIn | www.linkedin.com/company/simons-institute-for-the-theory-of-computing/
Comments
@GrantCastillou
@GrantCastillou 1 hour ago
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first. What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing. I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work: that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order. My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at arxiv.org/abs/2105.10461
@sirbedivere5670
@sirbedivere5670 7 hours ago
I can't believe someone actually stays in the Ugandan forest while making a remote presentation. It shows how dedicated they are to their research topic.
@CharlesVanNoland
@CharlesVanNoland 13 hours ago
This reminds me of "Predictive learning as a network mechanism for extracting low-dimensional latent space representations" (bioRxiv 471987). It seems that systems which learn to predict their own inputs tend to form these latent representations of their inputs. I'm of the mind that we're only years away from making proper thinking machines that don't rely on a static backpropagation-trained network to generate behavior, but instead learn to perceive the world and themselves in it, adapting their behavior to evolving circumstances. Exciting times!
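That "predict your own inputs, get a latent space for free" effect is visible even in a linear toy model. Below is a minimal sketch of the idea, not the paper's actual method: a 1-D latent random walk is observed through a random 10-D encoding, a linear next-step predictor is fit, and the predictor's top singular direction (a rank-1 "bottleneck") recovers the hidden variable. All dimensions and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, obs_dim = 5000, 10

# 1-D latent process: a slow random walk (the "hidden state").
z = np.cumsum(rng.normal(size=T) * 0.1)

# High-dimensional observations: random linear encoding plus noise.
A = rng.normal(size=(obs_dim, 1))
X = z[:, None] @ A.T + 0.1 * rng.normal(size=(T, obs_dim))

# Predictive learning, reduced to its linear core: regress x_{t+1} on x_t,
# then keep only the top singular direction of the regression map
# (a rank-1 "bottleneck"). Noise directions have no predictive power,
# so only the latent direction survives.
X0, X1 = X[:-1], X[1:]
W, *_ = np.linalg.lstsq(X0, X1, rcond=None)  # x_{t+1} ~ x_t @ W
U, S, Vt = np.linalg.svd(W)
latent_est = X @ U[:, :1]                    # project onto the bottleneck

# The recovered 1-D representation tracks the true latent (up to sign/scale).
r = np.corrcoef(latent_est[:, 0], z)[0, 1]
print(f"|corr(recovered latent, true latent)| = {abs(r):.3f}")
```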
@CharlesVanNoland
@CharlesVanNoland 13 hours ago
Thanks for ensuring there are accurate subtitles! I was about to cry trying to make out what he's saying through the reverb in the room, and wasn't doing so well with it. :]
@user-wr4yl7tx3w
@user-wr4yl7tx3w 15 hours ago
Would it be possible to record future talks as videos instead of streaming them? For technical reasons.
@isonlynameleft
@isonlynameleft 3 days ago
How quantum complexity got its groove back!
@bhagwandassoni3737
@bhagwandassoni3737 5 days ago
In Java.
@SarunasKleinys
@SarunasKleinys 7 days ago
How would you deal with the problem of a lack of diversity, complexity, and uniqueness in the generated puzzles and their solutions?
@internadrian6541
@internadrian6541 8 days ago
Good to see you Prof. Vazirani!
@prof_shixo
@prof_shixo 8 days ago
Thanks, Sergey, for this interesting presentation.
@KaliFissure
@KaliFissure 8 days ago
Time is a compactified dimension. This forces chirality, limits, and conservation; all of them are caused by compactified time. One single Planck second is all the time needed to evolve the next condition. It is the physical equivalent of the "moment of calculation" in the Wolfram physics model, or in fact in all computational models. Time is compactified. As per the limit theorem, there is a minimum (quantum flux) and a maximum (the event horizon). There is also conservation, as we are closed in the present. According to the limit theorem, the closure is between minimum and maximum. Every event horizon is a maximum.
Neutron decay cosmology: gravity gathers mass to event horizons. All matter is made into neutrons at event horizons because of electron capture. Infalling neutrons (going at c?) give off their kinetic energy as mass for the event horizon. The neutron itself takes an Einstein-Rosen bridge from the highest energy-pressure conditions to the lowest energy-density point of space, where the quantum basement is lowest and easiest to penetrate. The neutron, out in a deep void somewhere, soon decays into amorphous monatomic hydrogen, a proton-electron soup: dark matter. The decay from a neutron at 0.6 fm³ to 1 m³ of amorphous hydrogen gas is a volume increase of around 10⁴⁵. Expansion. Dark energy. In time this amorphous hydrogen stabilizes and coalesces, first into monatomic hydrogen and then into H2 and all the other elements, the entire time falling down the gravity hill towards an event horizon. Loop. This is where the closure is. Geometry and evidence say so. Neutron decay cosmology 🖖
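Setting the cosmology itself aside, the one directly checkable number in this comment is the volume ratio, and it does come out near 10⁴⁵. A quick sanity check, taking the commenter's 0.6 fm³ neutron volume at face value (it is their figure, not an established constant):

```python
# 1 fm = 1e-15 m, so 1 fm^3 = 1e-45 m^3.
neutron_volume_m3 = 0.6e-45   # the comment's figure for a neutron's volume
expanded_volume_m3 = 1.0      # 1 m^3 of diffuse hydrogen, per the comment
ratio = expanded_volume_m3 / neutron_volume_m3
print(f"volume increase: {ratio:.1e}")  # ~1.7e45, i.e. "around 10^45" as claimed
```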
@ChindiInvestir
@ChindiInvestir 13 days ago
8:49 Now that's a good joke.
@mikiallen7733
@mikiallen7733 14 days ago
Is there a reason or statistical argument as to why all extreme value distributions are defined as continuous distributions? One can construct ones on discrete domains as well. Is my intuition right?
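On that question: the classical Fisher-Tippett-Gnedenko theorem concerns limits of normalized maxima, and the three limit families (Gumbel, Fréchet, Weibull) are continuous because they arise as max-stable limit laws; many discrete parents, the geometric being the textbook case, have maxima that never settle into any nondegenerate limit. A small simulation sketch of both behaviors (illustrative parameters, numpy only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5_000, 2_000

# Continuous parent: Exp(1). Classical result: max(X_1..X_n) - log(n)
# converges in distribution to the (continuous) Gumbel law.
exp_max = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)
for x in (-1.0, 0.0, 1.0, 2.0):
    print(f"x={x:+.1f}  empirical={(exp_max <= x).mean():.3f}  "
          f"Gumbel={np.exp(-np.exp(-x)):.3f}")

# Discrete parent: Geometric(1/2). Its maxima pile up on a few integers
# near log2(n) and oscillate with n instead of converging, which is why
# the classical extreme-value limit families are stated for continuous parents.
geo_max = rng.geometric(p=0.5, size=(reps, n)).max(axis=1)
vals, counts = np.unique(geo_max, return_counts=True)
print(dict(zip(vals.tolist(), (counts / reps).round(3).tolist())))
```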
@EV4UTube
@EV4UTube 14 days ago
Watched the entire video. I felt like I was being dragged behind a truck on a cross-country journey. The sad thing is that I felt like I was so close to fully grasping what was being presented, but could never fully embrace it. Just a little effort to fill in some gaps would have made a world of difference for my comprehension. Simply using actual nouns and verbs every once in a while (in lieu of alphabet soup) would also have helped. Abstraction is important and necessary, but at some point the abstraction is so divorced from tangible, applied entities that I have a hard time translating the concepts to the real world. I guess I am just too much of a newb and am not sufficiently familiar with the notation, concepts, standards, conventions, etc. Sadly, I tried to do some cursory research to see if I could find other explanations or videos on some of these concepts and came up empty. I guess the peer-reviewed papers are my reference; I just doubt I will be able to understand that level of material any better.
@newmember-l4x
@newmember-l4x 14 days ago
This reminds me a lot of t-SNE.
@newmember-l4x
@newmember-l4x 14 days ago
Only 28 views. A shame.
@diegoalejandrosanchezherre4788
@diegoalejandrosanchezherre4788 14 days ago
It's very gratifying to see Lenny still giving lectures. He is a great and very creative mind; it feels like he always has something very important to say about very deep notions of the fundamental behavior of the universe. A titan of human thinking!!! 🙌👏
@chadx8269
@chadx8269 15 days ago
Excellent; I like that metric space idea.
@robvdm
@robvdm 18 days ago
Super interesting talk. I love these sorts of problems.
@tarikozkanli788
@tarikozkanli788 22 days ago
Is one of the people asking questions Dana Scott?
@JianchengDU
@JianchengDU 23 days ago
34:21
@KatiePerry-js4xo
@KatiePerry-js4xo 24 days ago
Count me in for the pre-sale; the giveaway deals are very attractive.
@meorsoithought
@meorsoithought 25 days ago
Your egg analogy is garbage, because energy is being added to the system in the cracking, scrambling, and cooking, and no one scrambled the shell. Plus, it is becoming more homogeneous, so is that more order or less than an egg with separate yolk and white?
@naderbenammar7097
@naderbenammar7097 26 days ago
Thank you 🙏🏻
@sm-pz8er
@sm-pz8er 29 days ago
Very well explained. Thank you.
@calicoesblue4703
@calicoesblue4703 1 month ago
Nice😎👍
@user-ic7ii8fs2j
@user-ic7ii8fs2j 1 month ago
People of different cultures view the world in entirely different ways. It depends on culture, language, genetics, etc. For example, people who speak Navajo have an entirely different way of perceiving reality and breaking it down into components than Western English-speaking people. A shaman would also see the world completely differently from a Western man.
@user-to9ub5xv7o
@user-to9ub5xv7o 1 month ago
1. Introduction and Context (0:00 - 1:47)
- Ilya Sutskever speaking at an event
- Unable to discuss current technical work at OpenAI
- Focused on AI alignment research recently
- Will discuss old results from 2016 that influenced thinking on unsupervised learning
2. Fundamentals of Learning (1:47 - 5:51)
- Questions why learning works at all mathematically
- Discusses supervised learning theory (PAC learning, statistical learning theory)
- Explains mathematical conditions for supervised learning success
- Mentions importance of training and test distributions being the same
3. Unsupervised Learning Challenge (5:51 - 11:08)
- Contrasts unsupervised learning with supervised learning
- Questions why unsupervised learning works when optimizing one objective but caring about another
- Discusses limitations of existing explanations for unsupervised learning
4. Distribution Matching Approach (11:08 - 15:32)
- Introduces distribution matching as a guaranteed unsupervised learning method
- Explains how it can work for tasks like machine translation
- Links to Sutskever's independent discovery of this approach in 2015
5. Compression Theory of Unsupervised Learning (15:32 - 24:43)
- Proposes compression as a framework for understanding unsupervised learning
- Explains thought experiment of jointly compressing two datasets
- Introduces concept of algorithmic mutual information
- Links compression theory to prediction and machine learning algorithms
6. Kolmogorov Complexity and Neural Networks (24:43 - 30:52)
- Explains Kolmogorov complexity as the ultimate compressor
- Draws parallels between Kolmogorov complexity and neural networks
- Discusses conditional Kolmogorov complexity for unsupervised learning
- Links theory to practical neural network training
7. Empirical Validation: iGPT (30:52 - 35:46)
- Describes iGPT as an expensive proof of concept for the compression theory
- Explains application to image domain using next pixel prediction
- Presents results showing improved unsupervised learning performance
8. Linear Representations and Open Questions (35:46 - 38:27)
- Discusses mystery of why linear representations form in neural networks
- Compares autoregressive models to BERT for linear representations
- Speculates on reasons for differences in representation quality
9. Q&A Session (38:27 - 54:37)
- Addresses questions on various topics, including:
  - Comparison to other theories in cryptography
  - Limitations of the compression analogy
  - Relationship to energy-based models
  - Implications for supervised learning
  - Importance of autoregressive modeling
  - Relationship to model size and compression ability
  - Curriculum effects in neural network training
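Section 5's joint-compression thought experiment has a well-known computable stand-in: swap the uncomputable Kolmogorov complexity K(·) for a real compressor such as zlib and you get the normalized compression distance of Cilibrasi and Vitányi. A minimal sketch of that substitution (using zlib as the compressor is the big assumption; the talk's point is precisely that real compressors only approximate K):

```python
import os
import zlib

def C(data: bytes) -> int:
    """Compressed length: a computable stand-in for Kolmogorov complexity K."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when compressing x and y
    together beats compressing them separately, i.e. they share structure."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 50
b = b"the lazy dog naps while the quick brown fox jumps " * 50
noise = os.urandom(len(a))  # incompressible, shares nothing with a

print(ncd(a, b))      # low: the shared vocabulary is exploited jointly
print(ncd(a, noise))  # near 1: joint compression buys nothing
```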
@Omeomeom
@Omeomeom 1 month ago
This was a fire talk, but the title could be more descriptive. It's misleading and made the talk sound so daunting that I put it off, yet it was very informative.
@englishredneckintexas6604
@englishredneckintexas6604 1 month ago
This was fantastic. I actually understand these concepts now.
@mostafatouny8411
@mostafatouny8411 1 month ago
What a humble and nice person Luca is.
@mcasariego
@mcasariego 1 month ago
What a great introduction to tomography and quantum estimation!!
@DistortedV12
@DistortedV12 1 month ago
Why should this be so profound, and how is it relevant to the real world?
@angelxmod3
@angelxmod3 1 month ago
The title explains it: "Platonic Representation". A Platonic object exists outside of reality, and reality is just a reflection of the perfect form. Think of a chair: it has four legs and a flat surface; it takes physical form and gets certain details, but it is never the Platonic chair. This hypothesis says that these models approach a Platonic-form representation, which is evidence for the existence of Platonic forms outside of our reality.
@user-ic7ii8fs2j
@user-ic7ii8fs2j 1 month ago
35:20 This hypothesis suggests that representations of the world are universal
@alidogramaci7468
@alidogramaci7468 1 month ago
I am delighted to see such good work being carried out at Columbia. One question I have, as I am midway into your presentation: what you call the effective data set, is it unique? Can you build a confidence or credible set (region) around it?
@pensiveintrovert4318
@pensiveintrovert4318 1 month ago
Maybe preparing first would help the speaker sound like a lecturer instead of a high schooler spitting out random statements.
@T_SULTAN_
@T_SULTAN_ 1 month ago
Fantastic lecture!
@OlutayoTella
@OlutayoTella 1 month ago
I can't stop talking about the amazing potential of BDAG; I'm sure of my great yield.
@hyperduality2838
@hyperduality2838 1 month ago
Comparison, reflection, abstraction -- Immanuel Kant.
Abstraction is the process of creating new concepts or ideas, according to Immanuel Kant. Creating new concepts is a syntropic process -- teleological.
Syntax is dual to semantics -- languages or communication, data. Large language models are using duality; if mathematics is a language, then it is dual.
Sense is dual to nonsense. Right is dual to wrong.
"Only the Sith think in terms of absolutes" -- Obi-Wan Kenobi. "Sith lords come in pairs" -- Obi-Wan Kenobi.
"Concepts are dual to percepts" -- the mind duality of Immanuel Kant. The intellectual mind/soul (concepts) is dual to the sensory mind/soul (percepts) -- the mind duality of Thomas Aquinas.
Your mind/soul converts perceptions or measurements into conceptions or ideas; mathematicians create new concepts all the time from their observations, intuitions, or perceptions. The mind/soul is actually dual.
Mind (syntropy) is dual to matter (entropy) -- Descartes, or Plato's divided line. Your mind converts entropy or average information into syntropy or mutual information -- information (data) is dual. Concepts or ideas are therefore syntropic in form or structure.
Teleological physics (syntropy) is dual to non-teleological physics (entropy) -- physics is dual.
Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics!
Duality creates reality! "Always two there are" -- Yoda.
Physics is all about generalization or abstraction -- a syntropic process, teleological.
Truth is dual to falsity -- propositional logic or Bayesian logic. Absolute truth is dual to relative truth -- Hume's fork. Truth is dual.
@GerardSans
@GerardSans 1 month ago
The hypothesis of learning convergence is false. There's no universal representation of knowledge. There are, though, confirmation, data, and anthropomorphic biases to be learned about here. See umwelt, and the differing latent-space representations even for the exact same Transformers. That's enough to refute these claims.
@GerardSans
@GerardSans 1 month ago
Besides, convergence is expected mathematically, as these models are massive approximation functions. It's only logical that they will converge. It's an approximation algorithm!
@GerardSans
@GerardSans 1 month ago
It's strange, because this is so contradictory to day-to-day practice that it shows a massive gap between theoretical AI researchers and practitioners. Each model is trained from scratch, not fine-tuned from a common ancestor. That is not possible precisely because the latent-space representations are not compatible, even between different versions of the same model.
@jorgesimao4350
@jorgesimao4350 1 month ago
The data is similar, so the umwelt is similar. Architecture and algorithm then converge to similar solutions, to "explain" the data.
@kellymoses8566
@kellymoses8566 1 month ago
It makes perfect sense for different AIs to learn similar representations of the same reality. This is similar to how science works.
@drdca8263
@drdca8263 1 month ago
So far I'm only 24 minutes into watching this, but this seems like a really important idea? In addition to the clear practical application, it seems to me like this should also imply something about, like, the theory of the development of scientific theories? If we replace the ML model with, e.g., Newton's law of universal gravitation (regarding "observations Isaac Newton knew about before publishing anything about universal gravitation" as the "training set" used to produce that theory, when we consider the requirement that the gold-standard data be independent of the training set... uhh... I guess this should give a... hm, maybe this isn't as applicable to this case as I thought. Still, seems very important!)
Edit: I suppose the points at 33:00 - 35:15 should temper my, uh, somewhat wild imaginings for how widely this could be applied... and also she goes on to point out connections to previous literature dealing with somewhat similar things that I hadn't at all heard of. I guess one can tell that I haven't really studied statistics in much depth. Nonetheless, I continue to be of the opinion that this is *very* cool.
@ATH42069
@ATH42069 1 month ago
"We can talk offline about this": the evolution of language in the information age.
@mhmhmhmhmhmhmmhmh
@mhmhmhmhmhmhmmhmh 1 month ago
Oh come on now, man.