00:00:00 How did Schmidhuber's journey into the world of AI start
00:04:05 Meta-learning vs Transfer learning
00:06:35 Gödel machines (universal problem solvers) (Schmidhuber's 2004 paper)
00:11:30 Do you think about the P vs NP problem?
00:13:10 How important is it that we have a strong formal theory behind our AI models? TL;DW: "There is nothing more practical than a good theory" haha
00:15:00 AGI will be a super simple algorithm?
00:17:35 Is evolution needed? Do we have to simulate our universe to get to AI? Is there a shortcut?
00:19:20 Does God play dice? (on the quantum level, is the world stochastic or deterministic?)
00:25:35 Compression: "All the history of science is a history of compression progress"
00:30:30 PowerPlay (AIs should propose the next (easiest) problems, not only solve the ones given to them)
00:35:30 Are humans instances of a PowerPlay agent?
00:37:55 Creativity and intelligence (pure vs applied creativity)
00:41:30 Is consciousness a byproduct of problem-solving?
00:46:35 What is the value of depth in our AI models?
00:49:50 Vanilla RNNs
43:56 "So it's compressing all the time the stuff that frequently appears. There is one thing that appears all the time when the agent is interacting with its environment, which is the agent itself. So just for data compression reasons it is extremely natural for this recurrent network to come up with little subnetworks that stand for the properties of the agent ... So just as a side effect of data compression during problem solving you have internal self models. Now you can use this model of the world to plan your future... Whenever it wakes up these little subnetworks that stand for itself, then it is thinking about itself, and it is exploring mentally the consequences of its own actions, and now you tell me what is still missing in the gap to consciousness." This is probably the best explanation for consciousness and why it is natural that an intelligent agent will have one. I'm really impressed by the deep thoughts of Jürgen Schmidhuber. This is definitely one of the most insightful and thought-provoking interviews in this series.
He is extremely intelligent. If someone builds AGI, it will be his team or DeepMind. Despite the hype, I don't think Hinton et al. are capable of actually paving the way.
@@spinLOL533 or tell everyone about it, that's probably a better option. The fundamental difference between matter and information makes the intuitively applied "limited resource" framework a fallacy in this case. More viewers will probably be a very good incentive for Mr Fridman to make even greater videos. So, please, do tell everyone you know with an interest in AI about the channel.
Now this channel is interesting! Dr. Schmidhuber has been pretty actively giving public talks but a lot of people really don't know him or his work. He also rightfully gives credit to some of the early pioneers in the field like Fukushima. I am super glad that he is in this podcast series!!
@Robert w Since when? The last interview I saw with him he said (and this shocked me) "I think I've solved AGI, but I can't say more" then he gives a huge grin. He's not one to brag, so I took his comment to heart. Conspiracy: Did he discover AGI and get silenced by the powers that be???
@@namesurname7498 His comment got me curious so I watched a few videos in order to find it. It occurs from roughly 4:20 to 5:26, but it's worth watching the whole video for more context. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-PuVphWK1k70.html
How Juergen finishes the whole conversation, I'm almost speechless, WOW.
Lex: "Is that exciting to you, that we might be the first?"
Juergen: "It would make us much more important, because if we mess it up through a nuclear war, then maybe this will have an effect on the development of the entire universe."
Lex: "So let's not mess it up."
Juergen: "Let's not mess it up."
This interview is great in that it distills Schmidhuber's unified narrative into a very short time. To be honest, the reason why everyone seems to get "Schmidhubered" is that he thought of a general framework for cognition before anyone else! Everyone should read his more speculative musings to get insight into the AGI problem.
This guy has become a bit of a meme, with his claim that any big development in deep learning, from GANs to the Transformer, was already contained in one of his papers 25 years earlier. But checking this, it turns out he is right. Not only that, I find his papers a lot clearer. He was just ahead of his time.
Yes, very interesting. One side effect: he notes that somebody at Google said there is no moat here. After 30 years this is all "prior art", and any patent protection of the basic concepts is likely to have run out, so open source can presumably win without fear of patent trolls. His early work seems to have established the basic mechanisms. Pretty cool. Patent protection is granted for a limited period, generally 20 years from the filing date of the application, and prior art is all public information that was available prior to the priority date of the patent and teaches the claimed invention of the patent.
This is an incredible conversation. The insights into Juergen's approach (the AGI algorithm will ultimately be simple, etc.) are extraordinary. One of the top videos on this channel of great interviews, or anywhere, right up there with the Musk video.
It is very interesting to see a consistent view regarding the "compression" idea from Juergen. Many of his answers, including those on the history of science and on general equations to describe the universe, revolved around the idea of using compact equations (models) to elegantly predict the outcome of an input. And it was fascinating to see how he beautifully connected that idea to personality and consciousness traits in an AGI as a mere byproduct of some compressed models that it uses to make sense of its surroundings effectively.
Wow, just wow!!! This is a brilliant interview. Juergen Schmidhuber is a gun scientist and great human being. Bring him again dear Lex Fridman, he needs to be heard more now than ever.
this was like a talk with god. how fundamental Schmidhuber thinks is just impressive. loved the idea of progress in science being a compression of previous knowledge where the compression progress is the depth of your insight
Thank you for another great video with Prof. Schmidhuber sharing his vision and thoughts. Although I agree with most of it, I'm baffled by the idea that machines have helped us tackle and solve many problems in the past decade without input from humans!!!
One of my favorite things about this particular guest is that he practically never said yes or no to any of the questions. His answers were thoughtful and insightful, such that they would not be constrained to being narrow absolutes. Once I noticed it, I paid a lot of attention to it. He is an extremely good interviewee
Christmas is almost here and Lex Fridman offers us the best holiday gift: the inventor of LSTM discussing Artificial General Intelligence. This guy is like Ray Kurzweil but with a much deeper knowledge of machine learning and deep learning (he basically revolutionized RNNs with the 1997 LSTM paper).
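For anyone curious what the LSTM mentioned above actually computes: here is a minimal sketch of a single-unit forward step in plain Python. Scalar weights and the names `lstm_step`, `wi`, `uf`, etc. are my own illustrative choices; real implementations use vectorized weight matrices, but the gate structure is the same.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step for a single scalar unit.

    w is a dict of scalar weights: input weights (wi, wf, wo, wg),
    recurrent weights (ui, uf, uo, ug), and biases (bi, bf, bo, bg).
    """
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate value
    c = f * c_prev + i * g   # cell state: additive update, the stable gradient path
    h = o * math.tanh(c)     # hidden state exposed to the rest of the network
    return h, c
```

The additive cell-state update (`f * c_prev + i * g`) is the key design choice: it lets error signals flow back through many time steps without vanishing, which is what vanilla RNNs struggle with.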
I can't seem to find the work he mentions, the "fastest way of solving all solvable problems". This might have to do with Gödel machines? Can you link that please?
"When I was a boy I thought the most exciting thing is to solve the riddles of the universe & that means you have to become a physicist. Then I realized there is something grander, you can try to build a machine that learns to become a better physicist than I could ever hope to be & thats how I thought maybe I can multiply my tiny little bit of creativity into eternity. " (paraphrased)
Internal self models are a side effect of data compression during self modeling. Indeed a beautiful and compelling formulation of why consciousness may be an emergent property of modeling the world with an active agent in it.
Schmidhuber says a lot of things worth hearing here. I'm particularly interested in the part about universe simulation. It makes sense that you'd leave yourself with a lot more compute for the controller if you simply use the real world as the model, but it just doesn't seem as pure and natural how you get to the baby if it's finely crafted first, instead of evolved in the same universe it'll eventually live in.
"All the history of science is the history of compression progress." Wow. Give this man whatever prize he wants because he deserves it, no matter what his detractors say.
@@samlaf92 The way I see it, this is probably a transition period. As long as the people who are legitimately interested in the development of AI are serious about the potentially best paths toward AGI, and are willing to acknowledge the current limitations of resources and of current techniques, we are on the right path. Look at the standard model of physics: physicists are aware that it is ugly and accept as much, and it took centuries of effort from many great minds to build it. Now there are many people trying to compress that whole body of information into the shortest possible equation describing all the basic laws of the universe. So for them it was a transition period, and so is this for AI researchers.
@@samlaf92, not necessarily. They might be overparametrized w.r.t. the available data for training the networks, but, if the model is correct, they also might be "underparametrized" w.r.t. all the possible samples that could be generated from the original distribution where the training data comes from. Once you have your net, you do not need any sample. It's like saying that a set of relativistic equations of motion are "overparametrized" when compared with a handful of points in time-space of some object.
First, incredible interview, on both sides. Super stimulating ideas. Some comments from a lesser, but curious mind:

1. I loved Juergen's explanation of how consciousness arises from solving problems. But if the universe is determined from the moment of the Big Bang, then wouldn't decisions to solve problems be an illusion, i.e. both the question and the solution are already determined? And if it is an illusion that we decide things, then wouldn't consciousness also be an illusion, maybe like the sensation of déjà vu, just some recorded portion of our mind replaying what we appear to have experienced? And of course, where is the excitement in the discovery of science if our path is already determined? Finally, on this issue, what is the evidence that there is only one past? We assume this to be true, but what is the evidence for that?

2. Job loss through AI. I agree that humans find new work to replace the old jobs lost to technology. However, this involves great social upheaval. I believe it requires financial depressions to work through the dead technologies. IMHO governments have used QE and other means to try to reflate the economies and keep people working in jobs that the economy would not otherwise pay for, including working for zombie companies (companies that only exist because of super cheap debt). No one seems to have a solution so far on how to get to a world where people are working in new jobs without massive global debt, financial crises, social disorder, and possibly war (as in WWII), and there is little recognition of the relationship between technological advance, on the one hand, and deflation and its resulting social problems, on the other. Also, wouldn't AI replace the person-to-person jobs if AI were completely successful in imitating people? In one of my stories, rich people pay poor people just to stand around in groups and do nothing in particular, but even those jobs could be replaced by robots. Maybe we will require people to carry a certificate of humanity in order to get a job.

3. The long-term existential threat of AI. What is the argument that AI will have an evolutionarily evolved ecosystem? It's also possible that each AI will find the last one as boring as humans do, and it is also possible that competing AIs would swallow up and incorporate the consciousness of other AIs to expand their own. I would suggest that we teach AIs the great value of preserving lesser life forms, and not rely on their learning from our history of how we treat less powerful plants and animals (yikes!). Thank you. William L. Ramseyer
Meta-learning and meta-computation refines our concept of 'knowledge'. One example of this refinement is the AlphaZero Paradox which shows us that human belief can bias what we think we know.
2:17 ".... then you've solved all the problems. At least, all the SOLVABLE problems." That's a nod to the limits of computability. The "P versus NP" problem. These people are on the forefront of human history, and we get to watch them have a conversation, for free. What a world!
So good. I hope Lex goes back to talking with more such hardcore scientists instead of talking to random famous people without any theme; we have Joe for doing that.
We are so lucky to have this channel bringing us, the public, so close to these minds. Thank you for it. And not to be evil here, but is he trying to sound like Christopher Walken? (Maybe it's just me.)
"Meta" is the pairing of solutions with their corresponding problems. The compression of data is the expansion of abstraction. As Dostoevsky said in Crime & Punishment, only extraordinary people have the talent to utter a new word. A "new word" is the next level of theory, of abstraction. Intelligence is recursion.
An interesting thing is that PowerPlay, which I know nothing about, presumably started out with great complexity. The human brain, presumably, started out literally with the Big Bang. No step was skipped; at every instant along the way, whatever passed on wisdom and intelligence and a physical form for problem solving had to survive. Idk, that may be a reason why AGI will quickly surpass human intelligence. I guess it depends on what problems PowerPlay started on.
Look into lapel (lavalier) mics. I see your new setup, but lapel mics limit how far away the subject can get from them, and they are so small and unintrusive. Audio Technica has some great options around $100. :) I used them to shoot my full-length doc on glassblowing; awesome sound quality.
"...the deviations come along and all I have to do is calculate the deviations from the prototype..." @ 43:50 This gives a cognitive justification of Platonic idealism. A form or prototype serves to reduce time and memory costs.
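The prototype-plus-deviations idea in that quote is the same trick classical compression uses: store one shared representative once, and only the (small, cheap) residuals per item. A toy sketch in Python, with hypothetical function names of my own choosing:

```python
def delta_encode(values, prototype):
    # Store only each value's deviation from the shared prototype.
    return [v - prototype for v in values]

def delta_decode(deltas, prototype):
    # Reconstruct the originals by adding the prototype back.
    return [prototype + d for d in deltas]

# Observations that cluster around a prototype yield small residuals,
# which take far fewer bits to store than the raw values.
faces = [100.2, 99.8, 100.5, 99.9]
prototype = sum(faces) / len(faces)
residuals = delta_encode(faces, prototype)  # numbers near zero
```

The memory saving comes from the residuals having a much narrower range than the raw values, exactly the cognitive economy the comment attributes to Platonic forms.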
I want to strongly emphasize that beauty being the same as simplicity is very subjective and although I agree with it in some scenarios I disagree in some others. Music is complexity, chaos and order combined that creates beauty, and that is the nature of our universe too.
Emotional consequences determine conscious decision making of humans and animals. Understanding these neural network processes will help develop simple algorithms for long short-term memory networks in my opinion. Even as an infant baby...