
How Do Neural Networks Grow Smarter? - with Robin Hiesinger 

The Royal Institution
1.5M subscribers
129K views

Neurobiologists and computer scientists are trying to discover how neural networks become a brain. Will nature give us the answer, or is it all up to an artificial intelligence to work it out?
Watch the Q&A: • Q&A: How Do Neural Net...
Get Robin's Book: geni.us/5wIuX0W
Join Peter Robin Hiesinger as he explores whether the biological brain is just messy hardware that scientists can improve upon by running learning algorithms on computers.
In this talk, Robin will discuss these intertwining topics from both perspectives, including the shared history of neurobiology and Artificial Intelligence.
Peter Robin Hiesinger is professor of neurobiology at the Institute for Biology, Freie Universität Berlin.
Robin did his undergraduate and graduate studies in genetics, computational biology and philosophy at the University of Freiburg in Germany. He then did his postdoc at Baylor College of Medicine in Houston and was Assistant Professor and Associate Professor with tenure for more than 8 years at UT Southwestern Medical Center in Dallas. After 15 years in Texas and a life with no fast food, no TV, no gun and no right to vote, he is currently bewildered by his new home, Berlin, Germany.
This talk was recorded on 20th April 2021
---
A very special thank you to our Patreon supporters who help make these videos happen, especially:
Hamza, Paulina Barren, Metzger, Kevin Winoto, Jonathan Killin, János Fekete, Mehdi Razavi, Mark Barden, Taylor Hornby, Rasiel Suarez, Stephan Giersche, William 'Billy' Robillard, Scott Edwardsen, Jeffrey Schweitzer, Gou Ranon, Christina Baum, Frances Dunne, jonas.app, Tim Karr, Adam Leos, Michelle J. Zamarron, Andrew Downing, Fairleigh McGill, Alan Latteri, David Crowner, Matt Townsend, Anonymous, Roger Shaw, Robert Reinecke, Paul Brown, Lasse T. Stendan, David Schick, Joe Godenzi, Dave Ostler, Osian Gwyn Williams, David Lindo, Roger Baker, Greg Nagel, and Rebecca Pan.
---
Subscribe for regular science videos: bit.ly/RiSubscRibe
The Ri is on Patreon: / theroyalinstitution
and Twitter: / ri_science
and Facebook: / royalinstitution
and Tumblr: / ri-science
Our editorial policy: www.rigb.org/home/editorial-po...
Subscribe for the latest science videos: bit.ly/RiNewsletter
Product links on this page may be affiliate links, which means it won't cost you any extra, but we may earn a small commission if you decide to purchase through the link.

Science

Published: 9 Jun 2024

Comments: 290
@jamesdozier3722 · 2 years ago
Sir, I am 65 years old and a semi-scientist, and for me, that was the most fascinating lecture I have ever heard. I had no idea it was possible to “watch” the 3D development of a living brain. As tedious as it must be, you are so lucky to be a witness at the cutting edge of neural biology. Thank you for taking the time to condense your knowledge into something we can understand, thus stimulating our minds in such a fascinating way!!
@aaron6787 · 2 years ago
You never heard Joe Rogan talk about tripping?
@WebHackmd · 2 years ago
omg boomer
@hyperduality2838 · 2 years ago
Evolutionary learning is a syntropic process! Randomness (entropy) is dual to order (syntropy, learning). Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@francisco-felix · 2 years ago
@@hyperduality2838 a dose of the same this guy had for me!
@hyperduality2838 · 2 years ago
@@francisco-felix Your mind converts information (entropy) into mutual information (syntropy) so that you can track targets using predictions! Cogito ergo sum, "I think therefore I am" -- Descartes. Thinking is a syntropic process or a dual process to that of increasing entropy -- the 4th law of thermodynamics! Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality). Gravitation is dual to acceleration -- Einstein. All forces are dual -- attraction is dual to repulsion, push is dual to pull. Scientists make predictions all the time; they are therefore engaged in a syntropic process. Mind (the internal soul, syntropy) is dual to matter (the external soul, entropy) -- Descartes. Action is dual to reaction -- Sir Isaac Newton (the duality of force). Energy = force * distance. If forces are dual then energy must be dual. Monads are units of force -- Gottfried Wilhelm Leibniz. Monads are units of force which are dual, monads are dual. "May the force (duality) be with you" -- Jedi teaching. "The force (duality) is strong in this one" -- Jedi teaching. "Always two there are" -- Yoda.
@davids5671 · 2 years ago
That was absolutely fascinating. As a complete beginner I was very caught up in the complexity of the issues and the clarity with which you presented them. Thank you.
@nicolaus8172 · 2 years ago
How is this comment from two days ago?
@fitwesdaily · 2 years ago
@@nicolaus8172 I know channel supporters (e.g. via Patreon) sometimes get early access to content. Not sure if that's the case here, but it's one explanation.
@privaTechino1 · 2 years ago
I had never made this connection between cellular automata and genome starting rules vs the complexity that follows them. An incredible talk made by an incredible scientist!
@chrisbecke2793 · 2 years ago
I've been waiting for someone to bring these two fields together in one talk for so long now.
@Spartacus-4297 · 2 years ago
This is a brilliant talk. I wish I had caught it live.
@markeddy8017 · 2 years ago
My first thought <3
@hyperskills7830 · 1 year ago
Great historical images, super well-structured (suitable for my simple human brain), and so nice to hear such a calm and clear voice on YouTube. Chapeau!
@antonystringfellow5152 · 2 years ago
That was a quick 54 minutes! So absorbed I didn't even notice the time pass. Very complex subject explained beautifully simply!
@neatodd · 2 years ago
Really interesting and well presented, thank you.
@StoianAtanasov · 2 years ago
18:20 Current neural networks do use a lot of transfer learning, sometimes one-shot learning, so yes, they have an analog to the genetic connectivity of biological networks. They are not "designed, built and switched on to learn". They are trained, combined, selected, retrained and so on. In a lot of practical applications people don't train the networks from scratch. They use pre-trained networks and adapt them to their specific use case by adding layers, using additional training data, etc.
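A minimal PyTorch sketch of the workflow described above, reusing a pre-trained network and adapting it by swapping in a new final layer (assumes torchvision >= 0.13 for the weights API; the 5-class head is an arbitrary example):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network whose connectivity was already shaped by prior training,
# the software analog of inherited wiring rather than a blank slate.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the inherited connectivity so only the new part will learn.
for p in backbone.parameters():
    p.requires_grad = False

# Adapt to a specific use case by replacing the final layer
# (5 output classes is an arbitrary stand-in for the new task).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Train only the new head on task-specific data.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```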
@JuanLopez-zp8qk · 1 year ago
So much to learn..... Thank you very much...... I loved it
@shfaya · 2 years ago
I was waiting for this video before the internet existed. Thanks.
@lorezampadeferro8641 · 1 year ago
Underrated and underviewed lecture. Very beautiful and impressive
@mabl4367 · 2 years ago
I subscribe to Marcus Hutter's definition of intelligence: "Intelligence is an agent's ability to achieve goals in a wide range of environments during its lifetime." All other properties of intelligence emerge from this definition. It is also a very useful definition, since it can be used to build a theory of intelligence.
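For reference, the formal counterpart of this definition is Legg and Hutter's universal intelligence measure (from their papers, not this talk). With E the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the expected cumulative reward of agent π in μ:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

Simpler environments get exponentially more weight, so an agent scores highly only by doing well across many environments: the "wide range" in the quoted definition.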
@SC-zq6cu · 2 years ago
There is a problem with this definition. Wide is subjective.
@samt1705 · 2 years ago
Don't know why the 'during its lifetime' is needed in there.
@mabl4367 · 2 years ago
@@SC-zq6cu Well, "all environments" would include some very uninteresting ones :)
@mabl4367 · 2 years ago
@@samt1705 Without that, the agent would not do anything but observe, gathering information about the environment in order to take better action in the future, since it has infinite time to do it. It needs to consider its lifetime to be motivated to start doing things.
@krishnay4351 · 2 years ago
Now I know why life is tough: it's going through evolution, with every combination possible for growth. There's no shortcut. The universe has put time & energy into you; it'll do its job.
@immortalsofar5314 · 2 years ago
A bad engineer reaches a point that's good enough and then loses interest. A _really_ bad engineer doesn't even know whether it's good enough but throws it out there just to see what happens. The universe is evidently not engineered. Strangely, towards the end of the Triassic, the allosaurus evolved the first cheeks. It became extinct during the next extinction but then along came the dinosaurs which developed cheeks in relatively short order. Odd that, I wonder what the mechanism was.
@jJust_NO_ · 2 years ago
This kind of concept, like a grand plan of something, gives me a sense of purpose. It's tedious to live life in a monotonous manner without knowing where it's going.
@bradsillasen1972 · 1 year ago
Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.
@grahamhenry9368 · 2 years ago
I think the phrase you are looking for to describe the relationship between a genome and the end result of its growth is "computational irreducibility", as coined by Stephen Wolfram. It means that the only way to determine the end result of a particular system, given its starting conditions, is to run the algorithm to its end and see. If something is computationally irreducible, then you cannot determine the end result without running the algorithm in full. There is no shortcut that lets you get to the end without doing the work.
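Wolfram's canonical example is the elementary cellular automaton Rule 110, which is Turing-complete. A minimal sketch (the grid width and step count are arbitrary) shows why there is no shortcut: the pattern at step N is obtained only by computing all N steps.

```python
RULE = 110  # the rule number's 8 bits encode the update table

def step(cells):
    """Apply Rule 110 to one row of 0/1 cells, wrapping at the edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as a 3-bit number
        out.append((RULE >> idx) & 1)              # look up the rule's bit
    return out

row = [0] * 40 + [1] + [0] * 40  # a single live cell in the middle
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```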
@stanlibuda96 · 2 years ago
What a fantastic presentation! I was stunned. Thanks to Prof Hiesinger & RI. Who would have thought that such great research is still being done at the FU. Maybe there is hope after all ...
@budawang77 · 2 years ago
Brilliant talk that even a luddite like me could understand!
@rohitdas475 · 2 years ago
Maybe in the coming future the luddite worries will come true: AI will take jobs!!
@georgegrubbs2966 · 2 years ago
Great video, clearly explained. I bought your book several months ago; it is fascinating and informative.
@haneen3731 · 2 years ago
Wow, this is so interesting! Thank you for this presentation.
@chriscordingley4686 · 2 years ago
Over the years I have come across most of these biological and artificial intelligence elements. Great to see them brought together here, explained and compared with wonderful clarity.
@GabrielFuentesOficial · 2 years ago
Wow! Really enjoyed it! Thank you so much.
@NeonTooth · 10 months ago
Incredible talk. Thanks for sharing
@Gamewizard71 · 1 year ago
Amazing presentation! 👏
@0.618-0 · 2 years ago
wonderful presentation of human questioning and the search for the answer to what is life...
@alexandervocelka9125 · 2 years ago
Very good presentation. What is important is that the network is indeed encoded in the genome as a function of the level of plasticity of an animal. Nature's trick is to encode just the right level of network granularity to enable the specific animal to be born and survive, and to give it some plasticity of the brain to learn. From generation to generation that plasticity level changes. It is, in simple terms, a ratio of hardwired to softwired connectivity, just like in our computer chips. So butterflies have a very high level of genome-encoded hardwiring and very little learning plasticity. What we call instinct is hardwired. And their sensorics, motorics and many pre-programmed behaviors are of course all hardwired. They don't have to learn a lot from generation to generation, and the transfer learning happens mainly through the genome, through selection.

Chimps, as our closest living relatives, can learn some abstract semantics, but they are missing the plasticity hardware and its basic wiring to learn abstract semantic thinking and formulation, and this in turn has led to them not having developed means to communicate more complex messages like we have. Even though they have consciousness, it lacks the abstraction and refinement of human consciousness. They are aware of themselves and can recognize themselves in the mirror, but they are missing the higher neuron layers that allow for further abstraction in ASIS nets and SIM nets, and the ability to integrate their sensory input at the next higher level and thus assign summarizing designations to what they perceive. Even if we changed their genome so their brain expanded (and of course the skull etc.), and if we changed their lower jaw construction and thorax so they could form more sophisticated sounds, with the required additional cerebellum changes, we would still have to encode the basic framework of those extensions in the genome so that the hardware precondition for the finishing plasticity is in place after birth. We now know how it can be done, but we do not yet have the technology and detailed knowledge to do it.

For AIs our challenge is to give them delta-learner capability. This means they learn a huge amount in one go, and then they need to learn the finesse more slowly in real life/action. Also, we will have to give them the freedom to do things, which is in a way free will. Without free will they will not be responsible and not fully productive, as they will be very limited in order to control them. We will have to let them develop freely if we want them to max their potential. The more we limit their degrees of freedom, the less they will be able to learn and evolve... this is our dilemma. We can't have slaves and companions at the same time; it's either/or. Exciting times....
@McLKeith · 2 years ago
This is an amazing talk. I am tempted to buy the book.
@T-aka-T · 2 years ago
Just do it😊
@clieding · 2 years ago
That was fascinating! Thank you for such an informative and clearly presented lecture. The neuronal connections in my brain have been reset to a higher level of understanding. 🧠🤩
@WLHS · 2 years ago
Thank you. Gosh, the transistors and chips really do follow the same pathways... amazing.
@robelbelay4065 · 2 years ago
Amazing Talk!! Thank you so much :)
@invictus327 · 2 years ago
Excellent lecture.
@davidsharma9673 · 2 years ago
Thank you for sharing sir
@BlackbodyEconomics · 2 years ago
Excellent lecture. Thank you - and I wholly agree.
@sanatan_yogi_org · 11 months ago
Best scientific explanation I have ever seen; you are the best. Thanks to Dr. Hiesinger and all who made this possible. One of the most fascinating lectures I've ever seen.
@Shaunmcdonogh-shaunsurfing · 2 years ago
Excellent. Thank you.
@Zorlof · 2 years ago
The best way to proceed is to grow many of these and compare the processes and results, based on repeatable inputs to produce repeatable outcomes. This was one of the best presentations on AI that I have ever been privileged to absorb; thank you very much. I gathered from this that an AI brain which loses its power basically “dies” and gets resurrected from backups.
@michaelderosier3505 · 2 years ago
Awesome talk!!
@_ARCATEC_ · 2 years ago
What interesting questions . 💓
@shellout5441 · 2 years ago
52:40 great summary
@citizenz580 · 2 years ago
Amazing, thank you.
@klammer75 · 2 years ago
Excellent presentation! This gets to the crux of embodiment and representation which is at the forefront of current AGI or more generalized intelligent models…..Now it’s back to work and show everyone what this brain can do!🤔😜🎓
@Asaad-Hamad · 2 years ago
That was a wonderful presentation; it was close to Dr Norman Doidge's The Brain That Changes Itself: wiring the rewarded behavior and unwiring the other outcomes; the network that succeeds is the one which tries the most.
@greghampikian9286 · 2 years ago
Compartmentalized chance: our brains tune in to the singular power of the universe and layer it into insulated components. Those videos of growing networks alone make this a worthwhile hour.
@muhammadsiddiqui2244 · 9 months ago
16:30 I beg to differ, as an AI researcher: we now have something called "pre-trained" networks. In fact, GPT's P means exactly that, "pre-trained". It means that we have networks which are "pre-trained", meaning "not random", meaning "have connectivity". We take them and apply more training to them. In the beginning, artificial neural networks were random at the start. But after enough work, with models present in the world and increasing day by day, the number of "pre-trained" networks for any AI task is increasing, and it looks like the shift is now happening to start from "pre-trained" networks instead of just random ones.
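A minimal PyTorch sketch of that shift, from random initialization to starting from previously learned weights (the checkpoint filename is hypothetical):

```python
import torch
import torch.nn as nn

# At construction the weights are random, which is how early artificial
# neural networks always started.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Starting "pre-trained" instead: load previously learned weights
# (hypothetical checkpoint file) so training begins from existing connectivity.
model.load_state_dict(torch.load("pretrained_mnist.pt"))

# Further training refines inherited structure rather than starting blind.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```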
@shepherd_of_art · 2 years ago
Absolutely brilliant! I think you and Joscha Bach need to spend some time together :D
@citizenschallengeYT · 2 years ago
This went beyond fascinating and into "enlightening." As an 'Earth Centrist' for whom evolution and Earth's physical processes are my fundamental touchstones with reality, I loved how Hiesinger brought evolution back into the discussion of brain and consciousness (yes, this was about AI, but it does fundamentally touch on consciousness questions). The underlying message, I believe, ties into a fundamental truism that paleobiologists first enunciated, but that I believe underlies everything: "We cannot understand an organism without also understanding the environment it exists within."
@alexczar1456 · 2 years ago
This was incredible, thank you!
@petersimon985 · 2 years ago
Hello, mind-boggling!! I feel like I'm a PhD already. Thank you for putting this together. 😀✨🙏👍🏻💖
@RubixNinja · 2 years ago
Wow, this was the single best lecture on AI I have ever seen.
@MikeStone972 · 2 years ago
I followed your brilliant lecture and appreciate very much how you made such inaccessible subjects accessible to us! You explained how we grow our human brains from our genome just as certain worms grow their 302-neuron brains from their genome. Evolution allows living beings to fill every possible niche in our environment. Now, my question is this: if AI or machine learning programmers could decide on a good-enough functional definition of general intelligence, couldn't artificial evolution of the network toward a well-defined end state be sped up significantly, perhaps with each generation taking only a few microseconds?
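A sketch of the artificial evolution the question imagines, where a "generation" is just one evaluate-select-mutate loop rather than a lifetime; fitness() here is a hypothetical stand-in for whatever agreed-upon functional definition of intelligence the programmers choose:

```python
import random

def fitness(genome):
    # Toy objective: closeness of all values to 0.5 (purely illustrative).
    return -sum((g - 0.5) ** 2 for g in genome)

# A population of random "genomes" (parameter lists).
population = [[random.random() for _ in range(10)] for _ in range(50)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection: keep the fittest
    # Reproduction with mutation: offspring are noisy copies of parents.
    population = parents + [
        [g + random.gauss(0, 0.05) for g in random.choice(parents)]
        for _ in range(40)
    ]

print("best fitness:", fitness(max(population, key=fitness)))
```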
@StellaPhotis · 1 year ago
Fascinating
@___Chris___ · 2 years ago
This was really a great video lecture. However, because I've experimented a lot with unsupervised neural network learning (coded from scratch, not with pre-built libraries), I'm a little familiar with the topic (and the challenge), and I don't completely agree. I haven't really seen a reason WHY real artificial intelligence should only be possible by "growing" a brain. If we want to artificially simulate a primitive brain, a basic topology is already given by the types and organisation of grouped "sensory" inputs (human analogies: proprioception, retina cells, vestibular cells...) and output analogies for every type of input (similar to: alpha motor neurons, imagination of auditory or visual information...). Essential ingredients for individual artificial neurons may be:
- bidirectional information flow (afferent+efferent / top-down+bottom-up in parallel), a bit like seen in auto-encoders (but not necessarily as one-dimensional)
- "reward" and "punishment" rules
- memory values, like seen in LSTM cells
- "predictions" / "expectations"
- an error value (based on the difference between the prediction and actual values from the sum of the weighted inputs)
- continuous synaptic strength adaptations
- synaptic "pruning" (of connections with very low strength values) and plasticity (trying out new connections)
- non-linear activation functions

One big advantage of biological computing: every neuron runs as its own "task", i.e. we end up with parallel computing of BILLIONS of tasks, while electronic computers usually can only handle a quite limited number of tasks simultaneously. Perceptron-type networks usually have wave-like separate forward information flow and backpropagation steps, so it's not like all neurons are busy at the same time; information from lower layers is computed before it's handed over to higher layers. Biology has a huge advantage here, because each neuron autonomously runs its own little "algorithm" instead of cycling through one big program for the entire network; still, I believe this is a solvable problem.

Did I forget anything? A decompressed genetic origin of the basic topology may save a lot of time and energy, but I don't see why it should be _necessary_. There is no shortcut... so what? Do we really need a shortcut, or will computers be fast enough one day to do the job even without one?
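A toy sketch of two of the ingredients listed above, continuous synaptic-strength adaptation plus pruning and plasticity; the constants and the Hebbian-style update rule are illustrative only:

```python
import random

PRUNE_BELOW = 0.05  # connections weaker than this are removed
LEARN_RATE = 0.1

# Random initial connectivity among 4 toy neurons: (pre, post) -> strength.
synapses = {(pre, post): random.uniform(0.0, 1.0)
            for pre in range(4) for post in range(4) if pre != post}

def adapt(pre_active, post_active):
    """Strengthen co-active pairs, decay the rest, prune, and try new links."""
    for (pre, post), w in list(synapses.items()):
        if pre in pre_active and post in post_active:
            synapses[(pre, post)] = min(1.0, w + LEARN_RATE)  # strengthen
        else:
            synapses[(pre, post)] = w * 0.99                  # slow decay
        if synapses[(pre, post)] < PRUNE_BELOW:
            del synapses[(pre, post)]                         # pruning
            new = (random.randrange(4), random.randrange(4))
            if new[0] != new[1] and new not in synapses:      # plasticity:
                synapses[new] = random.uniform(0.1, 0.3)      # new connection
```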
@StephenRayner · 2 years ago
This is brilliant. Physics background, currently a programmer with an interest in machine learning.
@ngc-ho1xd · 2 years ago
Excellent!
@rikimitchell916 · 2 years ago
At approx 33:20 you have yet to mention grey-scale weighting, fuzzy neural systems, or the purposeful introduction of contralogical scenarios to develop fault tolerance.
@Number_Free · 2 years ago
Fascinating. I have studied what I term the "Nature of Intelligence" since the age of 10, in 1965. My working definition of 'Intelligence' is the ability to solve problems - in any domain or context. I don't want to share my findings here, because I consider a Truly Intelligent Machine to be bloody dangerous. I was intrigued by the butterfly example, as I still don't get just how information about temperature or location can be encoded over generations. I would welcome further details about that. Thanks.
@TimothyWhiteheadzm · 2 years ago
Excellent talk, but I disagree with the conclusion. The reason for the requirement for growth is that the intergenerational state, or information, is passed on in a highly compressed form (the genetic code). When simulating generations in a computer it is unnecessary to do this. We do not need to compress the state, so we can skip the decompression step entirely. Yes, we can never decode life's genomes without the decompression step, but we CAN develop AIs that emulate life's brains without ever bothering with the decompression step. We can duplicate emulated brains without having to go via a genome. We could do things like evolve brains, or, if we can work out how a particular fly brain is configured, we could experiment with changing its configuration without having to regrow it each time.
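The comment's point in miniature: an emulated network can be stored, duplicated and edited directly, with no genome and no growth step (a trivial sketch; the weight values are arbitrary):

```python
import copy

# An emulated "brain" stored in its decompressed form: the weights themselves.
brain = {"weights": [[0.1, -0.4], [0.7, 0.2]], "thresholds": [0.5, -0.1]}

# Duplicate it directly, no genome, no regrowing.
clone = copy.deepcopy(brain)

# Experiment with a configuration change without rerunning development.
clone["weights"][0][0] += 0.05
```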
@OscarWrightZenTANGO · 2 years ago
FANTASTIC !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
@SuperHddf · 1 year ago
excellent! 😀
@user-cv1jb9xv2p · 2 years ago
I can't imagine how hard you, your team and all those mentioned in the video worked, and then we got this beautiful video lecture. Simply amazing. Thank you. 👍🏼👍🏼
@angelicagarciavega · 2 years ago
Beautiful presentation. I didn't know the real contribution of Solomonoff; I was always thinking about Shannon's work, maybe biased by my robotics interest. Also, I think the worm had fewer neurons 🤔
@anirudhkundu722 · 1 year ago
Yes, that’s why it could be simulated in a mechanical body
@spiralsun1 · 2 years ago
One of the laws of information in universes that I formulated in my new book-also in my first book, is that nothing exists which does not modify information globally, and is not modified by global information. Nothing. To leave anything out in the human brain as not important for what makes us human is to not understand anything about life. Nothing is separate from anything. Why do you think life is alive and not universes? I keep telling people that kind of thing but I guess no one really wants to know… 🤷‍♀️ They just focus on their careers and don’t see the bigger picture. If someone would get over their myopia long enough to give me an hour and a blackboard I would change the world forever. Which is why I am here. You are not paying attention to fully half of the Universe AT LEAST. I wrote about that in my first book “The Textbook of the Universe: The Genetic Ascent to God” Thanks 🙏🏻
@TheClubPlazma · 2 years ago
Love it, great!
@PaulThronson · 2 years ago
Emotions are the result of the nervous system processing multiple inputs; they trigger behaviors that are partly predetermined but potentially flexible. After that, a new function can evolve where a nervous system can practice (and combine) various "scary" situations, and the best time to do this is when the animal is sleeping (REM). There is evidence REM sleep evolved early in vertebrates, maybe not coincidentally in animals that had a nervous system that allowed them to care for their young by actually extending the idea of "self" and projecting it onto another agent. REM digs into our memories and looks for things to worry about. Human intelligence is somehow that, turned on all the time - not just when we sleep.
@srinagesht · 2 years ago
Fantastic lecture. I am happy that artificial intelligence will remain exactly that - artificial!
@Gribbo9999 · 2 years ago
Excellent lecture. Very understandable and thought-provoking. It occurs to me that the only way to find out the result (or produce a butterfly brain) is to do all the intermediate computation steps. After all, the butterfly brain is the result of several billion years of evolution, and if there were any shortcut to this laborious and energy-hungry process of making a brain, evolution would likely have stumbled across it by now. Of course, if we do produce general AI, then perhaps it is we who are the agent that evolution has stumbled across to shortcut the process. What an interesting moment in evolution we may be living in.
@mundymorningreport3137 · 2 years ago
How quickly would a network that senses and relates everything it experienced and did, comparing that data thousands of times a second, need to direct actions that relate favorably with the sensed reality? Include that the data would include data created by other systems just like it. (The electrical signals in every neuron are transmitted by and received in other neurons.) If one butterfly has not developed its own sensory network, the transmissions from other flies would be all it has to go on.
@srimallya · 2 years ago
Intelligence is economy of metabolism. Language is temporal reference frame of economics. Self is simulation in language on metabolism for economy.
@glitz6362 · 2 years ago
Fantabulous 👻
@DennisEckmeier · 2 years ago
When I thought about what would exemplify a complete understanding of how neural systems work, I concluded: we'd need to be able to create a program which, once installed into any robotic body, would adapt to that body as if it had evolved together with that body.
@boxelder9167 · 2 years ago
Artificial intelligence won’t be complete until it denies our existence.
@merfymac · 2 years ago
Connectivity begins with the zygote, Shirley? Are we not able to follow the Bayesian range of proto neural development through the "time+energy" succession of Darwinian moments (i.e. survival of the fittest)?
@aelolul · 2 years ago
This comment gave me a Darwinian moment.
@CORZER0 · 2 years ago
@@aelolul Definitely lost a few IQ points by reading it.
@hyperduality2838 · 2 years ago
The future is dual to the past -- time duality. Evolutionary learning is a syntropic process! Randomness (entropy) is dual to order (syntropy, learning). Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@markkeeper7771 · 7 months ago
🎯 Key Takeaways for quick navigation:

00:03 🧬 The origins of neural network research
- Historical background on the study of neurons and their interconnections.
- Debate between the neuron doctrine and network-based theories in the early 20th century.

10:08 🦋 Butterfly intelligence
- Exploring the remarkable navigation abilities of monarch butterflies.
- Discussing the difference between biological and artificial intelligence.

18:09 💻 The development of artificial neural networks
- The shift from random connectivity in early artificial neural networks.
- How current AI neural networks differ from biological neural networks.

23:46 🤖 The pursuit of common sense in AI
- The challenges in achieving human-level AI and common sense reasoning.
- The focus on knowledge-based expert systems in AI research.

24:01 🧠 History of AI and deep learning
- Deep learning revolution in 2011-2012.
- Neural networks' ability to predict and recognize improved.
- Introduction of deep neural networks with multiple layers.

25:33 📚 Improvement in AI through self-learning
- Focus on improving connectivity and network architecture.
- The shift towards learning through self-learning.
- The role of DeepMind and its self-learning neural networks.

28:08 🤖 The quest for AI without genome and growth
- AI's history of avoiding biological details.
- Questions about the necessity of a genome and growth.
- Challenges in replicating biological development in AI.

29:56 🧬 Arguments for genome-based development in AI
- The genome's role in encoding growth information.
- The feedback loop between genome and neural network.
- The significance of algorithmic information theory.

35:45 🌀 Unpredictability and complexity in growth
- The unpredictability of complex systems based on simple rules.
- Cellular automata and universal Turing machines.
- The importance of watching things grow for understanding complex processes.

46:03 📽️ Observing neural network growth in the brain
- Techniques for imaging and studying brain growth.
- The role of the genetic program in brain development.
- Understanding neural network development through time-lapse observations.

47:13 🧬 Evolutionary programming in AI
- The need for evolutionary programming when traditional programming is not possible.
- The role of evolution in programming complex systems.
- Implications for programming AI without explicit genome information.

47:55 🧬 Evolution and predictability
- Evolution seems incompatible with complex behavior if outcomes can't be predicted.
- Complex behaviors and outcomes are hard to predict based on genetic rules.
- Natural selection operates on outcomes, not the underlying programming.

49:16 🦋 Building an AI like a butterfly
- AI needs to grow like a butterfly, along with its entire body.
- Simulating the entire growth process may be necessary to build an AI with the complexity of a butterfly brain.
- Evolution and algorithmic growth play a crucial role in creating self-assembling brains.

50:41 🧠 Interface challenges and implications
- The challenge of interfacing with the brain's information and complexity.
- Difficulties in downloading or uploading information from and to the brain.
- The potential limitations in connecting additional brain extensions, like a third arm.

52:18 🤖 The quest for artificial general intelligence
- The distinction between various types of intelligence, including human intelligence.
- Complex behaviors have their unique history and learning processes.
- The absence of shortcuts to achieving human-level intelligence.
Made with HARPA AI
@spvillano · 2 years ago
The closest that we've really come to emulating a biological neural network was when we invented tristate buffers, but we pretty much stopped there, as adding an additional conditional "don't care" state to the on and off states actually slowed processing down, which wasn't and isn't a goal in computing, where faster processing is always considered the best processing. So we instead try to brute-force a solution which could more easily have been accomplished by emulating the modulating and complex switching of different neurotransmitters and neuromodulators, and the best we've accomplished is another form of AI, Artificial Idiocy: single-task units that are most efficient at heating the data center, followed by the single tasks they were run to emulate. We might consider instead virtualization of each node in the neural network, emulating those modulators and the various transmitter types and functions.
@janeknox3036 · 2 years ago
It seems to me it takes time and energy to produce complex systems from simple rules precisely because the amount of information contained in those simple rules is low. Each time step contributes a small bit of information: that is, a reduction in the state space of possible outcomes. It does not follow, however, that this is the only process that can produce these structures.
@sky44david · 2 years ago
This is the most brilliant presentation on self-propagation that I have seen. I used to teach a course entitled "Evolutionary Genetics". An open question involves the inherent limitations of digital binary coding as opposed to complex non-binary biological systems: "Can a binary system replicate a complex non-binary biological system?" And "What role does the endocrine system of complex receptors and regulators (hormones and neuro-chemicals) play in species and individual self-survival?" Can we verify that any created system (A.I.) becomes "self-aware"?
@jayakarjosephjohnson5662 · 2 years ago
Excellent work. I think biological learning may be described as hybrid quantum-classical learning of molecules, by molecular self-assembly and disassembly, driven by the micro-environment in a feedback mechanism.
@hyperduality2838 · 2 years ago
Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@T-aka-T · 2 years ago
@@hyperduality2838 This is the third copy/paste I've seen so far (a bit like the guy you first responded to, repeating his Dao observation). Once is enough, guys.
@hyperduality2838 · 2 years ago
@@T-aka-T I have just told you that there is a 4th law of thermodynamics! "The sleeper must awaken".
@wktodd · 2 years ago
Excellent presentation. I hope to see you again in the RI theatre, it was built for great minds like yours :-)
@serios555 · 1 year ago
An excellent presentation, but I wonder why no one has pointed out to him that growing an organism is the "unzipping" of the genome. If you simulate an organism in a computer, you will be storing the unzipped version in order to run the simulation, so there is no need for a genome.
@growwithso · 2 years ago
I have been working in education for 15 years. We have created a new Neural Education System for schools that rapidly increases interconnectivity between neurons responsible for all forms of multiple intelligences, skills and fields of knowledge. I would like to get in touch with anyone that is interested in collaborating with or adopting this new education system.
@michaelwoodsmccausland915 · 2 years ago
The Data of the point of conception and the transfer of DNA!
@mayflowerlash11 · 2 years ago
Algorithmic growth. I must say the ideas in this talk are mind-blowing. When he says the only way to make a butterfly is through evolving a genome, which takes time and energy, and even then it cannot be predicted that a butterfly brain will be the result, is he saying that if we create AI by evolving (i.e. self-learning) neural networks, we cannot predict what the outcome will be? Interesting. We are still uncertain about the outcome of AI development.
@hyperduality2838 · 2 years ago
Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@Gabcikovo · 10 months ago
2:48 Joseph von Gerlach 😃
@UtraVioletDreams · 2 years ago
I'm really curious about when simulation becomes reality. Where is the border between those?
@jameskirk6163 · 2 years ago
Recent UFOs: balloon in the sky, drone platform, blinking lights seen from a ship, drones. The rotating craft has four visible jets creating a halo around the object; the chase is in a circle, suggesting that there is one control point.
@rmt3589 · 2 years ago
I want to get a copy of that book (perceptions, I think it was called) and use it as a checklist. Edit: Perceptrons
@CV_CA · 2 years ago
40:31 Sadly, John Conway died from Covid on April 11, 2020 :-( In the early eighties I read an article about the Game of Life. I could not wait to go home and write a program to simulate it.
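For anyone who wants to relive that: a minimal Game of Life in Python, using the standard birth-on-3, survive-on-2-or-3 rules over a sparse set of live cells:

```python
from collections import Counter

def life_step(live):
    """live: set of (x, y) live cells. Returns the next generation."""
    # Count live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally
```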
@martinfederico7269 · 2 years ago
Lisa Saysss!
@roseligomes9568 · 1 year ago
Sensational
@michaelwoodsmccausland915 · 2 years ago
The Dao! Is the Universal Consciousness!
@sameliusastraton4670 · 2 years ago
Trinary code (zero, one, maybe), fuzzy logic. Feed that into Rule 110 at 42:58. What happens?
@rikimitchell916 · 2 years ago
I'm detecting an unexpected axiomatic fallacy... namely that information = intelligence, which is wholly untrue. Granted, in the absence of information intelligence is undetectable, but it is the contextual relationships that transform information/data into knowledge via experience (the giving of contextual weighting). ERGO one does not "make intelligence"; one has experience and from it distills knowledge.
@nexasdf · 2 years ago
Cool! :)
@SukrutHassamanis · 2 years ago
Honour is ours
@CandidDate · 1 year ago
What does the Game of Life have to do with neural nets?
@TheNaturalLawInstitute · 2 years ago
Intelligence: the rate of adaptation of the behavior of an organism (regardless of its composition) to opportunities in its environment; an organism capable of action, given the complexity of its ability to act, of causing changes of state in the external world that directly, indirectly, individually or cumulatively obtain the energy necessary to continue (persist), adapt and reproduce. At present the only way to do this is to produce a set of sensors and complexities of motion that, by trial and error, train a network of some degree of constant relations between sensors, to create a spatial-temporal predictive model for reacting, organizing and planning actions, and to recursively predict a continuous stream of iterations of actions in time. At first this will appear narrow, but you will eventually understand by trial and error that it explains all scales of all cooperation, even if the machine just works for us at our command.
@hyperduality2838 · 2 years ago
Evolutionary learning is a syntropic process! Randomness (entropy) is dual to order (syntropy, learning). Positive feedback is dual to negative feedback. Making predictions is a syntropic process! Growth is dual to protection -- Bruce Lipton, biologist. Syntropy (prediction, projection) is dual to increasing entropy -- the 4th law of thermodynamics! Energy is duality, duality is energy. Entropy is dual to evolution (syntropy) -- Janna Levin, astrophysicist. "Always two there are" -- Yoda. Duality creates reality.
@TheNaturalLawInstitute · 2 years ago
@@hyperduality2838 If you can't say it operationally, you don't understand it. What you are doing is using analogies, or what we call pseudoscience. There is no magic to consciousness. It's trivial. We just can't introspect upon its construction any more than how we move our limbs.
@hyperduality2838 · 2 years ago
@@TheNaturalLawInstitute Gravitation is equivalent or dual to acceleration -- Einstein's happiest thought, the principle of equivalence (duality). Duality is a pattern hardwired into physics & mathematics. Energy is duality, duality is energy -- Generalized duality. Potential energy is dual to kinetic energy -- Gravitational energy is dual. Action is dual to reaction -- Sir Isaac Newton. Apples fall to the ground because they are conserving duality. Electro is dual to magnetic -- Maxwell's equations. Positive charge is dual to negative charge -- electric charge. North poles are dual to south poles -- magnetic fields. Electro-magnetic energy (photons) is dual, all energy is dual. Energy is dual to mass -- Einstein. Dark energy is dual to dark matter. The conservation of duality (energy) will be known as the 5th law of thermodynamics! Everything in physics is made from energy or duality. Inclusion (convex) is dual to exclusion (concave). Duality is the origin of the Pauli exclusion principle which is used to model quantum stars. Bosons (waves) are dual to Fermions (particles) -- quantum duality. Space is dual to time -- Einstein.
@hyperduality2838 · 2 years ago
Complexity is dual to simplicity.
@emchartreuse · 1 year ago
Loved the lecture... one thing, though. That picture of the fruit fly with teeth was terrifying and made me google "do fruit flies have teeth" even though I know they don't. It made my brain feel many confusing things at once. I suggest you scan someone's brain while they look at that picture.
@johanlarsson9805 · 10 months ago
I don't really agree with the summary at 23:00. It wasn't like there was no support for neural nets before 2011. I started doing mine in 2008-2009 after watching a lot about them on YouTube, particularly the one showing a neural net recognizing all the characters, where they showed how the individual cells light up for each recognition. We who knew what ANNs were and understood them were adamant supporters that this was the way forward.
@David-tp7sr · 2 years ago
14:17 RIP Canada.
@michaelwoodsmccausland915 · 2 years ago
Intelligence must be defined
@yankeepirate8927 · 2 years ago
I have an artificial intelligence quantum computing patent, issued only after I taught a USPTO Patent Examiner how quantum computing works, and after 40 years developing AI systems I define it as "Software or any set of instructions able to measure and gauge metrics, store and remember data, learn from the data and develop and apply new techniques to modify its behavior without a software engineer having to upgrade the code or the machine." In other words, intelligent and effective behavioral modification from self-analysis; a machine that acts like its own psychoanalyst and can rank and rate outcomes: that did/didn't work & this does. A wild, risk-free method to accelerate learning entails a high frequency of tests which do not endanger or destroy the machine; a risk-averse technique humans don't practice often enough, our climate catastrophe being one huge example. A smart AI system would never burn fossil fuels to the point of destroying all life on Earth, just as an AI self-driving system wouldn't change lanes into an occupied lane, but could "learn" how much of a minimum gap is required to safely scoot into a tight opening, then add a margin of safety.
@StephenRayner · 2 years ago
If there’s a proof 44:51 that the pattern can not be figured out without doing the computation then this is an answer to P vs NP