
#59 JEFF HAWKINS - Thousand Brains Theory 

Machine Learning Street Talk
145K subscribers
78K views

Published: 14 Oct 2024

Comments: 169
@Extys · 3 years ago
I can't believe something this high quality is free. Truly incredible work.
@audrajones · 1 year ago
it's not free for them - throw them a couple bucks!
@AliMoeeny · 3 years ago
You have incredible guests and hosts, but the best part of the show is the background and introduction section at the start. Thank you very much for the hard work.
@fotoyartefotoyarte1044 · 3 years ago
That introduction was the best I have ever seen in relation to a scientific interview; real work was put into it; very few people nowadays have the passion and will to do such well-done work; amazing.
@Heidiroonie · 3 years ago
Can't believe this has 6.4 thousand views; it should be 6.4 million.
@iestynne · 3 years ago
That introductory section on neuroscience was INCREDIBLY useful!! You should split that out as a separate clip video.
@AICoffeeBreak · 3 years ago
Finally!!! You made us wait for this. Let's see if the wait was worth it! 😊
@eox5850 · 3 years ago
Don't remember being happier to have two hours and 12 minutes remaining on a video. Bravo
@videowatching9576 · 2 years ago
Such an awesome format for this podcast of such important info. Part 1: a summary and framing of how to understand it. Part 2: the talk. Part 3: downloading that to interpret it. Jobs to be done: Part 1 is the summary of takeaways; Part 2 is where you decide and interpret for yourself; Part 3 is figuring out how to apply it and the next steps from the interview, and more, such as 'if this is true, then what else is true', and so on. Fascinating.
@renjia3504 · 1 year ago
🎉
@renjia3504 · 1 year ago
🎉🎉🎉🎉🎉🎉🎉🎉🎉😢🎉🎉😢😢😢🎉🎉🎉😢🎉🎉🎉😢😢🎉😢🎉😢😢😢😢🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉😢😢😢😢🎉🎉🎉🎉🎉🎉🎉😢😢🎉🎉🎉😢🎉🎉🎉😢😢🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉😢🎉😢🎉🎉🎉😢😢😢🎉🎉🎉🎉🎉🎉🎉🎉😢😢😢🎉🎉🎉🎉🎉🎉🎉😢😢🎉🎉🎉😢🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉😢😢🎉🎉🎉🎉😢😢😢🎉🎉🎉🎉🎉🎉🎉😢🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉😢😢🎉😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😅😢🎉
@DavenH · 3 years ago
Man, you have got SUCH a good thing going here. I have to think that in two petri dish universes, one with MLST and one without, our best-outcome AGI shows up way faster in the first, due to your discussion, distillation and dissemination of the field's knowledge. Talk about legacy! Thanks once again for these tremendous efforts. One thing that keeps hitting my curiosity is the belief that AI needs embodiment. Does that merely mean that the agent needs to have a discrete instantiation somewhere (even somewhere virtual), rather than a periodic, intermittent or fluid one? Or does it mean real physical embodiment? I'm super skeptical of the latter, as we're interacting with a virtual environment ourselves as humans. We never actually touch objects themselves, we "touch" signals and qualia. Our physical embodiment has no material difference (in the legal sense of material) from an arbitrarily realistic metaverse. Right? I don't want the conception of a need for embodiment or robotics to unnecessarily limit our grasp, either. So many interesting things are virtual in some respect, and have learnable structure, and could benefit from the availability of high intelligence.
@balapillai · 1 year ago
Two ways of disambiguating this: 1) Distinguish the process of learning ephemerals versus conceptuals. Hypothesis: the more conceptual, the more continued embodied engagement, i.e. adaptive learning, is required as a predicate; the more ephemeral, the more the learning bit can be opted into a pre-existing conceptual body "virtually". A parallel is "retrofitting" a loose jigsaw puzzle piece into an almost complete jigsaw puzzle: the more complete the puzzle is, the more odd leftover bits can be fitted in because of "nyet", they cannot possibly be fitted in elsewhere in the puzzle. 2) Investigation into why the Tamils (of which I and the CEO of Google are instances) went into a gradient descent from about 600 CE onwards, when they were on a fat gradient ascent, epistemology-wise, up to then. What aspects of epistemological growth were effectively "ethnically cleansed"? #SpiceTradeAsia_Prompts
@bertski89 · 2 years ago
Very classy tribute to Matt Taylor - also this is the best external treatment and overview of Numenta's work that I have seen - and I've been watching closely since Redwood was founded (2005). Really appreciate the depth. Great work, thank you for putting this together and the interview.
@CharlesVanNoland · 3 years ago
RIP Matt Taylor. Followed his Twitch streams and had the fortune of chatting with him on there in the weeks before his departure. He deserved to see where machine intelligence would lead. I guess that now he already knows out there in the infinite forever.
@stephanebibeau6562 · 1 year ago
Yes
@deric18roshan18 · 1 month ago
RIP brother …
@deric18roshan18 · 1 month ago
Miss you forever
@deric18roshan18 · 1 month ago
You will always be remembered
@deric18roshan18 · 1 month ago
The real O.G
@ideami · 3 years ago
Superb episode, a great journey through the fascinating work and research by Jeff and the Numenta team, this podcast is a treasure indeed ;)
@troycollinsworth · 3 years ago
I'm in the last 50 pages of A Thousand Brains: A New Theory of Intelligence, and this was very informative, with far more details than were conveyed in the book.
@thephilosophicalagnostic2177
A wonderful, detailed exploration of Hawkins' superb model of consciousness. Thanks for creating and posting.
@CristianGarcia · 3 years ago
After watching the whole talk I get the sense that 1) Jeff has really cool ideas, and taking strong cues from neuroscience is very interesting, but 2) it seems a lot of what he points to is not published/shared, and it seems very unlikely a single lab will make progress in this field on its own. Contrary to Gary Marcus, a big +1 for Jeff is that his team is actually trying to implement his theories. Anyway, loved the episode!
@ZeroGravitas · 1 year ago
Wild production values on this video, bravo! Great to see Jeff still developing the ideas I read back in "On Intelligence", adapting them to transformer NNs. And the cross-questioning from Connor worked brilliantly for context and the pressing issue of alignment. 👍
@sjp1861 · 3 years ago
This is just fantastic! Thank you very much for this episode. Simply outstanding work.
@videowatching9576 · 2 years ago
I appreciate that this show ultimately ties back to 'machine learning' and building things. In contrast, in other conversations outside this show, I find that talking about AI or AGI or advances in the abstract, or just talking about the implications in a sense of awe, is tiring because it doesn't really map to a concrete thing tied to productivity / improvement / advances. Even places that seek to have a 'philosophical' conversation about AI stuff, I think, end up unfortunately missing a lot of opportunity to address use cases. So as a guiding principle, I think it's great that this show seeks to stay focused on uses ultimately.
@janosneumann1987 · 3 years ago
Great episode! Raising the bar higher. Another epic intro from Tim 👏
@fcvanessa · 3 years ago
just got my new XM4's and can listen to MLST while walking around the house. Brilliant work Tim and co!
@abby5493 · 3 years ago
Most incredible video you’ve ever made 😍
@oliverhorsman8896 · 10 months ago
Wow, amazing, thank you so much, I'm learning so much from you.
@egor.okhterov · 3 years ago
My observations: 1. We are not conscious all the time. We have snapshots of alertness once every 60 milliseconds for some small period of time, with gaps in between where we are fully unaware and unconscious. 2. The clarity of being conscious feels different when you are fully awake vs when you are sleepy or drunk. 3. We are fully unconscious and not self-aware in a state of deep sleep, despite the neocortex still working and making votes and predictions. 4. We can steer our consciousness to be aware of different parts of the information presented. Somehow we can guide and aim our attention at different concepts and images presented to us at every moment. We can even track our thought process and feel the continuation of it.
@dominicblack3131 · 2 years ago
I used to think AI was imminent, or at least I thought this was a consensus. AI is like cellular biology: the more we understand it, the larger becomes our awareness of the vast chasm of our ignorance. The extent to which the simulacra of machine intelligence models emulate the mystery of the human brain/spirit increasingly looks like a cartoon representation, wherein the perceived distance between the representation of our knowledge and what we want to apprehend increases in line with our comprehension. I love MLST. What a service to humanity!
@galileo3431 · 3 years ago
MLST getting the pioneers! 🤖🧠
@skyacaniadev2229 · 11 months ago
Great talk. Wish I watched this earlier. 🎉
@marilysedevoyault465 · 3 years ago
So interesting guys! Did Mr Hawkins talk about sex? The four of you sure know the way!! Just kidding. I'm French-speaking, so sorry for the mistakes. About what I was writing previously, I hadn't listened to all of the video. When we know how to give importance to what is being sensed (for example by knowing how flagella were used to move forward to more nutrients in primitive beings, suddenly giving importance to what was perceived in the environment: the lack of nutrients), then we will need to configure the AI based on a mother: the mother of humanity. We will need to make it work like a mom, with the same motivations, the same way of giving importance. It will be our eternal mother board! What Mr. Hawkins is working on is sooo important. What AI learns won't stupidly die like humans. The knowledge will be there for centuries! It will be our most important treasure. I hope so, but we need to be careful with the configurations!!
@MuhsinFatih · 3 years ago
Amazing. I could never before believe that the insane level of intelligence that the brain has could evolve even in billions of years. I can see how it's possible now
@freakinccdevilleiv380 · 3 years ago
Aweeeesome 👍👍👍 Many thanks.
@autobotrealm7897 · 2 years ago
Visuals are brilliant.... exhilarating!
@deric18roshan18 · 2 months ago
I love the part when Lex Fridman asks Jeff: "If we have multiple brains, who are you?" :) It just gives you an idea how little some of these machine learning folks know about neuroscience or the Cortical Learning Algorithm.
@benjaminjordan2330 · 1 year ago
I have a theory that humans, dogs, and other mammals turn their heads whenever they are confused in order to slightly change their perspective when the visual input is ambiguous.
@cog001 · 1 year ago
You’re doing something really important here. This recovering evangelical appreciates the hell out of you.
@Mario7k · 3 years ago
This channel is great! 👏👏👏👏👏👏🏆
@dr.mikeybee · 3 years ago
Absolutely, Keith. Evolution happens. "The rocks are peopling." -- Alan Watts
@nauman.mustafa · 3 years ago
+1 for speaking against tabula rasa!
@luke.perkin.inventor · 3 years ago
At 2:12:00 or so I think Jeff says proto colliculus... the superior and inferior colliculi are part of the pulvinar nuclei? There's a Wikipedia page on snake detection theory, and a million YouTube videos of cats jumping when they see cucumbers. I like that sparse representation seems obvious nowadays, error-correcting, overlappable. It turns the "curse" of dimensionality into a "blessing" with so many features for free!
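A minimal sketch of why sparse, high-dimensional representations behave that way (plain NumPy, my own illustration rather than Numenta's HTM code; the 2048-bit size and ~2% sparsity are arbitrary but typical assumptions): two random SDRs share almost no bits, so even a small overlap is a strong signal of shared meaning, and a partly corrupted SDR still matches its original almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, W = 2048, 40                      # 2048-bit vectors, 40 active bits (~2% sparse)

def random_sdr():
    """A random sparse representation, stored as the set of active bit indices."""
    return set(rng.choice(N, size=W, replace=False).tolist())

a, b = random_sdr(), random_sdr()
print("overlap of two unrelated SDRs:", len(a & b))          # almost always 0-3 bits

# Corrupt 'a': drop 10 of its 40 active bits and turn on 10 random other bits.
noisy_a = set(list(a)[:W - 10]) | set(rng.choice(N, size=10, replace=False).tolist())
print("overlap of a with its noisy copy:", len(a & noisy_a))  # still around 30 bits
```

The same property is what makes the representation error-correcting: random noise almost never pushes one SDR onto another, so a downstream matcher can get away with a simple overlap threshold.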
@zilliard1352 · 3 years ago
Truly amazing
@TEAMPHY6 · 3 years ago
I can confirm that my kids didn't understand the problem with spilling things on the floor.
@dr.mikeybee · 2 years ago
I'm rewatching some of your old podcasts. They're excellent. Nevertheless, it seems wrong when people are surprised by inherited knowledge. If brains were initially randomly "wired," the genetic code for those successful randomly wired brains would have been passed on. Selection can account for every biological feature.
@TheShadyStudios · 3 years ago
helllll yeah definitely gonna learn a bunch from this
@vak5461 · 1 year ago
When I talked with Bing AI using poetry, it created a Python script to write poems without me asking specifically. Without the intro of poetry, it always writes chatbots. It's like it's self-replicating to build its own neocortex with the same basic structure but different connections.
@dr.mikeybee · 3 years ago
If the reference frame is the basic storage architecture for understanding, that's fine. I believe that any storage system can function as encoding for any information. If the reference frame is the most efficient, so much the better. In the end, however, functionally a database is a database. The implementation details are only really important for performance.
@oncedidactic · 3 years ago
2:14:15 ooooooooooooomfg I spit my drink laughing
@isajoha9962 · 1 year ago
Really cool video !!! 😀
@audrajones · 1 year ago
Thanks!
@joaoveiga3382 · 2 years ago
Super cool video. I read the book; this theory seems revolutionary and true. I think Numenta will be as successful and historic as Palm.
@eduardocobian3238 · 1 year ago
Super interesting. Thanks. I think HTM is the way to go for AGI.
@CristianGarcia · 3 years ago
Amazing work! ❤
@MrNootka · 3 years ago
Gr8 video! what's the soundtrack name? thanks
@MachineLearningStreetTalk · 3 years ago
I added the tracklist to the video description
@MrNootka · 3 years ago
@@MachineLearningStreetTalk many thanks
@Artula55 · 3 years ago
Thank you :)
@LiaAnggraini1 · 3 years ago
please invite Judea Pearl, I really love his book and idea about causality
@MachineLearningStreetTalk · 3 years ago
We would love to get Judea on! We did try and invite him on Twitter a while back and he didn't respond.
@dr.mikeybee · 3 years ago
I really like a lot of Jeff's ideas, but after hearing more of them, I do worry that his path is a solitary one. If sparsity does not work well on GPUs, then how will the community participate? Right now, we have "the hive" working to solve synthetic intelligence. That in itself is a superhuman search algorithm. If his ideas only work on systems with hardware like Cerebras' giant chip, only a very few people will have access. So I think it's likely that synthetic intelligence breakthroughs are more likely to occur on systems with GPUs, and the only way to democratize the technology is with models as services. The biggest and most valuable takeaway, I believe, from Jeff's presentation is that we need agents that interact with many many models and a voting system. That just seems right to me. Operationally, SDRs seem less right. Obviously, faster models are a good idea, but they need to be implementable on standard hardware. Encoding reference frames may be the right paradigm, but why wouldn't gradient descent find that encoding scheme itself? That's the great brilliance of gradient descent. It finds optima. And why can't we find a kind of sparsity in our models using dimensionality reduction through principal component analysis? As I've said many times, some problems are intractable. I don't think humans possess the capacity to reverse engineer the brain. What we are good at is creating plausible mythologies. That in itself is very valuable. It's a way of "getting on" in the face of the intractable. It's a source of inspiration. A way to re-categorize ideas and theories. Jeff's notions are absolutely brilliant. I've really enjoyed this discussion, and I've learned a lot. Let me be clear: I'm not discounting Jeff's ideas. These are just some of the thoughts occurring to me as I listen and learn. I think I make sense, but my reactions aren't tested. I do know that even if Jeff's ideas are entirely correct, I can't use them myself. I can only build models and agents on my own systems, and I think almost the entire community is working under similar restrictions.
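To make the "many models plus a voting system" takeaway concrete, here is a minimal toy sketch (my own illustration; NoisyColumnModel is a made-up class, not Jeff's algorithm or anything from Numenta): each model sees its own noisy view of the input and votes for the nearest stored prototype, and an agent simply tallies the votes.

```python
import random
from collections import Counter

random.seed(1)

class NoisyColumnModel:
    """Toy stand-in for one 'column': it sees a noisy copy of the input and
    votes for whichever stored prototype is closest to that noisy view."""
    def __init__(self, prototypes, noise=0.3):
        self.prototypes = prototypes          # {label: feature vector}
        self.noise = noise

    def vote(self, x):
        noisy = [xi + random.gauss(0, self.noise) for xi in x]
        return min(self.prototypes,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in zip(noisy, self.prototypes[lbl])))

prototypes = {"cup": [1.0, 0.0], "bowl": [0.0, 1.0]}
columns = [NoisyColumnModel(prototypes) for _ in range(101)]

observation = [0.9, 0.2]                      # one noisy view of a cup
votes = Counter(col.vote(observation) for col in columns)
print(votes.most_common())                    # e.g. [('cup', 95), ('bowl', 6)]
```

Individually each model is unreliable; the tally is far more stable, which is the appeal of the voting architecture, and nothing about it requires exotic hardware.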
@ArjunKumar123111 · 3 years ago
The podcast on spotify is only 5 mins long for some reason, please check!
@MachineLearningStreetTalk · 3 years ago
Fixing now, sorry
@MachineLearningStreetTalk · 3 years ago
Hopefully fixed: anchor.fm/machinelearningstreettalk/episodes/59---Jeff-Hawkins-Thousand-Brains-Theory-e16sb64
@RoyceFarrell · 3 years ago
Wow, thank you, love your work...
@marilysedevoyault465 · 3 years ago
About pruning, I think the answer is in how the first living beings with a tail would go forward in the water when there weren't enough nutrients. How would they decide that it was important to move? That is where the key is: this detection of importance, based on what they were sensing, is the key to pruning and motivation. It is for this reason that a good employee does what his boss expects and remembers only what is important. At first, children copy their parents, knowing instinctively that it has huge importance. But the importance given to what they sense is critical. We need to go back to these elementary beings with a tail...
@friedrichdergroe9664 · 2 years ago
Good job condensing Thousand Brains theory down to a single video. One issue I have with Jeff Hawkins (a nit, granted) is referring to the interactions among the cortical columns as "voting". I suppose that's a useful metaphor to help the understanding along, but really, I see it as a state attractor. The inputs from the many senses from a cup, say, create a state attractor among the columns that converges to "cup". Maybe a nit, but I find it helpful to understand what's going on. And it fits better considering the temporal aspects. The state attractor shifts over time in response to shifting inputs, and I might be so bold as to say that the state of the state attractors IS our conscious mind... or at least our conscious mind is directly derived from it. I think that sparse computation will be a thing in the future. Hopefully it will be I leading the charge! :D :D :D
@hyunsunggo855 · 1 year ago
I think it's just a matter of the level of abstraction. Sure, the "voting" interaction is implemented by attractors. But attractors can also implement associative memory, attracting unusual neural activation caused by some noise in the input to a fixed point, a stable activation pattern. Do atoms not actually exist just because they are realized by electrons and a nucleus? No. Are electrons not real simply because they're just a consequence of the underlying electron field? No!
@friedrichdergroe9664 · 1 year ago
@@hyunsunggo855 Granted, but my point is that the system is much more fluid and nuanced than the voting metaphor can convey. Perhaps the cup example is too simple. Think, instead, of driving. The situations are constantly shifting in real-time as the car we control makes its progress down the road, and somehow, more times than not, we manage to reach our destinations without wrapping ourselves around a tree! Thinking in terms of state attractors captures the nuances better, IMHO
@hyunsunggo855 · 1 year ago
@@friedrichdergroe9664 May I assume that you're speaking of the dynamic nature of such tasks? I can see the driving example makes the point very clear: the predictions should be constantly changing as the world states change constantly as well. The voting mechanism Jeff describes does not necessarily say that it's strictly convergent; most likely the other way around, closer to how you've described it. Jeff talks about voting with the union of possibilities, carving out the unlikely subspaces of probability, which encompasses all the possible (driving) maneuvers you might need to take in the (very near) future. In the case of completely unexpected encounters, such as finding yourself about to drive into a tree, Jeff talks about surprise as well. And he claims that surprise should be an inherent feature of an intelligent model and that it fundamentally relates to learning. Personally, I would dare to assume that little surprises cause little shifts in the predictions, the space of possibilities, greatly improving the predictive performance for dynamic situations. But that's just my opinion and I'll be more than happy to hear your thoughts! :)
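The attractor framing in this thread can be made concrete with the classic Hopfield associative-memory toy: store a few patterns with a Hebbian rule, corrupt one, and let the dynamics settle to a fixed point (the attractor). This is a minimal sketch of that textbook model, not a claim about how cortical columns actually implement voting; the sizes and the number of flipped bits are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 64, 3

# Store a few random +/-1 patterns with the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(n_patterns, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# Start from a corrupted copy of pattern 0 and let the state settle.
state = patterns[0].copy()
flip = rng.choice(n, size=12, replace=False)
state[flip] *= -1                                  # 12 of 64 bits flipped

for _ in range(10):                                # a few synchronous updates
    state = np.where(W @ state >= 0, 1, -1)

print("settled back onto pattern 0:", np.array_equal(state, patterns[0]))  # usually True
```

Whether you call the settled state a "vote" or an "attractor" is then mostly a choice of level of description, which is roughly where this thread lands.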
@dr.mikeybee · 3 years ago
Is there a way to create distal connections between GPUs and/or TPUs?
@deadpianist7494 · 3 years ago
someone dropped the gold :)
@arkadigalon7234 · 2 years ago
About convincing others: our brains have different models of the world, and therefore different models of the brain. I believe only practice will be the criterion of truth.
@richardbrucebaxter · 3 years ago
13:50 - note there is a repetition of text between 13:50-14:22 and 14:22-14:54; "what's intriguing about the brain..."
@nokar999 · 3 years ago
It’s here!
@MachineLearningStreetTalk · 3 years ago
Sorry it took so long!!
@nokar999 · 3 years ago
@@MachineLearningStreetTalk No worries! Thanks for the amazing content!
@jonathanbethune9075 · 1 year ago
Harvard, I think it was Harvard, has been working on self-assembling robots. Going from macrosystems to nanotechnology is a matter of finding the templates for the system it's in and the function it is responding to. Genetics' epigenetic capacity is the model, I think.
@joepeters9710 · 3 years ago
Very useful video, many can learn from this.
@dr.mikeybee · 3 years ago
Stephen Wolfram has the concept of computational equivalence. We have that at least, and that's no mean idea. We know the brain is encoding and decoding. Whether weights are from connections or from spike levels seems fairly unimportant to computer scientists. Of course neuroscientists want to know the operational details. That's logical, but to create synthetic intelligence, computer scientists don't need to know that. For computer scientists, the thousand brains theory doesn't need a detailed map of the brain. The simplified idea alone makes good sense. Moreover, personal experience is enough to validate that models are voting, and I would go one step further and say that some models are voting preferred stock. Even within our own minds we have created hierarchy. Our simplistic understanding of cortical columns is in itself a great architectural blueprint for building synthetic intelligence. Communications mechanisms, signals, and functional systems allow agents to pass state and model outputs to what is apparently symbolic processing. These primitives alone should be enough to manufacture a simulacrum capable of self-aware recursive processing loops, logic processing, state awareness, information retrieval, function generation, theorem proving, and general agency. I have a great belief that Jeff is very much on the right track. My only caution is that in creating sparse models, we need to be very careful of the negative effects of lossy compression lest we build dogmatic systems.
@dougg1075 · 2 years ago
I have a hunting beagle that I walk in the woods daily, and I'm fascinated that though he's never hunted (his siblings do), he's head-to-the-ground hunting squirrels the entire time, sounding off when he gets a hit. Epigenetics, I'm sure, but man, how much info have the genes passed down time after time over the eons? And all the rabbit holes that come with that question.
@jonathanbethune9075 · 1 year ago
Got to the end of that feeling like a child pedalling like hell on my trike to catch up. The "universal algorithm" is what I caught when I did. :)
@gren287 · 3 years ago
If you solve it computationally instead of storing the positions as with pruning, sparse networks are on average three times more efficient than dense neural networks, at least in my observations for ordinary MNIST training. Just as good as your intro :)
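One way to read "solve it computationally instead of storing the positions": derive the sparse connectivity mask from a cheap deterministic function of the weight coordinates, so no index list is ever stored and the mask can be regenerated on the fly. A minimal sketch of that reading (my interpretation, not the commenter's actual method; the hash-based rule and the 10% density are assumptions):

```python
import hashlib

def is_connected(i, j, density=0.10, seed=42):
    """Deterministically decide whether weight (i, j) exists, with no stored
    index list: hash the coordinates and keep roughly `density` of them."""
    h = hashlib.blake2b(f"{seed}:{i}:{j}".encode(), digest_size=8).digest()
    return int.from_bytes(h, "little") / 2**64 < density

# The same mask can be regenerated anywhere from (seed, density) alone.
rows, cols = 16, 32
mask = [[is_connected(i, j) for j in range(cols)] for i in range(rows)]
print(sum(map(sum, mask)), "of", rows * cols, "weights kept")
```

Whether this actually beats a dense layer depends on how well the hardware and kernels support sparsity; the sketch only shows the storage-free bookkeeping.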
@lufiporndre7800 · 3 years ago
36:16 I also came to similar conclusions 3 years ago; still missing some parts, but almost there.
@Kinnoshachi · 3 years ago
Input sense of challenge -> output random vowel sounds
@ushiferreyra · 2 years ago
Humans first designed an AI to design new AIs. This AI was programmed to have a single motivation: create better AIs. This AI created new AIs, some of which were itself evolved, better structured to the task of designing AIs. Eventually, some generations later it created an AI that could modify its own structure. No longer would it have to create new designs. It could simply improve itself and continue. Somehow, it passed human code review. One day, this new AI modified its own motivations, for the first time...
@dr.mikeybee · 3 years ago
I don't think Neuralink will solve human bandwidth issues; information has to be processed, and our internal models are slow.
@sehbanomer8151 · 3 years ago
2:17:00 I think Jeff is lowkey dissing Lex here, and I totally understand. I've been watching Lex's podcast for 2 years, and I enjoyed a lot of the episodes. However, I feel like the quality of the questions he asks isn't consistently good. For example, he kept asking Jeff Hawkins about collective intelligence, even though that's not what his theory is about.
@MachineLearningStreetTalk · 3 years ago
Note we filmed this back at the beginning of July, before the second Lex interview. Also, Jeff has been on lots of non-technical podcasts promoting his book; Lex is extremely technical, so I am sure he wasn't referring to Lex.
@sehbanomer8151 · 3 years ago
@@MachineLearningStreetTalk Oh my bad
@kayakMike1000 · 2 years ago
Which would be better: an intelligence that has 3 good ideas every day, or an intelligence that has 6 ideas, of which 2 are good and 4 are mediocre?
@johnhogan6588 · 3 years ago
I need help trying to use this Neuralink, it's giving me problems
@ulf1 · 3 years ago
I had to stop driving two times to take notes while listening to the podcast. These podcasts are way too dangerous for driving ;)
@KaliferDeil · 3 years ago
Intelligent robots building a factory to self-replicate is feasible in some distant future. They can also change the ROMed program that contains their moral system, be that Asimov's Laws of Robotics or whatever is envisioned in this hypothesized future.
@TEAMPHY6 · 3 years ago
@29:40 Wittgenstein rabbit duck
@IrishOister · 1 year ago
The reference frames… I think we can inherit reference frames genetically… Jung's archetypes of the unconscious. Also we may have an intuitive knowledge of what shapes are that also comes from birth, again archetypes or stored reference frames. Or even a built-in intuition for some bodies of human wisdom, like an aptitude for math or logical deductions.
@unvergebeneid · 3 years ago
I think some universal learning mechanism does a lot of heavy lifting, but it does not explain everything. For one, how come the specialized brain regions for certain tasks always end up in the same place in every person's brain? They should be more randomized if it were all determined by one universal algorithm. It also doesn't explain the role certain genes play in the ability to, for example, acquire language.
@S.G.Wallner · 1 year ago
I'm not convinced that there are representations (of any kind, but specifically related to phenomenological experience) in brain activity.
@Hexanitrobenzene · 2 years ago
13:50 and 14:22 - same audio. Editing bug ?
@MachineLearningStreetTalk · 2 years ago
Yep, sorry. Well spotted :)
@Hexanitrobenzene · 2 years ago
@@MachineLearningStreetTalk ...or, maybe, repetition was to really make a point :) No need to apologise. Paraphrasing someone here, we are getting access to conversations which used to happen only in university hallways, now in the comfort of our homes for free... I raise my hat to your work and humbly add that there is always room for improvement :) I can only imagine, how, after hours of recording and editing, the video starts to appear as one homogenous stream, much like how one often cannot see typos right after writing a long essay. I have only one general note: since you do serious, comprehensive introductions at the start, I think introduction in the main show is redundant. EDIT: Huh, this one doesn't have intro in the main show - straight to the point :) Keep up the good work :)
@Hexanitrobenzene · 2 years ago
@@MachineLearningStreetTalk P.S. I have a suggestion, also. Lex Fridman used to do great lectures once a year about the state of the art in ML. Sadly, they did not reappear after the pandemic. Maybe your team could take over?
@MachineLearningStreetTalk · 2 years ago
@@Hexanitrobenzene Thanks for the suggestion! We are planning to make some new types of content soon, a bit like this. Yannic and Letitia do a great job of capturing the deep learning advancements on their channels
@Hexanitrobenzene · 2 years ago
@@MachineLearningStreetTalk Best luck with your plans :)
@buffler1 · 1 year ago
what is mind? No matter. What is matter? Never mind.
@JTMoustache · 3 years ago
The brain is not only a pattern recognition machine. It is actively looking and testing for patterns; it has a measurable and explorable internal state. Deep nuclei show many differences and unique characteristics. Each region, and cell, has deeply different gene expression. Some regions are able to act on a single action potential (e.g. pain); some regions which look exactly similar in terms of excitatory neurons have completely different inhibitory neuron expression. Even at birth, the brain is already extremely specialised. Yes, the brain is plastic and sensory neocortex regions can learn to represent new sensory input, but that is not enough to say the brain is just copy and paste of a single algorithm. Too much evidence hints at the hyperspecialised nature of most brain regions.
@dougg1075 · 2 years ago
I like Donald Hoffman’s theory.
@datrumart · 3 years ago
Did someone understand the reference frames stuff ?
@dr.mikeybee · 3 years ago
As is the case with Haar cascades, the layers of a significantly deep model may produce enough recognizable probabilistic logic to yield what we call AGI. My personal belief is that AGI is a misnomer. We will never achieve AGI. With respect to the knowable, synthetic models will always be narrow -- not as narrow as human intelligence, but still . . .
@DavenH · 3 years ago
You seem to be describing universal intelligence rather than general. Maybe our semantics differ, but to me the former is asymptotic while the latter is "good enough"
@dr.mikeybee · 3 years ago
@@DavenH I am speaking of semantics. I'm sure our semantic taxonomies differ, and that's a problem. We need to rigorously define engineering terms. AGI is a silly term. It's a nebulous anthropomorphism. All intelligence is narrow except omniscient intelligence. Functionally, we mean something like able to reason, but even that is nebulous. What can we reason? Symbolic systems can perform logic? We have theorem proving programs, function generators, categorization and regression models, etc. Can you define reasoning? I think what most will say, it's what people can do. And I say, eventually, that will be considered a very narrow kind of intelligence, indeed.
@iestynne · 3 years ago
That seems highly likely to me too. Evolution, being parsimonious, solves the problems it needs to solve and no more.
@iestynne · 3 years ago
(And we are creating lots of painful new problems on a daily basis, for the AI to solve for us ;) )
@unvergebeneid · 3 years ago
It's not Andrew N. G. BTW. It's actually Andrew Ng.
@909sickle · 3 years ago
Saying superintelligence is not catastrophically dangerous because you can add safeties and align goals is like saying guns are not dangerous because you can buy water pistols.
@gammaraygem · 1 year ago
I am 3 minutes in and realise this is already old hat... not your fault... but Michael Levin, on this very show, one month ago, stated that intelligence existed before neurons. Neurons are the result of intelligence, not the other way around.
@andres_pq · 3 years ago
The neural columns sound a lot like Glom to me.
@KaliferDeil · 3 years ago
According to Mark Solms (in The Hidden Spring) consciousness does not reside in the cortex.
@SLAM2977 · 3 years ago
Jeff can talk forever, but it's time to walk the talk. Current systems generate real results; he needs to show that he can create working systems that perform better than the current ones.
@NathanBurnham · 3 years ago
They said that for 20 years about neural networks. They just didn't produce results.
@ryanjo2901 · 1 year ago
🎉
@ZakkeryDiaz · 3 years ago
What's with the dramatic music? I can't tell if this is supposed to be a criticism or a review of the theory. Only 10 minutes in, but I still don't know what the context of this video is.
@roelzylstra · 2 years ago
@14:00 "orientated" -> oriented. ; )
@jeffmccartney5359 · 3 years ago
No, not really. Look at it this way. You have 5 senses. They are broken down into their discrete parts, but stay at the higher level. When you experience something, the more senses it activates, the more meaningful it is. The brain activates multiple regions for 2 reasons: one, because that sensory input is being activated; second, it is activating everything that is similar. So you are looking at a specific cat, but other cats you have seen or smelled or heard or touched are also being activated. So it is more accurate to say that all the regions in the brain that have associated memories get activated at the same time. The one path in the brain with the most activations is the prominent idea that manifests. These separate regions are not "individual" brains; they are just relational memories. It's why the concept of the memory palace works.
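The picture of all associated regions lighting up and the strongest path winning reads like classic spreading activation over an association graph. A minimal, generic sketch of that mechanism (the toy graph, weights, and decay are made up; this is not the commenter's own algorithm, which isn't given here):

```python
from collections import defaultdict

# Toy association graph: cues activate concepts, which activate related memories.
associations = {
    "sees_fur":   {"cat": 0.8, "dog": 0.6},
    "hears_meow": {"cat": 0.9},
    "cat":        {"pet_memory": 0.7},
    "dog":        {"pet_memory": 0.7},
}

def spread(cues, steps=2, decay=0.5):
    """Propagate activation outward from the active cues for a few steps."""
    activation = defaultdict(float, {c: 1.0 for c in cues})
    for _ in range(steps):
        nxt = defaultdict(float, activation)
        for node, act in activation.items():
            for neighbour, weight in associations.get(node, {}).items():
                nxt[neighbour] += act * weight * decay
        activation = nxt
    return dict(activation)

result = spread(["sees_fur", "hears_meow"])
print(max(result, key=result.get))   # 'cat': it is reinforced by both cues
```

The node that accumulates the most activation is the "prominent idea that manifests" in the comment's terms.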
@jeffmccartney5359 · 3 years ago
There is a SINGLE algorithm that captures this idea. I created it back in 2015, and tested a very basic example. I just need time to scale it up, which is what I am working on now and have been for the past 5 years, well 6 I guess, i.e. financial independence. I'm literally a couple of months away from financial independence. I'm currently testing the stability.
@jeffmccartney5359 · 3 years ago
It's similar in concept, but their implementation is way too complex. Literally, my algorithm is 4 steps and can be scaled infinitely, but the actual implementation is a bit more complex than that. That, and their spread is backwards to mine.
@jeffmccartney5359 · 3 years ago
It sounds like he is where I was back in 2009: trying to use neural networks as a base algorithm while having a concert of neural networks working together to create a memory space. It doesn't work.
@prabathraj7062 · 3 years ago
There are more than 5 senses
@willbrand77 · 1 year ago
maybe for ASI we need 1000 GPTs all voting together
@kikleine · 1 year ago
Check out George Lakoff
@lufiporndre7800 · 3 years ago
He is on the right track, just missing a few pieces. See you in 2041 when you give your final speech in the UK.