
Can We Build an Artificial Hippocampus? 

195K views · 8,348 likes

To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/ArtemKirsanov/
The first 200 of you will get 20% off Brilliant’s annual premium subscription.
My name is Artem, I'm a computational neuroscience student and researcher. In this video we discuss the Tolman-Eichenbaum Machine, a computational model of the hippocampal formation which unifies memory and spatial navigation under a common framework.
Patreon: www.patreon.com/artemkirsanov
Twitter: ArtemKRSV
OUTLINE:
00:00 - Introduction
01:13 - Motivation: Agents, Rewards and Actions
03:17 - Prediction Problem
05:58 - Model architecture
06:46 - Position module
07:40 - Memory module
08:57 - Running TEM step-by-step
11:37 - Model performance
13:33 - Cellular representations
17:48 - TEM predicts remapping laws
19:37 - Recap and Acknowledgments
20:53 - TEM as a Transformer network
21:55 - Brilliant
23:19 - Outro
REFERENCES:
1. Whittington, J. C. R. et al. The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation. Cell 183, 1249-1263.e23 (2020).
2. Whittington, J. C. R., Warren, J. & Behrens, T. E. J. Relating transformers to models and neural representations of the hippocampal formation. Preprint at arxiv.org/abs/2112.04035 (2022).
3. Whittington, J. C. R., McCaffary, D., Bakermans, J. J. W. & Behrens, T. E. J. How to build a cognitive map. Nat Neurosci 25, 1257-1272 (2022).
CREDITS:
Icons by biorender.com and freepik.com
Brain 3D models were created with Blender software using publicly available BrainGlobe atlases (brainglobe.info/atlas-api)
Animations were made using open-source Python packages Matplotlib and RatInABox ( github.com/TomGeorge1234/RatInABox )
Rat free 3D model: skfb.ly/oEq7y
This video was sponsored by Brilliant

Published: 30 Apr 2023

Comments: 314
@ArtemKirsanov
@ArtemKirsanov Год назад
To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/ArtemKirsanov/. The first 200 of you will get 20% off Brilliant's annual premium subscription.
@KnowL-oo5po
@KnowL-oo5po Год назад
Your videos are amazing, you are the Einstein of today.
@RegiJatekokMagazin
@RegiJatekokMagazin Год назад
@@KnowL-oo5po Business of today.
@josephvanname3377
@josephvanname3377 Год назад
Brilliant needs to have a course on reversible computation.
@ironman5034
@ironman5034 Год назад
I would be interested to see code for this, if it is available of course
@muneebdev
@muneebdev Год назад
I would love to see a more technical video explaining how a TEM transformer would work.
@waylonbarrett3456
@waylonbarrett3456 Год назад
I have many mostly "working" "TEM transformer" models, although I've never called them that. This idea is not new; just its current synthesis. Basically, all of the pieces have been around for a while and I've been building models out of them. I don't ever have enough time or help to get them off the ground.
@jonahdunkelwilker2184
@jonahdunkelwilker2184 Год назад
Yes, same, I would love a more technical video on how this works too! Your content is so awesome, currently studying CogSci and I wanna get into neuroscience and AI/AGI development, thank you for all the amazing content :))
@mryan744
@mryan744 Год назад
Yes please
@Arthurein
@Arthurein Год назад
+1, yes please!
@GuinessOriginal
@GuinessOriginal Год назад
Predictive coding sounds a bit like what LLMs do.
@666shemhamforash93
@666shemhamforash93 Год назад
A more technical video exploring the architecture of the TEM and how it relates to transformers would be amazing - please give us a part 3 to this incredible series!
@kyle5519
@kyle5519 7 месяцев назад
It's a path-integrating recurrent neural network feeding into a Hopfield network.
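That matches the picture the video paints: a position module that path-integrates an abstract location code from actions, feeding a Hopfield-style associative memory that binds location to sensory input. Below is a toy sketch of that pairing; all dimensions, names, and update rules are invented for illustration and are not the published TEM equations.

    import numpy as np

    rng = np.random.default_rng(0)
    n_g, n_x = 32, 16                      # toy sizes: abstract location code, sensory code
    W_act = {a: rng.normal(scale=0.1, size=(n_g, n_g)) for a in ["N", "S", "E", "W"]}
    M = np.zeros((n_g + n_x, n_g + n_x))   # Hopfield-style memory built from outer products

    def path_integrate(g, action):
        # update the abstract location code from the action taken (toy linear-tanh rule)
        return np.tanh(g + W_act[action] @ g)

    def store(M, g, x):
        # bind location code and sensory code into the associative memory
        p = np.concatenate([g, x])         # conjunctive "hippocampal" code
        return M + np.outer(p, p)

    def recall_sensory(M, g):
        # query the memory with the location code alone and read out the sensory part
        p = np.concatenate([g, np.zeros(n_x)])
        for _ in range(5):                 # a few attractor-style settling steps
            p = np.tanh(M @ p)
            p[:n_g] = np.tanh(g)           # keep the location part clamped to the query
        return p[n_g:]

    # one step of the loop: move, predict, observe, store
    g = rng.normal(size=n_g)
    g = path_integrate(g, "N")
    x_predicted = recall_sensory(M, g)     # empty on the very first step, since nothing is stored yet
    x_observed = rng.normal(size=n_x)      # stand-in for the observation at the new location
    M = store(M, g, x_observed)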
@alexkonopatski429
@alexkonopatski429 Год назад
A technical video about TEM transformers would be amazing!!
@SuperNovaJinckUFO
@SuperNovaJinckUFO Год назад
Watching this I had a feeling there were some similarities to transformer networks. Basically, what a transformer does is create a spatial representation of a word (with words of similar meaning being mapped closer together), and then the word is encoded in the context of its surroundings. So you basically have a position mapping and a memory mapping. It will be very interesting to see what a greater neuroscientific understanding will allow us to do with neural network architectures.
@cacogenicist
@cacogenicist Год назад
That is rather reminiscent of the mental lexicon networks mapped out by psycholinguists -- using priming in lexical decision tasks, and such. But in human minds, there are phonological as well as semantic relationships.
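One way to make the position-mapping / memory-mapping analogy from two comments up concrete: self-attention can be read as querying an associative store of (position code, content) pairs. A small illustrative sketch with made-up sizes, not any particular library's transformer:

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    d_pos, d_word = 8, 12
    rng = np.random.default_rng(1)

    # "memory": keys are position codes of past tokens, values are their word embeddings
    keys = rng.normal(size=(5, d_pos))       # one row per stored token position
    values = rng.normal(size=(5, d_word))    # the word/sensory content stored with each position

    # query with the current position code, retrieve a blend of stored contents
    query = keys[2] + 0.1 * rng.normal(size=d_pos)    # a slightly noisy position cue
    weights = softmax(keys @ query / np.sqrt(d_pos))  # attention weights over stored positions
    retrieved = weights @ values                      # roughly the word stored near that position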
@---capybara---
@---capybara--- Год назад
I just finished my final for behavioral neuroscience, lost like 30% of my grade to late work due to various factors this semester, but this is honestly inspiring and makes me wonder how the fields of biology and computer science will intersect in the coming years. Cheers, to the end of a semester!
@joesmith4546
@joesmith4546 Год назад
Computer scientist here: they do! I'm absolutely no expert on neuroscience, but computer science (a subfield of mathematics) has many relevant topics. One very interesting result is that if you start from the perspective of automata (directed graphs with labeled transitions and defined start and "accept" states) and you try to characterize the languages that they recognize, you very quickly find, as you layer on more powerful models of memory, that language recognition and computation are essentially the exact same process, even though they seem distinct. If you want to learn more about this topic, I have a textbook recommendation: Michael Sipser's Introduction to the Theory of Computation, 3rd edition. Additionally, you may be interested in automated theorem proving as another perspective on machine learning that you may not be familiar with. Neither automata nor automated theorem proving directly describe the behavior of neural circuits, of course, but they may provide good theoretical foundations for understanding what is required for knowledge, memory, and signal processing in the brain, however obfuscated by evolution these processes may be.
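For a concrete taste of the automata mentioned above, here is a tiny deterministic finite automaton; the language it recognizes (binary strings with an even number of 1s) is chosen arbitrarily for illustration.

    # states: "even" (the accepting start state) and "odd"
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def accepts(word):
        # run the DFA over the input and report whether it ends in the accept state
        state = "even"
        for symbol in word:
            state = transitions[(state, symbol)]
        return state == "even"

    print(accepts("1011"))   # False: three 1s
    print(accepts("1001"))   # True: two 1s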
@NeuraLevels
@NeuraLevels Год назад
"Perfection is enemy of efficiency" - they say, but in the long run, quality wins when we run for trascendent work instead of immediate rewards. BTW, the same happend to me. Mine was the best work in the class. the only which also incorporated beauty, and the most efficient design, but the professor took 9/20 points because a 3 days delay. His lessons I never learned. I am not an average genius. Nor are you! No one has achieved what I predicted on human brain internal synergy. Here the result (1min. video). ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-25WsCtvOE98.html
@jeffbrownstain
@jeffbrownstain 11 месяцев назад
Look up Michael Levin and his TAME framework (technological approach to mind everywhere), cognitive light cones, and the computational boundary of the self. He's due for an award of some type for his work very soon.
@silvomuller595
@silvomuller595 Год назад
Please don't stop making these videos. Your channel is the best! Neuroscience is underrepresented. Golden times are ahead.
@memesofproduction27
@memesofproduction27 Год назад
A renaissance even... maybe
@timothytyree5211
@timothytyree5211 Год назад
I would also love to see a more technical video explaining how a TEM transformer would work.
@MrHichammohsen1
@MrHichammohsen1 Год назад
This series should win an award or something!
@GiRR007
@GiRR007 Год назад
This is what I feel like current machine learning models are: different primitive sections of a full brain. Once all the pieces are brought together, you get actual artificial general intelligence.
@josephlabs
@josephlabs Год назад
I totally agree like a 3D net
@aaronyu2660
@aaronyu2660 Год назад
Well, we're still miles off.
@jeffbrownstain
@jeffbrownstain 11 месяцев назад
@@aaronyu2660 Closer than you might think
@cosmictreason2242
@cosmictreason2242 11 месяцев назад
@@jeffbrownstain no, you need to see the neuron videos. Computers are binary and neurons are not. Besides, each bit of storage is able to be used to store multiple different files.
@didack1419
@didack1419 10 месяцев назад
​​​@@cosmictreason2242 you can simulate the behavior of neurons in computers. There are still advantages to physical-biological neural networks but those could be simulated with a sufficient number of transistors. If it's too difficult they will end up using physical artificial neurons. What I understand that you mean by "each bit of storage is able to be used to store multiple different files" is that biological NNs are very effective at compressing data (ANNs also compress data in that basic sense), but there's no reason to think that carbon-based physical-biological NNs are unmatchable. I'm not gonna say that I have a conviction that it will happen sooner rather than later, and people here are also really vague regardless. What I could say is that I know of important technologists who think that it will happen sooner (others say that it will happen later).
@marcellopepe2435
@marcellopepe2435 Год назад
A more technical video sounds good!
@robertpfeiffer4686
@robertpfeiffer4686 Год назад
I would *love* to see a deeper dive into the technology of transformer networks as compared with hippocampal research! These videos are outstanding!!
@anywallsocket
@anywallsocket Год назад
Your visual aesthetic is SO smooth on my brain, I just LOVE it
@jasonabc
@jasonabc Год назад
For sure would love to see a video on the transformer/hopfield networks and the relationship to the hippocampus. Great stuff keep up the good work.
@al3k
@al3k Год назад
Finally, someone talking about "real" artificial intelligence.. I've been so bored of the ML models... just simple algos.. What we are looking for is something far more intricate.. Goals.. 'Feelings' about memories and current situations... Curiosity... Real learning and new assumptions... A need to grow and survive.. and a solid basis for benevolence, and a fundamental understanding of sacrifice and erring..
@xenn4985
@xenn4985 5 месяцев назад
What the video is talking about is using simple algos to build an AI, you reductive git.
@inar.timiryasov
@inar.timiryasov Год назад
Amazing video! Both the content and production. Definitely looking forward to a TEM-transformer video!
@cobyiv
@cobyiv Год назад
This feels like what we should all be obsessed with as opposed to just pure AI. Top notch content!
@aw2031zap
@aw2031zap Год назад
LLMs are not "AI", they're just freaking good parrots that give too many people the "mirage" of intelligence. A truly "intelligent" model doesn't make up BS to make you go away. A truly "intelligent" model can draw hands FFS. This is what's BS.
@gorgolyt
@gorgolyt 7 месяцев назад
idk what you think "pure AI" means
@astralLichen
@astralLichen Год назад
This is incredible! Thank you for explaining these concepts so well! A more detailed video would be great, especially if it went into the mathematics.
@jamessnook8449
@jamessnook8449 Год назад
This has already been done at The Neurosciences Institute back in 2005. We developed a model that not only led to place cell formation, but also to prospective and retrospective memory, the beginning of episodic memory. We used the model to control a mobile device that ran the gold standard of spatial navigation, the Morris water maze. In fact, Professor Morris was visiting the Institute for other reasons, viewed our experiment, and gave it his blessing.
@memesofproduction27
@memesofproduction27 Год назад
Incredible. Were you on the Build-A-Brain team? Could you please direct me to anything you would recommend me read on your work there to familiarize myself and follow citations toward influence on present day research? Much respect, me
@tenseinobaka8287
@tenseinobaka8287 Год назад
I am just learning about this and it sounds so exciting! A more technical video would be really cool!
@lake5044
@lake5044 Год назад
But, at least in humans, there are at least two crucial things that this model of intelligence is missing. First, the abstraction is not only applied to the sensory input, it's also applied to internal thoughts (and no, it's not just the same as running the abstraction on the prediction). For example, you could think of a letter (a symbol from the alphabet) and imagine what it would look like rotated or mirrored. And no recent sensory input has a direct relation to the letter you choose, to the transformation you chose to imagine, or even to imagining all of this in the first place. (You can also think of this as the ability to execute algorithms in your mind, a sequence of transformations based on learned abstractions.) Second, there is definitely a list of remembered structures/abstractions that we can run through when we're looking to find a good match for a specific problem or data. Sure, maybe this happens for the "fast thinking" (the perception part of thinking: you see a "3" and perceive it without thinking that it has two incomplete circles), but also for the slow, deliberate thinking. Take the following example: you're trying to solve some math problem and to fit it onto abstractions you already learned, but then suddenly (whether someone gave you a hint or the hint popped into your mind) you find a new abstraction that fits the problem better; the input data didn't change, but now you decided to see it as a different structure. So there has to be a mechanism for trying any piece of data against any piece of structure/abstraction.
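As a toy illustration of the first point above (applying a transformation to a purely internal representation, with no matching sensory input), "imagine the letter L rotated" might correspond to something like the following operation on an internal bitmap; this is purely illustrative, not a claim about how the brain does it.

    import numpy as np

    # a crude internal image of the letter "L"
    L = np.array([
        [1, 0, 0],
        [1, 0, 0],
        [1, 1, 1],
    ])

    rotated = np.rot90(L, k=-1)   # imagine it rotated 90 degrees clockwise
    mirrored = np.fliplr(L)       # or mirrored left-to-right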
@brendawilliams8062
@brendawilliams8062 Год назад
It is a separate intelligence. It communicates with the other cookie cutters by a back propagation similar to telepathy. It is like a plate of sand making patterns on its plate by harmonics. It is not human. It is a machine.
@ianmatejka3533
@ianmatejka3533 Год назад
Yet another outstanding video. Like many of the other comments here, I would also love to see an in-depth technical video on the TEM transformer. Please make a part 3!
@michaelgussert6158
@michaelgussert6158 Год назад
Good stuff man! Your work is always excellent :D
@AlecBrady
@AlecBrady Год назад
Yes, please, I'd love to know how GPT and TEM can be related to each other.
@yassen6331
@yassen6331 Год назад
Yes please we would love to see more detailed videos. Thank you for this amazing content🙏
@dandogamer
@dandogamer Год назад
Absolutely loved this, as someone who's coming from the ML side of things it's very interesting to know how these models are trying to mimic the inner workings of the hippocampus
@Wlodzislaw
@Wlodzislaw Год назад
Great job explaining TEM, congratulations!
@BHBalast
@BHBalast Год назад
I'm amazed by the animations, and the recap at the end was a great idea.
@mags3872
@mags3872 Год назад
Thank you so much for this! I think I'm doing my masters thesis on TEM so this is such a wonderful resource. Subscribed!
@benwilcox1192
@benwilcox1192 Год назад
Your videos have some of the most beautiful explanations, as well as graphics, that I have seen on YouTube.
@klaudialustig3259
@klaudialustig3259 Год назад
I was surprised to hear at the end that this is almost identical to the transformer architecture
@Alex.In_Wonderland
@Alex.In_Wonderland Год назад
your videos floor me absolutely every time! You clearly put a LOT of work in to these and I can't thank you enough. These are genuinely a lot of fun to watch! :)
@ArtemKirsanov
@ArtemKirsanov Год назад
Thank you!!
@justwest
@justwest 7 месяцев назад
absolutely astonishing that I, like all of you, have access to such valuable, highly interesting and professional educational material. thanks a lot!
@johanjuarez6238
@johanjuarez6238 Год назад
Mhhhhh that's so interesting! Quality is mad here, gg and thanks for providing us with these videos.
@lucyhalut4028
@lucyhalut4028 Год назад
I would love to see a more technical video! Amazing work, Keep it up!😃
@tomaubier6670
@tomaubier6670 Год назад
Such a nice video! A deep dive in TEM / transformers would be awesome!!
@bluecup25
@bluecup25 10 месяцев назад
The Hippocampus knows where it is at all times. It knows this because it knows where it isn't. By subtracting where it is from where it isn't, or where it isn't from where it is (whichever is greater), it obtains a difference, or deviation. The guidance subsystem uses deviations to generate corrective commands to drive the organism from a position where it is to a position where it isn't, and arriving at a position where it wasn't, it now is. Consequently, the position where it is, is now the position that it wasn't, and it follows that the position that it was, is now the position that it isn't. In the event that the position that it is in is not the position that it wasn't, the system has acquired a variation, the variation being the difference between where the missile is, and where it wasn't. If variation is considered to be a significant factor, it too may be corrected by the GEA. However, the Hippocampus must also know where it was. The Hippocampus works as follows. Because a variation has modified some of the information the Hippocampus has obtained, it is not sure just where it is. However, it is sure where it isn't, within reason, and it knows where it was. It now subtracts where it should be from where it wasn't, or vice-versa, and by differentiating this from the algebraic sum of where it shouldn't be, and where it was, it is able to obtain the deviation and its variation, which is called error.
@GabrielLima-gh2we
@GabrielLima-gh2we Год назад
What an amazing video, knowing that we can now understand how the brain works through these artificial models is incredible, neuroscience research might explode in discoveries right now. We might be able to fully understand how this memory process works in the brain by the end of this decade.
@ceritrus
@ceritrus Год назад
That might genuinely be the most fascinating video I've ever seen on this website
@ArtemKirsanov
@ArtemKirsanov Год назад
Wow, thank you!
@nicolaemihai8871
@nicolaemihai8871 8 месяцев назад
Yes, please keep working on this series, as your content is really creative, concise, high-quality, and it addresses exotic, specific themes.
@julianhecker944
@julianhecker944 Год назад
I was just thinking about building an artificial hippocampus using something like a vector database this past weekend! What timing with this upload!
@waylonbarrett3456
@waylonbarrett3456 Год назад
I've been building and revising this machine and machines very similar for about 10 years. I didn't know for a long time that they weren't already known.
@sgaseretto
@sgaseretto 6 месяцев назад
Really nice video, very well explained!! Would love to see a more detailed version of TEM
@Mad3011
@Mad3011 Год назад
This is all so fascinating. Feels like we are close to some truly groundbreaking discoveries.
@CharlesVanNoland
@CharlesVanNoland Год назад
Don't forget groundbreaking inventions too! ;)
@egor.okhterov
@egor.okhterov Год назад
The missing ingredient is how to make NN changes on the fly when we receive sensory input, without backpropagation. There's no backpropagation in our brain
@CharlesVanNoland
@CharlesVanNoland Год назад
@@egor.okhterov The best work I've seen so far in that regard is the OgmaNeo project, which explores using predictive hierarchies in lieu of backpropagation.
@egor.okhterov
@egor.okhterov Год назад
@Charles Van Noland The last commit on GitHub is from 5 years ago and the website hasn't been updated in quite a while. What happened to them?
@yangsong4318
@yangsong4318 Год назад
@@egor.okhterov There is an ICLR 2023 paper from Hinton: SCALING FORWARD GRADIENT WITH LOCAL LOSSES
@alexharvey9721
@alexharvey9721 Год назад
Definitely keen to see a more technical video, though I know it would be a lot of work!
@TheSpyFishMan
@TheSpyFishMan Год назад
Would love to see the technical video describing the details of transformers and TEMs!
@kevon217
@kevon217 Год назад
top notch visualizations! great video!
@arasharfa
@arasharfa Год назад
how fascinating that you talk about sensory, structural and constructed model/interpretation, those are the three base modalities of thinking i've been able to narrow down all of our human experience to in my artistic practice. I call them "phenomenologic, collective and the ideal" modalities of thinking.
@asemic
@asemic Год назад
this is a big reason i've been interested in neuroscience for a while. just the fact you are covering this gets my sub. this area needs more interest.
@arnau2246
@arnau2246 Год назад
Please do a deeper dive into the relation between TEM and transformers
@archeacnos
@archeacnos 4 месяца назад
I've somehow found your channel, AND WOW IT'S AMAZINGLY INTERESTING
@aleph0540
@aleph0540 Год назад
FANTASTIC WORK!
@KonstantinosSamarasTsakiris
The video that convinced me to become a patron! Super interested in a part 3 about TEM-transformers.
@ArtemKirsanov
@ArtemKirsanov Год назад
Thanks :3
@foreignconta
@foreignconta Год назад
I really liked your video. And I would like to see a technical video on TEM transformer. Especially the difference. Subscribed
@dysphorra
@dysphorra Год назад
Actually, 10 years ago Bergman built a prosthetic hippocampus with a much simpler architecture. It was tested in three different conditions. 1) Bergman took input from a healthy rat's hippocampus and successfully predicted its output with his device. 2) He removed the hippocampus and replaced it with his prosthesis. Electrodes collected the inputs to the hippocampus, sent them to a computer, and then back to the output neurons. And it worked. 3) He connected the input of the device to the brain of a trained mouse and the output of the device to the brain of an untrained one. And he showed some sort of memory transfer (!!!). Notably, he used a very simple mathematical algorithm to convert input into output.
@Lolleka
@Lolleka 8 месяцев назад
This is fantastic content. Subscribed in a nanosecond.
@user-zl4fp3ml4e
@user-zl4fp3ml4e Год назад
Please also consider a video about the PFC and its interaction with the hippocampus.
@astha_yadav
@astha_yadav Год назад
Please also share what software and utilities you use to make your videos ! I absolutely love their style and content 🌸
@adhemardesenneville1115
@adhemardesenneville1115 Год назад
Amazing video ! Amazing quality !
@dinodinoulis923
@dinodinoulis923 Год назад
I am very interested in the relationships between neuroscience and deep learning and would like to see more details on the TEM-transformer.
@sledgehogsoftware
@sledgehogsoftware Год назад
Even at 2:25, I can see that the model you used for the office is in fact from another thing I saw: The Office TV show! Loved seeing that connection, and it helped get the point across so well for me!!
@Kynatosh
@Kynatosh Год назад
How is this so high quality wow
@juanandrade2998
@juanandrade2998 Год назад
It is amazing how each field has its own term for these concepts. I come from an architectural background, and my brief interaction with the arts majors taught me about the concept of "Deconstruction". In my spare time I like to code, so I always thought of this "Tolman-Eichenbaum machine" process of our cognition as the act of deconstructing a system into its most basic building blocks. I've also seen the term "generalization" used as conceptually equivalent to the process by which we arrive at a maximum/minimum "entropic" state of a system (depending on scope...).
@memesofproduction27
@memesofproduction27 Год назад
Ah, the eternal lexicon as gatekeeper, if only we had perfect information liquidity free of the infophysical friction of specific lexica, media, encoding, etc. Working on it:)
@juanandrade2998
@juanandrade2998 Год назад
@@memesofproduction27 This specifically is a topic in LLMs that I seldom see discussed. On the one hand, language is sometimes redundant or interchangeable (like "TEM" and "Deconstruction"), but in other cases the same word has different meanings, in which case "nuance" is required in order to infer meaning. "Nuance" IMO is just a residual consequence of a lack of generalization, because the data/syntax is not well categorized into mutually exclusive building blocks, and there is a lot of overlap allowing for ambiguities in the message. But this is not something that can be solved with architecture; the issue is that the language itself is faulty and incomplete. For example, a lot of the time people talk about "love" as a single concept, when in reality it is a combination of several feelings, hence the misunderstanding. E.g.: "I don't know how she is so in love with that guy..." So whoever is saying that line has the term "love" misaligned with the actual activity taking place, simply because too many underlying concepts overlap in the term "love". Another example: the word "extrapolation" can be interpreted as the act of completing a pattern following previous data points. The issue is that people don't usually use the term to mean "to complete"; MMOs don't ask gamers to "Please extrapolate the next quest" or announce "LEVEL EXTRAPOLATED!"... I mean, they could, but nobody does this. Because of this, if you ask an LLM to make an extrapolation of something, depending on the context, it may or may not understand the prompt. This is because the AI is not actually intelligent; instead, it is subject to its corpus of pretrained data, and the link between "extrapolation" and "completion" is simply not strong enough, because the building blocks are not disjoint enough and there's still overlap.
@_sonu_
@_sonu_ Год назад
I lo❤ your videos more than any videos nowadays.
@porroapp
@porroapp Год назад
I like how neurotransmitters and white matter formation in the brain are analogues of weights/biases and backprop in machine learning. Both are used to amplify the signal and reinforce activation based on rewards, be it neurons and synapses or convolution layers and the connections between nodes in each layer.
@austindibble15
@austindibble15 Год назад
Fascinating, I have enjoyed both of your videos in this series very much! And your visualizations are really great and high quality. I thought the comparison between the Tolman-Eichenbaum machine and a lookup table was very interesting. In reinforcement learning, I think there's a parallel here between Q-learning (learned lookup table) and policy-based methods which use deep neural network structures.
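For anyone unfamiliar with that parallel, the tabular side of it looks roughly like the sketch below: a standard Q-learning update on a dictionary-backed lookup table. The hyperparameters, state names, and actions are made up for illustration.

    from collections import defaultdict

    alpha, gamma = 0.1, 0.99          # learning rate and discount factor (illustrative values)
    Q = defaultdict(float)            # lookup table: (state, action) -> estimated value

    def q_update(state, action, reward, next_state, actions):
        # one tabular Q-learning step: nudge the stored value toward the bootstrapped target
        best_next = max(Q[(next_state, a)] for a in actions)
        target = reward + gamma * best_next
        Q[(state, action)] += alpha * (target - Q[(state, action)])

    q_update(state="room_A", action="go_east", reward=1.0,
             next_state="room_B", actions=["go_east", "go_west"])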
@josephlabs
@josephlabs Год назад
I was trying to build something similar, but I thought of the memory module as event storage, where it would store events and the locations at which those events happened. Then we would be able to query things that happened by event, by location, or by the things involved in events at certain locations. However, my idea was to take the memory storage away from the model and create a data structure (graph-like) uniquely for it. TEM transformers are really cool.
@egor.okhterov
@egor.okhterov Год назад
How to store location? Some kind of hash function of sensory input?
@josephlabs
@josephlabs Год назад
@@egor.okhterov that was the plan or some graph like data structure to denote relationships.
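A bare-bones version of the event store described in this thread could be little more than a list of (location, event, participants) records with a couple of query paths. The field names and example events below are invented for illustration; a graph store would just add explicit edges between these records.

    from collections import namedtuple

    Event = namedtuple("Event", ["location", "what", "involved"])

    memory = [
        Event(location="kitchen", what="dropped mug", involved={"mug"}),
        Event(location="office",  what="meeting",     involved={"Alice", "Bob"}),
        Event(location="kitchen", what="made coffee", involved={"mug", "kettle"}),
    ]

    def events_at(location):
        # everything remembered about a place
        return [e for e in memory if e.location == location]

    def locations_of(thing):
        # every place a given thing shows up in memory
        return {e.location for e in memory if thing in e.involved}

    print(events_at("kitchen"))
    print(locations_of("mug"))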
@itay0na
@itay0na Год назад
Wow, this is just great! I believe it somehow contradicts the message of the AI & Neuroscience video. In any case, really enjoyed that one, keep up the good work.
@BleachWizz
@BleachWizz Год назад
Thanks man I might actually reference those papers! I just need to be able to actually become a researcher now. I hope I can do it.
@0pacemaker0
@0pacemaker0 Год назад
Amazing video as always 🎉! Please do go over how Hopfield networks fit in the picture if possible. Thanks
@FA18_Driver
@FA18_Driver 3 месяца назад
Hi, your narration and videos are nice. I put them on while falling asleep. Thanks.
@brubrusuryoutube
@brubrusuryoutube Год назад
got an exam on neurobio of learning and memory tomorrow, upload schedules on point
@FoxTails69
@FoxTails69 9 месяцев назад
you know where my man Artem comes from when he hits the "spherical model in the vacuum" line hahaha great job!
@siggiprendergast7599
@siggiprendergast7599 Год назад
the goat is back!!
@y5mgisi
@y5mgisi Год назад
This channel is so good.
@En1Gm4A
@En1Gm4A Год назад
Awesome video. This is game-changing.
@egor.okhterov
@egor.okhterov Год назад
Excellent video as always :) Do you have ideas on how to get rid of backpropagation to train a transformer and implement one-shot(online) life-long learning?
@plutophy1242
@plutophy1242 Год назад
Love your videos! I'd like a more detailed math description.
@SeanDriver
@SeanDriver Год назад
Great video… the moment you showed the function of the medial EC and lateral EC I thought, hey, transformers… so really nice to see that come out at the end, albeit for a different reason. My intuition for transformers came from the finding of the ROME paper, which suggested structure is stored in the higher attention layers and sensory information in the mid-level dense layers.
@briankleinschmidt3664
@briankleinschmidt3664 Год назад
Memory isn't stored in the brain like data. It is integrated into the "world view". If the new information is incompatible, the world view is altered, or the info is altered or rejected. The recollection of the original input includes a host of other inputs. Often when you learn a new thing, it seems as if you are remembering something you already knew. After a while it is as if you always knew it.
@jamessnook8449
@jamessnook8449 Год назад
Yes, read Jeff Krichmar's work at UC Irvine, it is dramatically different than what people view as the traditional neural network approach.
@EmmanuelMessulam
@EmmanuelMessulam Год назад
As an AI engineer I would like to see more of the models that are used in neuroscience and just a light touch of artificial models, as there are many others that explain how AI models work.
@donaldgriffin6383
@donaldgriffin6383 Год назад
More technical video would be awesome! More BCI content in general would be great too
Год назад
This is cool! Thank you for sharing. The visualization is stunning, I'm curious to know if you do it yourself and which tools you use.
@ArtemKirsanov
@ArtemKirsanov Год назад
Thank you! Yeah, I do everything myself ;) Most of it is done in Adobe After Effects with the help of Blender (for rendering 3D scenes) and matplotlib (for animations of neural activity of TEM, random-walk etc)
@binxuwang4960
@binxuwang4960 Год назад
Well explained!! The video is just sooooo beautiful..... even more beautiful, visually, than the talk given by Whittington himself. How did you make such videos? Using Python or Unity? Just curious!
@floridanews8786
@floridanews8786 Год назад
It's cool that someone is attempting this.
@TheRimmot
@TheRimmot Год назад
I would love to see a more technical video about how the TEM transformer works!
@mkteku
@mkteku Год назад
Awesome knowledge! What app are you using for graphics, graphs and editing? Cheers
@AiraSunae
@AiraSunae Год назад
Videos that teach me stuff like this are why I love YouTube.
@KalebPeters99
@KalebPeters99 Год назад
This was breathtaking as always, Artem. ✨ Have you heard of Vervaeke's theory of "Recursive Relevance Realisation"? It fits really nicely with Friston's framework. I think it's super underrated.
@markwrede8878
@markwrede8878 Год назад
It would need to host some sophisticated pattern recognition software. These would arise from values similar to phi, which, like phi itself, are described by dividing the square root of the first prime to host a specific sequential difference by that difference. For phi, square root of 5 by 2, then square root of 11 by 4, square root of 29 by 6, square root of 97 by 8, and so on. I have a box with the first 150 terms.
@markovarga2424
@markovarga2424 Год назад
You did it!
@TheMrDRAKEX
@TheMrDRAKEX Год назад
What an excellent video.
@xavierhelluy3013
@xavierhelluy3013 Год назад
So beautiful to watch once again, and very nice and very instructive. I would love a more technical video on the matter. I see a direct link to Jeff Hawkins' vision of how the neocortex works, since according to him cortical columns are a kind of stripped-down neuronal hippocampal orientation system, but one which acts on concepts or sensory inputs depending on input-output connections. The link between LLMs and TEM remains amazing.
@egor.okhterov
@egor.okhterov Год назад
The thing is that Jeff Hawkins is also against backpropagation. That is the last puzzle to solve. We need to make changes in the network on the fly, at the same time as we are receiving sensory input. We learn new models in a few seconds and we don't need billions of samples
@cloudcyclone
@cloudcyclone Год назад
Very good video, I'm going to share it.
@ironman5034
@ironman5034 Год назад
Yes yes, technical video!
@petemoss3160
@petemoss3160 Год назад
Interesting! I've been looking at how to equip an agent with powers of observation via a vector database, to log the facts and judgements (including reward expectation) from what it observes of other agents and the environment. So far I'm figuring on a vector space of logs, clustering all the memories with strong positive and strong negative reward, as well as everything closely related to them. Perhaps generalization will be found this way, especially if using a decision transformer with linguistic pretraining.
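A minimal version of that kind of observation log, with retrieval by cosine similarity and only the strongly rewarded entries kept, might look like the sketch below. The sizes, thresholds, and random data are invented; a real vector database would replace the brute-force search.

    import numpy as np

    dim = 16
    rng = np.random.default_rng(2)

    embeddings = []     # one unit vector per logged observation
    rewards = []        # the reward (or reward expectation) attached to it

    def log_observation(vec, reward):
        embeddings.append(vec / np.linalg.norm(vec))
        rewards.append(reward)

    def recall(query, k=3, reward_floor=0.5):
        # return the k stored observations most similar to the query,
        # keeping only those with a strong positive or negative reward
        query = query / np.linalg.norm(query)
        sims = np.array([e @ query for e in embeddings])
        order = np.argsort(-sims)
        return [(i, sims[i], rewards[i]) for i in order if abs(rewards[i]) >= reward_floor][:k]

    for _ in range(20):
        log_observation(rng.normal(size=dim), reward=rng.uniform(-1, 1))
    print(recall(rng.normal(size=dim)))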
@CopperKettle
@CopperKettle Год назад
Thank you, quite interesting.
@neurosync_research
@neurosync_research Год назад
Yes! Make a video that expounds on the relation between transformers and TEMS!
@JoeTaber
@JoeTaber Год назад
Nice video! You didn't mention the representational format that the location and sensory nets were given. Did the location nets get Cartesian coordinates? What was the representation for the sensory input?