A Brain-Inspired Algorithm For Memory 

Artem Kirsanov
199K subscribers
115K views

Published: 3 Oct 2024

Comments: 213
@ArtemKirsanov 3 months ago
Join Shortform for awesome book guides and get 5 days of unlimited access! Get 20% off at shortform.com/artem
@NicholasWilliams-uk9xu 3 months ago
I have a more streamlined answer to the protein problem. The protein doesn't start folding once it's a complete sequence; it folds as the sequence is being built. This computationally and temporally constrains the degrees of movement, limiting the number of molecular forces at work at any one given time. The part of the sequence that has already been constructed is already folded into its low-energy state, and the part that hasn't been built isn't perturbing the current folding stage. The folding process is constrained to occur as sequentially as possible, not in parallel.
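The point above can be made concrete with a toy calculation. A minimal sketch, assuming a deliberately crude stand-in for folding (binary "orientations" and a random coupling matrix invented for the example, nothing biochemical): settling each residue against the already-built prefix touches only 2N partial states, versus 2^N complete conformations for a global search, yet it often lands near the same low-energy state.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 12                          # residues, each with a binary "orientation"
J = rng.normal(size=(N, N))     # toy pairwise couplings between residues
J = (J + J.T) / 2

def energy(conf):
    """Energy of a complete or partial chain of +/-1 orientations."""
    c = np.asarray(conf)
    n = len(c)
    return -c @ J[:n, :n] @ c / 2

# Global search: all 2^N complete conformations.
best = min(itertools.product([-1, 1], repeat=N), key=energy)

# Sequential folding: each residue settles given the already-built prefix,
# and the earlier part of the chain is never re-opened.
chain = []
for _ in range(N):
    chain.append(min([-1, 1], key=lambda s: energy(chain + [s])))

print(energy(best), energy(chain))  # greedy is usually close, not always equal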
@NicholasWilliams-uk9xu 3 months ago
This is top notch content, good work.
@NicholasWilliams-uk9xu 3 months ago
A threshold activation heatmap over a parallel distribution of temporal sequential threads is more descriptive. Each thread operates in its own input/output relative connection space and favors specific input sequences over time. Maximum amplification of a sequence (i_1/time + i_2/time + i_3/time...) indicates a highly favored temporal sequence and (i_3/time - i_2/time - i_1/time...) indicates the least favored one (with temporal sequences in between these two extremes). Each thread is measured against its threshold (T), amplification (A), a latent timeframe (L), and elapsed time (E) for sequence-coordinated activation. When A exceeds T, the output is calculated as (1 - |L - E|) to output partners. Favored detection sequences can be defined by an integer marking the most favored position within the temporal sequential thread. The process can be tuned by sensory reward detections over time, increasing mutation velocity in a direction, changing the param magnitude in that direction, acting on thresholds, and shifting the sequence of most value for each thread. There are further optimizations for this style of learning, such as extending it with threads that mutate other threads based on their activation levels, allowing mutation behavior to be inferred and leveraged by the network as it is trained (it begins to handle its own mutations internally based on inference). Then it's a matter of reading the heatmap to see which parts of the network favor certain tasks, and seeing the state transitions of the network.
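A literal, minimal reading of the scheme above, as a sketch under assumptions: only the symbols T, A, L, E and the output rule 1 - |L - E| come from the comment; the +/-1 scoring of in-order versus out-of-order inputs and all names are guesses.

```python
from dataclasses import dataclass

@dataclass
class TemporalThread:
    preferred: list       # favored position (an integer) for each input index
    threshold: float      # T
    latent: float         # L, the timeframe the thread expects to complete in
    amplification: float = 0.0   # A, accumulated evidence
    elapsed: float = 0.0         # E, time since the first input

    def observe(self, input_idx: int, position: int, dt: float = 1.0):
        # An input arriving at its favored position adds to A; out of order subtracts.
        self.amplification += 1.0 if self.preferred[input_idx] == position else -1.0
        self.elapsed += dt

    def output(self) -> float:
        # Fire only when A exceeds T; strength decays with the |L - E| timing mismatch.
        return 1.0 - abs(self.latent - self.elapsed) if self.amplification > self.threshold else 0.0

# A thread that favors inputs 0, 1, 2 arriving in that order within ~3 steps:
t = TemporalThread(preferred=[0, 1, 2], threshold=2.0, latent=3.0)
for pos, idx in enumerate([0, 1, 2]):
    t.observe(idx, pos)
print(t.output())   # 1.0: the favored sequence, arriving on time
```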
@talkingbirb2808 29 days ago
how do you pay for the subscription from Russia?
@Dent42 3 months ago
Ladies, gentlemen, and fabulous folks of every flavor, the legend is back!
@tonsetz 3 months ago
bro got lost in Obsidian CSS configuration, but now he returns to brain cells
@giacomogalli2448 3 months ago
He's something else, manages to make computational neuroscience engaging WHILE not giving up on the details
@terbospeed 18 minutes ago
Fabulous folks of every flavor :) :)
@davidhand9721 2 months ago
In fact, many proteins function in a _local_ minimum that is _not_ the global minimum. This is why proteins denature irreversibly when exposed to heat; there's an energy barrier that they can never come back from if they cross it.
@GeoffryGifari 2 months ago
What's stopping the proteins from folding to the global minimum immediately? And can they spontaneously transition from a local to the global minimum?
@onebronx 2 months ago
They function at their own global minimum, but the global minimum is also defined by the environment, for example pH or various chemical agents. Also, for some proteins denaturation is reversible ("renaturation") once the conditions for denaturing are removed. Irreversibility is often caused by proteins interconnecting with each other and forming a mesh, losing almost all degrees of freedom; in that case you cannot talk about a global minimum of an individual protein molecule.
@davidhand9721 2 months ago
@GeoffryGifari The space of possible conformations for a protein is gigantic and very high dimensional, so even if they started in a random state, it would be very unlikely for them to fall into the global minimum right away. As it happens, though, they are constructed one amino acid at a time, and the enzyme that builds them keeps anything from interacting with the most recently added residues, so there is a systematic way to do it. At each step, the already-extruded portion of the polypeptide finds its own minimum, and that limits the trajectory of the conformation as it grows. Thus, it's always in some kind of local minimum at every step. There are many other factors at play, too. There are chaperone proteins that prevent selected parts from interacting, there is a whole different process for proteins that are supposed to be in a membrane, and on and on. Life never gives you simple, straightforward rules. What stops it from falling to the global minimum is the energy barriers surrounding local minima; that's pretty much the definition of a local minimum, surrounded by energy barriers. If the water molecules that are constantly battering it give it enough energy, it can hop over the barrier and fall into the nearest minimum in that direction, so to speak, but there is no dynamical reason for it to progress toward the global minimum at any given moment; it has to reach it by random jostling. It's theoretically possible for it to jump out of the global minimum, too, in the same way, but by definition, the global minimum has the biggest barriers in the whole space, so it's likely to stick around there.
@davidhand9721 2 months ago
@onebronx Some good points there; I was thinking only of heat denaturing. The result of physical heat denaturing is inevitably the global minimum, and at least when I was in university in 2006, we didn't hear about any proteins coming back from that. If a protein functions at its global minimum, then you can always cook it until it chemically comes apart, but that is another story. Most proteins need to be able to cycle between conformations at low energy scales, e.g. the binding energy of a small molecule, so they don't typically function at the global minimum. It would be pretty contrived if they did, if you think about it. RNAzymes, on the other hand, denature and cease to function at _low_ temperatures rather than high, last I checked, and this happens for the same reason: an enzyme needs to access multiple conformational states to function, so it doesn't function if denied the energy to flop around. There are way more interaction motifs for RNA than for polypeptides, so the energy landscape, while bumpy, doesn't generally have an irreversible global minimum like a protein's, so they can return to function pretty well after being cooked. That's just what I was taught ~2006, so if we know any different now, please show me a citation.
@onebronx 2 months ago
@davidhand9721 Well, again, are we talking about heat denaturing of a single polypeptide, or of a large collective? Those are different environments: in a collective, large molecules can start interlinking, so there is not much sense in talking about individual global minima. Of course there is always an ultimate global minimum, the death of the Universe, but we're usually interested in somewhat more "locally global" minima, where your surrounding environment does not kill you if you stretched a bit too much :) For polymers it is even more complicated because of topology: there are millions of ways for a polypeptide chain to become self-entangled like a rope in a washing machine, and it actually can denature into some configuration with a _higher_ energy, simply because it cannot disentangle itself. Intuitively, a global minimum (or something very close) should be reachable if you build the chain slowly and carefully, shaking it periodically to allow some "annealing" so it can de-tangle early while the chain is still short. And the shorter the polypeptide, the more likely it will eventually find its global minimum (in the sense of "global for a single molecule in pH-neutral isotonic intracellular fluid"). Huge polypeptides can be farther from the global minimum for sure, but maybe not so far either, due to evolutionary pressure: the secondary/tertiary structures evolve and were naturally selected for as much stability as still allows them to function. Those pretty alpha-helices and beta-sheets, various cross-links, and nicely fitting active zones create more bonds per unit of length than randomly aligned segments of the same chain would; they evolved for that! And more bonds means less free energy. And then, near the global minimum, you can have local minima with slightly higher energies, corresponding to different "activated" configurations. OTOH, an alternative "activated" configuration can be thought of as the "global" minimum of the system "protein + activator".
@SriNiVi 3 months ago
This is an insanely educational video. As an ML researcher working on representation learning for multimodal retrieval, I find it insanely helpful and relatable. I think you just gave me a new area to look at now; how exciting. I owe you one.
@blakesmith4879 1 month ago
He has screwed you over. This goes nowhere.
@zachariascarlstromselimovi1243 1 month ago
@blakesmith4879 explain
@oliveeisner8964 1 month ago
Everyone involved in this field needs to take a step back and focus on developing empathy before engaging with emerging technologies. The money is big but the forces behind the scenes have nefarious goals. Will you engage ethically going forward? Because there are no positive role models. We are in a battle for humanity's future.
@SriNiVi 1 month ago
@blakesmith4879 I agree, the inspiration from PGMs and energy states is interesting and comes closer to formalising emergent properties in biological evolution.
@tfburns 3 months ago
John Hopfield wasn't the first to describe the formalism that has subsequently been popularised as "Hopfield networks". It seems much fairer to the wider field and long history of neuroscientists, computer scientists, physicists, and so on to call them "associative memory networks"; Hopfield was definitely not the first/only one to propose the network some call "Hopfield networks". For instance, after the proposal of Marr (1971), many similar models of associative memory were proposed, e.g., those of Nakano (1972), Amari (1972), Little (1974), and Stanley (1976), which all have a very similar (or exactly the same) formalism as Hopfield's 1982 paper. Today, notable researchers in this field correct their students' papers to replace instances of "Hopfield networks" with "associative memory networks (sometimes referred to as Hopfield networks)" or something similar. I would encourage you to do the same in your current/future videos. I deeply regret making a similar mistake regarding this topic in one of my earlier papers. However, I am glad to correct the record now and in the future.
Refs:
D. Marr. Simple memory: a theory for archicortex. Philos Trans R Soc Lond B Biol Sci, 262(841):23-81, July 1971.
Kaoru Nakano. Associatron: a model of associative memory. IEEE Transactions on Systems, Man, and Cybernetics, SMC-2(3):380-388, 1972. doi: 10.1109/TSMC.1972.4309133.
S.-I. Amari. Learning patterns and pattern sequences by self-organizing nets of threshold elements. IEEE Transactions on Computers, C-21(11):1197-1206, 1972. doi: 10.1109/T-C.1972.223477.
W. A. Little. The existence of persistent states in the brain. Mathematical Biosciences, 19(1):101-120, 1974. doi: 10.1016/0025-5564(74)90031-5.
J. C. Stanley. Simulation studies of a temporal sequence memory model. Biological Cybernetics, 24(3):121-137, Sep 1976. doi: 10.1007/BF00364115.
@Marcus001 3 months ago
Wow, you cited your sources in a YouTube comment! Thanks for the info.
@tiagotiagot 3 months ago
See also: Stigler's law
@maheshkanojiya4858 3 months ago
Thank you for sharing your knowledge
@ArtemKirsanov 3 months ago
Wow, thanks for the info!
@NicholasWilliams-uk9xu 3 months ago
Relative connection spaces are dimensionally agnostic; they don't presuppose a dimensionality for each node in the connection space. That makes them better at tracking large distributions (where a heatmap can highlight areas of activity [threshold activations], to see the areas that light up when the system is doing a specific task or undergoing a specific sensory data pattern). This way the dimensionality isn't constrained to a 2D sheet with a predefined curvature manifold, and you can better see the modal transitions of the system via this heatmap.
@ProgZ 3 months ago
At the beginning, when you mention the O(n) problem, as a programmer it just intuitively makes you want to use a tree or a hash map lol. In any case, another banger! It's fascinating to see how these things work!
@vastabyss6496 2 months ago
I had the same thought. Though, a hashmap or something similar probably wouldn't work, since many times the key is incomplete or noisy, which would cause the hashing function to return a hash that maps to the wrong index.
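To make the contrast concrete, a minimal sketch (classical Hopfield net with Hebbian storage; the toy patterns and sizes are invented for the example): an exact-key lookup misses on a corrupted cue, while the network relaxes the same cue back to the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))      # three stored "memories"

# Hebbian storage: sum of outer products, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

cue = patterns[0].copy()
cue[:15] *= -1                                   # corrupt 15% of the key

# Exact-key lookup: the corrupted cue hashes to nothing.
table = {p.tobytes(): i for i, p in enumerate(patterns)}
print(table.get(cue.tobytes()))                  # None

# Associative recall: settle toward the nearest energy minimum.
x = cue.copy()
for _ in range(10):
    x = np.where(W @ x >= 0, 1, -1)
print(np.array_equal(x, patterns[0]))            # True: content-addressable
```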
@FreshMedlar 3 months ago
Thanks for the incredible quality of your videos
@ricklongley9172 3 months ago
Minor correction: 'Cells that fire together, wire together' was coined by Carla Shatz (1992). Unlike Donald Hebb's original formulation, Shatz's summary of Hebbian learning eliminates the role of axonal transmission delays. By extension, neural networks which remain true to Hebb's original definition should go beyond rate coded models and instead simulate the time delays.
@NicholasWilliams-uk9xu 3 months ago
Yes, latent time parameters need to be implemented. A threshold activation heatmap over a parallel distribution of interconnected temporal sequential threads is more descriptive for targeting what he is trying to convey in larger distributions where the Hopfield computational structure fails. Each thread operates in its own input/output relative connection space and favors specific input sequences over time. Maximum amplification of a sequence (i_1/time + i_2/time + i_3/time...) indicates a highly favored temporal sequence and (i_3/time - i_2/time - i_1/time...) indicates the least favored one (with temporal sequences in between these two extremes). Each thread is measured against its threshold (T), amplification (A), a latent timeframe (L), and elapsed time (E) for sequence-coordinated activation. When A exceeds T, the output is calculated as (1 - |L - E|) to output partners. Favored detection sequences can be defined by an integer for each input within a temporal sequential thread (a mutable, trainable param) representing the input's favored position within the temporal sequence. The process can be tuned by sensory reward detections over time, increasing mutation velocity in a direction, changing the param magnitude in that direction, acting on thresholds, and shifting the sequence of most value for each thread. There are further optimizations for this style of learning, such as extending it with threads that mutate other threads based on their activation levels, allowing mutation behavior to be inferred and leveraged by the network. Then it's a matter of reading the heatmap to see which parts of the network favor certain tasks in a specific time slice, and seeing the state transitions of the network when other distributions become active.
@mannyadisa 1 month ago
This is a wildly fantastic video
@danishawp32 3 months ago
Finally, you are back 🎉
@didack1419 3 months ago
I was thinking about your channel less than an hour ago.
@minos99 2 months ago
This was one of the good ones. I really loved it and hope part 2 comes out soon. Keep up the amazing production, sir.
@SteveRowe 3 months ago
This was really clear, accurate, and easy to follow. 10/10, would watch again.
@josephlabs 3 months ago
I wanted to do research on something like this a year or two ago. This is amazing, I've got some work to do with this.
@raresmircea 3 months ago
Exceptional pedagogical skill! I'm not able to hold these types of explanations in my mind, so any attempt at following such a web of relations would quickly have me lost. But this is a masterclass in clear, considerate communication 🙏
@翁祺-i7d 2 months ago
Man, really thankful for your content. I was fascinated by your video about TEM, and started trying to fully understand that network (and memory in general) in my leisure time about a year ago. I learned about latent variables, the transformer architecture (fantastic videos by Andrej Karpathy), autoencoders, etc., but got stuck at (modern) Hopfield nets, which I think are super important in the architecture of TEM. Very glad to see that you have started to touch on Hopfield nets; this is probably the best video about vanilla HNs I've ever watched. Really looking forward to your videos about Boltzmann machines and modern Hopfield nets. I always appreciate your videos!
@psuedonerd1236 3 months ago
Respect for using Coldplay 🔥🔥🔥
@Mede_N 3 months ago
Awesome video, as always. Just a small nitpick: your speaker audio jumps between the left and right audio channels, which is quite distracting, especially with headphones. You can easily solve this by setting the voice audio track to "mono" when editing the video. Cheers
@zeb4827 3 months ago
very cool video, keen to see how the broader argument progresses in this series
@u2b83 2 months ago
4:00 LOL "The ball doesn't search through all possible trajectories to select the optimal parabolic one." The visualization of the "trajectory space" is even funnier ;) I suspect there are different encodings for proteins with identical function, but which are more robust wrt folding consistently.
@shasun99 3 months ago
I've been waiting for your video for so long. Thank you so much
@thwhat6567 3 months ago
You're back!!! Awesome vid as always.
@jverart2106 3 months ago
I was reading and watching videos about metacognition and bayesian probability and now you have thrown me into a new rabbit hole! 😅 Your videos are incredible and it's great to have a new one. Thank you!
@shoaibanis6013 1 month ago
one of the most underrated channels.
@mariorossi7930 23 days ago
Super intuitive! Very well done! I will wait for the video on Modern Hopfield Networks :P
@Dawnarow 3 months ago
Thank you. This is unbelievably simple and potentially more accurate than any other speculation. Next step: determining the shape of the proteins and categorizing them. The tools may not be there yet... but a good hypothesis certainly helps to reach certain conclusions.
@GeoffryGifari 2 months ago
Isn't the 2nd law of thermodynamics more directly linked to entropy? Is there an analog for entropy in the associative memory network?
@jeffevio 2 months ago
Another great video! I really liked your energy landscape and gradient descent animations especially.
@Jaybearno 2 months ago
This was really interesting to listen to, and your intro triggered an intuition I never had before: the "weights" in associative memory (which you describe as proteins folding to minimize potential energy) are also designed to reinforce relative to *emotional stimuli*. This is an evolutionary adaptation and makes total sense: we want to remember the things we care about or that hurt us. What I didn't consider until this video is that this 'emotional weight' also behaves just like a database index, except it's bi-directional, and decaying! The surface would be similar to what you displayed, maybe like a paraboloid, where X is time, Y is memory complexity, and Z would describe the 'activation energy' required to perform recall. I think it would be interesting to somehow visualize memories on this plane. I don't know if I'm rambling, but this video alone just formed so many connections between things I never thought of!
@F_Sacco 3 months ago
Hopfield networks are amazing! They are studied in physics, biology, machine learning, mathematics, and chemistry. The rabbit hole goes extremely deep.
@agurasmask2210 3 months ago
Much love bro incredible video ❤ thank you
@swamihuman9395 3 months ago
- Excellent! Thx. - Very well presented: clear/concise, yet fairly comprehensive - and w/ great visualizations. - Keep up the great content!...
@GeoffryGifari 2 months ago
Those excitatory and inhibitory connections remind me of statistical correlation functions.
@odmirmiro 1 day ago
Buddy, the combo works great. I recommend it to everyone. Thanks!
@zix2421 1 month ago
That's a really interesting topic; as a programmer, I'm gonna try to create it myself :)
@antonionogueras6533 3 months ago
So good. Thank you
@spiralsun1 3 months ago
One of the best most Clear videos I’ve ever seen EVER❤ 🙏🏻THANK YOU ❤😊
@nothingtobelie 1 month ago
Very intelligent. As someone who finds this very relevant, the RGB lights add-on for the Pico 2 looks like a good way to visualize this somewhat affordably. I am going to read the 2016 paper. Geoffrey Hinton is amazing to hear beforehand. Thank you for creating this
@Raphael4722 2 months ago
This is really cool! Thanks for your work Artem!
@Velereonics 1 month ago
I can't recall lyrics at all. But I recall the music itself very clearly, and in the right octave/key, as well as the period of time when I first heard the song and had an autistic meltdown, listening to it around 5000 times over the course of a few weeks.
@GeoffryGifari 2 months ago
The protein example got me thinking. Is there only one unique folded configuration of lowest energy? Can there be multiple stable configurations anyway, and transitions between them?
@Alexander_Sannikov 1 month ago
I want to program this immediately; this is a great sign.
@ExistenceUniversity 3 months ago
This content is so high level, it's almost impossible to tell if it is true or not. Physically and philosophically, I have bought in, but my want of it to be true doesn't make it so. I cannot imagine this is wrong, but where does this come from? This stuff is just crazy and I don't know if it is crazy good or just crazy lol but I'm along for the ride
@jamesphillips9403 3 months ago
Holy cow, this makes a complex topic so intuitive.
@Harsooo 3 months ago
A joy to listen to and be amazed by :) Greetings from Austria, keep doing what you're doing!
@Masz0211 1 month ago
Imagine your favorite song is viva la vida by Coldplay, and you DON’T want to kill yourself. Crazy, right?
@pushyamithra223 2 months ago
please try to make more videos, your content is extremely good
@PastaEngineer 3 months ago
This is incredibly well put together.
@alexpetrov8871 12 days ago
It has always amazed me that I can sometimes recognize melodies I last heard about 30 years ago. Same with videogames: I haven't played any Atari videogames in the last 35 years, yet the moment I see them on YouTube I remember the sounds and pictures.
@GeminiI-yn4xb 2 months ago
Hello. It's been a while since you uploaded your last video. I hope you are well. Best videos, bro. Keep them coming 🎉
@therainman7777 1 month ago
Not to be pedantic, but systems’ tendency to minimize their potential energy is not directly related to the 2nd law of thermodynamics. The 2nd law states that closed systems increase in entropy over time; it does not deal with energy values directly.
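For readers wondering how the two statements connect, the textbook bridge (a standard thermodynamics result, stated here for context rather than anything claimed in the video) is that a system exchanging heat with a bath at fixed temperature maximizes total entropy precisely by minimizing its own free energy:

```latex
% Isolated system: entropy is maximized. System + bath at fixed T, V:
% maximizing total entropy is equivalent to minimizing the Helmholtz free
% energy, and in the low-temperature limit F ~ U, i.e. plain energy minimization.
\[
  F = U - TS,
  \qquad
  \Delta S_{\mathrm{total}} \ge 0 \iff \Delta F \le 0 \quad (\text{fixed } T, V).
\]
```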
@laurentpayot3464 3 months ago
Awesome. I just can’t wait for the next video!
@kellymoses8566 2 months ago
One reason I like using Neo4J is that graph networks seem to work a bit like human memory, with links between things making it fast to find related items.
@felipemldias 3 months ago
Man, I just love your videos
@porroapp 3 months ago
12:21 Watching this to maximise my happiness. Max happiness Min unhappiness, this is the way. Thank you!
@machi4744 1 month ago
you're the best for making this, i love you man
@futurisold 3 months ago
> "when you throw a ball the ball doesn't search through all the possible trajectories" QM has entered the chat
@andrew.sandler 1 month ago
same thought
@deotimedev 3 months ago
Thank you so much for creating this video, genuinely one of the most educational I've ever come across. I've been trying to learn more about how brains work since that's always been something I've been very curious about, literally since birth, and with entropy being my favorite physics concept, this video has just led to me googling and researching for the last 4 hours (it's 3am lol) trying to find out more. Really impressed with how complicated, yet still high-quality and clear, some of the topics in this are, and I'm really looking forward to watching the rest of your videos to learn more about how all of this stuff works in such an intricate way
@ArtemKirsanov 3 months ago
Thank you!!
@joeybasile1572 2 months ago
Please keep going. Keep dedicating your time to your pursuit of wonder.
@viniciusscala7324 2 months ago
Thank you for this video!
@rohan.fernando 2 months ago
Really good that you’re covering these foundational concepts of NNs. Cross-associative NNs, auto-associative NNs, and unsupervised learning are big missing pieces in today’s NNs.
@asdf56790 3 months ago
As always, outstanding video!
@Tangi_ENT 1 month ago
I love being here and asking "huh?" every two minutes. Stay blessed bro. Love these videos (even though I don't understand anything, I feel like I'm learning a lot)
@1495978707 1 month ago
3:45 Well, that's really all paradoxes are. The reason people typically consider them to be impossibilities is that they're stuck in the mindset of believing what they first learn, rather than automatically considering that their perspective may be wrong or incomplete.
@h.mrahman2805 2 months ago
Plz make a video about modern Hopfield nets or dense associative memory. Cuz it's a different and generalized perspective on Hopfield nets.
@Xylos144 3 months ago
Great video. A little sad to see that anti-training wasn't mentioned. It doesn't really solve the problem of training two sequences that are 'close' together, so that's fair. But it does help, and it has an interesting analogy with physiology. Essentially, if you try to train two sequences that are too close to each other, their valleys will overlap, which means you might aim for one specific sequence and end up falling into the other. And if they're really close, you'll actually create a new local minimum that sits between the two. In those cases, what you can do is identify all your local minima and then run the algorithm backwards, training hills on top of all your local minima. For stand-alone minima this doesn't matter, because they're still local minima. But if a minimum is a false sequence that sits between two or more neighboring targets, this builds a hill in between those neighboring valleys, helping to make the nearby sequences more distinct. As Geoffrey Hinton has pointed out, this has an interesting conceptual analog in dreaming, where we seem to replay experiences and concepts from our day (to a vague extent), and sleeping/dreaming also seems to help with learning and memory. Similarly, making the Hopfield network focus on its memories while playing them backwards, so to speak, helps to solidify its own memory. It may be little more than a metaphorical analog, but I think it's still quite interesting.
@ArtemKirsanov 3 months ago
That's exactly right! Boltzmann machines, which are an improved version of Hopfield nets, in fact do just that with contrastive Hebbian learning, by increasing the energy of "fake" memories. Hopfield networks, being the first model, don't have that property in the conventional form though. So we will talk about this in the Boltzmann machines video. Good catch!
@Xylos144 3 months ago
@ArtemKirsanov Ah, gotcha. I didn't realize the idea of 'anti-learning' applied to Boltzmann machines. I've only messed with restricted Boltzmann machines, and I always thought of them as stacked reversible auto-encoders. Never thought that the updating method might be replicating the same 'anti-learning' process, though it does make sense, since autoencoders are trying to make a bunch of weird, distinct hyper-dimensional valleys. Maybe it's more apparent with the more general Boltzmann machine. Looking forward to that video!
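A minimal sketch of the anti-training idea discussed in this thread, assuming the simplest version (Hebbian storage, then anti-Hebbian "unlearning" applied to whatever attractor the net falls into from a random start; this follows the spirit of Hopfield, Feinstein, and Palmer's 1983 "unlearning" procedure, not any algorithm from the video):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
patterns = rng.choice([-1, 1], size=(8, N))

W = sum(np.outer(p, p) for p in patterns) / N    # Hebbian storage
np.fill_diagonal(W, 0)

def settle(x, W, steps=20):
    """Relax a state into the nearest attractor of the network."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# "Dreaming": start from noise, find whatever minimum the net falls into
# (often a spurious blend of stored patterns), and build a small hill on it.
eps = 0.01
for _ in range(200):
    s = settle(rng.choice([-1, 1], size=N), W)
    W -= eps * np.outer(s, s) / N                # anti-Hebbian update
    np.fill_diagonal(W, 0)
```

True memories sit in wide basins and get revisited too, but with a small eps the net effect tends to erode the shallow spurious minima faster than the stored ones.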
@strangelaw6384 1 month ago
doesn't a principal component analysis solve this problem?
@tornyu 2 months ago
I love that you made everything dark mode. (Noticed when I saw the Wikipedia logo)
@studywithmaike 2 months ago
The outro music 🙏🏻🙏🏻🙏🏻
@max-ys1ei 2 months ago
Fabulous, bravo
@filedotnix 2 months ago
Super interesting to see that the hopfield network basically reinvents binary operations like XOR and XNOR for neurons, with the two differentiated by the weight.
@13lacle 2 months ago
Great video as always. For 22:50, has anyone tried stacking layers of Hopfield networks as a workaround? Basically, each layer acts in its own feature-level space and resolves that level's most likely feature, then passes it up to the next higher-order Hopfield feature space to be resolved there. It seems like this would let you store exponentially more patterns overall, as they get resolved separately, avoiding the overly busy energy landscape at the end. Also, interestingly, you can see how it would carve out the energy landscape from just the raw inputs this way: the Hopfield network is constantly comparing itself to some abstraction of the source input layer, meaning the more times a pattern is seen, the stronger it gets in the Hopfield network. Also, for faster convergence, it is likely that the greater the difference between xi and hi, the faster xi updates.
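For what the stacking proposal above might look like in the simplest case, a sketch (the patch size, pattern counts, and two-stage wiring are all invented for the example; this is the commenter's idea taken literally, not an architecture from the video): layer 1 cleans each patch toward a stored local feature, then layer 2 resolves which combination of features is a stored global memory.

```python
import numpy as np

rng = np.random.default_rng(3)

def hebbian(ps):
    W = sum(np.outer(p, p) for p in ps) / ps.shape[1]
    np.fill_diagonal(W, 0)
    return W

def settle(x, W, steps=20):
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Layer 1: four independent nets, each storing 4 local features over a 32-unit patch.
patches = rng.choice([-1, 1], size=(4, 4, 32))
W1 = [hebbian(patches[i]) for i in range(4)]

# Layer 2: global memories are combinations of one local feature per patch.
choices = rng.integers(0, 4, size=(6, 4))        # 6 global memories
memories = np.array([np.concatenate([patches[i, c[i]] for i in range(4)])
                     for c in choices])
W2 = hebbian(memories)

def recall(x):
    # Clean each patch locally first, then resolve the combination globally.
    x = np.concatenate([settle(x[i*32:(i+1)*32], W1[i]) for i in range(4)])
    return settle(x, W2)

noisy = memories[0].copy()
noisy[rng.choice(128, size=20, replace=False)] *= -1
print(np.array_equal(recall(noisy), memories[0]))   # usually True
```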
@marcoramonet1123 3 months ago
This is one of the best channels
@Ann-x1n 2 months ago
Thanks for such amazing content
@revimfadli4666 2 months ago
Does this mean artbros were right about diffusion models being databases?
@u2b83 2 months ago
My guess is they're "databases" of chaotic attractors. After all, the diffusion process is effectively a differential equation that evolves over time and settles in some stable basin.
@vinniepeterss 3 months ago
great video as always
@谭文龙-d1l 3 months ago
Can't wait to see this video!!!!!
@mahmoudhamdy4252 1 month ago
Thank you, very informative ❤❤
@ibrahimal-nuaimi1005 1 month ago
I'm subbing, this dude is amazing
@NeuroScientician 3 months ago
This looks like run-of-the-mill gradient descent; how does it resolve false bottoms?
@SilentderLaute 3 months ago
Another awesome video :)
@thegloaming5984 3 months ago
Can you do a video on the work of Dmitry Krotov showing that attention mechanisms are a special case of associative memory networks?
@foreignconta 3 months ago
Waited for this...!!
@mastershooter64 3 months ago
Just as I suspected, everything is just the principle of stationary action! It's all just making an action functional stationary. What if we considered non-local actions?
@simdimdim 2 months ago
2:16 a* great introduction
@keithwallace5277 3 months ago
Love you man
@GeoffryGifari 2 months ago
Reminds me of gradient descent
@РоманИгнатьев-ц7ж 2 months ago
That's because this is it.
@Actualhumanlive 2 months ago
Neurons are like strings of a piano, set into songs that vibrate in response to similar tones.
@is44ct37 3 months ago
Great video
@disgruntledwookie369 3 months ago
Ironically, within the framework of quantum mechanics, one could actually say that the ball *does* "search" every possible path in order to find the "correct" one. It simply performs the "search" in parallel, not sequentially. And it's less of a search and more of an average over all paths. The principle of stationary action is the driving principle behind Newtonian dynamics, and it itself follows directly from the interference between many "virtual" trajectories. It turns out that the paths which are close to the "true path" (the classical path) have very little variance in their action, which roughly speaking means that they end with nearly identical phase shifts (phase e^(iS/ħ), S = action, ħ = reduced Planck constant) and can interfere constructively, whereas paths which are far from the "true path" have wildly varying actions, even if two such paths are similar to each other. So they pick up big phase shifts and end up interfering destructively, leaving only the contributions from the paths "close to" the classically observed path. As far as I know this is the only way to derive the principle of stationary action, and the same basic idea is essential to finding transition probability amplitudes in QFT. It really does seem like the universe simultaneously tries all conceivable paths, superimposed together.
@asdf56790 3 months ago
One could also say this for optics with Fermat's principle or classical mechanics with Hamilton's principle. Even though variational formulations are mathematically beautiful, I'd be cautious to assume that "reality works this way" i.e. "searches through all paths". They are one equivalent description of many (even though it is remarkable that they pop up basically everywhere).
@disgruntledwookie369 3 months ago
@asdf56790 I agree with your caution. I'm just increasingly convinced that theory and experiment are pointing this way and the only obstacle is our flawed intuition and prejudice. We want there to be only a single, consistent reality. But this forces us into some intense mental and mathematical gymnastics to make the equations of QM fit observations. If you take the equations at face value then you have no trouble; you just have to contend with the idea that reality is not a single line of well-defined events, but multiple histories occurring simultaneously and generally able to interfere with each other. An electron passing through a Stern-Gerlach device would then actually travel both paths, in "separate worlds", but these paths can still interfere and superimpose so long as you don't take any steps to determine which path was taken. Like if you redirect the paths to converge into a single path and put the whole thing in a box so you can only see the output, you cannot determine which path was taken, and you can show experimentally that the output electron's superposition of spin states is preserved. But the universe doesn't know ahead of time (superdeterminism notwithstanding) whether you will take a peek in the box and catch the electron with its pants down. In my view, the explanation requiring the fewest assumptions is that all paths really are taken, but with the assumptions that 1. when we observe a property we can only observe definite values, not superpositions, and 2. paths can interfere (unless decoherence has occurred). A poor and rushed explanation, but this is kind of my thought process. As you say, there are many alternative interpretations. It's pretty much philosophy at this point. 😅
@raresmircea 3 months ago
What's the status of all those parallel paths? You've used the term "virtual", so you seem to view them as part of some kind of potentiality, not getting actualized in the observer-measured reality (I'm using these terms very loosely, I don't know exactly what they mean). If I've understood right, in the MWI there's no actual convergence on a path; each of the possible parallel paths is an actualized path that gets to be part of reality.
@mastershooter64 3 months ago
@asdf56790 This is true. All subsequent physical theories are simply better and better approximations of reality; we shouldn't assume reality works that way without more experimental and theoretical verification.
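For reference, the standard statement the thread above is circling (textbook path-integral notation, not the commenters' own):

```latex
% Every path x(t) from (x_a, t_a) to (x_b, t_b) contributes a unit-modulus
% phase set by its action. Near a stationary point of S the phases agree and
% add constructively; elsewhere they vary rapidly and cancel, leaving the
% classical path of stationary action.
\[
  K(b,a) = \int \mathcal{D}[x(t)]\, e^{\,i S[x]/\hbar},
  \qquad
  S[x] = \int_{t_a}^{t_b} L(x, \dot{x}, t)\, dt,
  \qquad
  \delta S = 0 \ \text{on the classical path}.
\]
```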
@AffectiveApe 25 days ago
At around the 13:10 mark I really had to suppress the urge to just stand up in the cafe I am working in and start slow clapping.
@peterfaber7124 3 months ago
Great explanation! This applies to fully connected networks, correct? So memories are Dense Distributed Representations. That's not how the brain does it; the brain uses the opposite, Sparse Distributed Representations. Is there any way you could explain it using SDRs?
@Mohammad-nv1wv 1 month ago
Best YouTube channel out there ❤
@edsonjr6972 3 months ago
My God, your videos are amazing
@nothinginteresting1662 2 months ago
For every memory, there is a cue associated. This cue is formed when the memory is created. It will only be created if it is worthwhile to the brain. Cues are usually emotional. Something emotional is easily remembered. Because that emotional experience is important to the brain. Now when you come across any information, the brain doesn't have to search the whole memory. It merely needs to find the cue. That cue will help you remember. If you are familiar with programming, the cue acts as a pointer to the memory. Any information that you come across that includes the cue will cause the brain to recall the memory associated with that cue.
@neon_Nomad 3 months ago
I use an Excel spreadsheet for all my memories
@waff6ix 3 months ago
STUFF LIKE THIS IS SO INTERESTING TO ME TO LEARN😮‍💨GOD DESIGNED SUCH AN AMAZING CREATION🤩🙏🏾🙏🏾🙏🏾
@Kram1032 3 months ago
It seems to me this "fire together, wire together" notion is actually also present in the attention mechanism of transformers, except instead of a 1D, 1-bit +/- you have an n-D dot product. This still has the same basic structure: neurons try to align more closely with each other. But the added dimensions give each neuron more ways to accomplish that: among three neurons, one neuron can be pretty close to the other two while those two are pretty far away from each other.
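The parallel drawn above can be made concrete with one attention step read as soft associative recall; a sketch (plain NumPy, toy vectors; the reading follows Ramsauer et al., "Hopfield Networks is All You Need", 2020, though the code is just the standard dot-product attention formula):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(4)
d = 16
memories = rng.normal(size=(5, d))               # stored patterns act as keys and values
query = memories[2] + 0.3 * rng.normal(size=d)   # a noisy cue

beta = 8.0                                       # inverse temperature: higher = sharper recall
weights = softmax(beta * memories @ query)       # soft address over all memories
recalled = weights @ memories                    # weighted readout, close to memories[2]
print(np.argmax(weights))                        # 2: the cue retrieves the right memory
```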
@Tutul_ 2 months ago
Since the neurons have two edge weights (A->B and B->A), might that explain the case where a memory is just out of reach and we get locked trying to retrieve it?
@os2171 2 months ago
Your lectures are outstanding! This vision of the Hopfield network resembles the evolutionary landscapes of adaptation, right?
@lovemacom3448 21 days ago
Thanks!
@thisismambonumber5 1 month ago
your intro is beautiful btw