STAY TUNED: Next video will be on "History of RL | How AI Learned to Feel" SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1 WATCH AI series: ru-vid.com/group/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ
So, if neural networks can't reason... why do people call it "artificial intelligence," when intelligence and learning aren't the same thing? For me, neural networks are a good way to store patterns and return the result we want... with brute force.
I don't mind that you take your time making these. Your meticulous script preparation & attention to production values allow you to pack massive amounts of information into these videos. You are creating "aha!" moments & rewiring neurons around the world. Bravo!
Video is absolutely awesome. The only thing that seemed missing to me is the difference between neural networks and other well-known mathematical models (relational database design and analytics).
Dude, this is excellent work. You have explained the secret of neural networks in a really beautiful way. It takes real understanding to be able to distil the information like this. Thank you so much for this.
@@ArtOfTheProblem Yeah, I went there to watch the series, but there were only four videos and this one is the last... I thought there were going to be some more videos... I haven't watched it yet but I will! I love your explanations, everything is perfect! You're a great teacher!!💜💜💝💝
@@NiteshKumar-ss8zd Thanks so much. I'm still going to make a final video for this series when I get the time and feel like I have a strong thesis for it.
I know a little about computers. It used to be a lot, but then I retired and computers and computing moved on. This was a wonderful explanation: not too fast, not in the least boring, and I learned some things. Thank you and KUDOS!
Even though I've seen these concepts before this video does a great job of slowly building up the ideas and bringing the viewer along to the next level of understanding. This was very good. Thank you for taking the time and effort to put this together.
I think the genius here, honestly, is maintaining the output neuron vector as points in 3D space the whole way through: the way to divide points into groups, and the combinations becoming origami folds for depth. At 12:01 I finally understood that these differing output patterns all fit inside a 3D space, meaning a brain. I can imagine these little lit-up paths in a brain that the data goes through, but instead of a radioactive isotope, it's a component of a storm cloud, and it routes down the pathway... You illustrated the finitude of possible induction in perception space, and then at the end what a limited number of neurons can represent while keeping things distinct and recognizable, fulfilling their purpose. Yet we know there's this infinity of things that can be represented in that process. It's really magical, because we go from finitude to infinity and back, without stopping, and without doubling back the way we came.
And what just gave me the chills was that I paused just after 12:00 to write these comments, calling it what at that moment I thought it was: magic. And your next line was "and so the magic is..." Not to get corny about it, but whoa, serendipity. Read that as a testament to the editing, I guess. Amazing job on this series. I really did wait this long to watch it all, haha.
I prefer your videos over 3b1b's. You include a variety of backgrounds/contexts that help me pay more attention (and not get stuck on the monotone black background with animations). Thank you!!!
Hi, your pictures and explanations are just too good: clear, coherent, and they made sense. That's how things should be explained. I want to cite your pictures and some of the wording, and I have no problem mentioning a YouTube link instead of a textbook even though it's not peer-reviewed. I was wondering: is it OK if I use the link in my bibliography, or do you have a proper article written on this?
You made amazing videos on Khan Academy years back and I've finally stumbled upon your criminally small channel. Keep up the good work, I hope the algorithm tips in your favor one day.
Your videos are a thing of beauty! The attention to detail is fascinating, especially how it clarifies the concepts that are explained. I can only imagine how beautiful the world would be if everything was explained in this manner!
Wow...! This was clearly the best explanation of neural networks I've ever seen! For a while I even thought I understood them... ;-) Great vid, thx!
Very nice. Not so sure about the folding paper, but the visualisations really show how the coordinates are transformed from the complicated manifolds to the relatively simple clusters, and that visualisation can possibly help guide neural network design. Shame you weren't able to answer the final question, ha ha ha!
This is fascinating, and the best explanation I've ever seen for how neural networks actually work. You have earned my sub, and I look forward to more insightful explanations of a topic that boggles my mind!
I like the temperature and pressure on different axes analogy, but what would the parameters be for a picture of a dog? Would each pixel be an axis? And what would be the value that it is measuring for a pixel?
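(Not an official answer, just a minimal sketch of how images are commonly fed in, assuming a grayscale picture with per-pixel brightness as the measured value; the 28x28 size is made up for illustration:)

```python
import numpy as np

# Hypothetical 28x28 grayscale "dog" picture: every pixel is one axis,
# and the value measured on that axis is the pixel's brightness
# (0.0 = black, 1.0 = white).
image = np.random.rand(28, 28)

# Flattening turns the picture into a single point in 784-dimensional
# space, which is the input vector the network actually sees.
point = image.flatten()
print(point.shape)  # (784,)
```

So yes: one axis per pixel, and the coordinate along that axis is the pixel's intensity.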
Thank you for the feedback. At the moment I'm in the rough drafting stage of the video that follows this one. It will probably take me another month or so to write.
@@ArtOfTheProblem Thank you so much! I look forward to watching that. I really liked your use of visual analogies, such as paper folding, to better understand what's happening inside of the neural network.
This is for everyone who has no idea what a neural network is, or who thinks they have a good idea of what it is. Nothing is more powerful than turning a complex idea into a simple one we are already familiar with. I'm surprised how similar a neural network is to the concept of a decision tree/regression tree, which also uses a bunch of AND gates to make a prediction.
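(A minimal illustration of that AND-gate intuition, not code from the video: a single threshold neuron with hand-picked weights behaves exactly like an AND gate.)

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Classic threshold unit: fire (1) if the weighted sum crosses 0.
    return int(np.dot(inputs, weights) + bias > 0)

# Weights [1, 1] and bias -1.5 make this neuron an AND gate:
# only when both inputs are 1 does the sum (2 - 1.5) exceed 0.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [1.0, 1.0], -1.5))
# 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1
```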
I think there is a typo at 5:15. The active and inactive labels should be flipped for one of the lines drawn, for consistency: if the circles represent 'active' data points, the active/inactive labels for the slanted line at the right should be flipped.
Watching these videos makes me feel just like how I did as a child watching the National Film Board of Canada videos. You've made the correct patterns, well done.
I'm from the accounting field and randomly got this video from Reddit. I have to tell you, your explanation and way of presenting is not just good, it's interesting too. Please continue doing what you are doing.
Say whatever you want, but the fact that patterns in the last layer and some NN-generated images look like acid fractal hallucinations is astonishing. Too many things in completely different fields resemble each other. It makes me feel like we are very close to something like describing the whole world with a function and approximating all the values inside it using some mega-powerful AI.
Amazing video, although the occasional background noise was quite distracting. For example, the one that started at 4:00 was pretty annoying, and I had to rewind the video multiple times to be able to focus on the material. Overall, a great simplification of such a concept.
As someone who is quite new to all of this, I keep wondering: if we solve a certain problem using a NN, why not just add more layers? Is this due to computational limitations? Or after a certain number of partitions, does adding more not increase accuracy?
So good. The layering was such an important lesson to learn. With the 3D simulation it looks like a cloudy rainbow Rubik's cube being twisted and turned in our minds. The ramifications of these learnings are infinite. Imagine what perceptions our minds as sensory identifiers are not perceiving yet, and the avenues of worlds this has the ability to open up as we simply use more complex sets of neural sensory functions in our bodies and increase our pattern recognition as individuals and as a social and planetary society. Edit: I am going to have to go to the beginning of the series and count my blessings.
Like Graham Todd said in his comment, your vids always deliver waves of "Aha!" moments that join previously distant or incoherent bits of our minds. I hope these vids reach as many schools as possible; kids would benefit immensely, and so would the larger society of tomorrow. Thanks 🤘
Wow... this presentation is a winner; it's epiphanically good... I just realized that what we experience as our 3D mental space is not "an object" but a momentum, a summed illusion built from all the hard work along the way, rather than a hidden 3D holographic chamber tucked at the deep back end of the brain. It's much like how we perceive the illusion of time or gravity or consciousness as "one thing" when it is in fact the working dynamics of many factors too complicated to be directly visible to the average person. Our brain sums them up as an object and, worse, names them as "one object," because that's what we do (even though the purpose is so that we can easily describe the world as a useful prediction tool to be applied in everyday life). Suffice to say, I now feel a bit mixed up remembering the way Europe's Human Brain Project was presented: showing a false-colored shadowy figure of a red flower within the jungle of neurons, some distance away from the exposed retina... So silly of me to be awed by that, back in the day... Anyway, love this, thank you so much!
@@ArtOfTheProblem I fell into the rabbit hole while searching for specific subjects as it got deeper and your addictively well-presented thought provoking series kept coming up, your titles and thumbnails evolved from "Hmmm... interesting" to "If I see you I will click you!"🙂
Holy cow, I never understood it this way. I wish I had known this from the beginning, I think other NN things I had learned would have been cast in a different light.
This is the most beautiful, deep presentation on neural networks I have seen. It has given me another depth of understanding. Thank you so much. I would love it if you could provide a reading list for this series, to take my studies further.
You say that the neurons are "on" or "off". So neurons in AI don't have continuous values? I watched 3blue1brown's video, and his presentation says that they're continuous. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-IHZwWFHWa-w.html
Some are binary (especially historically), but between "on" and "off" they can have various continuous or discrete responses. Some, like ReLU neurons, are continuous on the "on" side.
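(A quick sketch of that difference: a binary step neuron is all-or-nothing, while a ReLU neuron is "off" at zero but continuous on the "on" side.)

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)   # binary: fully on or fully off

def relu(x):
    return np.maximum(0.0, x)      # off below 0, continuous above

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(step(x))  # [0. 0. 1. 1.]
print(relu(x))  # [0.  0.  0.5 2. ]
```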
Wow very well done, and informative as usual. Thank you so much for the thoroughness in your explanation. One of the most underrated RU-vidr's of all time!
Do you mind telling me what software you used to make the visualizations, like the number lines, planes, volumes, or the neural network layers? They are masterpieces.
Edit: You just answered... :) A question that you might address later on (I've been thinking about it and I'm currently at 4:38) - What if you want the A/C to start when the temperature is above 28C (cool) AND below 19C (heat)? A line won't do. Or, activate if all inputs are on or all inputs are off? A plane won't do. I'm guessing more than one neuron would be needed, and in the case of cool/heat/let-it-be - three outputs...
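(A sketch of that guess, with hypothetical thresholds: two threshold neurons each draw one "line" on the temperature axis, which a single line cannot do, and three outputs cover cool/heat/let-it-be.)

```python
def step(x):
    return 1 if x > 0 else 0

def thermostat(temp):
    too_hot = step(temp - 28)    # first neuron: fires above 28 C
    too_cold = step(19 - temp)   # second neuron: fires below 19 C
    # Three outputs, as guessed: cool / heat / let-it-be.
    if too_hot:
        return "cool"
    if too_cold:
        return "heat"
    return "let-it-be"

for t in (15, 22, 31):
    print(t, "->", thermostat(t))  # heat, let-it-be, cool
```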
Neural networks, with their incompressible behaviour, are very dangerous. You don't expect every technology to come without risk, but not on this level... an amateur response...
What is reason? would be the first question, I think. Can you program reason? Or rather, can reason emerge from an automated process? From coding to philosophy.
Can someone point me to more resources on the idea of mapping the perception space into the concept space? I get it intuitively but would like a more thorough treatment.
Wouldn't it be easier to work with odd roots and tangents visually if the axes were signed perpendicular to themselves? It seems to be suggested at about 8 minutes. Why stop there? What if infinity was the origin and the signs were at the ends? What if the graph used the point of the Z-axis for complex or signed-zero graphing, which is useful in isolating fusion cells? It sounds like they are trying to suggest the Facebook pixel is part of a global image using a sonar-style muon.

Plus, I am pretty sure that after you leave a movie theater and first step outside on a bright and sunny day, where you instinctively stick your hand out to block the sunlight and your hand warms, you find a disconnect between rejecting the light for the dark but preferring the warmth over the cold. So it seems that our fundamental wiring's most basic sense, touch (heat, pressure), still holds true, but the configuration of the higher-order senses has a half-duplex loopback on the IO configuration. Further, every sense has a trigger for signals, i.e. goosebumps, sneezes, or ringing ears. The implementation is fascinating.

Ultimately, the pliable growth structure we represent has thermodynamics-induced crying from a warm dark womb into a cold bright world, instinctively drawn to the darkness but consciously preferring the warmth. I could go on forever. We leave the world with dim eyes, cold skin, and rigid bones, with people we never talked to anymore showing up at our funeral claiming they loved us, but not enough to pick up the phone and call in the last 20 years...
Hey, I borrowed some from here: colah.github.io/posts/2015-01-Visualizing-Representations/ and here is my working script: docs.google.com/document/d/1uxiLHv2Lwo5JfTh19ZHSiXnRPOJsnvTjUsmi4jiOQrU/edit
Does the network understand that a dog consists of dog parts 🐕, or does it just represent the image and the word "dog"? Does the complexity give the network an understanding that an output could consist of parts of other outputs?
NNs do not learn, and the senseless usage of the term applied to them is astonishing. They do not learn any more than a simple linear regression "learns." Learning is vastly different: learning has to conjure up the connections. NNs and data fitting simply take the data humans give them and find a semi-optimal fit that can be used for interpolation and rarely works for extrapolation. It is very important to understand the distinction, because most people won't, and this makes AI very dangerous when it is assumed it can learn. Maybe one day, with enough AI systems all linked together and the ability to memorize/store massive amounts of data and process it in semi-real time, they will be able to learn... but that's at least 50 years in the future, if not 500. Even then there will be issues. Right now AI is just a set of very good lossy compression algorithms.
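(Whatever one thinks of the broader claim, the interpolation-vs-extrapolation point is easy to demonstrate. A toy sketch with a polynomial fit rather than a neural network; the function, range, and degree are all made up for illustration:)

```python
import numpy as np

# Fit a degree-7 polynomial to sin(x) sampled on [0, 6].
x_train = np.linspace(0, 6, 50)
y_train = np.sin(x_train)
coeffs = np.polyfit(x_train, y_train, deg=7)

inside = np.polyval(coeffs, 3.0)    # interpolation: inside the data range
outside = np.polyval(coeffs, 12.0)  # extrapolation: far outside it

print(f"sin(3)  = {np.sin(3.0):.3f},  fit says {inside:.3f}")   # close
print(f"sin(12) = {np.sin(12.0):.3f}, fit says {outside:.3f}")  # way off
```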
Does AI filter the parts of its outputs? Do all the parts in a "dog" consist of dog features, or could some partly be cat, while at the same time a "cat" could consist of dog content?