How AI Learns Concepts 

Art of the Problem
136K subscribers
174K views

Published: 29 Sep 2024

Comments: 428
@ArtOfTheProblem 4 years ago
STAY TUNED: Next video will be on "History of RL | How AI Learned to Feel" SUBSCRIBE: www.youtube.com/@ArtOfTheProblem?sub_confirmation=1 WATCH AI series: ru-vid.com/group/PLbg3ZX2pWlgKV8K6bFJr5dhM7oOClExUJ
@PLATONU 4 years ago
So, if neural networks can't reason... why do people call it "artificial intelligence" when intelligence and learning aren't the same thing? For me, neural networks are a good way to store patterns and return the result we want... with brute force.
@navinpandey1309 3 years ago
Thank you so much. I had been struggling to understand the concepts behind neural networks. You explained it to us so nicely.
@stevesmith291 2 years ago
This is maybe the best explanation I have seen of a topic that is rather elusive. I will watch this video again!
@ArtOfTheProblem 2 years ago
@@stevesmith291 so happy to hear it
@pranavbhagwat1734 2 years ago
This was very informative and explained the depth advantage in a really easy to grasp manner. Thank you!
@GrahamTodd-ca 4 years ago
I don't mind that you take your time making these. Your meticulous script preparation & attention to production values allow you to pack massive amounts of information into these videos. You are creating "aha!" moments & rewiring neurons around the world. Bravo!
@ArtOfTheProblem 10 months ago
this one took a long time: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OFS90-FX6pg.html
@Davld1996 6 months ago
Just a remarkable video. The most clear explanation of NNs I’ve ever seen. Really well done.
@ArtOfTheProblem 6 months ago
thank you, glad you found this as it's buried deep in the results!
@ArtOfTheProblem 5 months ago
New video is up on Evolution of Intelligence ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-5EcQ1IcEMFQ.html
@dm3on 4 years ago
Video is absolutely awesome; the only thing that seemed missing to me is the difference between neural networks and other well-known mathematical models (relational database design and analytics).
@neoblackcyptron 2 years ago
Dude, this is excellent work, you have explained the secret of neural networks in a really beautiful way. It takes real understanding to be able to distil the information in such a beautiful way. Thank you so much for this.
@ArtOfTheProblem 2 years ago
appreciate it, i worked really hard on this and hit a wall after it. I'm still planning to follow up with more on sequential networks.
@NiteshKumar-ss8zd 2 years ago
Such a good explanation, I didn't know anything about neural networks, and still I understood it at full length!!
@ArtOfTheProblem 2 years ago
thrilled to hear this! did you watch the whole series?
@NiteshKumar-ss8zd 2 years ago
@@ArtOfTheProblem yeaa I went there to watch the series but there were only four videos and this one is the last... I thought there were gonna be some more videos... I haven't watched them all yet but I will! I love your explanations, everything is perfect! You're a great teacher!!💜💜💝💝
@ArtOfTheProblem 2 years ago
@@NiteshKumar-ss8zd thanks so much, I'm still going to make a final video to this series when I get the time and feel like I have a strong thesis for the video
@victorvsl 4 years ago
I'm leaving this comment here for the RU-vid algorithm.
@NelsonIngersoll 4 years ago
I know a little about computers. It used to be a lot, but then I retired and computers and computing moved on. This was a wonderful explanation. Not too fast, not in the least boring, and I learned some things. Thank you and KUDOS!
@britcruise2629 4 years ago
so nice to hear
@ArtOfTheProblem 10 months ago
finally done: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OFS90-FX6pg.html
@jamesallen74 4 years ago
Let's all be real here, that last layer is really just on LSD. That's how it all works. Those were some trippy images. Joking aside, fantastic video!
@MRKS8 3 years ago
hey keep going with the videos. The quality of your vids easily justifies 2M subs -- you’ll blow up eventually
@BluePhoenix986 4 years ago
Yes! A new Art of the Problem video!
@heidtmare 4 years ago
Even though I've seen these concepts before this video does a great job of slowly building up the ideas and bringing the viewer along to the next level of understanding. This was very good. Thank you for taking the time and effort to put this together.
@ArtOfTheProblem 10 months ago
next part: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OFS90-FX6pg.html
@hafty9975 3 years ago
never would have imagined this stuff in this way. the patience and care of thought behind it is just, like, therapeutic to take in. a million thanks man
@hafty9975 3 years ago
beautiful
@hafty9975 3 years ago
i think the genius here, honestly, is maintaining the output neuron vector as points in 3d space the whole way through. the way to divide points into groups, and combine them, becoming origami folds for depth. at 12:01 i finally understood that these differing output patterns all fit inside a 3d space, meaning, a brain, like, I can imagine these little lit up paths in a brain that the data goes through, but instead of like a radioactive isotope, it was a component of a stormcloud, and it routes down the pathway... You illustrated the finitude of possible induction in perception space, and then at the end what a limited number of neurons can represent while keeping things distinct and recognizable, fulfilling their purpose. Yet we know there's this infinity of things that can be represented in that process. it's really magical, because we go from finitude to infinity and back -- without stopping, and without doubling back the way we came.
@hafty9975 3 years ago
and what just gave me the chills was that i paused just after the 12:00 mark to write these comments calling what at that moment i thought was magic, and your next line was "and so the magic is..." Not to get corny about it but woah, serendipity. read that as testament to the editing i guess. amazing job on this series. i really did wait this long to watch it all ahah
@ArtOfTheProblem 3 years ago
@@hafty9975 we are definitely in sync
@tobias5740 4 years ago
The channel is alive!
@abdullahalmahfuz6700 2 years ago
I clicked the video after seeing the thumbnail of a simple piece of paper, thought it would be an easy tutorial🙃
@cipherxen2 2 years ago
This video is a must watch for the so-called "machine learning experts"
@narenmani07 3 months ago
i prefer your videos over 3b1b. you include a variety of backgrounds/contexts to help me pay more attention (and not get stuck to the monotone black bg with animations). thank you!!!
@ArtOfTheProblem 3 months ago
thank you for feedback, working hard on next video now
@AndersonSilva-dg4mg 4 years ago
wow, new video, thank you so much!
@vicuppal 4 years ago
It was fascinating to see the images when probing the different layers. The paper folding example was great at explaining this at least for me.
@ArtOfTheProblem 10 months ago
3 years later, I finished the next part: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OFS90-FX6pg.html
@sarthakbhole3724 2 years ago
Hi, your pictures and explanations are just so good, clear and coherent, and they made sense. That's how things should be explained. I want to cite your pictures and some of the wording, and I have no problem mentioning a YouTube link instead of a textbook even though it's not peer reviewed. I was wondering: is it ok if I use the link in my bibliography, or do you have a proper article written on it?
@ArtOfTheProblem 2 years ago
That's so awesome, I'd love it if you used that link. Please share whatever work you are doing too, thanks
@Friedolin-qz9id 4 years ago
this deserves a lot more views
@zionj104 4 years ago
You made amazing videos on Khan Academy years back and I've finally stumbled upon your criminally small channel. Keep up the good work, I hope the algorithm tips in your favor one day.
@ArtOfTheProblem 4 years ago
glad you found me Zion, I hope for the tip one day too. thanks for the support
@MohkKh 4 years ago
Sorry for my English. I registered for this channel many years ago and waited eagerly for videos.
@RokoThEMaster 4 years ago
Your videos are a thing of beauty! The attention to detail is fascinating, especially how it clarifies the concepts that are explained. I can only imagine how beautiful the world would be if everything was explained in this manner!
@ArtOfTheProblem 4 years ago
this comment made my day thank you
@CreeperSlenderman 3 years ago
@@ArtOfTheProblem Yeah, learnt sin cos tan a bit by programming a circle and i understood kinda
@ArtOfTheProblem 10 months ago
finally done ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OFS90-FX6pg.html
@robosergTV 4 years ago
a single hidden layer is enough for any problem, it's just that the number of neurons will be very large.
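This is the universal approximation idea: one hidden layer can in principle fit any reasonable function, but its width may have to grow enormously, whereas depth reaches similar expressive power with far fewer neurons. A rough parameter-counting sketch in Python (the layer sizes are made up for illustration, not taken from the video):

```python
def mlp_params(layer_sizes):
    """Total weights + biases in a fully connected net with the given layer sizes."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A shallow net buys expressive power only by widening its single hidden layer...
shallow = mlp_params([2, 1024, 1])
# ...while a deep net reuses earlier "folds", so its linear regions multiply with depth.
deep = mlp_params([2, 16, 16, 16, 16, 1])

print(f"shallow (one hidden layer of 1024): {shallow} parameters")
print(f"deep (four hidden layers of 16):    {deep} parameters")
```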
@tolex3 4 years ago
Wow...! This was clearly the best explanation of neural networks I've ever seen! For a while I even thought I understood them... ;-) great vid, thx!
@hmm7458 4 years ago
beautifully crafted... we can see the hard work you have put into it.. subbed
@JamesMart 4 years ago
Thank you! Extremely helpful visualizations
@wybird666 2 years ago
Very nice. Not so sure about the folding paper, but the visualisations really show how the coordinates are transformed from the complicated manifolds to the relatively simple clusters, and that visualisation can possibly help guide neural network design. Shame you weren't able to answer the final question, ha ha ha!
@ArtOfTheProblem 2 years ago
thanks I got held up working on the last video, I will get to it eventually
@stevepittman3770 4 years ago
This is fascinating, and the best explanation I've ever seen for how neural networks actually work. You have earned my sub, and I look forward to more insightful explanations of a topic that boggles my mind!
@Financeification 2 years ago
Wow, you are good at teaching. Making obvious the nonobvious is extraordinarily complex.
@ArtOfTheProblem 2 years ago
thank you, i'm still planning to do a follow up to this on sequential problems
@stanleyhe5775 5 months ago
I like the temperature and pressure on different axes analogy, but what would the parameters be for a picture of a dog? Would each pixel be an axis? And what would be the value that it is measuring for a pixel?
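For an image the answer is yes: each pixel gets its own axis, and the value along that axis is that pixel's brightness, so a 28x28 grayscale digit becomes a single point in a 784-dimensional perception space. A minimal NumPy sketch (the random image here is just a stand-in for a real digit or photo):

```python
import numpy as np

# Stand-in for a 28x28 grayscale image; each entry is a pixel brightness in [0, 1].
image = np.random.rand(28, 28)

# Flatten it: every pixel becomes one coordinate (one axis) of the input point.
point = image.reshape(-1)

print(point.shape)   # (784,) -- one axis per pixel
print(point[0])      # the value measured on axis 0 is just pixel 0's brightness
```

A colour photo of a dog works the same way, only with three brightness values (red, green, blue) per pixel, so the dimension becomes width x height x 3.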
@UnPuntoCircular 4 years ago
Thanks for making these videos. The paper folding part analogy was really GOOD!
@amoghdadhich9318 2 years ago
It's actually a paper by Yoshua Bengio - "On the Number of Linear Regions of Deep Neural Networks"
@ArtOfTheProblem 10 months ago
long time no see: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-OFS90-FX6pg.html
@nityaaryasomayajula2204 3 years ago
Just watched this now, and this explanation is absolutely amazing. Please make more videos about ML :)
@ArtOfTheProblem 3 years ago
thank you for the feedback. At this moment I'm in the rough drafting stage of the video which follows this one. It will probably take me another month or so to write
@nityaaryasomayajula2204 3 years ago
@@ArtOfTheProblem Thank you so much! I look forward to watching that. I really liked your use of visual analogies, such as paper folding, to better understand what's happening inside of the neural network.
@ivanlo7195 1 year ago
This is for everyone who has no idea what a neural network is, or who thinks they have a good idea of what it is. Nothing is more powerful than turning a complex idea into some simple idea that we are already familiar with. I'm surprised how similar a neural network is to the concept of a decision tree/regression tree, which also uses a bunch of AND gates to make a prediction
@theinthanhlan3582 2 years ago
me: researching how to print "Hello World"
others from history: researching "how a computer can learn"
@judo-rob5197 2 years ago
Very well explained. I have a beginner question: are the partitions only linear hyperplanes? Is there any non-linearity involved?
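A sketch of an answer (an editorial assumption, not a reply from the video's author): each individual neuron's decision boundary w·x + b = 0 is a flat hyperplane, and the non-linearity comes from the activation function applied on top of that linear cut; stacking layers composes many such cuts, which is what produces the folded, piecewise-linear partitions the paper analogy illustrates. A minimal NumPy example with arbitrary weights:

```python
import numpy as np

# Arbitrary weights: the boundary w.x + b = 0 is a straight line in 2D (a hyperplane in general).
w, b = np.array([1.0, -2.0]), 0.5

def relu_neuron(x):
    # The cut itself is linear; the non-linearity is the activation wrapped around it.
    return np.maximum(0.0, w @ x + b)

print(relu_neuron(np.array([3.0, 1.0])))   # on the "active" side: 1.5
print(relu_neuron(np.array([0.0, 1.0])))   # on the "inactive" side: 0.0
```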
@vedhasp 2 years ago
I think there is a typo at 5:15. Active and inactive should be flipped for any 1 line drawn for consistency. If the circles represent 'active' data points, the active-inactive labels for the slant line at the right should be flipped.
@MrTexMart 4 years ago
Watching these videos makes me feel just like how I did as a child watching the National Film Board of Canada videos. You've made the correct patterns, well done.
@britcruise9101 3 years ago
i grew up watching these
@bg-mq5hz 8 months ago
You are a great thinker and an equally good presenter. Thank you for sharing.
@khoakirokun217 9 months ago
5:02, oh no, this triggers my OCD, but great video.
@nodyuejim 1 month ago
i see a cat i click
@mikoajjedrzejewski3676 2 years ago
Amazing video! Really appreciate this, great work :)
@jennyone8829 2 years ago
Thanks for being a point in my neural network... I appreciate your genius 🎈❤️
@ArtOfTheProblem 2 years ago
glad you enjoyed this video thanks
@Pakalaakhil 4 years ago
I'm from the accounting field. Randomly got this video from Reddit. I have to tell you, your explanation and way of presenting is not just good, it's interesting too. Please continue doing what you are doing.
@yum33333 4 years ago
What an excellent video.
@iberiaaydin 4 years ago
You sir, you deserve much more attention. Very well illustrated and clearly explained. Thanks.
@jarrenvanman2570 4 years ago
Holy crap... That was an amazing video!
@moo3oo3oo3 2 years ago
Great video. Was hoping to see more "maths" in the video though
@PK-mx3tv 3 years ago
the best explanation I've ever heard, truly intriguing
@ArtOfTheProblem 3 years ago
thrilled to hear it, still trying to crack the next video
@arc8dia 7 months ago
If someday I prove P=NP, I'm donating half to you. This channel is that inspirational to me.
@ArtOfTheProblem 7 months ago
godspeed! thank you!
@vladislava5237 10 months ago
Say whatever you want, but the fact that patterns in the last layer and some NN-generated images look like acid fractal hallucinations is astonishing. Too many things in completely different fields resemble each other. Makes me feel like we are very close to something like describing the whole world with a function and approximating all the values inside it using some mega-powerful AI
@ArtOfTheProblem 10 months ago
totally agree
@MaysamKiani 2 years ago
Amazing video, although the occasional background noise was quite distracting. For example, the one that started at 4:00 was pretty annoying and I had to rewind the video multiple times to be able to focus on the material. Overall a great simplification of such a concept.
@ArtOfTheProblem 2 years ago
thanks for the feedback
@kenmeylemans1528 1 year ago
As someone who is quite new to all of this, I keep wondering: if we solve a certain problem using a NN, why not just add more layers? Is this due to computational limitations? Or after a certain number of partitions, does adding more not increase accuracy?
@kenmeylemans1528 1 year ago
Same with the number of neurons in each layer, how do you determine those?
@Virus3652 4 years ago
Sometimes I wish RU-vid had a super-like button or something to express how much I like this
@phillipmorgankinney881 4 years ago
Fantastic video.
@shawnbibby 9 months ago
So good. The layering was such an important lesson to learn. With the 3D simulation it looks like a cloudy rainbow Rubik's cube being twisted and turned in our minds. The ramifications of these learnings are infinite. Imagine what perceptions our minds as sensory identifiers are not perceiving yet, and the avenues of worlds that it has the ability to open up as we simply use more complex sets of neural sensory functions in our body, and increase our pattern recognition as an individual, social and planetary society. edit: I am going to have to go to the beginning of the series and count my blessings
@ArtOfTheProblem 9 months ago
let me know what you think after finishing the series as I'm working on a follow up
@yt-sh 4 years ago
I'm leaving this comment here for the RU-vid algorithm. - victorvsl
@ArtOfTheProblem 4 years ago
appreciate it
@sudo-rimraff 4 years ago
Your content is brilliant. Thank you!
@MatthewKelley-mq4ce 2 months ago
This was an interesting way to look at it. Thanks for the well done video 👍
@ArtOfTheProblem 2 months ago
thanks, stay tuned!
@raresmircea 4 years ago
Like Graham Todd said in his comment, your vids always deliver waves of "Aha!" moments that join previously distant or incoherent bits of our minds. I hope these vids reach as many schools as possible; kids would benefit immensely, and so would the larger society of tomorrow. Thanks 🤘
@yousseffatihi3702 2 years ago
That was magnificent, I mean really really really super breathtaking.
@ArtOfTheProblem 2 years ago
thanks Youssef, thrilled YouTube started to surface this video, I worked my ass off on it
@captainjj7184 1 month ago
Wow... this presentation is a winner, this is epiphanically good... I just realized that what we experience as our 3D mental space is not "an object" but a momentum, a summed illusion of all the hard work along the way, rather than a hidden 3D holographic chamber tucked at the deep back end of the brain, much like how we perceive the illusion of time or gravity or consciousness as "one thing" when it is in fact the working dynamics of many factors too complicated to be directly visible to the average person - thus, our brain sums them up as an object and, worse, gives 'em a name as "one object", because that's what we do (even though the purpose is so that we could easily understand how to describe the world as a useful prediction tool to be applied in everyday life). Suffice to say, now I am feeling a bit mixed up remembering the way Europe's Human Brain Project was presented: showing a faux-colored shadowy figure of a red flower within the jungle of neurons, several distances away from the exposed retina... so silly of me to be awed by that, back in the day... Anyway, love this, thank you so much!
@ArtOfTheProblem 1 month ago
Glad you found this series, curious if it was recommended by the algo or somewhere else?
@captainjj7184 1 month ago
@@ArtOfTheProblem I fell into the rabbit hole while searching for specific subjects as it got deeper and your addictively well-presented thought provoking series kept coming up, your titles and thumbnails evolved from "Hmmm... interesting" to "If I see you I will click you!"🙂
@ArtOfTheProblem 1 month ago
@@captainjj7184 Love that you have found the series. love this feedback
@sd4dfg2 4 years ago
Holy cow, I never understood it this way. I wish I had known this from the beginning, I think other NN things I had learned would have been cast in a different light.
@schophi 9 months ago
This is the most beautiful, deep presentation on neural networks I have seen. This has given me another depth of understanding. Thank you so much. I would love if you could provide a reading list for this series, to take my studies further.
@ArtOfTheProblem 9 months ago
glad you found this! did you see the entire series? i'll work on a list but I read quite widely and ferociously
@schophi 9 months ago
I've started watching the series after your latest video. Just brilliant.
@ArtOfTheProblem 9 months ago
@@schophi thrilled people are finally finding this :)))
@zoechen4153 4 years ago
Absolutely amazing video! Thank you for making it!
@ArtOfTheProblem 4 years ago
thanks Zoe!
@teechawoon 3 years ago
You say that the neurons are "on" or "off". So neurons in AI don't have continuous values? I watched 3blue1brown's video, and his presentation says that they're continuous. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-IHZwWFHWa-w.html
@ArtOfTheProblem 3 years ago
some are binary (especially historically), but between "on" and "off" they can have various continuous or discrete responses. some, like ReLU neurons, are continuous on the "on" side
@teechawoon 3 years ago
@@ArtOfTheProblem Thanks :)
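A small sketch of the response shapes mentioned in that reply, with NumPy assumed and values chosen only for illustration: a hard threshold is purely on/off, a sigmoid is continuous everywhere (the 3blue1brown framing), and a ReLU is off below zero but continuous on the "on" side:

```python
import numpy as np

step    = lambda z: (z > 0).astype(float)     # hard on/off (the historical perceptron)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))  # smooth and continuous between off and on
relu    = lambda z: np.maximum(0.0, z)        # off below zero, continuous on the "on" side

z = np.linspace(-2.0, 2.0, 5)
print(step(z))      # [0. 0. 0. 1. 1.]
print(sigmoid(z))   # values rising smoothly from ~0.12 to ~0.88
print(relu(z))      # [0. 0. 0. 1. 2.]
```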
@CrucialMuzic 4 years ago
Wow, very well done and informative as usual. Thank you so much for the thoroughness in your explanation. One of the most underrated creators on RU-vid of all time!
@thelazymanatee2506 4 years ago
These videos are so extremely good! Thanks for making these!
@K0P 4 years ago
Absolutely loved this! You're truly one of the best at teaching visually
@benjaminfink9580 4 years ago
Love your videos. Every time I see them on my page I just have to watch them
@mikeg3810 4 months ago
Perhaps AI helped make this video.
@ArtOfTheProblem 4 months ago
this was made in the good old days, check the date :)
@ivanlo7195 1 year ago
Do you mind telling me what software you used to make the visualizations, like the number lines, planes, volumes or the neural network layers? They are masterpieces
@ArtOfTheProblem 1 year ago
hey, I made these from scratch just using Apple Motion, but for the fancy 3D manifolds I used ones from here: colah.github.io/
@ivanlo7195 1 year ago
Thx for the information
@OrenLikes 8 months ago
Why "784" inputs? Because it's a square image of 28x28 pixels - not everybody knows that...
@OrenLikes 8 months ago
Edit: You just answered... :) A question that you might address later on (I've been thinking about it and I'm currently at 4:38) - What if you want the A/C to start when the temperature is above 28C (cool) AND below 19C (heat)? A line won't do. Or, activate if all inputs are on or all inputs are off? A plane won't do. I'm guessing more than one neuron would be needed, and in the case of cool/heat/let-it-be - three outputs...
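The guess at the end of this comment is right: a single threshold neuron can only make one cut, but a small hidden layer handles both setpoints. A toy Python sketch with hand-picked weights (the 28C and 19C thresholds come from the comment above; everything else is illustrative):

```python
def step(z):
    """A simple on/off threshold neuron."""
    return 1.0 if z > 0 else 0.0

def hvac(temp):
    # Hidden layer: one neuron fires when it's too hot, another when it's too cold.
    too_hot  = step(temp - 28.0)   # weight +1, bias -28
    too_cold = step(19.0 - temp)   # weight -1, bias +19
    # Output layer: the three outputs guessed at above (cool / heat / let-it-be).
    cool, heat = too_hot, too_cold
    let_it_be = step(1.0 - too_hot - too_cold)   # fires only when neither hidden neuron does
    return cool, heat, let_it_be

for t in (15, 22, 31):
    print(t, hvac(t))   # 15 -> heat, 22 -> let it be, 31 -> cool
```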
@christat5336 1 year ago
Neural networks with their incomprehensible behaviour..... are very dangerous.. you don't expect every technology to come without risk, but not on this level... an amateur response...
@orsmplus 4 years ago
What a straightforward explanation.
@technomajikal 4 years ago
I assume this would fail if you had a cat that looked like a dog.
@robosergTV 4 years ago
Any human would fail too then.
@morpheus_uat 1 year ago
"What is reason?" would be the first question, I think. Can you program reason? Or rather, can reason emerge from an automated process? From coding to philosophy
@ricardo190988 3 years ago
Good explanation but awful background music/sounds. Makes it harder to follow.
@TheTacticalDood 4 years ago
Can someone point me to more resources on the idea of mapping the perception space into the concept space? I get it intuitively but would like a more thorough treatment.
@ArtOfTheProblem 4 years ago
Here is an epic blog post on this topic: distill.pub/2020/grand-tour/
@TheTacticalDood 4 years ago
@@ArtOfTheProblem Thanks!
@jelle1811 1 year ago
Thanks for the intuitive explanation. A question: is the perception space that contains all the different handwritten digits a 784-dimensional space?
@CarlJohnson-jj9ic 2 years ago
Wouldn't it be easier to work with odd roots and tangents visually if axes were signed perpendicular to themselves? Seems to be suggested at about 8 minutes. Why stop there? What if infinity was the origin and the signs were at the ends? What if the graph used the point of the Z-axis for complex or signed-zero graphing, which is useful in isolating fusion cells? It sounds like they are trying to suggest the Facebook pixel is part of a global image using a sonar-style muon. Plus I am pretty sure after you leave a movie theater and first step outside on a bright and sunny day, where you instinctively stick your hand out to block the sunlight and your hand warms, you find a disconnect between rejecting the light for the dark but preferring the warmth over the cold, so it seems that our fundamental wiring's most basic sense, touch (heat, pressure), still holds true, but the configuration of the higher-order senses has a half-duplex loopback on the IO configuration. Further, every sense has a trigger for signals, i.e. goosebumps, sneezes or ringing ears. The implementation is fascinating. Ultimately, the pliable growth structure we represent has thermodynamics-induced crying from a warm dark womb into a cold bright world, instinctively drawn to the darkness but consciously preferring the warmth. I could go on forever. We leave the world with dim eyes, cold skin and rigid bones, with people we never talked to anymore showing up at our funeral claiming they loved us but not enough to pick up the phone and call in the last 20 years...
@timuvlad6764 2 years ago
Amazing video! Would it be possible to add links to the visualization tools you used?
@ArtOfTheProblem 2 years ago
hey I borrowed some from here: colah.github.io/posts/2015-01-Visualizing-Representations/ and here was my working script docs.google.com/document/d/1uxiLHv2Lwo5JfTh19ZHSiXnRPOJsnvTjUsmi4jiOQrU/edit
@timuvlad6764 2 years ago
@@ArtOfTheProblem Many thanks!!
@AO-em4qx 2 years ago
Are you a human or a neural network? Because this is the BEST explanation of a neural network a human ever did.
@user-eh9jo9ep5r 7 months ago
Does the network understand that a dog consists of dog parts 🐕, or does it just represent the image and the word "dog"? Does the complexity give the network an understanding that an output can usually consist of parts of other outputs?
@kodfkdleepd2876 2 years ago
NNs do not learn, and the senseless usage of the term applied to them is astonishing. They do not learn any more than a simple linear regression "learns". Learning is vastly different. Learning has to conjure up the connections. NNs and data fitting simply take the data humans give them and find a semi-optimal fit that can be used for interpolation and rarely works for extrapolation. It is very important to understand the distinction because most people won't, and this makes AI very dangerous when it is assumed it can learn. Maybe one day, with enough AI systems all linked together and the ability for them to memorize/store massive amounts of data and process it in semi-real time, they will be able to learn... but that's at least about 50 years in the future if not 500. Even then there will be issues. Right now AI is just very good lossy compression algorithms.
@MohamedESSANOUSY 4 years ago
Thank you for this video, it opens your mind to a lot of things
@rathnakumarv3956 2 years ago
Do you have any video explaining CNNs, RNNs and LSTMs for time series data regression?
@Cobalt_drakeru 4 years ago
Excellent video!! Thank you!!
@Zethrax 1 year ago
Something to bear in mind if you are creating a discriminative neural network is that you should usually include an 'I don't know' output option.
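One common way to get that behaviour, sketched here as an assumption rather than anything shown in the video, is to keep the usual class outputs but refuse to answer when the network's top probability falls below a confidence threshold; training an explicit "reject" class is another option. A minimal NumPy sketch:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def classify(logits, classes, threshold=0.7):
    """Return the most likely class, or "I don't know" when confidence is too low."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    return classes[best] if probs[best] >= threshold else "I don't know"

classes = ["cat", "dog"]
print(classify([3.0, 0.2], classes))   # confident -> cat
print(classify([1.1, 1.0], classes))   # ambiguous -> I don't know
```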
@djohnjimmy 2 years ago
Wow. This is brilliant. You guys are awesome. Thanks everyone involved in production. 👍🏿
@ArtOfTheProblem 2 years ago
Thanks I really hope to follow this up with another video eventually
@bahritjohn 2 years ago
A good explanation. But there is not much mathematics content to justify the word "Mathematics" in the title.
@user-eh9jo9ep5r 7 months ago
Does AI filter output parts? Do all the parts in a dog consist of dog, or could some of them be cat, while at the same time a cat could consist of dog content?
@movanhousen7785 4 years ago
great video
@prateekthakur1347 3 years ago
Where are you? We are missing you here.🥺
@ArtOfTheProblem 3 years ago
I know, I'm sorry, I am working on the next video and getting closer
@arturlinden9442 4 years ago
Wow, this is amazing
@Matlockization 4 years ago
Thank you for explaining the fundamental building blocks of a neural network in a way that's easy to understand.
@ArtOfTheProblem 4 years ago
appreciate the feedback
@Matlockization 4 years ago
@@ArtOfTheProblem You know, these neural networks are just like building or fixing a car. It's a guy thing.
@jamaluddin9158 4 years ago
Words can't describe this marvellous explanation!
@ArtOfTheProblem 4 years ago
thrilled to get this feedback
@lourence1651 4 years ago
Great work