
How Does a Neural Network Work in 60 seconds? The BRAIN of an AI 

Arvin Ash
984K subscribers · 94K views

Full Video here: • How the BRAIN of an AI...
This video answers the question "How do neural networks work?"
#neuralnetworks
A neuron in a neural network is a processor, which is essentially a function with some parameters. This function takes in inputs, and after processing the inputs, it creates an output, which can be passed along to another neuron. Like neurons in the brain, artificial neurons can also be connected to each other via synapses. While an individual neuron can be simple and might not do anything impressive, it’s the networking that makes them so powerful. And that network is the core of artificial intelligence systems.
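To make that concrete, here is a minimal Python sketch of a single neuron and of two neurons networked together (the weight and bias values are made up for illustration, and real networks also apply a nonlinear activation, as several comments below note):

    # A single artificial neuron: a weighted sum of its inputs plus a bias.
    def neuron(inputs, weights, bias):
        return sum(w * x for w, x in zip(weights, inputs)) + bias

    # Networking: the first neuron's output becomes the second one's input.
    hidden = neuron([0.5, 0.2], weights=[0.8, -0.4], bias=0.1)
    output = neuron([hidden], weights=[1.5], bias=-0.2)
    print(output)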

Science

Published: 22 Jun 2023

Comments: 70
@ArvinAsh · 1 year ago
Full video on how the Brain of an AI works is here: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NxTTXuUl-Lc.html
@thymos6575 · 1 year ago
Nahh, you gotta make a whole course on this, you're too good at explaining
@leafloaf5054 · 11 months ago
That is what I thought. He'd make us pros
@jaimeduncan6167 · 11 months ago
It's because he is simplifying big time. To start, it's a vector equation, and then the training and the network construction are the fun part.
@TM_Makeover · 6 months ago
@@jaimeduncan6167 I wanna know more about it
@hdsz7738 · 10 months ago
I can finally add AI into my CV
@WeyardWiz · 2 months ago
😂
@Zeero3846 · 11 months ago
Training is fixing both the input and output and then solving for the weights and bias. Then, once you get the weights and bias close enough to produce the outputs expected from the given inputs, you fix the weights and bias and evaluate the outputs on arbitrary inputs, or at least inputs you weren't using in the training data. If the training went well, the outputs will largely be correct. Note, this mostly works with what's called supervised learning, which requires you to have a training data set with known inputs and outputs. One trick that's often used to increase confidence in the training process is to divide the training set into two similar sets. The first half is used for training, and the second is used to measure how well it did. The idea is that training should extrapolate well to data it was never trained on, and because the second half's outputs are already known, you'll have data on hand to measure the effectiveness of the training. If you move straight on to inputs taken from the wild, you'll need human intervention to do the measuring, which you might as well do ahead of time.
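A minimal sketch of the two-set trick this comment describes, with a hypothetical labelled dataset:

    import random

    # Hypothetical labelled data: (input, expected output) pairs.
    data = [(x, 2 * x + 1) for x in range(100)]
    random.shuffle(data)  # so the two halves are statistically similar

    half = len(data) // 2
    train_set = data[:half]  # used to solve for weights and bias
    test_set = data[half:]   # held out; known outputs let us score the result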
@jbruck6874 · 11 months ago
Question: what is the reason that (numerically) "solving for weights and biases" is *possible* in practice for a larger ANN? And with simple gradient descent, no less...!? An ANN model has 10^4 to 10^9 parameters, i.e. the equation has that many variables... With nonlinear systems, one would be *very* lucky to get a solver algorithm that delivers good results. Is there a deeper conceptual answer why this works for coupled perceptron model equations?
@gpt-jcommentbot4759 · 11 months ago
@@jbruck6874 Because they don't just use gradient descent; they also have extra optimizers on top. As for why they generalize rather than just overfitting to everything, we don't know. We just know that convolutional NNs converge onto interpretable image features, and that reverse engineering sentiment recurrent NNs reveals line-attractor dynamics and convergence to simpler representations than theoretically possible.
@yashaswikulshreshtha1588 · 4 months ago
There is one principle in the world: 1) everything works on supply and demand. And neural networks use atomic abstractions that create a fluid essence in which the abstractions of the outer world can be absorbed and reflected.
@constantinvasiliev2065 · 1 year ago
Thanks. Very simply explained!
@Anaeijon · 6 months ago
Good explanation and great visuals, BUT you are missing the importance of a neuron's activation function here. Without it, the whole neural network basically shrinks down to a linear regression. Adding an activation function turns the regression into something like a logistic regression. A logistic regression with a very hard cut is basically (mathematically) identical to a perceptron, which is the simplest form of a neuron. Adding multiple of these together creates a multilayer perceptron (short: MLP). Big MLPs are what we call 'artificial neural networks'.
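A sketch of the point being made, assuming a hard-cut (step) activation; without the step, the neuron is just a line:

    # Without an activation, stacking neurons only yields another linear map.
    def linear_neuron(x, w, b):
        return w * x + b

    # With a hard-cut activation: the perceptron mentioned above.
    def perceptron(x, w, b):
        return 1 if w * x + b > 0 else 0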
@WeyardWiz · 2 months ago
So what is the activation function, and how do you combine it with this one, in simple terms?
@YUSUFFAWWAZBINFADHLULLAHMoe · 11 months ago
“Schematic of a simple artificial neural network”
@user-hl6ls8sv4t · 11 months ago
What elementary school did he go to ☠️
@yashrajshah7766 · 1 year ago
Awesome simple explanation!
@ChathuraPerera · 11 months ago
Very good explanation
@ocean645 · 11 months ago
I am now seeing the importance of my discrete mathematics class.
@PeaceNinja007 · 8 months ago
Are you saying my bias can be physically weighed? Cuz I surely have a heavy ass bias.
@aleksmarinic5748 · 14 hours ago
We really never used bias in school 😅, just weights
@petronikandrov7593 · 3 months ago
One of the best explanations
@mlab3051 · 9 months ago
Missing activation function... Nonlinearity is an important part of an ANN. You should not miss that.
@Beerbatter1962 · 1 year ago
Equation of a line, y=mx+b, in matrix form.
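Spelled out with made-up values, the matrix form this comment means (the weights as a matrix, the bias as a vector):

    import numpy as np

    W = np.array([[0.2, -0.5],
                  [0.7, 0.1]])  # plays the role of m
    x = np.array([1.0, 2.0])    # input vector
    b = np.array([0.1, -0.3])   # plays the role of b
    y = W @ x + b               # one layer's output, before any activation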
@Outchange · 1 month ago
Thank you 👏🏽
@muhammadfaizanalibutt4602 · 2 months ago
You forgot the nonlinearity function
@derekgeorgeandrews · 10 months ago
I thought the function of a neuron was slightly more involved than this? I thought it was some kind of logarithmic response to the input, not a purely linear function?
@WeyardWiz · 2 months ago
Yes, it's more complicated of course, but this is the basic formula. Determining w and b is where you need the crazy math lol
@kbee225 · 11 months ago
So it's fitting a linear model per factor?
@baileym4708 · 11 months ago
Simple equation from elementary school: f(x) = Z(x) = w * x + b ... hahahaha. Maybe high school.
@BackYardScience2000 · 11 months ago
I don't know where you went to elementary school, but we didn't learn physics or equations until at least the 6th or 7th grade, let alone things like this. Lmao!
@shivvu4461 · 7 months ago
Same Lmao
@nasamind · 10 months ago
Awesome
@Oscar-vs5yw · 11 months ago
This is a very dumbed-down explanation. I can understand wanting to avoid the linear algebra, but turning the dot product into a multiplication between 2 variables and calling it "elementary math" seems extremely misleading, as those 2 variables may represent thousands of values.
@ancientheart2532 · 11 months ago
A simple equation from elementary school? I didn't learn functions in grade school.
@danielmoore4311 · 9 months ago
Is this the linear regression equation? Why not the sigmoid equation?
@chenwilliam5176 · 1 year ago
The mathematics used to describe an ANN is very simple ❤
@Nico-pb1sr · 1 month ago
Who learned y = mx + b in elementary school 😭
@______IV · 11 months ago
So…nothing like organic neurons then?
@OmniGuy · 1 year ago
He learned this equation in ELEMENTARY SCHOOL ???
@lidarman2 · 11 months ago
y = mx + b ... but applied to a large-ass matrix. He oversimplified it because the training phase is very iterative and computationally intensive.
@TM_Makeover · 6 months ago
@@lidarman2 I wanna know more about it
@AccordingToWillow · 1 month ago
all this stress to find out it’s just the fuckin slope formula????
@hoagie911 · 11 months ago
... but don't most neural networks use sigmoid, not linear, functions?
@badabing3391 · 11 months ago
you're right, I think
@rishianand153 · 6 months ago
Yeah, the sigmoid function is used to map the value you get from the linear function into the range [0,1], which is used as the activation value.
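A sketch of that mapping (values made up; strictly, the sigmoid's range is the open interval (0, 1)):

    import math

    def sigmoid(z):
        return 1 / (1 + math.exp(-z))  # squashes any real z into (0, 1)

    z = 0.8 * 1.5 + 0.1  # the linear part, w * x + b
    a = sigmoid(z)       # activation value between 0 and 1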
@DJpiya1 · 11 months ago
This is not fully true: X is not multiplied by W. Both are vectors, and this is the dot product of W and X, not plain multiplication.
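The distinction in code, with illustrative values:

    import numpy as np

    w = np.array([0.8, -0.4, 0.3])  # weight vector
    x = np.array([1.0, 2.0, 0.5])   # input vector
    z = np.dot(w, x) + 0.1          # dot product plus bias: a single number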
@jeevan88888 · 1 year ago
except that it involves matrix multiplication.
@tabasdezh · 1 year ago
👏
@FrancisGo. · 11 months ago
@warrenarnold · 8 months ago
I hate meth, I love math 😅
@mrspook4789 · 3 months ago
Unfortunately this type of neural net has zero plasticity and cannot learn on its own. That must change someday.
@caldeira_a · 2 months ago
No? It does learn, as it changes the weights and bias.
@mrspook4789 · 2 months ago
@@caldeira_a It's not capable of doing that while it's running, though, and the pace at which one learns is very slow. They adapt; they don't learn. It's effectively a much more advanced version of a decision tree. Liquid neural nets and spiking neural nets come much closer to learning, but we do not use those, as they are more difficult to control. Also, convolutional neural nets are not temporally aware, and they can't think, as they are built to be very linear. True learning involves taking new data, understanding it by using previous data, and then applying the new data in a way that is appropriate to context. Convolutional neural nets only do 50% of this: they can understand new data with existing data, but they can't really act on it without the weights being changed, which doesn't happen within the net alone. Learning would also imply a capacity for multiple tasks, which a convolutional neural net cannot do well as a consequence of its very linear design. Transformers are better than convolutional neural nets, but they have mostly the same problems. Liquid neural networks and spiking neural networks can adjust their own effectiveness and learn autonomously without being retrained; they constantly retrain themselves like a biological neural network.
@caldeira_a · 2 months ago
@@mrspook4789 At this point you're just arguing semantics. The process you call "adapting" isn't just adaptation: it takes in its mistakes and attempts to correct them, increasing its own accuracy. Sure, it may not be self-aware and thus not straight-up literal artificial intelligence, but it's learning nonetheless.
@mrspook4789 · 2 months ago
@@caldeira_a No it isn't. That's like saying a computer learned a new task if you reprogram it to do something completely different. Traditional neural nets cannot "learn" on their own; that mechanism is applied externally. For example, a few companies once sent several chatbots onto social media apps as an experiment to watch them "learn", and technically a chatbot knows right from wrong, as that is within its knowledge. However, those chatbots became racist anyway, and the reason is that their programming was altered by the statistical patterns of the language they were receiving; if they had never been retrained, they would never have become racist. They don't learn; they are adapted to serve a function, and the way that works is through backpropagation, where you already have the answer and you send it back through the neural net in a way that changes the weights and biases, literally rewriting the neural net's code to best match the answer. In that case, with the chatbots, the answer was a bunch of racism. Learning requires you to be aware of previous events and of what you are receiving, and the ability to act upon it, and convolutional neural nets do not do this. Neither do transformers, though transformers can do something kind of close to learning. Transformers are often equipped with programs that give them short-term memory, which allows them to look at several sentences of text and generate a response based on context; this even allows the transformer to learn within the extent of its own short-term memory. However, the training data is not changed, so it will always have the same behavior, and short-term memory is not unlimited, which means things learned within it will eventually be lost, while the training data prevails, as that is permanent. This is where liquid neural nets, spiking neural nets, and biological neural nets like brains shine, because their training data, memory, and experience are one and the same, whereas in a transformer or convolutional neural net they are completely separate.
@timmygilbert4102 · 9 months ago
This explains nothing. The multiplication is a filter, the addition is the decibel measure, and the bias is the threshold. Basically, a low bias encodes AND logic and a high bias encodes OR logic, so it encodes a sliding logic. Two layers encode XOR logic. Therefore a neural network encodes three set operations: discrimination, composition, and equivalency.
@WeyardWiz · 2 months ago
Bruh
@timmygilbert4102 · 2 months ago
@@WeyardWiz bruh what 🤔
@WeyardWiz · 2 months ago
@@timmygilbert4102 We have no idea what you just said
@timmygilbert4102 · 2 months ago
@@WeyardWiz That's sad, it's English. The formula of a neuron is the sum of inputs times weights; the result is added to a bias value and submitted to the activation function, which does a thresholding, i.e. it activates if the sum is above a value defined by the bias. So the original multiplication is simply filtering the input: multiplication by zero removes the contribution of that input, while multiplication by one lets the input value pass unchanged. Thus only relevant values are taken into account. The sum basically tells how strong a signal we have from the input after filtering. The bias shifts the sensitivity up or down before the activation function. If the signal after the bias is strong enough, the activation function triggers, and its output is further processed in the next layer as input. If the bias is low, the signal doesn't have to be strong; even a single input passing through the filtering will trigger the neuron, similar to OR logic. But if the bias is high, all the filtered inputs need to be high, i.e. the signal needs to be strong to activate the neuron; that's equivalent to AND logic. Any bias between low and high creates a spectrum between these two logics.
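A sketch of that sliding logic, treating the bias as the firing threshold the way this reply does (0/1 inputs, unit weights):

    def neuron(inputs, bias):
        # fires when the summed (unit-weighted) inputs reach the threshold
        return 1 if sum(inputs) >= bias else 0

    # Low bias: any single active input fires the neuron (OR-like).
    print([neuron([a, b], bias=1) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 1]
    # High bias: every input must be active (AND-like).
    print([neuron([a, b], bias=2) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]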
@WeyardWiz · 2 months ago
@@timmygilbert4102 Well that's much more thorough and easier to grasp, thnx
@GregMatoga · 1 year ago
There are like a thousand explanatory videos about how NNs work and like none actually using them for anything *useful*
@verizonextron · 4 months ago
whater
@arielpirante2 · 11 months ago
I substitute ChatGPT for Google searches. In the future maybe it will be all ChatGPT-like software; companies will fight over the AI market, and the resource they'll fight with is data, because AI needs data.
@subhodeepmondal7937 · 1 year ago
Those who are fooling themselves with this video: just try to understand backpropagation 😂😂😂. It is not simple at all.
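For a taste of what the training step involves, here is a minimal sketch of gradient descent on a single linear neuron; backpropagation is the chain-rule machinery that pushes these same updates through many layers (the data and learning rate are made up):

    # Learn y = 2x + 1 with one neuron and squared-error loss.
    w, b, lr = 0.0, 0.0, 0.01
    data = [(x, 2 * x + 1) for x in range(-5, 6)]

    for _ in range(1000):
        for x, y_true in data:
            err = (w * x + b) - y_true  # dLoss/dy for loss = 0.5 * err**2
            w -= lr * err * x           # chain rule: dLoss/dw = err * x
            b -= lr * err               # dLoss/db = err
    print(round(w, 2), round(b, 2))     # approaches 2.0 and 1.0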
@RIVERANIEVESZ · 11 months ago
Oh...no wonder...😊
@snap-off5383 · 1 year ago
But NO, our brains couldn't be working this way, and we couldn't possibly be biological machinery... right? The main difference is that we take 24 hours or so to create a newly trained network, while AI on silicon takes a millisecond or less for a newly updated neural network. The "math" of AI was not only able to learn chess better than any human in 9 hours, but able to beat the best human-created program.
@adityapatil325 · 11 months ago
Stolen from @3blue1brown