Great video! Since you asked for further suggestions ;) 1) In the feedForward method in the Level class, we can refactor various things: a) The first for-loop may be simply replaced by level.inputs = givenInputs. b) The nested for-loop, where the sum is computed, can be replaced by using the array method "reduce". c) The if-clause can be replaced by a ternary operator. 2) In the feedForward method in the NeuralNetwork class, we can get rid of the code duplication as follows: Initialize outputs = givenInputs. Then start the for-loop with index i=0.
Thanks :-) For 1 a), I don't remember why I did that. It could be something not obvious at this stage... maybe we need a deep copy later when parallelizing and visualizing things. But now I can already imagine you saying level.inputs = [... givenInputs] :-)) All others are nice ideas as well, especially since I don't use "reduce" at all during this course. Will keep them in mind for future courses.
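Putting the suggestions together, the refactorings could look something like this. This is a sketch, not the exact course code: it assumes the Level/NeuralNetwork shape from the video (weights indexed as weights[input][output], step-function outputs), and uses the spread copy from the reply above instead of a plain assignment:

```javascript
class Level {
  constructor(inputCount, outputCount) {
    this.inputs = new Array(inputCount);
    this.outputs = new Array(outputCount);
    this.biases = new Array(outputCount).fill(0);
    this.weights = [];
    for (let i = 0; i < inputCount; i++) {
      this.weights[i] = new Array(outputCount).fill(0);
    }
  }

  static feedForward(givenInputs, level) {
    // a) shallow copy instead of the element-wise for-loop
    level.inputs = [...givenInputs];
    for (let i = 0; i < level.outputs.length; i++) {
      // b) weighted sum via reduce
      const sum = level.inputs.reduce(
        (acc, input, j) => acc + input * level.weights[j][i],
        0
      );
      // c) ternary instead of the if-clause
      level.outputs[i] = sum > level.biases[i] ? 1 : 0;
    }
    return level.outputs;
  }
}

class NeuralNetwork {
  constructor(neuronCounts) {
    this.levels = [];
    for (let i = 0; i < neuronCounts.length - 1; i++) {
      this.levels.push(new Level(neuronCounts[i], neuronCounts[i + 1]));
    }
  }

  // 2) no duplicated first step: start from the raw inputs, loop from i = 0
  static feedForward(givenInputs, network) {
    let outputs = givenInputs;
    for (let i = 0; i < network.levels.length; i++) {
      outputs = Level.feedForward(outputs, network.levels[i]);
    }
    return outputs;
  }
}
```

The spread copy keeps level.inputs independent of the caller's array, which matters if the inputs are later mutated or visualized.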
@@Radu It's all been smart 😀 Don't sell yourself short. You're tackling a problem I typically am not exposed to, so it's been fun to think about how I would do it and compare to your work. I also don't know much about building NNets, and your explanation was fantastic and easy to follow. Thank you for that :)
Awesome as always Radu. Just one question (so far). This code is like that (in this specific order) because you have decided it, right? if (this.useBrain) { this.controls.forward = outputs[0]; this.controls.left = outputs[1]; this.controls.right = outputs[2]; this.controls.reverse = outputs[3]; } I mean, the order can be different and the neural network will 'adjust' to it, right?
Well, I accidentally joined this course at Level 6 XD But... I was able to code along with it. So thanks, that shows how good your teaching technique is!! On to the visualiser...
It took me a long time before I could understand why he was able to code such interesting projects. This guy is a genius, and has great passion for his subjects, as well as a great humor. Thank you so much for the content.
@@Radu You are very humble, but my first job was definitely from doing your tutorials. I got a job offer for 50k starting. It was after I completed this video (I put in a good year of programming too).
Full self-driving car course playlist (3 lectures still to come on visualization, optimization and fine-tuning): ru-vid.com/group/PLB0Tybl0UNfYoJE7ZwsBQoDIG4YN9ptyY
Hi Radu, this was an incredible video and I'm really grateful for it. The best part was the visualisation along with the code. Just a small suggestion: the graphs and animation (particularly the line slope animation) could have been a bit bigger. Hope to see more advanced content :)
You can use normal methods as well at this stage. No problem with that... But later (I think), we save the network by serializing in local storage. And if the methods are static, I don't need to worry about serializing them or parsing the object (it avoids some software engineering I didn't want to focus on ... thought it would be too distracting).
@@Radu We did it, my girlfriend and I. We really enjoyed it! I will be doing the whole series now; she will be doing some reading to get into ML. Great stuff man, we're subbed 😄
Hi, I learned and enjoyed a lot in this series. One question: why is 6 used in the middle level? What will happen if I use a bigger or smaller number in place of 6 there?
It's an arbitrary number :-) The system still works if you remove that layer entirely [inputs, outputs], or if you add a few more hidden layers, like [inputs, 6, 5, 4, outputs]. The number of neurons on each layer is related to how difficult the task is: more complicated tasks require more complexity (like brains of different animals enable them to do less or more intelligent things), but they are also more difficult to train (like human brains take years to develop, while some species know what to do days or even hours after being born). Our algorithm for optimizing the network (2 videos after this one) is not very sophisticated, so training a large network will take quite a long time.
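To make the trade-off concrete, here is a small helper (not from the course, just an illustration) that counts the trainable parameters (weights + biases) for a given layer layout. More or larger hidden layers mean more numbers the trial-and-error training has to get right:

```javascript
// Counts weights + biases for a layer layout like [inputs, 6, outputs].
// Each consecutive pair of layers contributes (from * to) weights
// plus one bias per neuron in the "to" layer.
function parameterCount(neuronCounts) {
  let total = 0;
  for (let i = 0; i < neuronCounts.length - 1; i++) {
    total += neuronCounts[i] * neuronCounts[i + 1] // weights
           + neuronCounts[i + 1];                  // biases
  }
  return total;
}

console.log(parameterCount([5, 4]));          // no hidden layer: 5*4 + 4 = 24
console.log(parameterCount([5, 6, 4]));       // layout from the video: 36 + 28 = 64
console.log(parameterCount([5, 6, 5, 4, 4])); // deeper: 115 parameters to tune
```

So even one modest hidden layer more than doubles what the simple mutation-based training has to search through.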
Thank you so much, Mr Radu. It's so entertaining, but more importantly, the lessons you teach us here are so useful. I immediately subscribed after watching this video. Anyway, Mr Radu, at 21:31, can we teach the car / the brain to randomly move slightly right or left when we meet a condition like that? I am trying to teach the car's brain, but unfortunately, even my own brain does not understand any of this yet :")
Thank you for your amazing video - I am having an error in my code at 18:29 in the video, when you define outputs and link the neural network to car.js. This is the error: network.js:61 Uncaught TypeError: Cannot read properties of undefined (reading 'inputs') - in my code this is pointing to the initial for loop in the feedForward method of the Level class. Do you know where the issue might be? Thanks
Hello, great video :D, but I didn't quite understand the reason behind choosing this number of neurons in the hidden layer. What is the impact of adding more or fewer layers/neurons? Why bother adding a hidden layer in the first place? And if more layers/neurons are better (I imagine this is the case), why not add 10 layers with 8 neurons each, for example (or make 1 "super layer" with 80 neurons in it, or make 40 layers with 2 neurons each to reduce the number of connections per layer)?
In short, more layers / neurons mean a more complex network, meaning that it can do more things... Here, I don't think the hidden layer is necessary to accomplish what is needed; I included it to show that the code is general and can work with many layers and different neuron counts. Larger networks can be more powerful, but finding optimum weights and biases is also more difficult. Our way of training here (trial and error, pretty much) is not fantastic, so a complex network may perform worse than a simple one. I will make a part 2 and 3 to this course in the future. In part 2 I will design more complicated scenarios, and in part 3 we will learn more about neural networks. So, stay tuned :-)
I recommend you watch phase 3 of the course. The 'Understanding AI' playlist on the channel. The first few lessons there talk about the math behind neural networks. And you can watch that now with no problem. It doesn't depend on phase 2 of the course.
@@Radu Sure, will do that. Well, I want to master machine learning and have no idea how to do that, where to learn, or what to learn. I'm thinking of completing the self-driving car course, then your machine learning course, and learning some math from 3b1b. Is that the right approach for now, as a beginner who knows how to make simple 2D stuff in DirectX using C++?
@SeraphicFrost if you want to understand how things work, that sounds like a good plan. But keep in mind that once you know what you're doing, switching to python gives you access to a lot of advanced methods, already implemented in various libraries.
I assume you mean 'self-learning'? If by that you mean that an agent doesn't learn anything, you are right... Once it gets assigned a brain, it is 'fixed' and never changes throughout its 'lifetime'. But the entire system is evolving or... learning what to do, because the best of each generation is kept and mutated upon. So... it depends on the viewpoint.
@@ertemeren I think I do teach how to store the brain in localStorage and continue to evolve on top of it... I also have phase 3 of the course now (Understanding AI playlist). It explains the math of neural networks much better than here and you can jump into it right away.
Why static methods?? Why not normal methods? I know that static methods belong to the class and not to the individual object, but how does that logic apply here? And sir, what do you mean at 6:45 by "I want to serialize this object afterwards"?
You answered your own question, kind of :-) Serializing means I want to store the brain later so it 'survives' refreshing the page. If you do that, it only stores the object attributes (like the weights and biases, in this case), not the methods (like the feedforward algorithm). I use static methods because they are part of the class (which we don't serialize), not the objects (which we serialize). Hope this helps.
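A toy sketch of that point (not the course code, just hardcoded numbers for illustration): JSON.stringify keeps only the data attributes, so a parsed brain is a plain object with no methods, but the static feedForward on the class still works on it:

```javascript
class Level {
  constructor() {
    // toy hardcoded state: 2 inputs, 1 output
    this.weights = [[0.5], [-0.3]];
    this.biases = [0.1];
  }
  // static: lives on the class, so it doesn't need to be serialized
  static feedForward(givenInputs, level) {
    const sum = givenInputs.reduce(
      (acc, x, j) => acc + x * level.weights[j][0],
      0
    );
    return [sum > level.biases[0] ? 1 : 0];
  }
}

const level = new Level();
const saved = JSON.stringify(level); // e.g. what would go into localStorage
const restored = JSON.parse(saved);  // plain object: attributes only

// restored has no methods, but the class-level method still applies to it:
const out = Level.feedForward([1, 1], restored);
```

With instance methods, you would instead have to rebuild a proper Level object from the parsed data before using it.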
Hmmm put a breakpoint when the sensor reads something and try to reproduce and see if all the touches are detected. You can also share your code if you want.
Hi Marc! I'm coding them with the same techniques I used to make the neural network visualizer (my next video in the series). I render them on a green canvas, record the screen, then crop it and make the green parts transparent in editing.
Good job at explaining a very complex subject in a digestible byte. The only stopping point for me is when I connect the NeuralNetwork to the car: the browser tab spins until it crashes. I commented out the code where we are adding the Levels in the NeuralNetwork constructor and then the project loads up fine. It's late now, gonna give my neurons a break. Anything you can think of that I am doing wrong?
@@Radu Thank you for your answer. I'm a neurophysiologist (i.e. I study the 'mechanics' of the nervous system), and when I heard at 01:18 that "a single neuron does something really simple", especially after such a naive and oversimplified description of what a neuron is/does, it startled me. I don't think you ask the right question. It's not what I would add to this explanation that's important, because there would be thousands of pages to add in order to give a more accurate description of what a neuron is and does, and that would be out of scope here. I would rather subtract the part pretending a real neuron does something simple, because this is just not true. I am going to give you a few informative examples, but these are just examples to prove my point and definitely not suggestions about what to add to your video. You said there were "branch-like structures that received the signals", i.e. dendrites. First, dendrites are not only "receivers"; they can also be "senders" (see dendritic release), even though this phenomenon is not well known. Dendrites will not just receive signals: they will modulate their receptors, grow, "ungrow", produce variable delays, produce combinatorial computations based on branches (akin to logic gates or mathematical functions), transmit "reconfiguration signals" to the neuron nucleus so that the neuron behaves differently (modulating output, producing spontaneous rhythms, etc.), produce oscillations, etc.; the list is long. This means there will be very complex computations happening there that go way beyond a mere summation of inputs. Again, I'm not saying such a video should be an introduction to neurobiology; I'm merely saying one should be careful pretending a neuron is something simple without first having a serious read about it. On a different note, I think your videos are cool and will probably help beginners understand how to create a simulation and implement some AI algorithms along the way.
@@rmsv Ok. I can see why you were startled by this :-) My goal was to simplify and make the concepts more approachable, definitely not to give a lesson in biology / physiology (I said I explain it as well as I can = high-school level). I do know a few more things than that... but they are not very relevant to artificial neural networks, so I left them out. I know that some species have fewer, but more complex, neurons than others, making them more intelligent. This sounds a bit like your explanation of the dendrites above. But if dendrites really do complex computations like that, then they can be modeled as smaller neural networks as well :-))
@@rmsv @Veritasium has a good video from about a year ago about analog computing explaining how they may make a comeback and how they can be used to implement neural networks.
@@Radu Sir, can you just share the video or tell me what it does? I am seeing this for the first time in JavaScript, so that would be really helpful. Thank you, you are a great teacher.
@@shivamdubey4783 It's a method that belongs to the class itself, not to the object you instantiate from it. I use it because at some point later, I serialize the neural network, and traditional methods don't serialize. Static ones don't either... but they remain available as such.