Thank you MIT. I just found out today that Professor Patrick Winston passed away on the 19th of July, 2019. I am immensely saddened by this news. I was actually looking forward to meeting you, but I guess that is no longer possible. Rest in peace, legend!
"You have to be very careful about the confusion of correlation with cause. They see the correlation, but they don't understand the cause, so that's why they make a mistake" This is so simple but so meaningful!
His stories and jokes inspire learning and intuition about the subject. That's good teaching skill right there. I was lucky enough to meet teachers with this skill during my secondary school years, but I find it really rare in university-level education. Thank you MIT and the late professor for this lecture series.
46:06 Another thing, not especially related to the topic: even when deprived of sleep, the brain works better in the middle of the day than at the start or end. The huge drops in performance happen when a subject is used to sleeping / needs to sleep, while performance doesn't drop at all (and even goes higher relative to the sleeping time) during midday. So linear regression can tell you the obvious hypothesis (losing sleep = losing performance), while the cubic spline can teach you new things you didn't even think of.
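The point about linear vs. flexible fits can be sketched with made-up numbers. The hours/performance values below are invented for illustration, and a cubic polynomial (via numpy's polyfit) stands in for a true cubic spline; the idea is the same: the straight line can only show the overall trend, while the cubic can bend to reveal the midday bump.

```python
import numpy as np

# Hypothetical hour-of-day vs. performance data (invented for illustration):
# performance dips at the start and end of the day, with a bump around noon.
hours = np.array([0, 3, 6, 9, 12, 15, 18, 21, 24], dtype=float)
perf  = np.array([60, 55, 58, 70, 80, 75, 62, 50, 45], dtype=float)

linear = np.polyfit(hours, perf, deg=1)  # straight line: overall trend only
cubic  = np.polyfit(hours, perf, deg=3)  # cubic: can bend to show the midday peak

# Evaluate both fits at noon, where the bump is
noon_linear = np.polyval(linear, 12.0)
noon_cubic  = np.polyval(cubic, 12.0)
```

The linear fit averages the bump away, so its prediction at noon sits near the overall mean, while the cubic tracks the peak much more closely.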
I started out thinking: "This is too slow." I'm now on day two, another 10-hour session. The pace of new information is just perfect. You are a great teacher!
That was hella hilarious, the part about the Rangers and sleep deprivation. So the question then should be: how do they get major decisions out of soldiers when they only have 25 percent ability? And naps do help immensely, if you can handle rounds or just nervousness.
Very useful for graduate students, PhD researchers, or anyone needing to be introduced to the AI field. I guess the basics I got here from Prof. Winston are a great basis to tackle or use any notions in this vast field. Thanks from Algeria!
At 23:01 the professor says "so that's just the dot product, right", but that equates cosine similarity with the dot product, which is not precise, right? The dot product is only the numerator in this case.
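Right, cosine similarity divides the dot product by both vectors' norms. A minimal sketch of the distinction (the vector values are made up): two vectors pointing in the same direction have cosine similarity exactly 1, even though their dot product can be any positive number.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # The dot product is only the numerator; we still divide by both norms.
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

u = [3.0, 4.0]
v = [6.0, 8.0]  # same direction, twice the length

# dot(u, v) = 50, but cosine_similarity(u, v) = 1.0 (zero angle between them)
```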
I hope the x-axis grows at a much faster rate than the y-axis; otherwise the example used to get the idea across makes less sense. Still a great lecture though! Thumbs up.
Wait, so the majority here are programmers and not math people? I thought it would be the other way around. I don't know where one subject ends and the other begins anymore.
Isn't the way to divide the graph into 4 areas ambiguous? That little triangle in the middle looks like it could be included in any of the four regions.
Well, what I'm thinking of is something you could take that would make it feel like you had 8 hours of sleep, even though you didn't (or maybe even though you only had 2 hours). Caffeine doesn't do that: it just makes it impossible to sleep without really improving your productivity.
I once used a nearest-neighbor algorithm to create a Voronoi diagram. I didn't even know there was a name for either of them. I was just playing around with pixels.
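The pixel trick described above can be sketched like this (grid size and seed coordinates are made up, and `voronoi_labels` is a hypothetical helper, not from the lecture): labeling each pixel with the index of its nearest seed point is exactly 1-nearest-neighbor classification, and the resulting regions form a Voronoi diagram.

```python
import math

def voronoi_labels(width, height, seeds):
    """Return a row-major grid where each cell holds the index of the nearest seed."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            # 1-nearest-neighbor: pick the seed with the smallest distance
            nearest = min(range(len(seeds)),
                          key=lambda i: math.dist((x, y), seeds[i]))
            row.append(nearest)
        grid.append(row)
    return grid

seeds = [(2, 2), (12, 3), (7, 12)]
labels = voronoi_labels(16, 16, seeds)
# Each seed's own pixel belongs to its cell, e.g. labels[2][2] is 0.
```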
Can anyone please help me?

1. Regarding the robotic hand solutions table: If I understand correctly, in the case of the robotic hand we start from an empty table and drop a ball from a fixed height onto the robotic hand. When the robotic hand feels the touch of the ball, it makes a random hit while we record its movements. Only if the robotic arm detects after X seconds that the ball has hit the surface again does it conclude that the previous movement was successful, and it records the movements that produced the successful result in the table for future use. I guess there is a way to calculate where on the surface the ball fell, so that when the robotic hand feels the ball touch a region close to one it remembers, it will try the movement in the table closest to those points. Now there are a few things I do not understand:

A. The ball arrives at an angle, so touching the same point on the board at different angles would require a different response. Our table can only hold data about the desired point and effect, and knows nothing about the intensity or angle of the ball's fall. Will the data in the table be corrupted, or never fully filled in?

B. How do we update the table? We might drop a ball and, at first, when the table is empty, make a random hit that sends the ball flying off to the side, so we write nothing in the table. This case could repeat itself over and over, and we would always be left with an empty table. It seems to me that I did not quite understand the professor's words, which is why I have these questions. I would be very happy if any of you could explain exactly what he meant by this method of solution.

2. Regarding finding properties by vector: If I understand correctly, we fill in the data we know in advance, and then when a new data point arrives and we do not know much about it, we measure the angle it makes with the x-axis (the angle of the vector) and check which group's angle is the best match. Now there is a point I do not understand. Suppose I have 2 groups of data: group 1 has points with very low y and very high x, and group 2 has points with high x and high y. When I get a new data point with low y and low x, the vector-angle method will probably associate it with group 1, although it appears on paper that the point is more suitable for group 2. It seems that if we used a simple spatial partition here (as in the first case the professor presented) we would get more accurate results than pairing by vector angle?
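The general point in question 2, that angle-based matching and distance-based matching can disagree, can be sketched with toy numbers (one representative point per group; all values are invented for illustration). In this particular toy case the disagreement goes the other way around from the scenario above, but the lesson is the same: the two similarity measures can classify the same new point differently.

```python
import math

def angle(p):
    # Angle of the vector from the origin to p, relative to the x-axis
    return math.atan2(p[1], p[0])

def nearest_by_angle(p, groups):
    return min(groups, key=lambda g: abs(angle(groups[g]) - angle(p)))

def nearest_by_distance(p, groups):
    return min(groups, key=lambda g: math.dist(groups[g], p))

groups = {"group 1": (10.0, 1.0),    # high x, low y
          "group 2": (10.0, 10.0)}   # high x, high y

new_point = (1.0, 1.0)  # low x, low y

# new_point sits at 45 degrees, the same angle as group 2's representative,
# but it is closer in Euclidean distance to group 1's representative.
```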
At 41:45, the professor indicates that you cannot use AI to predict bankruptcies for credit card companies. That's like making a cake without flour. Wouldn't the credit card company have the relevant data to be able to use AI to predict bankruptcies? Why is the answer "no"?
Some of these principles look very abstract and supernatural; humans have long considered them a mystery and classified them as AI, but actually they are very simple. A computer can simulate them easily. The brain is small but can do a lot of things, not because of mystery but because of very simple structure.
Take two vectors u, v of R^2 as an example. Let u = [x11, x12] and v = [x21, x22]. Then cos(theta) = (x11*x21 + x12*x22) / ( sqrt(x11^2 + x12^2) * sqrt(x21^2 + x22^2) ). If u = v, then x11 = x21 and x12 = x22, so cos(theta) = 1.
If you are talking about "x prime", that's not the derivative of x. It's a new random variable. More precisely, it's a random variable transformed from the original x. With the definition of "x prime", you can calculate its variance by plugging it in the formula. You will get 1.