At 8 minutes, it works not only by the given explanation but also follows from the epoch training formulas:
a) $Y_2^{(i)} = \beta(x^{(i)\top} w)$
b) $err = y^{(i)} - Y_2^{(i)}$ ... (true − prediction)
c) $w = w + err \cdot x^{(i)}$
Depending on the error, the update in c) changes. If the predicted output ($Y_2$) is 0 and the target ($y$) is 1, the error is $+1$, so c) becomes $w = w + x^{(i)}$; graphically, this means the line dividing the classes moves to the left, away from class 1 toward class 0. The opposite can also happen: if the output is 1 and the target is 0, the error is $-1$ and the input vector is subtracted from the weights, $w = w - x^{(i)}$. This is how the errors are handled, and it is not clearly understandable from the lecture alone.
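A minimal sketch of these three steps in Python, assuming $\beta$ is a step activation; the tiny AND dataset, the epoch count, and the variable names are illustrative choices of mine, not from the lecture:

```python
import numpy as np

def beta(z):
    # Assumed step activation: 1 if the weighted sum is non-negative, else 0.
    return (z >= 0).astype(int)

def train_epoch(X, y, w):
    for i in range(len(X)):
        y2 = beta(X[i] @ w)    # a) prediction Y2
        err = y[i] - y2        # b) error = true - prediction (+1, 0, or -1)
        w = w + err * X[i]     # c) update: adds x^(i) when err=+1, subtracts it when err=-1
    return w

# Illustrative data: AND of two inputs; the first column is a bias term.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w = np.zeros(3)
for _ in range(10):            # a few epochs suffice on separable data
    w = train_epoch(X, y, w)
print(w, beta(X @ w))          # learned weights and final predictions [0 0 0 1]
```

Each misclassification nudges $w$ by $\pm x^{(i)}$, which is exactly the boundary shift described above.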
Thanks again for the wonderful lectures. I need some clarification on the uncertainty principle discussion after 31:40. It is not clear here. The two outputs shown for SGz are, in fact, the counts of particles in state |0> or |1>. The further experiment with SGx on state |0> is also not clear: do we mean running SGx on those particles which were deflected into state |0> by SGz, and so on? Does it also mean that if the first SGz experiment is immediately followed by another SGz on those particles which were deflected into state |0>, we still get two types of deflections, and not all in the |0> direction as one would expect?
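Under the standard projective-measurement formalism, the sequential setup being asked about can be simulated directly. Below is a minimal sketch under that assumption, not code from the lecture; the incoming |+> beam, the basis vectors, and the trial count are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

ket0 = np.array([1.0, 0.0])                  # SGz outcome |0>
ket1 = np.array([0.0, 1.0])                  # SGz outcome |1>
plus = np.array([1.0, 1.0]) / np.sqrt(2)     # SGx outcome |+>
minus = np.array([1.0, -1.0]) / np.sqrt(2)   # SGx outcome |->

def measure(state, basis):
    # Projective measurement: sample an outcome with Born-rule probability
    # and collapse the state onto the chosen basis vector.
    probs = [abs(b @ state) ** 2 for b in basis]
    k = rng.choice(len(basis), p=probs)
    return k, basis[k]

N = 10_000
kept = z_again_0 = 0
x_counts = [0, 0]
for _ in range(N):
    state = plus                                  # assumed incoming beam state
    k, state = measure(state, [ket0, ket1])       # first SGz
    if k != 0:
        continue                                  # keep only the |0> beam
    kept += 1
    k2, _ = measure(state, [ket0, ket1])          # scenario 1: SGz again
    z_again_0 += (k2 == 0)
    kx, _ = measure(state, [plus, minus])         # scenario 2: SGx instead
    x_counts[kx] += 1

print("SGz again on the |0> beam, fraction in |0>:", z_again_0 / kept)  # -> 1.0
print("SGx on the |0> beam, counts [|+>, |->]:", x_counts)              # -> roughly 50/50
```

With these assumptions, repeating SGz on the |0> beam reproduces |0> every time, while SGx on the same beam splits roughly 50/50, which is the behavior the question is probing.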