Before coming here, I saw about 5 videos on SOM. No one pointed out that the algorithm is the same as K-means. You enlightened me! Thank you very much.
My humble regards to the Professor for simplifying complex concepts, explaining the intuition behind the algorithms, and encouraging us to understand 🙏
Wonderful professor. I can follow him even though I am so far from the ML field. I am starting to love Mr. Hamid and also AI methods and techniques. Thanks a lot, my favorite virtual teacher.
I like the way he explains things very clearly. Within machine learning there is a tendency to cloud things to make oneself seem more intelligent - this lecturer shows how simple some of these algorithms (and ML in general) truly are without dumbing things down.
SOM 40:39 1:14:25 Given input X, find the i-th unit with the closest weight vector by competition; WᵢᵀX will be maximum. Find the most similar unit: i(X) = arg min_k ‖X − W_k‖, k = 1, 2, 3, …, m, where m = no. of units. Equivalently, for normalized weight vectors, this is arg max_k W_kᵀX — the "max" here means the highest value of the dot product, i.e. the most "aligned" pair of input vector and neuron weight vector. If the vectors are misaligned, the dot product (think cos θ) might be zero.
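The competition step described in the note above can be sketched in NumPy. This is a minimal illustration, not the lecture's code; the names (`find_bmu`, `weights`) are made up. It shows both formulations: minimum Euclidean distance, and maximum dot product (which picks the same unit when the weight vectors are normalized, since ‖x − w‖² = ‖x‖² + 1 − 2wᵀx for unit-length w).

```python
import numpy as np

def find_bmu(x, weights):
    """Best-matching unit: index of the neuron whose weight vector
    is closest to input x, i.e. arg min_k ||x - w_k||."""
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances))

def find_bmu_dot(x, weights):
    """Equivalent competition via dot product, arg max_k w_k^T x,
    valid when all weight vectors have unit length."""
    return int(np.argmax(weights @ x))

# Demo with m = 5 units and 3-dimensional inputs (arbitrary sizes).
rng = np.random.default_rng(0)
w = rng.normal(size=(5, 3))
w /= np.linalg.norm(w, axis=1, keepdims=True)  # normalize each weight vector
x = rng.normal(size=3)

assert find_bmu(x, w) == find_bmu_dot(x, w)  # same winner under normalization
```

With unnormalized weights the two rules can disagree, which is why SOM formulations that use the dot-product competition assume normalized weight vectors.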