I have studied eigenvalues and eigenvectors multiple times, but this video explained their depth to me in a very simple way! One of the best teachers out there.
Guru is Brahma, Guru is Vishnu, Guru is the god Maheshwara; Guru is verily the supreme Brahman; salutations to that revered Guru. Countless salutations to you, sir, for sharing this knowledge with all of us through YouTube.
I am always amazed by how important the concepts of eigenvectors and eigenvalues are; they are among the most important concepts in quantum mechanics. Every operator (e.g. energy, momentum) in quantum mechanics is a linear operator, and our aim is usually to find its eigenvectors and eigenvalues. The time-independent Schrödinger equation usually takes the form of an eigenvalue equation, Hψ = Eψ. It's so amazing to see how these concepts find their role in machine learning as well. My love for math keeps on growing. As always, thank you for your amazing videos.
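A minimal sketch of that eigenvalue equation in NumPy, assuming a made-up 2×2 Hermitian matrix as the Hamiltonian (the numbers are illustrative, not from the video or any real system):

```python
import numpy as np

# Hypothetical 2x2 Hermitian "Hamiltonian", e.g. a two-level system;
# the entries are made up for illustration.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])

# eigh is specialized for symmetric/Hermitian matrices: it returns
# real eigenvalues (the energies E) in ascending order.
E, psi = np.linalg.eigh(H)

# Each column psi[:, i] satisfies H @ psi = E * psi.
for i in range(len(E)):
    assert np.allclose(H @ psi[:, i], E[i] * psi[:, i])
print("energies:", E)
```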
The dedication that shows through your content just proves how good a teacher you are. I always used to wonder what I would do after learning eigenvectors and eigenvalues. Today this video gave me the answer. Thank you so much!
00:02 PCA aims to reduce dimensionality while maintaining the essence of the data.
02:55 Projection and the unit vector for PCA.
10:27 Principal Component Analysis (PCA) helps find the direction of maximum variance.
12:48 Variance measures the spread of data.
19:22 PCA helps in understanding the spread and orientation of data.
21:56 PCA provides complete information about data spread and orientation.
27:10 PCA involves transformations and changing the directions of vectors.
29:39 Linear transformation does not change vector direction.
34:24 PCA uses eigenvectors for linear transformation.
36:36 PCA helps identify vectors with the highest variation in data.
41:55 PCA allows transforming data and creating new dimensions.
44:15 PCA involves transforming the dataset to a new coordinate system.
49:14 Using PCA to find the best two-dimensional representation of 3D data.
52:07 PCA involves transforming and transporting the data.
Crafted by Merlin AI.
You are really a good teacher. I am at IIT Bombay, Environmental Engineering, MTech, but I wanted to learn ML, and this playlist is so far the most understandable for me.
Thank you so much Sir. You not only cleared my doubts about how PCA works, but also, for the first time, gave me the mathematical intuition behind eigenvalues, eigenvectors, and even matrix transformations, which I have been learning for so many years. Best explanation I've seen on this topic.
I am blown away by understanding the true meaning of eigenvectors. I always knew the definition, but now I have understood the meaning. You are a savior!
Sir, this is really the best explanation of PCA. I was struggling to learn PCA before, but after watching this video most of my concepts are clear. Thanks, Sir ji, for this valuable content.
Actually, I was learning PCA for the first time. When I watched the video the first time I didn't understand it, but when I watched it a second time all the topics became very clear. This video is amazing.
45:44 I think it's going to be (3, 1), and when transposed it's (1, 3), which is then multiplied with the matrix representing the dataset: (1, 3) × (3, 1000). This representation is valid too.
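A quick shape check of that claim, with random numbers standing in for the actual eigenvector and dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(3, 1000))  # 3 features x 1000 samples (illustrative)
v = rng.normal(size=(3, 1))        # a (3, 1) column vector, as in the comment

projected = v.T @ data             # (1, 3) x (3, 1000) -> (1, 1000)
print(projected.shape)             # one projected value per sample
```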
Hi Pravin, if you have got the job, could you guide me a little? I have questions about how the work gets distributed in the data science department of a company, how the department works, and so on. Could you please share your email?
One small clarification of the shortcut in the lecture at 16:04: the actual covariance formula includes xmean and ymean; here both were zero, which is why the shortcut sum(x*y)/3 works. The formula for covariance is: covariance(x, y) = Σ[(x - xmean)(y - ymean)] / n. For the same reason, the covariance matrix has variances on its diagonal: at 22:57 both features are the same x, so covariance(x, x) = Σ[(x - xmean)(x - xmean)] / n, which is exactly the formula for variance.
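A small numerical check of both points, using made-up zero-mean data:

```python
import numpy as np

x = np.array([1.0, -2.0, 1.0])   # illustrative zero-mean data
y = np.array([2.0, 0.0, -2.0])

# Full formula: cov(x, y) = sum((x - xmean) * (y - ymean)) / n
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / len(x)
# Because both means are zero here, the shortcut gives the same number:
assert np.isclose(cov_xy, np.sum(x * y) / len(x))

# cov(x, x) collapses to the variance formula:
assert np.isclose(np.sum((x - x.mean()) ** 2) / len(x), x.var())
```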
Those who are wondering why there are three eigenvectors every time: the covariance matrix is a symmetric matrix, and real symmetric matrices have n linearly independent, mutually orthogonal eigenvectors. The zero vector is not considered an eigenvector even though it satisfies Ax = λx; likewise, there can be up to n linearly independent eigenvectors for an n×n symmetric matrix.
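One way to see this numerically, with random data standing in for the video's dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))    # illustrative 3-feature dataset
C = np.cov(X, rowvar=False)      # 3x3 symmetric covariance matrix

vals, vecs = np.linalg.eigh(C)   # eigh: for symmetric matrices
# The eigenvector columns are orthonormal: vecs.T @ vecs is the identity.
assert np.allclose(vecs.T @ vecs, np.eye(3))
```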
Thank you so much sir, you always leave us awestruck with your remarkable explanations and in-depth knowledge. I never knew this topic could be explained with this much clarity. The teacher I never knew I needed in my life ❤️✨
One small correction: 52:55 eigenvectors are the COLUMNS of the matrix returned by np.linalg.eig(), not the rows, which is what you have used... please correct me if I am wrong.
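This matches the NumPy documentation: for vals, vecs = np.linalg.eig(A), the column vecs[:, i] is the eigenvector paired with vals[i]. A toy check:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # toy matrix with obvious eigenpairs

vals, vecs = np.linalg.eig(A)
# The i-th eigenvector is the COLUMN vecs[:, i], not the row vecs[i, :]:
for i in range(len(vals)):
    assert np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])
```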
The largest eigenvector will correspond to the largest eigenvalue, but corresponding to a single eigenvalue there is more than one eigenvector; in fact, there is an entire eigenspace (except the 0 vector, of course)!! In the R2 plane, there are uncountably many eigenvectors corresponding to the largest eigenvalue.
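A tiny demonstration of that eigenspace point, on a made-up 2×2 symmetric matrix: every nonzero scalar multiple of an eigenvector is again an eigenvector for the same eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # toy symmetric matrix
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals)             # index of the largest eigenvalue
v = vecs[:, i]

# Any nonzero scalar multiple k*v is also an eigenvector for the same λ:
for k in (0.5, -3.0, 100.0):
    assert np.allclose(A @ (k * v), vals[i] * (k * v))
```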
Ooh, I figured it out. I think if the eigenvectors are linearly dependent they have the same direction, and direction is what matters; and if we have a linearly independent one, then we will have one more u that works equally well.
If you are facing an error here, note that DataFrame.append() was deprecated and then removed in pandas 2.0. Replace append() with pd.concat().
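A minimal before/after (the column name and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
row = pd.DataFrame({"a": [3]})

# pandas < 2.0 (deprecated, then removed in 2.0):
# df = df.append(row, ignore_index=True)

# pandas >= 2.0 replacement:
df = pd.concat([df, row], ignore_index=True)
print(df)
```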
One question: do we need to sort the eigenvectors based on the highest eigenvalues and then choose the eigenvectors accordingly? Also, the sum of the top k eigenvalues will show us how many eigenvectors we need to take (in the case of high-dimensional data).
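Yes, that is the usual recipe. A sketch of the sort-then-threshold idea in NumPy; the 0.95 cutoff is an arbitrary example, not a rule:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))          # illustrative dataset
C = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(C)

order = np.argsort(vals)[::-1]         # sort eigenvalues, largest first
vals, vecs = vals[order], vecs[:, order]

# Cumulative explained-variance ratio: keep k components once it
# crosses a chosen threshold, e.g. 95% of the total variance.
ratio = np.cumsum(vals) / np.sum(vals)
k = int(np.searchsorted(ratio, 0.95)) + 1
print(k, ratio)
```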
Sir, I just wanted to ask whether we can write our own machine learning algorithms from scratch instead of using sklearn and TensorFlow. Please make a video about that. I have been following your whole series. Sir, do reply. Thanks for your efforts.
Hi Sir, just to understand the concept well: when we do the transformation of the data D, do we use the matrix of eigenvectors (calculated from the covariance matrix), or do we use the covariance matrix itself? It's the matrix of eigenvectors, right?
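Right; the covariance matrix is only the intermediate step used to compute the eigenvectors, and the projection itself uses the eigenvector matrix. A sketch under that reading (shapes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.normal(size=(1000, 3))     # illustrative dataset, samples x features

C = np.cov(D, rowvar=False)        # covariance matrix: used only to get eigenvectors
vals, vecs = np.linalg.eigh(C)
top2 = vecs[:, np.argsort(vals)[::-1][:2]]  # eigenvectors of the top-2 eigenvalues

D_new = D @ top2                   # project with the EIGENVECTOR matrix
print(D_new.shape)                 # (1000, 2)
```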
I think there can only be n eigenvalues for an n×n matrix, and n unit eigenvectors for it, but there can be as many eigenvectors as you like: we just need to multiply those unit eigenvectors by some scalar k to get more eigenvectors. :)
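That matches what NumPy does: np.linalg.eig returns unit-length eigenvector columns, and any nonzero rescaling of them still satisfies the eigenvector equation. A quick check on a made-up matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)

# eig normalizes its output: each eigenvector column has unit length...
print(np.linalg.norm(vecs, axis=0))   # [1. 1.]

# ...but any rescaled copy k * v is an equally valid eigenvector.
v = 7.5 * vecs[:, 0]
assert np.allclose(A @ v, vals[0] * v)
```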