You have explained this very well. It would be good if you could also explain what the use of calculating the PCA is in the geometrical representation you showed at the end.
Great work, sir! Really very helpful for my exams. Very grateful to you 🙏 One suggestion: could you please share practice problems in the description or in the comment box? As soon as possible, if you can. Thanks a lot, sir 😀
Here, we considered 2 dimensions as the "high-dimensional" data for the example. One of the most common use cases of PCA is dimensionality reduction. So, if you want, you can use e2 and get the second PC. But then think about it: from 2 variables, we again got 2 variables, so nothing was reduced. That's why he has shown only PC1. However, in reality we generally use 2 PC axes (it mostly depends on your data). If the data has a lot of variables, then keeping 3 or 4 can also be good, but we don't generally go beyond that. In that case you will need e2, e3 and e4 as well. So this is how it works.
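The idea above can be sketched in NumPy. This is a minimal illustration, not the video's code: the data `X`, the sample count, and the choice `k = 2` are all made up for the example. The eigenvectors of the covariance matrix play the role of e1, e2, ... and keeping only the top-k of them is what reduces the dimensionality.

```python
import numpy as np

# Hypothetical data: 100 samples of 4 variables (just for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))

Xc = X - X.mean(axis=0)                 # center each variable
cov = np.cov(Xc, rowvar=False)          # 4x4 covariance matrix

eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # sort descending by variance

k = 2                                   # keep only e1 and e2 (PC1, PC2)
W = eigvecs[:, order[:k]]               # 4x2 projection matrix

X_reduced = Xc @ W                      # data projected onto the top-2 PCs
print(X_reduced.shape)                  # (100, 2): 4 variables reduced to 2
```

With `k = 4` you would get all four PCs back and, as the comment says, you would have gained nothing: 4 variables in, 4 variables out.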
Thank you very much, sir, for your explanation in that video. I am still confused, so I would like to ask how to get the values [-4.3052, 3.7361, 5.6928, -5.1238]. I still don't get it. Thank you, sir.
Hi sir, great explanation of PCA. But when I searched, everything said that covariance is only defined between 2 variables. How do you calculate the covariance matrix if a dataset has more than 2 variables? Could you please give an explanation of that?
@fintech1378 is right. You need to take all pairwise combinations. For example, for 4 variables a, b, c, d, your covariance matrix will be 4x4 with the following entries:

cov(a,a) cov(a,b) cov(a,c) cov(a,d)
cov(b,a) cov(b,b) cov(b,c) cov(b,d)
cov(c,a) cov(c,b) cov(c,c) cov(c,d)
cov(d,a) cov(d,b) cov(d,c) cov(d,d)