This is easier to understand if you're already familiar with PCA. This video is a summary of what I've learnt about dimensionality reduction. Thanks, Siraj.
Siraj, maaaaahn, I am working on my FYP on recommender systems and I was using SVD for reducing dimensions. Thanks for exposing me to PCA and for showing how breakthrough-worthy this field is... keep up the work.. #Respect
I am a prodigy machine learner with Python and scikit-learn, and a 16-year-old boy. I learned more in this video about dimensionality reduction than I could possibly learn in hours. I am now on my way to reduce the dimensions of my data and improve its accuracy. Thanks, Siraj sir, I really appreciate that you are posting educational content free of cost. Allah bless you.
Thanks for your content, Siraj! It is both informative and entertaining. The videos have improved dramatically as well. Could you make a video comparing and contrasting some of the different neural networks? Especially convolutional neural nets versus plain deep nets versus basic artificial neural nets.
I am pausing this just to laugh at all the pictures and side stuff. This is so informative and fun. What a refreshing surprise as I am trying to research this.
As someone who has worked with high-dimensional data classification, I can say that PCA is a truly powerful technique, and not at all that complicated. Another interesting technique for dimensionality reduction is "random projection", which has the cool property of closely retaining distance relationships between high-dimensional data points in the lower dimensional space (en.wikipedia.org/wiki/Random_projection)
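Here's a minimal sketch of that with scikit-learn's GaussianRandomProjection (assuming sklearn is installed; the dimensions and n_components are just illustrative choices, not from the video):

import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.metrics import pairwise_distances

# 500 points in a 10,000-dimensional space (toy data)
X = np.random.rand(500, 10000)

# Project down to 1,000 dimensions with a random Gaussian matrix
transformer = GaussianRandomProjection(n_components=1000, random_state=0)
X_low = transformer.fit_transform(X)

# Pairwise distances are approximately preserved (Johnson-Lindenstrauss lemma)
d_high = pairwise_distances(X[:10])
d_low = pairwise_distances(X_low[:10])
print(np.abs(d_high - d_low).max() / d_high.max())  # small relative error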
Hi Siraj... I want to ask you: is it possible to detect Babesia (an object only a few pixels in size) in an image? First I convert it to a grayscale image, then apply an erosion morphological operation... The Babesia regions are enhanced, but how do I extract such a small object from the image? I really don't know.
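For what it's worth, a rough OpenCV sketch of one way to do this after the erosion step; the filename, threshold choice, and area cutoff are all guesses you would have to tune:

import cv2
import numpy as np

img = cv2.imread("smear.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
eroded = cv2.erode(img, np.ones((3, 3), np.uint8))

# Binarize with Otsu, then keep only small connected components
_, binary = cv2.threshold(eroded, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
mask = np.zeros_like(binary)
for i in range(1, n):  # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] < 50:  # "small object" area cutoff, tune this
        mask[labels == i] = 255
cv2.imwrite("babesia_candidates.png", mask)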
Can PCA be used for predicting data? For example, using the same eigenvectors but splitting the data (the PC scores and eigenvalues) into training and testing sets. Thank you... hopefully you read this comment. Best regards, Brata
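For reference, a minimal scikit-learn sketch of what I think is being asked: fit the eigenvectors on the training split only, then reuse them to project the test split (the data here is a random stand-in):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 10)  # stand-in for the real dataset
X_train, X_test = train_test_split(X, test_size=0.3, random_state=0)

pca = PCA(n_components=3)
scores_train = pca.fit_transform(X_train)  # eigenvectors learned on training data only
scores_test = pca.transform(X_test)        # same eigenvectors reused on the test data
# scores_train / scores_test can now feed any downstream predictor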
I am kinda struggling with one doubt... what to do when our independent variables are on different scales... For example, I have Likert-scale responses and also discrete (2- or 3-level) responses... is there any way to perform PCA/EFA on such mixed data? Please do reply :(
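For what it's worth, one common workaround is to z-score every column before PCA so the Likert and 2/3-level items sit on a comparable scale; a scikit-learn sketch with made-up survey data is below (whether this is statistically ideal for ordinal data is a separate question):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical mixed survey data: one Likert item and two discrete items
X = np.column_stack([
    np.random.randint(1, 6, 100),  # Likert 1-5
    np.random.randint(0, 2, 100),  # binary
    np.random.randint(0, 3, 100),  # 3-level
])

X_std = StandardScaler().fit_transform(X.astype(float))  # mean 0, variance 1 per column
pcs = PCA(n_components=2).fit_transform(X_std)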
Hey Siraj, love watching your videos. Just a doubt here: while explaining feature standardization, the formula you used is, I feel, a normalization formula, (x - xmin)/(xmax - xmin); standardization is (x - mu)/sigma, which gives mu = 0 and std dev = 1. Please clarify.
Finally some real meat! I didn't get the part following eigenvectors (except the intuition of eigenvectors); I think I gotta watch all the resource links in the description and then re-watch this video.
3:58 In order to standardize the data (getting a distribution with mean = 0 and stdev = 1), couldn't we do Xi' = (Xi - avg(X))/stdev(X) instead of Xi' = (Xi - min(X))/(max(X) - min(X))? Are the two equivalent? Why choose one over the other? P.S. Thank you a lot!!! Your channel is great!!!
Hi Siraj! Thanks for the videos; these are really helpful. Can you please make a video about how I can train my own word embeddings, like Google's word2vec? I want to make word embeddings for the Bangla language.
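For reference, a rough gensim sketch (the parameter is vector_size in gensim 4.x, size in older releases; the two tiny example sentences are just placeholders for a real tokenized Bangla corpus):

from gensim.models import Word2Vec

# sentences: a list of tokenized Bangla sentences (placeholder examples)
sentences = [["আমি", "ভাত", "খাই"], ["তুমি", "বই", "পড়"]]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)
model.save("bangla_word2vec.model")
vector = model.wv["ভাত"]  # 100-dimensional embedding for one word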
Hey Siraj, what is the best laptop for programming that will not lag and will maximize the range of applications you can create? P.S. I really love your videos.
I've been noticing that you take snippets from other channels as well. You should ask 3Blue1Brown before using his footage. Also, you should name him in the video, because that guy definitely deserves attention.
There is a typo in the Introduction section, under the heading "PCA and Dimensionality Reduction", in the last sentence. CURRENT TEXT: "larger magnitude than others THAT the reduction..." SUGGESTED CORRECTION: "larger magnitude than others THEN the reduction..."
4:05
x' = (x - xmin)/(xmax - xmin)
x1' = (115 - 115)/(175 - 115) = 0
x2' = (140 - 115)/(175 - 115) ≈ 0.417
x3' = (175 - 115)/(175 - 115) = 1
"That means that the data should have a MEAN of 0 and a variance of 1." But here the mean is greater than 0. The mean will be 0 if the formula is x' = (x - xMEAN)/(xmax - xmin).
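For anyone following this thread, a quick numpy check on those three weights shows the difference between the two formulas:

import numpy as np

x = np.array([115.0, 140.0, 175.0])

minmax = (x - x.min()) / (x.max() - x.min())  # min-max normalization
zscore = (x - x.mean()) / x.std()             # z-score standardization

print(minmax, minmax.mean())                # [0. 0.417 1.], mean ~0.47, not 0
print(zscore, zscore.mean(), zscore.std())  # mean 0, std 1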
"Eigen" actually roughly translates to "your own"; I guess it's a shortening of "Eigenschaft", which literally translates to "characteristic". Even to a German person, the terms eigenvalue and eigenvector are absolutely non-self-explanatory. You have to have your nose put into the fact that these things characterize a matrix, unless you have a brain that just gets that from looking at the relationship of matrices to their eigenvalues/eigenvectors. It's actually kind of funny that the first 5 minutes of your video do this better than most LA courses at university. Practical application FTW, I guess.
3Blue1Brown squad! I mean, Siraj probably had no ill intent. I agree that he should have given him some credit, but he used it more as an aid than as plagiarism.
Yeah, but still, not cool to not mention him in the description. Plus I think a lot of viewers here would greatly benefit from watching 3Blue1Brown's linear algebra video series. I like Siraj a lot, and I don't suppose he did this on purpose either.
Pretty useless stuff. Seems like he is reading off a Wikipedia page with some meme flavoring. I found this to be a better one for PCA: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-FgakZw6K1QQ.html
Hi Siraj, this is my submission for this week's challenge: github.com/Sri-Vishnu-Kumar-K/MathOfIntelligence/tree/master/dimensionality_reduction_pca. I have implemented image compression using PCA and have also demonstrated it on a grayscale image of Shakira. I hope you like it. I have taken down my previous solution because of a valid flaw in the dataset pointed out by someone; thanks a lot for that, it saved me the blues. Thanks a lot for these videos, they serve as great learning material.
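For anyone who wants to try the same idea without the repo, a minimal numpy/scikit-learn sketch of grayscale PCA compression; the filename and component count are placeholders, and imageio is just one of many possible loaders:

import numpy as np
from sklearn.decomposition import PCA
from imageio import imread, imwrite  # any image I/O library works

img = imread("shakira_gray.png").astype(float)  # 2-D grayscale array

pca = PCA(n_components=50)                # keep the 50 strongest components
compressed = pca.fit_transform(img)       # rows are treated as samples
restored = pca.inverse_transform(compressed)

imwrite("shakira_pca50.png", np.clip(restored, 0, 255).astype(np.uint8))
print("explained variance:", pca.explained_variance_ratio_.sum())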
No offense, but instead of all the antics, if you could have explained the math behind it and how it works, that would have been way cooler.