
17: Principal Components Analysis - Intro to Neural Computation

MIT OpenCourseWare
37K views

MIT 9.40 Introduction to Neural Computation, Spring 2018
Instructor: Michale Fee
View the complete course: ocw.mit.edu/9-...
YouTube Playlist: • MIT 9.40 Introduction ...
Covers eigenvalues and eigenvectors, Gaussian distributions, computing covariance matrices, and principal components analysis (PCA).
License: Creative Commons BY-NC-SA
More information at ocw.mit.edu/terms
More courses at ocw.mit.edu
We encourage constructive comments and discussion on OCW’s YouTube and other social media channels. Personal attacks, hate speech, trolling, and inappropriate comments are not allowed and may be removed.
More details at ocw.mit.edu/co...
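A minimal sketch of the covariance/eigenvector pipeline listed in the description above, on made-up 2-D data (the data and variable names are my own, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D "cloud" of points: rows are samples, columns are variables.
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[3.0, 1.2], [1.2, 1.0]],
                            size=500)

Z = X - X.mean(axis=0)              # center the data
C = Z.T @ Z / (len(Z) - 1)          # sample covariance matrix
evals, evecs = np.linalg.eigh(C)    # eigenvalues (ascending), eigenvectors in columns

order = np.argsort(evals)[::-1]     # sort components by explained variance
pcs = evecs[:, order]               # principal component directions
scores = Z @ pcs                    # data expressed in the PC basis
```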

Published: 7 Oct 2024

Comments: 21
@SyedMohommadKumailAkbar 4 months ago
Absolutely incredible teaching. Started doing math again after a gap of 12 years and this just clicked.
@ksjksjgg 2 years ago
Great thanks to Prof. Fee and MIT for sharing this excellent lecture!!!
@prasanth_y_11 1 year ago
Wow, this is an amazing explanation, building the topic up from the basics. Even spectral clustering ends with a similar analysis of the eigenvalues of the covariance matrix.
@markdebono1273 8 months ago
I have been watching videos on this topic for the last 2-3 months. However, it was ONLY after watching this lecture that I understood EVERYTHING! Not to discredit any of the other videos I watched, but this video moves at a very good pace, and I did not need to pause a hundred times to give my mind time to process all the incoming information and cool down 😁. If I could give a thumbs up 10 times, I would do it!
@ElectroCute10 2 years ago
This is nice. I tried to replicate the de-noising in Python using sine and cosine "signals" with some random noise (basically sine(timepoint) + np.random()) and realised that it does not work the way it is described here, because the variance is roughly the same in every dimension (i.e. every timepoint). To isolate the underlying trend using eigenvectors I had to skip the step Z = X - MU at 1:16:01, as that step causes the variance to be approximately the same in all dimensions. If we do not subtract the mean but define our "covariance" matrix as simply Z * Zᵀ, then our "variance" is actually higher where the underlying signal is higher and lower where it is lower. That way I could isolate the signal. Having said that, maybe I have done something completely wrong. This is MIT, after all :)
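For anyone who wants to try the experiment this comment describes, here is a rough reconstruction (my own sketch, not the commenter's code): many noisy copies of a sine wave, the Z = X - MU step from 1:16:01, and a rank-1 reconstruction from the top eigenvector of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)       # 100 timepoints = 100 dimensions
n_trials = 200
# Each row is one noisy observation of the same underlying sine signal.
X = np.sin(t) + 0.5 * rng.standard_normal((n_trials, t.size))

mu = X.mean(axis=0)                      # mean over trials at each timepoint
Z = X - mu                               # the Z = X - MU step at 1:16:01
C = Z.T @ Z / (n_trials - 1)             # covariance matrix (timepoints x timepoints)

evals, evecs = np.linalg.eigh(C)         # eigenvalues in ascending order
pc1 = evecs[:, -1]                       # eigenvector with the largest eigenvalue

# Rank-1 reconstruction: project onto PC1, then add the mean back in.
denoised = Z @ np.outer(pc1, pc1) + mu
```

With a single repeated waveform, the trial average itself already approximates the sine, which is consistent with the commenter's observation that the mean-subtracted residuals have roughly the same variance at every timepoint.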
@jonilyndabalos7105 2 years ago
Thank you, MIT, for sharing this awesome lecture series with the public. You make learning accessible to all, especially the underprivileged. Please keep these videos coming.
@ganeshhampapura9842 2 years ago
Superb clarity, professor. God bless you.
@lijisisi 3 years ago
Wonderful material! Thank you so much, Dr. Fee.
@jorgegonzalez-higueras3963 3 years ago
Thank you, Professor Fee, for a very clear explanation!
@djangoworldwide7925 1 year ago
This cloud is basically the electron cloud, isn't it? Amazing how everything fits that same radial Gaussian distribution.
@CristianTraina 3 years ago
Is it really necessary to zoom in and out all the time? It doesn't let me focus on what I'm reading.
@et2124 9 months ago
Lol, Lina always asks the exact questions I want to ask.
@benjaminbazi9355 4 years ago
Good lecture!
@nshilla 3 years ago
The application part of the lecture starts at 41:00.
@abhay9994 11 months ago
Awesome
@djangoworldwide7925 1 year ago
I don't believe anyone actually goes through this process anymore; nowadays it's a single line of code plus a biplot.
@andrewfalcone2701 4 years ago
Can someone confirm that lambda+ = a+b, lambda- = a-b is a mistake? I couldn't simplify the radical and double checked with MATLAB's symbolic equation solver (but it doesn't always simplify correctly) and some numbers.
@andrewfalcone2701 4 years ago
The A matrix is not the general 2x2 symmetric matrix but [a, b; b, a]. For some reason, d is used on the right side.
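A quick symbolic check of the question in this thread (a sketch using SymPy; not part of the original comments): for the special symmetric matrix [a, b; b, a] the eigenvalues do reduce to a + b and a - b, while the general symmetric matrix [a, b; b, d] gives (a + d)/2 ± sqrt(((a - d)/2)² + b²), which only simplifies to a ± b when a = d.

```python
import sympy as sp

lam, a, b, d = sp.symbols('lambda a b d')

# Special case from the lecture: A = [a, b; b, a]
A = sp.Matrix([[a, b], [b, a]])
char_poly = sp.factor(A.charpoly(lam).as_expr())
print(char_poly)       # factors as (lambda - a - b)*(lambda - a + b) -> eigenvalues a + b, a - b

# General symmetric 2x2: B = [a, b; b, d]
B = sp.Matrix([[a, b], [b, d]])
for val in B.eigenvals():
    print(sp.simplify(val))
# each eigenvalue equals (a + d)/2 +/- sqrt(((a - d)/2)**2 + b**2),
# which does not reduce to a +/- b unless a = d
```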
@amiha2443 2 years ago
40:24 Just tagging the PCA main part for my reference
@jimmylok 6 months ago
All that zooming in and out makes me dizzy...
@animikhaghosh6536 2 years ago
3:29 wow