
07L - PCA, AE, K-means, Gaussian mixture model, sparse coding, and intuitive VAE 

Alfredo Canziani
10K views

Published: 28 Aug 2024

Comments: 36
@ShihgianLee · 2 years ago
I am awake at 52:09 as well 🤣 Joke aside, the 2021 videos are quite different from 2020, which is a great treat! I am being introduced to VAE from the EBM's R(z). Also, thanks for sharing the homework 3 questions, which help me think about and understand EBMs better. Thank you, Professor Yann and Professor Alfredo! 🥰
@cambridgebreaths3581 · 3 years ago
Yesss... missed the notification for a few hours... I should make a model to predict the upload of your next videos :)))
@alfcnz · 3 years ago
Haha, next week is the last week of uploading content.
@cambridgebreaths3581 · 3 years ago
@alfcnz Lovely. Thanks, Alf!
@user-co6pu8zv3v · 3 years ago
Hello, Alfredo! Thank you for the video!
@alfcnz · 2 years ago
Hello 👋🏻👋🏻👋🏻 You're welcome 😊😊😊
@harisumanth · 3 years ago
Hi, this video doesn't have a thumbnail. Just letting you know. Thanks for the amazing lectures!!
@alfcnz · 3 years ago
I'll do that in the morning. Thanks! Something must have gone wrong with the automatic thumbnail generation.
@alfcnz · 3 years ago
Okay, added from my phone. Thanks again.
@harisumanth · 3 years ago
@alfcnz You're welcome!
@hamedgholami261 · 2 years ago
Thank you for this, but one request: can you share the homework for these weeks as you did for the previous weeks? It's really good practice to go through. Thanks again.
@alfcnz · 2 years ago
There are 3 homework assignments in total.
@hamedgholami261 · 2 years ago
@alfcnz Oh, I see! So you mean NYU students go through 3 assignments as well? I'm really grateful you share them with us. Thank you, sir!
@hamedgholami261 · 2 years ago
@alfcnz But if you mean you don't intend to share them publicly, can I contact you in person and provide proof of my identity as a research student, so that you can share the homework with me and be sure I won't share it anywhere? I am just a student and my only intent is to reinforce my knowledge. Anyway, if you disagree, please suggest an alternative way for me to reinforce my knowledge by coding something. Thank you in advance.
@alfcnz · 2 years ago
3 assignments and a final project.
@anondoggo · 2 years ago
Time stamps:
00:06:55 - Training methods revisited
00:08:03 - Architectural methods
00:12:00 - 1. PCA
00:18:04 - Q&A on Definitions: Labels, (un)conditional, and (un, self)supervised learning
00:25:31 - 2. Auto-encoder with Bottleneck
00:27:40 - 3. K-Means
00:34:40 - 4. Gaussian mixture model
00:41:37 - Regularized EBM
00:52:08 - Yann out of context
00:53:24 - Q&A on Norms and Posterior: when the student is thinking too far ahead
00:53:58 - 1. Unconditional regularized latent variable EBM: Sparse coding
01:06:10 - Sparse modeling on MNIST & natural patches
01:12:18 - 2. Amortized inference
01:17:02 - ISTA algorithm & RNN Encoder
01:26:56 - 3. Convolutional sparse coding
01:36:37 - 4. Video prediction: very briefly
01:39:22 - 5. VAE: an intuitive interpretation
01:48:34 - Helpful whiteboard stuff
01:52:35 - Another interpretation
@alfcnz · 2 years ago
Thank you for this! I'll add the chaptering for the English and French versions. Feel free to create more of these! ❤️❤️❤️
@alfcnz · 1 year ago
Added!
@youtugeo · 2 years ago
At 1:12:00 Yann LeCun says that the brain doesn't do reconstruction, that it doesn't reconstruct an input from an embedding. This seems very counterintuitive to me... Why not? What are dreams then? Aren't they reconstructions of input signals (images, sounds, etc.) from some sort of embeddings?
@youtugeo · 2 years ago
Ok, instead of deleting the question I will write what I think the answer is... I think he means that these embeddings should be learned through feature extraction first (learning to generate z from y with a decoder). Then, of course, they could be used for reconstruction...
@AdityaSanjivKanadeees · 2 years ago
Hi Alfredo, once I've gone through this online version, do you recommend watching some parts of the offline version as well? PS: Thanks a lot for this series!!
@alfcnz · 2 years ago
What is the “offline version”?
@AdityaSanjivKanadeees · 2 years ago
@alfcnz The spring 2020 version, which was recorded in a classroom.
@alfcnz · 2 years ago
In 2020 we had other guest lectures and a few of our lessons were also different. On the website of the 2021 edition I've listed all the videos that one should study.
@AdityaSanjivKanadeees · 2 years ago
@alfcnz Thanks a lot!
@buoyrina9669 · 2 years ago
Are there any reference textbooks for this course?
@alfcnz · 2 years ago
I'm writing it as I'm replying to you 😃😃😃
@buoyrina9669 · 2 years ago
@alfcnz Thanks. Looking forward to your book!
@alfcnz · 2 years ago
Trust me, me too, me too 😅😅😅
@ahmedbahaaeldin750 · 3 years ago
There is a sudden jump in the prerequisite math needed. Why don't you point us to the best sources to keep up with you and Yann in terms of math?
@alfcnz · 3 years ago
Which part are you facing difficulties with? Can you point out minutes:seconds?
@sujinshrestha2543 · 3 years ago
nice
@alfcnz · 3 years ago
😇😇😇
@anondoggo · 1 year ago
1:07:41 I still can't quite figure out how Yann visualized the columns. Is it by gradient ascent (Grad-CAM, I think)? Thank you!!!
@petrdvoracek670 · 1 year ago
I think it is the actual weights W; no Grad-CAM required. He also mentions that each tile is a basis function, which confirms my assumption.
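For anyone curious what plotting "the actual weights W" amounts to in practice, here is a minimal sketch (not taken from the lecture code): it assumes a sparse-coding dictionary `W` of shape `(patch_dim, n_atoms)` learned on 8×8 image patches, uses random numbers as a stand-in for the learned weights, and simply reshapes each column into a patch and displays it as a tile.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical dictionary: each column of W is one basis function (atom)
# learned on 8x8 image patches, so patch_dim = 64.
patch_size = 8
n_atoms = 64
rng = np.random.default_rng(0)
W = rng.standard_normal((patch_size * patch_size, n_atoms))  # stand-in for learned weights

# Lay the atoms out on a square grid of tiles, one reshaped column per tile.
grid = int(np.ceil(np.sqrt(n_atoms)))
fig, axes = plt.subplots(grid, grid, figsize=(6, 6))
for k, ax in enumerate(axes.flat):
    ax.axis("off")
    if k < n_atoms:
        atom = W[:, k].reshape(patch_size, patch_size)
        # Normalize each tile independently so its structure is visible.
        ax.imshow(atom, cmap="gray", vmin=atom.min(), vmax=atom.max())
plt.tight_layout()
plt.show()
```

With a dictionary actually trained on natural image patches, the tiles typically come out as localized, oriented, edge-like basis functions rather than the noise shown by this random stand-in.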