
Variational Autoencoders 

Paul Hand
1.7K subscribers
32K views

A lecture that discusses variational autoencoders. We discuss generative models, plain autoencoders, the variational lower bound and evidence lower bound, variational autoencoder architecture, and stochastic optimization of the variational lower bound.
This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand.
The notes are available at: khoury.northeas...
References:
Kingma and Welling 2019:
Kingma, Diederik P., and Max Welling. "An Introduction to Variational Autoencoders." Foundations and Trends® in Machine Learning 12, no. 4 (2019): 307-392. arxiv.org/abs/...
Kingma and Welling 2014:
Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013).
Razavi et al. 2019:
Razavi, Ali, Aaron van den Oord, and Oriol Vinyals. "Generating diverse high-fidelity images with VQ-VAE-2." In Advances in Neural Information Processing Systems, pp. 14866-14876. 2019.

Published: 27 Sep 2024
Comments: 25
@wilsonlwtan3975 · 8 months ago
This is a gem. Finally, someone who is able to teach this concisely and well! Thank you!
@pietrocestola7856 · 10 months ago
Clear, concise and very accurate. Thank you so much for sharing this wonderful explanation with us.
@amirhosseinramazani757 · 2 years ago
I enjoyed your explanation. I needed something like this video to get a little deeper into the theory of VAEs. Thank you!
@yurigansmith · 2 months ago
Very good presentation. Thanks a lot!
@yurigansmith · 2 months ago
Training of generative models starts here: 6:26
@gorgolyt · 3 years ago
Best explanation on RU-vid. Exactly what I was looking for. Thorough, logical, intuitive.
@user-or7ji5hv8y · 3 years ago
wow, this is so well explained.
@sahhaf1234 · 10 months ago
How do we know that p(x|z) is normally distributed? Do we just assume it? x|z is just a neural network and I don't see any reason for p(x|z) to be normally distributed. Actually, the relation between x and z must be deterministic.
@AmanSharma-ug6sr · 1 month ago
Not apparent in the video, but the x|z neural network actually outputs the mean of the distribution of x for that z, which is a Gaussian. This means a target image can be generated by multiple z's (and thus by multiple means). The loss function has two opposing terms: a reconstruction error that minimizes the distance between this mean and the target image, pulling the generated mean toward the right place, and a KL divergence term between q(z|x) and the standard normal distribution, which tries to bring the means for similar images closer together.
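For a concrete picture of the two opposing terms described in this reply, here is a minimal PyTorch-style sketch of the VAE objective (not code from the lecture; the names encoder and decoder, the flattened input shape, and the unit-variance Gaussian for x|z are assumptions for illustration):

import torch

def vae_loss(x, encoder, decoder):
    # Encoder (parameters phi) outputs the parameters of q(z|x) = N(mu, diag(exp(logvar))).
    mu, logvar = encoder(x)

    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps

    # Decoder (parameters theta) outputs the mean of p(x|z); with a fixed unit variance,
    # the negative log-likelihood reduces to a squared-error reconstruction term.
    x_mean = decoder(z)                              # x assumed flattened to (batch, dim)
    recon = 0.5 * ((x - x_mean) ** 2).sum(dim=1)

    # Closed-form KL( q(z|x) || N(0, I) ) for diagonal Gaussians.
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(dim=1)

    # Negative ELBO: reconstruction pulls the decoder mean toward the target image,
    # while the KL term pulls q(z|x) toward the standard normal prior.
    return (recon + kl).mean()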
@gomctigger4439 · 2 years ago
Hi @Paul Hand, thank you for the lecture. What is the intuition behind using q(z|x) in the expectation, or behind the expectation at all? I see that it makes sense mathematically, but how would one get the idea? In contrast, there is a derivation of the ELBO via importance sampling and then applying Jensen's inequality, or via the optimal sampler.
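The derivation this question alludes to is short enough to sketch here (the standard importance-sampling argument, not a quote from the lecture notes): write the marginal likelihood as an expectation over the proposal q_phi(z|x) and apply Jensen's inequality to the concave log,

\[
\log p_\theta(x)
  = \log \int p_\theta(x, z)\, dz
  = \log \mathbb{E}_{q_\phi(z \mid x)}\!\left[\frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]
  \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]
  = \mathrm{ELBO}(\theta, \phi; x).
\]

Using q_phi(z|x) as the importance-sampling proposal is exactly what makes the expectation in the bound an expectation over q(z|x).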
@oFabianLoL · 1 year ago
I don't understand what phi and theta mean. "The parameters of the model": does that mean the weights of the neural network, or the parameters of the distribution, e.g. if it is Gaussian, a mu and sigma? I'd appreciate it if anyone can clarify, thank you!
@ThatQCboy · 1 year ago
Parameters of the model. We use MLE principles to find the optimal phi and theta.
@doyney · 1 year ago
I'm pretty sure phi and theta are the weights and biases of the encoder and decoder neural networks, respectively.
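To make the notation concrete, here is a toy sketch (a hypothetical architecture, not the one in the lecture) of what phi and theta refer to in code: phi is the set of encoder weights that produce the distribution parameters mu and sigma of q(z|x), and theta is the set of decoder weights that produce the mean of p(x|z); mu and sigma themselves are network outputs, not model parameters.

import torch.nn as nn

# Hypothetical toy networks for 28x28 images with a 20-dimensional latent space.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 2 * 20))   # outputs [mu, logvar] of q(z|x)
decoder = nn.Sequential(nn.Linear(20, 256), nn.ReLU(),
                        nn.Linear(256, 784))      # outputs the mean of p(x|z)

phi = list(encoder.parameters())    # "phi" in the lecture's notation: encoder weights/biases
theta = list(decoder.parameters())  # "theta" in the lecture's notation: decoder weights/biases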
@madhusudanverma6564 · 3 years ago
24:48: how does maximizing the VLB roughly maximize p(x)? Since x is given, p(x) should be constant.
@josephpalermo8898 · 2 years ago
p(x) is actually parameterized, therefore it's not constant.
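The identity behind this exchange (standard in the VAE literature, e.g. Kingma and Welling 2019) may help: for every x,

\[
\log p_\theta(x)
  = \mathrm{ELBO}(\theta, \phi; x)
  + D_{\mathrm{KL}}\!\big(q_\phi(z \mid x)\,\big\|\,p_\theta(z \mid x)\big)
  \;\ge\; \mathrm{ELBO}(\theta, \phi; x),
\]

since the KL divergence is nonnegative. Because p_theta(x) depends on the decoder parameters theta, it is not a constant, and raising the ELBO over both theta and phi either raises log p_theta(x) or shrinks the KL gap to the true posterior.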
@bluestar2253 · 3 years ago
One of the best explanations on VAE on YT. Thank you and keep up the good work!
@sucramgnat8157 · 2 years ago
Thank you so much for your lecture. You truly have a talent for teaching!
@Procuste34iOSh · 1 year ago
thank you so much. so underrated
@maximmaximov4147 · 1 year ago
It would be really perfect if someone gave some examples at each step, since we are talking about real things that exist in the world. Each step has its meaning and intention and is made to overcome challenges or obstacles that come up along the way. I want to know what we are doing and what the purpose is, and what would happen if we didn't do it this way. I cannot find anything non-abstract; I need examples to anchor my imagination on. It is clear and good only if you have prior knowledge of the things being discussed. Otherwise there are a million ways to interpret things and even more ways to get lost.
@maximmaximov4147 · 1 year ago
At 11:00 it seems like, if we are talking about pictures, the formula written in blue should generate an image of pure random noise, which doesn't make sense. It should have been done differently, as is said in other articles, so that the random distributions of different images (sets of parameters or pixels) overlap, and so that it is not purely random noise, which is not what we're trying to achieve.
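For reference, the generative model being discussed around that timestamp is usually written as follows (a hedged reconstruction from the standard VAE setup, since the slide itself is not visible here):

\[
z \sim \mathcal{N}(0, I), \qquad x \mid z \sim \mathcal{N}\big(\mu_\theta(z), \sigma^2 I\big),
\]

so the randomness added to the decoder output mu_theta(z) is per-pixel observation noise with a typically small or fixed variance sigma^2, not pure noise over the whole image; in practice many implementations simply display the mean mu_theta(z) as the generated image rather than sampling the pixel noise.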
@slemanbisharat6390 · 1 year ago
Thank you, excellent explanation!!
@MeowlaMars · 10 months ago
This is clear and awesome
@trongduong1047 · 3 years ago
very nice explanation!
@hubertnguyen8855 · 3 years ago
Very nice and comprehensive lecture. Thanks