
Lecture 14: Approximating Probability Distributions (IV): Variational Methods 

Jakob Foerster
8K subscribers · 19K views · Published 22 Oct 2024

Comments: 11
@mhsnk905 · 5 years ago
RIP great man!
@shairuno · 9 years ago
Great lecture! Thank you very much for sharing this!
@husseinal-asadi3674 · 9 years ago
The sum of a discrete probability distribution should always be 1, correct? For the distribution shown at 6:00, sum_x Q(x) = 1+epsilon which doesn't equal 1. What am I missing?
@anib86 · 8 years ago
I think he meant (1/3, 1/3, 1/3-\epsilon, \epsilon)
@JakobFoerster · 8 years ago
I believe it's meant to illustrate the point about DKL(P||Q) vs DKL(Q||P). Strictly speaking it should be normalized by 1 / (1+epsilon)
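(A minimal numerical sketch of the asymmetry being discussed, assuming an illustrative target P = (1/3, 1/3, 1/3, 0) and the normalized approximation Q = (1/3, 1/3, 1/3 − ε, ε) suggested above; the exact distributions shown at 6:00 may differ.)

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) = sum_x p(x) * log(p(x) / q(x)), with the convention 0 * log 0 = 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    support = p > 0                        # only sum over outcomes where p(x) > 0
    return np.sum(p[support] * np.log(p[support] / q[support]))

eps = 1e-3
P = np.array([1/3, 1/3, 1/3, 0.0])         # target puts no mass on the last outcome
Q = np.array([1/3, 1/3, 1/3 - eps, eps])   # approximation, normalized so it sums to 1

print(kl(P, Q))  # small: Q has mass everywhere P does
print(kl(Q, P))  # inf (numpy warns about the divide by zero): Q puts mass where P has none
```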
@homadavoudi1545 · 6 years ago
At 10:55, when P is replaced in the KL divergence formula, isn't a "Q" missing from the second term (the Z part)? And if a Q were multiplied in there, could we still treat that term as having no effect on the maximization and ignore it?
@JakobFoerster · 6 years ago
Z is a constant and Q is a probability distribution. Therefore you can pull out Z and sum Q over x, which produces 1. That's the second term he writes down, the log(Z). Does that explain?
@homadavoudi1545 · 6 years ago
@JakobFoerster Right. Thank you!
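(A sketch of the step in question, writing the target as P(x) = P*(x)/Z with Z the unknown normalizing constant; the exact symbols used on the board may differ.)

```latex
\begin{aligned}
D_{\mathrm{KL}}(Q \,\|\, P)
  &= \sum_x Q(x) \log \frac{Q(x)}{P(x)}
   = \sum_x Q(x) \log \frac{Q(x)\,Z}{P^*(x)} \\
  &= \sum_x Q(x) \log \frac{Q(x)}{P^*(x)}
     + \log Z \underbrace{\sum_x Q(x)}_{=\,1}
   = \sum_x Q(x) \log \frac{Q(x)}{P^*(x)} + \log Z .
\end{aligned}
```

Because Q sums to 1, the Z factor collects into the constant log Z, which does not depend on Q and so can be dropped from the objective being optimized.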
@sahhaf1234 · 1 year ago
21:30: why a mixture of Gaussians??
@darthyzhu5767 · 7 years ago
GREAT!
@flamingxombie · 7 years ago
Excellent.