
Understand the Math and Theory of GANs in ~ 10 minutes 

WelcomeAIOverlords
Subscribe · 19K subscribers
61K views

Join my Foundations of GNNs online course (www.graphneuralnets.com)! This video takes a deep dive into the math of Generative Adversarial Networks. It explains the optimization function, steps through an algorithm for solving it, and theoretically proves that solving it leads to the perfect Generative model.
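For reference, here is the minimax objective the video walks through, as given in the Goodfellow et al. paper linked below:

    \min_G \max_D V(D, G) =
        \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
        + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

The discriminator D is trained to maximize V (scoring real samples near 1 and generated samples near 0), while the generator G is trained to minimize it.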
Part 1 of this series gave a high-level overview of how GANs work: • Gentle Intro to Genera...
My blog series on GANs: blog.zakjost.com/tags/generat...
The original paper from Ian Goodfellow: papers.nips.cc/paper/5423-gene...
Mailing List: blog.zakjost.com/subscribe
Discord Server: / discord
Blog: blog.zakjost.com
Patreon: / welcomeaioverlords

Published: 25 Jun 2024

Comments: 81
@jlee-mp4 · 4 months ago
Holy sh*t, this guy is diabolically, criminally, offensively underrated. THE best explanation of GANs I have ever seen, somehow rooting it deeply in the mathematics while keeping it surface level enough to fit in a 12 min video. Wow
@elliotha6827 · 7 months ago
The hallmark of a good teacher is when they can explain complex topics simply and intuitively. And your presentation on GANs in this video truly marks you as a phenomenal one. Thanks!
@fidaeharchli4590 · 2 months ago
I agreeeeeeeee, you are the best, thank you sooo much
@luisr1421 · 4 years ago
Didn't think in a million years I'd get the math behind GANs. Thank you man
@welcomeaioverlords · 4 years ago
That's great to hear!
@shivammehta007 · 4 years ago
This is Gold!!! Pure Gold!!
@shaoxuanchen2052 · 4 years ago
OMG this is the best explanation of GANs I've found!!!!! Thank you so much and I'm so lucky to have found this video!!!!!!
@alaayoussef315 · 4 years ago
Brilliant! Never thought I could understand the math behind GANs
@gianfrancodemarco8065 · 2 years ago
Short, concise, clear. Perfect!
@bikrammajhi3020 · 1 month ago
Best mathematical explanation of GANs on the internet so far
@tusharkantirouth5605 · 10 months ago
Simply the best .. short and crisp... thanks and keep uploading such beautiful videos..
@TheTakenKing999 · 2 years ago
Awesome explanation. The original GAN paper isn't too hard to read, but the "maximize the Discriminator" step always irked me. Like... my understanding was correct, but I would always have trouble explaining it to someone else. This is a really well put together video: clean, concise, and a good explanation. I think because of the way Goodfellow et al. phrased it, as "ascending the gradient", many people get stuck here, because beginners like us have gradient "descent" stuck in our heads lol.
@deblu118 · 6 months ago
This video is amazing! You make things intuitive and really dig down to the core idea. Thank you! I also subscribed to your blog!
@janaosea6020 · 8 months ago
Wow. This video is so well explained and well presented!! The perfect amount of detail and explanation. Thank you so much for demystifying GANs. I wish I could like this video multiple times.
@williamrich3909 · 3 years ago
Thank you. This was very clear and easy to follow.
@shashanktomar9940 · 3 years ago
I have lost count of how many times I have paused the video to take notes. You're a lifesaver man!!
@user-rr1jk1ws2n · 1 year ago
Nice explanation! The argument at 7:13 once felt like a jump to me, but I found it similar to the 'calculus of variations' I learned in a classical physics class.
@dipayanbhadra8332 · 5 months ago
Great Explanation! Nice and clean! All the best
@EB3103 · 3 years ago
Best explainer of deep learning!
@wenhuiwang4439 · 6 months ago
Great learning resource for GANs. Thank you.
@superaluis · 4 years ago
Thanks for the detailed video.
@Daniel-ed7lt · 5 years ago
I have no idea how I found this video, but it has been very helpful. Thanks a lot and please continue making videos.
@welcomeaioverlords · 5 years ago
That's awesome, glad it helped. I'll definitely be making more videos. If there are any particular ML topics you'd like to see, please let me know!
@Daniel-ed7lt · 4 years ago
@@welcomeaioverlords I'm currently interested in CNNs, and I think it would be really useful if you described their base architecture, the same as you did for GANs, while simultaneously explaining the underlying math from a relevant paper.
@dingusagar · 4 years ago
Best video explaining the math of GANs. Thanks!!
@jovanasavic4357 · 3 years ago
This is awesome. Thank you so much!
@siddhantbashisth5486 · 3 months ago
Awesome explanation man.. I loved it!!
@dman8776 · 4 years ago
Best explanation I've seen. Thanks a lot!
@symnshah · 3 years ago
Such a great explanation.
@DavesTechChannel · 4 years ago
Great explanation man, I've read your article on Medium!
@toheebadura · 2 years ago
Many thanks, dude! This is awesome.
@psychotropicalfunk · 2 years ago
Very well explained!
@tarunreddy7 · 8 months ago
Lovely explanation.
@paichethan · 3 years ago
Fantastic explanation
@anilsarode6164 · 3 years ago
God bless you, man !! Great Job !! Excellent !!!
@adeebmdislam4593 · 1 year ago
Man, I immediately knew you listen to prog and play guitar when I heard the intro hahaha! Great explanation
@architsrivastava8196 · 3 years ago
You're a blessing.
@walidb4551 · 4 years ago
THANK GOD I FOUND THIS ONE THANK YOU
@manikantansrinivasan5261 · 1 year ago
thanks a ton for this!
@maedehzarvandi3773 · 3 years ago
you helped a lot, a lot 👏🏻🙌🏻👍🏻
@ishanweerakoon9838 · 2 years ago
Thanks, very clear
@muneebhashmi1037 · 3 years ago
tbh couldn't have asked for a better explanation!
@ramiismael7502 · 3 years ago
great video
@caiomelo756 · 2 years ago
Four years ago I read the original GAN paper for more than a month and could not understand what I was reading; now it makes sense
@StickDoesCS · 3 years ago
Really great video! I have a little question, however, since I'm new to this field and a little confused. Why is it that at 5:02 you mention ascending the gradient to maximize the cost function? I'd like to know exactly why this is the case, because I initially thought the cost function generally has to be minimized, so the smaller the cost, ideally the better the model. Maybe it's because of how I'm looking at cost functions in general? Like, is there a notion of it already being something we want to be small, so we'd simply treat it as the negative of a number, where that number is the one you're referring to as what we want to maximize? Subscribed by the way, keep up the good work! :>
@welcomeaioverlords · 3 years ago
In most ML, you optimize such that the cost is minimized. In this case, we have two *adversaries* that are working in opposition to one another. One is trying to decrease the cost (discriminator) and one is working to increase the cost (generator).
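To make the two opposing updates concrete, here is a minimal PyTorch-style sketch of the alternating training loop. This is an editorial illustration, not code from the video; the toy network sizes and the stand-in "real" data are placeholder assumptions.

    import torch
    import torch.nn as nn

    # Toy networks; the sizes here are arbitrary placeholders.
    G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))               # generator: z -> x
    D = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())  # discriminator: x -> [0, 1]

    opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
    opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(32, 2) + 5.0   # stand-in "real" data: a shifted Gaussian
        z = torch.randn(32, 64)           # noise input for the generator

        # Discriminator: ascend on V = E[log D(real)] + E[log(1 - D(fake))],
        # implemented by descending on -V (same gradient, opposite sign).
        fake = G(z).detach()              # freeze G while updating D
        d_loss = -(torch.log(D(real)).mean() + torch.log(1 - D(fake)).mean())
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # Generator: descend on E[log(1 - D(G(z)))], the minimax form.
        g_loss = torch.log(1 - D(G(z))).mean()
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()

In practice, implementations usually use torch.nn.BCEWithLogitsLoss and the "non-saturating" generator loss (maximize log D(G(z))) for numerical stability, but the sketch above mirrors the minimax form discussed in the video.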
@shourabhpayal1198 · 2 years ago
Good one
@friedrichwilhelmhufnagel3577 · 9 months ago
CANNOT UPVOTE ENOUGH. EVERY STATISTICS OR ML MATH VIDEO SHOULD BE AS CLEAR AS THIS. YOU DEMONSTRATE THAT EXPLAINING MATH AND THEORY IS ONLY A MATTER OF AN ABLE TEACHER
@bernardoolisan1010 · 2 years ago
I have a question: at 4:49, where do we take the real samples from? For example, say we want to generate "faces". In the generator, the m samples are just random vectors with the dimensions of a face image, so they can be super ugly, blurry pictures, right? But what about the real samples? Are they just face images taken off the internet?
@bernardoolisan1010 · 2 years ago
When the training process is done, do we only use the generator model, or what? How do we use it in production?
@123epsilon · 2 years ago
Does anyone know any good resources to learn more ML theory like how it's explained in this video? Specifically, content covering proofs and convergence guarantees
@bernardoolisan1010 · 2 years ago
Also, where it says "theory alert", does it mean that part is only for proving that the model is kind of good? Like, the min value is a good value?
@jrt6722 · 11 months ago
Would the loss function work the same if I switched the labels of the real samples and fake samples (0 for real samples and 1 for fake samples)?
@goodn1051 · 5 years ago
Thaaaaaaank youuuuuu
@welcomeaioverlords · 5 years ago
I'm glad you got value from this!
@goodn1051 · 5 years ago
@@welcomeaioverlords yup... when you're self-taught, it's videos like this that really help so much
@Darkev77 · 3 years ago
This was really good! Though could someone explain to me what he means by maximizing the loss function for the discriminator? Shouldn't you also train your discriminator via gradient descent to improve classification accuracy?
@welcomeaioverlords · 3 years ago
To minimize the loss, you use gradient descent. You walk down the hill. To maximize the loss, you use gradient ASCENT. You calculate the same gradient, but walk up the hill. The discriminator walks up, the generator walks down. That’s why it’s adversarial. You could multiply everything by -1 and get the same result.
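Written out, the two updates described in this reply (in the spirit of Algorithm 1 of the original paper) apply the same gradient with opposite signs:

    \theta_D \leftarrow \theta_D + \eta \, \nabla_{\theta_D} V(D, G)  \quad \text{(discriminator: ascend)}
    \theta_G \leftarrow \theta_G - \eta \, \nabla_{\theta_G} V(D, G)  \quad \text{(generator: descend)}

and maximizing V over D is identical to minimizing -V over D, which is why multiplying everything by -1 changes nothing.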
@sunnydial1509 · 2 years ago
I'm not sure, but in this case I think we maximize the discriminator's loss function because it is expressed as log(1 - D(G(z))), which is equivalent to minimizing log(D(G(z))) as happens in normal neural networks... so the discriminator is learning by maximizing the loss in this case
@koen199 · 4 years ago
@7:20 Why are p_data(x) and p_g(x) assumed constant over x in the integral (a and b)? In my mind the probability changes for each sample...
@welcomeaioverlords · 4 years ago
Hi Koen. When I say "at any particular point" I mean "at any particular value of x". So p_data(x) and p_g(x) change with x. Those are, for example, the probabilities of seeing any particular image either in the real or generated data. The analysis that follows is for any particular x, for which p_data and p_g have a single value, here called "a" and "b" respectively. The logical argument is that if you can find the D that maximizes the quantity under the integral for every choice of x, then you have found the D that maximizes the integral itself. For example: imagine you're integrating over two different curves and the first curve is always larger in value than the second. You can safely claim the integral of the first curve is larger than the integral of the second curve. I hope this helps.
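To spell out the calculus behind this reply: for a fixed x, write a = p_data(x), b = p_g(x), and y = D(x). The quantity under the integral is

    f(y) = a \log y + b \log(1 - y), \qquad
    f'(y) = \frac{a}{y} - \frac{b}{1 - y} = 0
    \;\Rightarrow\;
    y^* = \frac{a}{a + b}

so the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), as in the original paper.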
@koen199 · 4 years ago
@@welcomeaioverlords Oh wow it makes sense now! Thanks man.. keep up the good work
@adityarajora7219 · 3 years ago
The cost function isn't the difference between the true and predicted values, right? It's the actual predicted value in the range [0,1], right??
@welcomeaioverlords · 3 years ago
It's structured as a classification problem where the discriminator estimates the probability of the sample being real or fake, which is then compared against the ground truth of whether the sample is real, or was faked by the generator.
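In equation form, this is the standard binary cross-entropy setup: with label y = 1 for real samples and y = 0 for generated ones, the discriminator's per-sample loss is

    \ell(x, y) = -\bigl[\, y \log D(x) + (1 - y) \log(1 - D(x)) \,\bigr]

Averaging this over a batch of real samples (y = 1) and a batch of generated samples (y = 0) recovers, up to sign, the two expectation terms of the GAN objective.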
@adityarajora7219 · 3 years ago
@@welcomeaioverlords Thank you, sir, for your reply. Got it.
@adityarajora7219 · 3 years ago
what do you do for a living?
@saigeeta1993 · 4 years ago
PLEASE EXPLAIN TEXT TO SPEECH SYNTHESIS EXAMPLE USING GAN
@abdulaziztarhuni · 1 year ago
This was hard for me to follow. Where should I get more resources?
@jorgecelis8459 · 3 years ago
Very good explanation. One question: if we know the form of the optimal discriminator, don't we only need to get p_g(x), since we have all the statistics of P(x) in advance? And that would be 'just' sampling from z?
@welcomeaioverlords · 3 years ago
Thanks for the question, Jorge. I would point out that knowing the statistics of P(x) is very different than knowing P(x) itself. For instance, I could tell you the mean (and higher-order moments) of a sample from an arbitrary distribution and that wouldn't be sufficient for you to recreate it. The whole point is to model P(x) (the probability that a particular pixel configuration is of a face) , because then we could just sample from it to get new faces. Our real-life sample, which is the training dataset, is obviously a small portion of all possible faces. The generator effectively becomes our sampler of P(x) and the discriminator provides the training signal. I hope this helps.
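A quick concrete example of the point above (an editorial addition): the standard normal and a suitably scaled Laplace distribution share their first two moments, yet are very different distributions, so matching those statistics would not let you recreate either one.

    \mathcal{N}(0, 1):\ \mu = 0,\ \sigma^2 = 1
    \mathrm{Laplace}(0, b) \text{ with } b = 1/\sqrt{2}:\ \mu = 0,\ \sigma^2 = 2b^2 = 1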
@jorgecelis8459 · 3 years ago
@@welcomeaioverlords Right... the statistics of P(x) =/= the distribution P(x); if we knew P(x) we could just generate images, and there would be no problem for a GAN to solve. Thanks.
@theepicguy6575 · 2 years ago
Found a gold mine
@sarrae100 · 4 years ago
What the fuck, u explained it like it's a toy story, u beauty 😍
@kelixoderamirez · 3 years ago
permission to learn sir
@welcomeaioverlords · 3 years ago
Permission granted.
@samowarow · 1 year ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-J1aG12dLo4I.html How exactly did you do this variable substitution? Seems not legit to me
@JoesMarineRush · 1 year ago
I also stopped at this step. I think it is valid. Remember that the transform g is fixed. In the second term, the distributions of z and g(z) are the same, so we can set x = g(z) and replace the z with x. Then we can merge the first and second integrals together, with the main difference being that the first term and second term have different probabilities for x, since they are sampled from different distributions.
@samowarow · 1 year ago
@@JoesMarineRush It's not legitimate in general to say that the distributions of Z and g(Z) are the same. Z is a random variable; a non-linear function of Z changes its distribution.
@JoesMarineRush · 1 year ago
@@samowarow I looked at it again the other day. Yes, you are right: g can change the distribution of z. There is a clarification step missing. Setting x = g(z) and swapping out z for x, the distribution of x is the one induced by g. There is a link between the distributions of z and g(z) that needs clarification. I'll try to think on it.
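For what it's worth, one consistent reading of the step this thread is debating (it matches the original paper): the substitution is the law of the unconscious statistician, and it needs no assumption that G preserves the distribution of z. Define p_g as the distribution of x = G(z), i.e. the pushforward of p_z through G. Then

    \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]
    = \int p_z(z) \log(1 - D(G(z))) \, dz
    = \int p_g(x) \log(1 - D(x)) \, dx
    = \mathbb{E}_{x \sim p_g}[\log(1 - D(x))]

The distribution of G(z) is indeed different from that of z, but p_g is defined to be exactly that induced distribution, so the two integrals can be merged over x as in the video.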