
Building our first simple GAN 

Aladdin Persson
80K subscribers
114K views

Published: Sep 27, 2024

Comments: 163
@AladdinPersson 3 years ago
If you're completely new to GANs I recommend you check out the GAN playlist, where there is an introduction video to how GANs work, and then watch this video where we implement the first GAN architecture from scratch. If you have recommendations on GANs you think would make this an even better resource for people wanting to learn about GANs, let me know in the comments below and I'll try to do it :)

I learned a lot and was inspired to make these GAN videos by the GAN specialization on Coursera, which I recommend. Below you'll find both affiliate and non-affiliate links; the pricing for you is the same, but a small commission goes back to the channel if you buy it through the affiliate link.
affiliate: bit.ly/2OECviQ
non-affiliate: bit.ly/3bvr9qy

Here's the outline for the video:
0:00 - Introduction
0:29 - Building Discriminator
2:14 - Building Generator
4:36 - Hyperparameters, initializations, and preprocessing
10:14 - Setup training of GANs
22:09 - Training and evaluation
@Carbon-XII 3 years ago
8:00 - transforms.Normalize((0.1307,), (0.3081,)) will not work, because of the following:
* The nn.Tanh() output of the Generator is (-1, 1)
* MNIST values are [0, 1]
* Normalize does the following for each channel: image = (image - mean) / std
* So transforms.Normalize((0.5,), (0.5,)) converts [0, 1] to [-1, 1], which is ALMOST correct, because the nn.Tanh() output of the Generator is (-1, 1), excluding one and minus one.
* transforms.Normalize((0.1307,), (0.3081,)) converts [0, 1] to ≈ (-0.42, 2.82). But the Generator cannot generate values greater than 0.9999... ≈ 1, so it will not generate 2.8 for white color. That is why transforms.Normalize((0.1307,), (0.3081,)) will not work.

P.S. To use transforms.Normalize((0.1307,), (0.3081,)) you should multiply nn.Tanh() by 2.83: nn.Tanh() * 2.83 ≈ (-2.83, 2.83)
@AladdinPersson 3 years ago
This makes total sense, thanks for clarifying!
@drishtisharma3933 2 years ago
Thank you so much for explaining this... :)
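A quick numeric sketch of the range argument in this thread (the values are only illustrative): Normalize applies (x - mean) / std per channel, so the output range of a [0, 1] input can be computed directly.

```python
# Sketch of the range mismatch discussed above.
# transforms.Normalize applies (x - mean) / std per channel, so the
# output range of a [0, 1] input follows from simple arithmetic.

def normalize_range(lo, hi, mean, std):
    """Output range after applying (x - mean) / std to [lo, hi]."""
    return ((lo - mean) / std, (hi - mean) / std)

# Normalize((0.5,), (0.5,)) maps [0, 1] to [-1, 1], matching Tanh's range:
print(normalize_range(0.0, 1.0, 0.5, 0.5))        # (-1.0, 1.0)

# Normalize((0.1307,), (0.3081,)) maps [0, 1] to roughly (-0.42, 2.82);
# a Tanh generator can never output 2.82, so white pixels are unreachable:
print(normalize_range(0.0, 1.0, 0.1307, 0.3081))
```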
@aminasadi1040 10 months ago
Awesome video, you explain exactly what should be explained, I love it!
@wongjerry3229 2 years ago
I think using detach at 18:22 is better. For one thing, retain_graph=True costs more memory, and for another, if we don't use detach we optimize the parameters of G when we train D.
@minister1005 1 year ago
If we use detach, what is the point of disc_fake? disc_fake = disc(fake.detach()).view(-1), and if we do a backward() we get no grads out of it (because fake.detach() has requires_grad=False), which means no update happens here.
@minister1005 1 year ago
Ah, my bad. fake.detach() won't get updated, but disc()'s parameters will.
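A minimal sketch of the detach behaviour this thread settles on, using throwaway linear layers as stand-ins for the video's generator and discriminator (everything here is illustrative, not the video's actual model):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in generator and discriminator, just for illustration.
gen = nn.Linear(4, 4)
disc = nn.Linear(4, 1)

fake = gen(torch.randn(8, 4))

# Training D on fake.detach(): the graph is cut at the generator's output,
# so backward() fills gradients for D's parameters but not for G's.
disc_fake = disc(fake.detach()).view(-1)
disc_fake.mean().backward()

print(gen.weight.grad)                # None: G receives no gradient
print(disc.weight.grad is not None)   # True: D still learns
```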
@ShahryarSalmani 3 years ago
Perfect explanation of the loss function and of why we use minimization instead of maximization for the discriminator.
@Htyagi1998 2 years ago
Minimizing anything is way simpler and faster computationally than computing a maximum
@hwuM927udq 12 days ago
Thanks for sharing , that's really helpful!
@mohammedshehada5373 3 years ago
Thanks for the amazing content, really helpful. Can we have some GAN stuff using audio data, please? Voice cloning maybe? Thanks again
@aadarshraj1890 3 years ago
You Are Awesome😎😎. Please Continue This Series...Thanks For Awesome Video Series
@saurabhjain507 3 years ago
Very nicely explained. Loved your clarity.
@deepudeepak1390 3 years ago
Awesome!! One request from me: can you make a video on text-to-image using GANs please!!!
@philwhln 3 years ago
Nice intro to GANs, thanks!
@rabia3746 2 years ago
Hello. Thx for the video. I tried this code exactly, except that I used 400 epochs, but the fake images still look like noise. How did you get these results on TensorBoard? Can you please share the hyperparams you used?
@thederivo5545 4 months ago
Hello, I love your videos as they are very precise, but how do I view the results using Colab instead of TensorBoard?
@Abby.Tripathi 3 years ago
How can I include TensorBoard features within the GAN ipynb file to visualize the log files?
@nitishrawat6872 3 years ago
If you're using Google Colab, just add these lines before training your model:
%load_ext tensorboard (loads the TensorBoard notebook extension)
%tensorboard --logdir logs
@sourabharsh16 1 year ago
Thanks a lot for the video. It really helped me understand the nuances of GANs and helped me write one from scratch as well. Keep going, buddy!
@christianc8265 2 years ago
From experience, mixing ReLU with tanh does not work super well; this is also a point you might add to your final list of possible improvements, e.g. use only tanh for the whole generator.
@kae4881 3 years ago
Dang Man! Love your videos, you're EPIC!!!
@FORCP-bq5fo 2 months ago
What are your IDE settings, btw? Theme and font?
@pocketchamp 3 years ago
Thank you so much for the material, this is awesome! I have a small question: why would it be disc.zero_grad() instead of opt_disc.zero_grad()? In general, are these two statements interchangeable?
@leviack4396 1 year ago
Yeah, I'm confused about that too, dude
@VuongNguyen-jr4gl 1 year ago
they're the same
@minister1005 1 year ago
Both are the same, since the optimizer optimizes the model's parameters
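A small sketch of why the two calls coincide here, assuming (as in the video) that the optimizer was constructed from the module's own parameters:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

model(torch.randn(5, 3)).sum().backward()

# model.zero_grad() clears the grads of every parameter in the module;
# opt.zero_grad() clears the grads of every parameter registered with the
# optimizer. Since opt was built from model.parameters(), it is the same set,
# so the two calls are interchangeable in this setup.
model.zero_grad()

cleared = all(p.grad is None or not p.grad.any() for p in model.parameters())
print(cleared)  # True
```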
@niveyoga3242 3 years ago
Heyo, awesome vid as always! I wanted to ask if you could do some variational autoencoders in PyTorch & maybe also cover some of the mathematics of the special variants, if you are interested (i.e. as you're doing for GANs)? :)
@yardennegri874 2 years ago
how do you get the images to show on tensorboard?
@nark4837 2 years ago
Hey, so this simple GAN generates any digit? What I mean is, the networks have not learnt the features of 0, 1, 2, 3, ... individually; they have learnt what features make up a digit in general. Then when z, the random sample from a distribution, is plugged into the generator, it generates a random digit because of the noise it was given? Hence, the results could be better if you created a GAN pair for each individual digit, which would obviously take a lot more training time, and the networks would be mutually exclusive and not random, so you'd have a GAN pair that generates a fake version of every digit.
@eyakaroui3718 3 years ago
How can I flip the labels, 1 for fake and 0 for real? Thanks a lot, this video is helping me a lot!!! 😍
@maxim2727 2 years ago
Why is your TensorBoard updating the new images automatically? For me, I have to refresh the page for it to update.
@mustafashaikh116 9 months ago
Question: why do we use zero_grad on disc and gen and not on opt_disc and opt_gen?
@Astrovic1 1 year ago
What's the song called at 20:00? Sounds so chill, it made me move like on the dancefloor while at my desk learning GANs with you.
@AliAhmed-mw2vc 11 months ago
Please, can someone tell me which editor he is using?
@aymensekhri2133 3 years ago
Thank you very much! I got a lot out of it.
@suryagaur4363 3 years ago
Can you make a video on CycleGAN?
@AladdinPersson 3 years ago
A bit late but it's finished now. Paper walkthrough is up and implementation from scratch will be up in a few days :)
@hardtokick-uz2xk 2 months ago
Is anyone having trouble with the TensorBoard visualization? I am using PyCharm, but the visualization part doesn't get executed.
@taylorhawkes 4 months ago
Thanks!
@HungDuong-dt3lg 2 years ago
On line #66, why does the gen function only take one argument, noise? I thought it must take two arguments, z_dim and img_dim. Can you explain please?
@mohsenmehranian7571 3 years ago
Thanks, it was a very good video!
@travelthetropics6190 2 years ago
How would it be different if we used AdamW instead of Adam?
@icanyagmur 2 years ago
Nice work!
@bashirsadeghi2821 8 months ago
Great Tutorial.
@123epsilon 3 years ago
Hi, can you explain why we use BCE loss on the generator as well, and why we compare it to a tensor of 1s? It makes sense to me to use it for the discriminator, as it is a classifier, but isn't the generator doing some form of regression?
@ibrahimaba8966 3 years ago
The formula is log(1 - D(G(Z))). So we use it on the discriminator.
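One way to see why a target of all ones answers the question above: with target 1, BCE collapses to -log(x), so minimizing criterion(disc(fake), ones) is minimizing -log(D(G(z))), the standard non-saturating generator loss. A tiny numeric check (the 0.3 is a made-up discriminator output):

```python
import math
import torch
import torch.nn as nn

criterion = nn.BCELoss()
x = torch.tensor([0.3])  # a hypothetical discriminator output D(G(z))

# BCE(x, y) = -(y*log(x) + (1-y)*log(1-x)); with y = 1 only -log(x) survives.
loss = criterion(x, torch.ones_like(x))
print(float(loss))  # equals -log(0.3), about 1.204
```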
@sardorabdirayimov 1 year ago
Great effort. Good tutorial
@shambhaviaggarwal9977 3 years ago
What changes would there be in the code if we used disc(fake).detach() instead? Would there be any changes in line 77? (at 18:34)
@tmspeeches8405 3 years ago
How do we get the training accuracy at each epoch?
@vatsal_gamit 3 years ago
You're like magic 🔥
@noamsalomon01 2 years ago
Thank you, helped me a lot
@samernoureddine 3 years ago
When computing lossD, what is the difference in practice between summing versus averaging lossD_real and loss_Dfake? @15:20
@ahsannadeem746 3 years ago
Is it possible to train this GAN with a .CSV dataset?
@MorisonMs 3 years ago
11:25 You forgot to put right parenthesis.. Kidding :P Thanks for the video bro
@gabrielyashim706 3 years ago
This video was really helpful, but what if I don't want to use the MNIST dataset and want to use my own dataset from my local machine? Please, how do I go about it?
@AladdinPersson 3 years ago
I have separate videos on how to use custom datasets, for something written I highly recommend: pytorch.org/tutorials/beginner/data_loading_tutorial.html
@lker7489 2 years ago
Wonderful intro to GANs, thank you very much! Actually I feel a little confused about what z_dim is...
@christianc8265 2 years ago
These are the parameters you can vary according to a known distribution to make the generator produce images. I guess 64 is way too high for MNIST; maybe you can use 10 so you can blend any of the digits.
@utkarshjyani8350 1 year ago
For the part for batch_idx, (real, _) in enumerate(loader): it's giving an error: TypeError: 'module' object is not callable
@madhuvarun2790 3 years ago
At the discriminator we want max log(D(real)) + log(1 - D(G(z))). Since loss functions work by minimizing error, we can minimize -(log(D(real)) + log(1 - D(G(z)))). BCE loss is equivalent to minimizing that, so it works fine. At the generator we want to max log(D(G(z))). Could you please explain how criterion(output, torch.ones_like(output)) maximizes log(D(G(z)))? The loss function is l_n = -w_n [y_n * log(x_n) + (1 - y_n) * log(1 - x_n)]. According to your code, aren't we trying to maximize -log(D(G(z)))? Because there is a negative in the loss function, shouldn't we add a negative in our training phase? Please explain, I am stuck here.
@madhuvarun2790 3 years ago
Nevermind, I understood it. Thanks
@asagar60 3 years ago
@@madhuvarun2790 Can you please elaborate? As I see it, on the discriminator side loss_real = -log(D(real)) and loss_fake = -log(1 - D(G(z))), but it's still minimizing, right? I can't understand how that's maximizing; the same doubt applies to the generator loss.
@madhuvarun2790 3 years ago
@@asagar60 Yes, it is minimizing the loss; I was wrong. At the discriminator we are minimizing -(log(D(real)) + log(1 - D(G(z)))). At the generator we are minimizing -log(D(G(z))).
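The sign flip this thread works out can be checked numerically: BCE with targets 1 and 0 reproduces exactly -(log D(real) + log(1 - D(G(z)))), so minimizing the BCE sum is maximizing the original objective (the 0.9 and 0.2 below are made-up discriminator outputs):

```python
import math

def bce(pred, target):
    # Per-sample binary cross-entropy: -(y*log(x) + (1-y)*log(1-x))
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

d_real, d_fake = 0.9, 0.2   # hypothetical discriminator outputs

# Discriminator: minimizing this sum == maximizing log(D(real)) + log(1-D(G(z)))
loss_disc = bce(d_real, 1.0) + bce(d_fake, 0.0)
print(abs(loss_disc - (-(math.log(0.9) + math.log(0.8)))) < 1e-12)  # True

# Generator: minimizing bce(d_fake, 1.0) == minimizing -log(D(G(z)))
print(abs(bce(d_fake, 1.0) - (-math.log(0.2))) < 1e-12)  # True
```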
@joefahy4806 3 years ago
what program do you do this in?
@MorisonMs 3 years ago
Question (18:18, code line 77): do we have to compute disc(fake) twice? Can't we simply write "output = disc_fake"? (I thought we added retain_graph=True in order to avoid computing disc(fake) twice.)
@AladdinPersson 3 years ago
We do retain_graph so that we don't have to compute fake twice and can re-use the same generated image. We send it through the discriminator again because we updated the discriminator, and the way I showed in the video is the most common setup I've seen when training GANs. Although it would probably also work if you reused disc_fake from before.
@MorisonMs 3 years ago
@@AladdinPersson Got you...! Thanks a lot
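The retain_graph point can be seen with any tiny graph; the second backward pass below only works because the first kept the graph alive (a generic autograd sketch, not the video's code):

```python
import torch

x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()

y.backward(retain_graph=True)  # keep intermediate buffers for reuse
y.backward()                   # second pass works; without retain_graph=True
                               # the first call frees the graph and this call
                               # raises a RuntimeError

print(x.grad)  # gradients accumulate: 2 + 2 = 4 per element
```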
@aras9319 2 years ago
Hello. What should be different for non-square image data?
@pelodofonseca6106 1 year ago
CNNs instead of FC layers.
@secretgame6434 1 year ago
I don't know much about PyTorch but I'll figure it out...
@Huy-G-Le 3 years ago
The code runs great, but how did you make those images at 20:37 appear???? I've been trying to do that in Google Colab; the code works, but no image.
@ZOBAER496 6 months ago
Same question.
@Arya-cn4kk 5 months ago
@@ZOBAER496 Run this in a separate cell:
%load_ext tensorboard
%tensorboard --logdir logs
@hunterlee9413 2 years ago
Why won't my TensorBoard open?
@vaibhavpujari124 1 month ago
Can you please share the code?
@purnamakkena9553 2 years ago
I can't see TensorBoard. I am running the same code on Colab. Please help me. Thank you.
@Arya-cn4kk 5 months ago
Run this in a separate cell, it works:
%load_ext tensorboard
%tensorboard --logdir logs
@Zeoytaccount 2 years ago
What notebook prompt are you using to call up that TensorBoard UI?
@Zeoytaccount 2 years ago
Figured it out, for anyone with the same question. In a separate cell run:
%load_ext tensorboard
%tensorboard --logdir logs
Magic!
@Arya-cn4kk 5 months ago
@@Zeoytaccount Oh babe, it works! Such a sweety, wish I could send you a thankuuuuuu
@virtualecho777 3 years ago
I have no idea what is happening but it's so interesting
@ethaneaston6443 2 years ago
What does the parameter z_dim mean?
@tzachcohen9124 3 years ago
How can I adapt this code to work with RGB images? It keeps printing lines as output after learning instead of images :(
@ibrahimaba8966 3 years ago
You need to use a DCGAN instead of this GAN.
@donfeto7636 2 years ago
You need to call .detach() on the generator result to ensure that only the discriminator is updated! Line 69 should pass fake.detach() so the generator's weights are removed from the computation graph, and then there is no need for retain_graph=True on the discriminator since you will not use its graph again, I think.
@zyctc000 1 year ago
Not really; he has two optimizers, opt_disc and opt_gen. Their .step() calls won't affect each other.
@donfeto7636 1 year ago
@@zyctc000 opt_disc's graph is connected to the generator, so its backward pass also fills the generator's layer gradients; opt_disc effectively sees the discriminator and generator as one network. The opt_gen optimizer will not affect the discriminator, but not vice versa.
@talha_anwar 3 years ago
Shouldn't it be optimizer.zero_grad() instead of model.zero_grad()?
@AladdinPersson 3 years ago
You can use both
@ABWXII 2 years ago
Hello sir, can you tell me how to convert the GAN-generated dataset into .jpg format? Please.
@generichuman_ 2 years ago
Be careful with JPGs in your training set. JPG uses 8x8 blocks that introduce artifacts; either use very high quality JPGs or, even better, PNGs.
@judedavis92 2 years ago
Ooh the jacobian
@Champignon1000 2 years ago
7:45 bruh :D
@Sercil00 3 years ago
Is it normal that this easily takes 1-2 hours for 50 epochs? I first ran it on my computer, which unfortunately has no Nvidia GPU. Then I tried Google Colab, which originally ran it on CPU too. So I changed the hardware acceleration to GPU, and... if it's faster, it's not by much. Is that normal? Does this not benefit significantly from GPUs?
@parthrangarajan3241 3 years ago
Hey, how did you overcome this error in Colab?
TypeError Traceback (most recent call last)
in ()
1 for epoch in range(num_epochs):
----> 2 for batch_idx, (real, _) in enumerate(loader):
3 real = real.view(-1, 784).to(device)
4 batch_sz = real.shape[0]
4 frames
/usr/local/lib/python3.7/dist-packages/torchvision/datasets/mnist.py in __getitem__(self, index)
133 if self.transform is not None:
--> 134 img = self.transform(img)
136 if self.target_transform is not None:
TypeError: 'module' object is not callable
@parthrangarajan3241 2 years ago
@@drishtisharma3933 Hey Drishti! Yes, I was able to overcome this error but I do not remember the exact changes I made to the code. I could share my colab notebook for your clarity. Honestly, I didn't try your approach. I was following the video as a code-along. Link: colab.research.google.com/drive/1l1Vt7mcoEQKFxxVbpQOeKZ-UiEHU9ggt?usp=sharing
@m11m 3 years ago
I'm admittedly a noob at all of this, but I keep getting "TypeError: __init__() takes 1 positional argument but 2 were given" and I can't figure out how to resolve the issue; any advice would be appreciated.
@AladdinPersson 3 years ago
Difficult to say w/o code, in this case it seems like you're sending in too many arguments haha
@generichuman_ 3 years ago
If I had to guess, you might have a class method that doesn't have a "self" parameter
@deeshu3456 2 years ago
It seems that while defining the class method you originally wrote a method that takes one argument, but while calling it you provided two arguments, e.g.:
def lets_solve(error): pass
# Instantiating an object now:
solution = lets_solve(error, EXTRA_ARGUMENT)
where EXTRA_ARGUMENT is the extra argument you shouldn't have provided, going by the original code which takes just one. Hope this makes sense. Good luck!
@pantherwolfbioz13 2 years ago
Why do we maximize the generator loss? Shouldn't the generator be good at identifying the fake generated by the discriminator?
@jamesadeke9873 2 years ago
The generator doesn't identify, it only generates. To minimize the loss is to make the generator generate samples very close to real ones, so they aren't identified by the discriminator.
@ArunKumar-sg6jf 3 years ago
What is nn.Linear for, bro?
@ashekpc106 1 year ago
Please make a video about an anime InfoGAN
@flakky626 11 months ago
Not PyTorch ;-; I gotta learn PyTorch nonetheless
@hoaanduong3869 1 year ago
Damn, I was nearly heartbroken when I set wrong values for transforms.Normalize.
@redhunter408 2 years ago
Re: (i just wanted to make sure that people understand that this is a joke...) | on lr = 3e-4
@saurrav3801 3 years ago
🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥
@wongjerry3229 2 years ago
I think
@xMreilly 2 years ago
Where should I start? It sounds like you are just reading a book and not really going over anything.
@AladdinPersson 2 years ago
then dont start bro
@AladdinPersson 2 years ago
watch another video you resonate with more :)
@generichuman_ 2 years ago
If you can't follow this, then you're not ready yet. Start with Python basics and work your way up. Plenty of videos out there. Aladdin's videos are gold, and when you're ready, you'll appreciate them more.
@jwc8963 3 years ago
A NotImplementedError is raised when running the line disc_real = disc(real).view(-1)
@brianjohnbraddock9901 2 years ago
Thanks!
@Arya-cn4kk 5 months ago
!python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))" - what is the relevance of this to the GAN you worked on in this video?
@yuro-h7m 5 months ago
Why are we using 128 nodes in the Discriminator class? Isn't that kind of an arbitrary number? And why 256 in the Generator?
@NewNew-qn7kh 19 days ago
I love the way your IDE looks; what are you using / what settings?
@ZOBAER496 6 months ago
Do you have this GAN code available for downloading?
@NamTran-cc1ml 1 year ago
Why do we have (lossD_real + lossD_fake) / 2?
@FlutterStartups 10 months ago
impressive how people use vim to code ML
@maqboolurrahimkhan 3 years ago
Thanks! Awesome and simple implementation :)
@novinnouri764 1 month ago
thanks
@dvrao7489 3 years ago
Really love this series, man!! Just a quick question: why did we use fixed_noise and noise differently? In the training part, could we not have used fixed_noise as input to the generator, because noise is noise, right? Does it matter if we start from the same point?
@generichuman_ 2 years ago
Fixed noise is used to display the images to track the progress of the GAN. Fixed means it doesn't change over time, so if you were to use this in training, you would be feeding the GAN the same vector over and over again, and the GAN would only be able to generate a single image, and the rest of the latent space would remain unexplored.
@sidrasafdar7325 2 years ago
Very good explanation of each and every line of code. Can you please make a video on how to optimize GANs with the Whale Optimization Algorithm? I have to do my project on GANs and my base paper is "Automatic Screening of COVID‑19 Using an Optimized Generative Adversarial Network". I have searched a lot for how to optimize GANs with WOA but couldn't find any related results. Please help, as you have detailed knowledge of GANs.
@canozturk369 7 months ago
GREAT
@DIYGUY999 3 years ago
Would you mind sharing the name of the intro music? :D
@kae4881 3 years ago
SAME!
@beizhou2488 3 years ago
It is Straight Fuego by Matt Large
@DIYGUY999 3 years ago
@@beizhou2488 Thank you, mah man.
@hackercop 2 years ago
This worked for me, thanks; I'm enjoying this playlist!
@privacywanted434 3 years ago
How did you get the TensorBoard site to pop up?
@AladdinPersson 3 years ago
Perhaps I didn't show it in the video, but you have to run it through the conda prompt (or a terminal, etc.). I have more info on using TensorBoard in a separate video, so I was kind of assuming that people knew it, but I could've been clearer on that!
@privacywanted434 3 years ago
@@AladdinPersson This is new for me, so I'm still learning all the tools. Please keep doing tutorials btw!! You have been helping me learn AI so much faster thanks to your PyTorch implementations.
@AladdinPersson 3 years ago
@@privacywanted434 Thanks for saying that, I appreciate you 👊
@prakhar3134 1 year ago
Can someone explain what z_dim actually is?
@mariamnaeem443 3 years ago
Nice video, thanks. Can you please make a video on RCGAN?
@SAINIVEDH 3 years ago
Is the intro equation the cross-entropy loss function?!
@car6647 2 years ago
Thanks a lot, now I have a better understanding of GANs
@sourabhbhattacharya9133 2 years ago
I was confused about lines 68 and 70: why are we creating ones and zeros in the criterion? Please clarify this portion... great work as always!
@kdubovetskyi 2 years ago
Roughly speaking, we want the discriminator to estimate the *probability that its input is real*. Therefore the desired output for disc(real) is 1, and 0 for disc(fake).
@cowmos9276 1 year ago
thank you~
@mustafasidhpuri1368 3 years ago
In GANs, should the generator loss decrease and the discriminator loss increase? I am a little bit confused.
@AladdinPersson 3 years ago
The losses in GANs don't really tell us anything (one will go up when the other goes down and vice versa). The only thing to watch out for is the discriminator loss going to 0 or something like that; that would be the case if one of them "takes over".