
Autoencoder In PyTorch - Theory & Implementation 

Patrick Loeber
Subscribe 273K
69K views

Published: 6 Oct 2024

Comments: 83
@patloeber 3 years ago
Let me know if you enjoyed the new animations in the beginning and want to see more of this in the future :)
@sumithhh9379 3 years ago
Hi Patrick, any plans for a series on state-of-the-art NLP models?
@MrDeyzel 3 years ago
They're great
@md.musfiqurrahaman8612 3 years ago
Love the animations and want more. Learning PyTorch and following your tutorials.
@jh-pq9tp 3 years ago
Big thanks to you. I can't imagine how I could have gotten through my DL course without your tutorials. Your work is the best on YouTube so far!
@maharshipathak 5 months ago
For Python 3.11+ / PyTorch 2.3+, change dataiter.next() to next(dataiter)
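A minimal sketch of that fix, using a dummy dataset standing in for the tutorial's MNIST loader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for MNIST
data = TensorDataset(torch.randn(64, 1, 28, 28), torch.zeros(64))
loader = DataLoader(data, batch_size=4)

dataiter = iter(loader)
# dataiter.next() no longer works on newer PyTorch; use the builtin next()
images, labels = next(dataiter)
print(images.shape)  # torch.Size([4, 1, 28, 28])
```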
@AI-kr7mj 6 days ago
Thanks man
@starlite5097 3 years ago
I love all your PyTorch videos, please do more :D
@patloeber 3 years ago
Thanks! I will :)
@saadmunir1467 3 years ago
It's really nice, but it would be a great addition to cover variational autoencoders and generative adversarial networks as well :) They can be helpful to many people struggling with class imbalance in classification.
@patloeber 3 years ago
Great suggestion!
@astridbrenner2957 3 years ago
This channel is so underrated. Please upload tutorials about Django
@ingenuity8886 5 months ago
Thank you so much, you explained it really well.
@salimibrahim459 3 years ago
Nice, was waiting for this :)
@patloeber 3 years ago
Hope you enjoyed it!
@CodeWithTomi 3 years ago
Yet another PyTorch video🔥
@devadharshan6328 3 years ago
Can you help to implement PyTorch with Django?
@patloeber 3 years ago
man you are fast :D
@devadharshan6328 3 years ago
Can you upload your GUI chatbot code to GitHub? I tried the code-along approach and was able to learn the concept, but I ran into a few bugs.
@patloeber 3 years ago
I added the code here: github.com/python-engineer/python-fun
@devadharshan6328 3 years ago
@@patloeber thanks
@saeeddamadi3823 3 years ago
Thank you so much for the clear presentation of autoencoders!
@patloeber 3 years ago
glad you like it!
@shahinghasemi2346 3 years ago
Thank you for your nice tutorials. Please do the same for non-image data; I'm curious to see CNN autoencoders on non-image data.
@Mesenqe 3 years ago
This channel is really good, I learned PyTorch from it. Guys, I assure you it's worth subscribing.
@patloeber 3 years ago
Thanks so much :) appreciate the nice words
@adityasaini491 3 years ago
Hey Patrick, a really informative and concise video! Thoroughly enjoyed it :D Just a small correction at 12:51: you used the word "dimension" while explaining the Normalize transform, whereas the two arguments are actually the mean and standard deviation used to normalize the data.
@patloeber 3 years ago
thanks for the hint!
@Jerrel.A 2 years ago
Top-notch explanation! Thx.
@ayankashyap5379 3 years ago
At 22:17, when calculating the shape of the conv output, it should be 128*128*1 => 64*64*16, and the rest should change accordingly.
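For anyone double-checking these shape calculations, the standard conv output-size formula can be sketched as follows (shown with MNIST-sized numbers and with the 128x128 figures from the comment above; layer parameters are illustrative):

```python
import torch
import torch.nn as nn

# Output size of a conv layer:
#   out = (in + 2*padding - kernel_size) // stride + 1
def conv_out(size, kernel_size, stride, padding):
    return (size + 2 * padding - kernel_size) // stride + 1

# A stride-2 conv with padding 1 halves the spatial size,
# which is why a 28x28 MNIST image becomes 14x14:
print(conv_out(28, kernel_size=3, stride=2, padding=1))   # 14

# The same rule on a 128x128 input gives 64x64:
print(conv_out(128, kernel_size=3, stride=2, padding=1))  # 64

# Cross-check against an actual layer
conv = nn.Conv2d(1, 16, 3, stride=2, padding=1)
print(conv(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 16, 14, 14])
```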
@markavilin5020 2 years ago
Very clear, thank you very much
@tilkesh 2 months ago
for _ in range(100): print("Thank you")
@falklumo 1 year ago
It should be noted that the performance difference between the linear and CNN versions shown here comes from the chosen compression factor. The linear model compresses to 12 bytes per image, the CNN to 256 bytes, while an original image is 784 bytes. So the CNN code does not compress enough; less than PNG, actually! You need two more linear layers to compress 64 down to 16 and then 4.
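A sketch of what those two extra linear layers could look like, squeezing a 64-unit CNN bottleneck down to 4 values (the 64/16/4 sizes follow the comment above; the module itself is illustrative, not the video's exact architecture):

```python
import torch
import torch.nn as nn

# Tighter bottleneck for a CNN autoencoder: compress the 64-channel
# code down to 4 numbers with two extra linear layers, then expand back.
class TighterBottleneck(nn.Module):
    def __init__(self):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Flatten(),          # e.g. (N, 64, 1, 1) -> (N, 64)
            nn.Linear(64, 16),
            nn.ReLU(),
            nn.Linear(16, 4),      # 4 floats per image
        )
        self.decompress = nn.Sequential(
            nn.Linear(4, 16),
            nn.ReLU(),
            nn.Linear(16, 64),
        )

    def forward(self, x):
        code = self.compress(x)
        return self.decompress(code), code

bottleneck = TighterBottleneck()
out, code = bottleneck(torch.randn(8, 64, 1, 1))
print(code.shape)  # torch.Size([8, 4])
```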
@danielevanzan4340 1 month ago
Is it wrong to choose 4 feature maps in the last CNN layer instead of adding the 2 linear layers?
@huoguo7426 2 years ago
Great video! Could you provide the same walkthrough for a variational autoencoder? Or point me to a good walkthrough on the theory and implementation of one?
@aisidamayomi8534 3 months ago
Could you please do one for network intrusion detection?
@anarkaliprabhakar6640 9 months ago
Nice explanation
@satpalsinghrathore2665 1 year ago
Very cool. Thank you.
@3stdv93 1 year ago
Thanks ❤
@DiegoAndresAlvarezMarin 3 years ago
Beautifully explained!! Thank you!
@devadharshan6328 3 years ago
Great animations! My suggestion is to add more animations not only to the theory but also to the code walkthrough. Just a suggestion; great video, thanks for your teaching.
@patloeber 3 years ago
thanks! definitely a good idea
@garikhakobyan3013 3 years ago
Hello, nice videos you have. Looking forward to new videos on paper reviews and implementations.
@harshkumaragarwal8326 3 years ago
Great work!! Thanks :))
@ujjwalkumar-uf8nj 1 year ago
Hey Patrick, I used your exact code to train the CNN-based autoencoder but couldn't get it to converge without batch normalization. After adding BatchNorm2d after every ReLU it works fine, but without it, it doesn't; I tried learning rates from 1e-2 to 1e-5, training on MNIST only. The loss becomes NaN or stays between 0.10 and 0.09.
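A sketch of a CNN encoder with BatchNorm2d added after each ReLU, as the comment above suggests for stable convergence (the layer sizes mirror a typical MNIST encoder, not necessarily the video's exact code):

```python
import torch
import torch.nn as nn

# CNN encoder with BatchNorm2d after each ReLU to stabilize training
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
    nn.ReLU(),
    nn.BatchNorm2d(16),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
    nn.ReLU(),
    nn.BatchNorm2d(32),
    nn.Conv2d(32, 64, 7),                       # 7x7 -> 1x1
)

z = encoder(torch.randn(8, 1, 28, 28))
print(z.shape)  # torch.Size([8, 64, 1, 1])
```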
@김중국-n3n 3 years ago
thank you!
@patloeber 3 years ago
You're welcome!
@saurrav3801 3 years ago
Bro, always waiting for your PyT🔥rch videos ....🤙🏼🤙🏼🤙🏼
@patloeber 3 years ago
🙌
@ArponBiswas-wq3sh 6 months ago
Very nice, but we need more
@user-wr4yl7tx3w 2 years ago
But how do we leverage the low-dimensional embedding, given that it plays the role of a PCA?
@amzadhossain8118 3 years ago
Can you make a video on DnCNN?
@marytajz1814 3 years ago
Your tutorials are amazing! Thank you so much... Could you please make a video on nn.Embedding as well?
@patloeber 3 years ago
I'll have a look at it
@736939 3 years ago
Can you please show how to work with variational autoencoders and applications such as image segmentation?
@patloeber 3 years ago
will look into this!
@736939 3 years ago
@@patloeber Thank you. It's hard for me to program it myself.
@teetanrobotics5363 3 years ago
you're the best
@patloeber 3 years ago
thanks!
@teetanrobotics5363 3 years ago
Could you please cover GANs, VAEs, transformers, and BERT in PyTorch?
@martinmayo8197 3 years ago
I don't quite understand the syntax. Why do you define the method 'forward' but never call it explicitly? Maybe the line "recon = model(img)" is where you are using it, but I didn't know it could be done like that. I would have written "recon = model.forward(img)"; is it the same?
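A short sketch of why model(img) works: nn.Module defines __call__, which dispatches to forward() after running any registered hooks, so calling the module is the preferred form.

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = Tiny()
x = torch.randn(3, 4)

# model(x) goes through nn.Module.__call__, which runs hooks and then
# dispatches to forward(); model.forward(x) skips the hook machinery.
a = model(x)
b = model.forward(x)
print(torch.equal(a, b))  # True here, but model(x) is the idiomatic form
```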
@mojojojo890 1 year ago
Which link explains how you build the PyTorch classes, please?
@haikbenlian5466 2 years ago
How did you find that the image size decreased from 28 to 14?
@YounasKhan-vm8nr 6 months ago
Do you have anything specific for face images? This won't work on face images.
@lankanathaekanayake7680 2 years ago
Is it possible to use sentences as input data?
@avivalviannur5610 1 year ago
I tried to rerun your code for the CNN autoencoder, but I got loss = NaN in every epoch. Do you know what is wrong?
@marinacarnemolla5515 3 years ago
Hi, I have a question: if we pass the image as input to the model, won't it just set the weights so that the output is exactly the same as the input image? So why is the image given as input to the model? It doesn't make sense to me. Could you explain this?
@khushpatelmd 3 years ago
If you normalize the input image, which is also the label, the values will be between -1 and +1, but your output, having passed through a sigmoid, will be between 0 and 1. How will you decrease the loss for pixels between -1 and 0, given your predictions can never be less than 0?
@anonim5052 5 months ago
You need to change the sigmoid at the end to tanh, so the output will also be between -1 and 1.
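A quick sketch of the range mismatch being discussed: sigmoid outputs lie in (0, 1), so they can never reach negative pixel values, while tanh covers (-1, 1) and matches inputs normalized with mean 0.5, std 0.5.

```python
import torch

# Inputs normalized to [-1, 1] cannot be matched by a sigmoid output layer
x = torch.linspace(-5.0, 5.0, 11)
sig = torch.sigmoid(x)   # range (0, 1): never negative
tanh = torch.tanh(x)     # range (-1, 1): covers negative pixels too
print(sig.min().item() > 0)    # True
print(tanh.min().item() < 0)   # True
```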
@Saens406 2 years ago
Why is there no requires_grad there?
@hadisizadiyekta125 3 years ago
You used recon and img as inputs to the loss function; however, if we want to train and test the model, shouldn't we use "recon" and "labels" as inputs to the loss function? But the labels are 3D; how can we do that?
@121horaa 2 months ago
Since an autoencoder is an unsupervised technique, recon and img are used as inputs to the loss function. In semi-supervised or supervised methods, you have labels, so you use them against the predicted values in the loss function.
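A minimal training-step sketch of that unsupervised objective, with a toy linear autoencoder (the layer sizes are illustrative): the reconstruction is compared against the input image itself, not a class label.

```python
import torch
import torch.nn as nn

# Minimal autoencoder training step: the target of the loss is the input
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784))
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

img = torch.rand(32, 784)     # a batch of flattened images
recon = model(img)
loss = criterion(recon, img)  # reconstruction vs. input, no labels needed

optimizer.zero_grad()
loss.backward()
optimizer.step()
```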
@roshinroy5129 1 year ago
Am I the only one encountering NaN values during training?
@vallisham1756 3 years ago
module 'torch.nn' has no attribute 'ReLu'. Is anyone else getting the same error?
@theupsider 3 years ago
it's ReLU
@vallisham1756 3 years ago
@@theupsider Thanks a lot!
@AshishSingh-753 3 years ago
Next video is on GANs
@anirudhjoshi1607 2 years ago
dude, my CNN autoencoder is doing worse than the linear autoencoder, lmao
@marc2911 1 year ago
me too, the outputs show strange padding artifacts as well
@ryanhoward5999 2 years ago
"Jew-Pie-Tar notebook"
@pleasedontsubscribeme4397 3 years ago
Great work!