
Autoregressive Diffusion Models (Machine Learning Research Paper Explained) 

Yannic Kilcher
262K subscribers
27K views

Published: 18 Sep 2024

Comments: 31
@YannicKilcher 2 years ago
Discord link: discord.gg/4H8xxDF
@ryanbaten 2 years ago
Very similar to XLNet. If I remember correctly, it was also trained autoregressively, over permuted orderings, similar to this. There were extra tricks that made it train in parallel more efficiently. The authors claimed the autoregressive training results in a better model and that a V2 would follow soon, but I haven't seen it. It seemed super impressive when it came out, but the idea also seems not to have stood the test of time, since simply training MLM models longer and on comparable amounts of data beat it performance-wise.
@Kram1032 2 years ago
Oh, I like this idea! Maybe the fact that even the tokens that are already there keep being predicted could be exploited to let the generator change its mind, deleting or replacing some pixels to converge to something better overall. It could even be done on an already complete image. In fact, that might be especially helpful for the text variant, so it can delete parts that didn't work out after all.
@ChuanChihChou 2 years ago
8:50 BERT is actually also trained to correct some of the input tokens (15% of token positions are chosen, and 10% of those are replaced with a random token, i.e. 1.5% of all positions). I suspect they could get better generation quality if they also allowed such token correction.
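To make the arithmetic in the comment above concrete, here is a small Python sketch (not from the BERT paper or codebase; the function name, toy vocabulary, and sentence are made up for illustration) of the standard 15% / 80-10-10 masking rule, under which 15% × 10% = 1.5% of all positions end up replaced by a random token:

```python
# Toy sketch of BERT's masking rule (not the original implementation;
# names and the example data are illustrative assumptions).
import random

def bert_corrupt(tokens, vocab, mask_token="[MASK]", p_select=0.15):
    """Return corrupted tokens plus the positions the model must reconstruct."""
    out, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if random.random() >= p_select:        # ~85% of positions: untouched, no loss
            continue
        targets[i] = tok                       # selected position: predict the original
        r = random.random()
        if r < 0.8:
            out[i] = mask_token                # 80% of selected: replace with [MASK]
        elif r < 0.9:
            out[i] = random.choice(vocab)      # 10% of selected: random token (the 1.5% overall)
        # remaining 10% of selected: keep the original token but still predict it
    return out, targets

corrupted, targets = bert_corrupt("the cat sat on the mat".split(),
                                  vocab=["the", "cat", "sat", "on", "mat", "dog"])
print(corrupted, targets)
```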
@SLAM2977 2 years ago
Love these straight-to-the-point, honest opinions :)
@sujovian 6 months ago
The out-of-order generation of ARDMs seems really useful for efficient retrieval augmentation.
@CristianGarcia 2 years ago
Not sure if it's mentioned, but there is a tradeoff during training: autoregressive models like GPT can train over a complete sample all at once, whereas here you need to see many different masks for it to "learn" the sample, i.e. training could be slower.
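As a rough illustration of that tradeoff, here is a minimal sketch (my own, not the authors' code; the toy model, sizes, and loss handling are assumptions) of one order-agnostic training step: only the randomly masked positions contribute a learning signal, whereas a left-to-right model gets a loss term at every position from a single forward pass:

```python
# Minimal sketch of one ARDM-style training step (assumed toy model and sizes).
import torch
import torch.nn.functional as F

D, V, H = 16, 256, 64                                  # sequence length, vocab, hidden size
model = torch.nn.Sequential(                           # stand-in for a BERT-like bidirectional net
    torch.nn.Linear(V + 1, H), torch.nn.ReLU(), torch.nn.Linear(H, V))

def ardm_step(x):                                      # x: (D,) integer tokens
    sigma = torch.randperm(D)                          # random generation order
    t = int(torch.randint(1, D + 1, (1,)))             # how far along that order we are
    revealed = torch.zeros(D, dtype=torch.bool)
    revealed[sigma[:t - 1]] = True                     # first t-1 positions are visible
    inp = F.one_hot(x, V).float()
    inp[~revealed] = 0.0                               # blank out the unrevealed tokens
    inp = torch.cat([inp, revealed.float().unsqueeze(-1)], dim=-1)  # append a mask flag
    logits = model(inp)                                # predict every position in parallel
    # Loss only on the masked positions; the paper additionally reweights these
    # terms so that the expectation over sigma and t forms a proper ELBO.
    return F.cross_entropy(logits[~revealed], x[~revealed])

loss = ardm_step(torch.randint(0, V, (D,)))
loss.backward()
```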
@nauman.mustafa 2 years ago
It is a really powerful model, and IMO it can be specialized to a much larger number of tasks than GPT or GANs, etc.
@sacramentofwilderness6656 2 years ago
Thanks, as always, for the great content! I wonder whether it is possible to predict some optimal order of decoding, e.g. first generate the important details of the image, sentence, or any other kind of data (the cats, the dogs) and then refine the less important parts such as the background. The important parts could serve as anchors for generation.
@AlbertZeyer 2 years ago
Just a random idea on splitting the vocabulary (32:40): you could cluster the vocab. This has been done before for hierarchical softmax. So you could still use the same idea that is used for the discretized pixel-value classes.
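For what it's worth, here is a minimal sketch of that two-stage idea (my own; the cluster assignment, sizes, and names are toy assumptions): score a cluster first, then a token within the chosen cluster, as in hierarchical softmax:

```python
# Toy two-stage (clustered) softmax sketch; assignment and names are assumptions.
import torch
import torch.nn.functional as F

V, C, H = 1024, 32, 64                  # vocab size, number of clusters, hidden size
cluster_of = torch.randint(0, C, (V,))  # toy assignment; in practice cluster by frequency/embedding
cluster_head = torch.nn.Linear(H, C)    # scores over clusters
token_head = torch.nn.Linear(H, V)      # within-cluster scores (masked per cluster below)

def two_stage_log_prob(h, token):
    c = cluster_of[token]
    log_p_cluster = F.log_softmax(cluster_head(h), dim=-1)[c]
    # Mask out tokens from other clusters, then normalize within the chosen cluster.
    # (A real hierarchical softmax would use per-cluster weights to save compute.)
    within = token_head(h).masked_fill(cluster_of != c, float("-inf"))
    log_p_token = F.log_softmax(within, dim=-1)[token]
    return log_p_cluster + log_p_token

h = torch.randn(H)
print(two_stage_log_prob(h, torch.tensor(5)))
```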
@AlbertZeyer 2 years ago
Why do you think that a model which is not restricted to left-to-right sampling would always be beaten by an autoregressive model that is strictly left-to-right? Your argument was that the latter would focus very much on this specific task. But I also see arguments the other way around: the arbitrary order could generalize better. Also, there are probably always better orders than left-to-right, and if the model can automatically use a better order, it could beat the strictly left-to-right model.
@ssssssstssssssss 2 years ago
I did some research on this type of model four years ago or so. Perhaps I should have stuck with it. The use case was much better suited to this type of model. I believe it is still being used in the software I integrated it into.
@SuperJg007 2 years ago
best channel ever.
@박재성학생물리·천문
Yannic, you're a life saver.
@herp_derpingson 2 years ago
10:40 This is like a regular transformer, but we are predicting more than one token at once and out of order. Or like BERT, but with multiple iterations. 29:42 I wonder what would happen if, at each step, each already-generated output pixel had a probability of being overwritten. The model would then have the option to reconsider its own previous predictions now that it has more input. I would also like to see how much the output quality degrades as you decrease the number of steps.
@YannicKilcher 2 years ago
Yes, I've already seen multiple people wonder about the possibility of the model refining its own outputs. Very interesting idea!
@thegistofcalculus 2 years ago
Yes, overwriting is clearly intriguing, although stability becomes a concern again, and I wonder whether naive approaches would be incentivized to output something very close to the training samples.
@priancho 2 years ago
Watched twice and understood it ;-) Thanks for the video!
@sarvagyagupta1744 2 years ago
Why are we using a categorical distribution? We are trying to predict pixel values, which in this case are RGB values. So what categories are being used to get the pixel values?
@marouanemaachou7875 2 years ago
It does remind me of denoising diffusion models, as BERT-like models are denoising autoencoders. Am I wrong?
@G12GilbertProduction 2 years ago
I bet it's a coincidence with the Bayesian autoencoder technique with multi-layer simultaneous differentials, something like zero-shot but in reverse.
@tripzero0 2 years ago
Now, can we make the diffusion model predict a codebook for a VQGAN?
@bluel1ng 2 years ago
Yes, it should definitely be possible to model the discrete latent code of a VQ-VAE with an ARDM. I guess the main advantage compared to VQGAN (which uses a classic ARM) would be the possibility of parallel decoding. Also, depending on the architecture, decoding larger images might be possible (e.g., since diffusion models frequently use a U-Net architecture with attention at its core).
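A rough sketch of what that parallel decoding could look like (the interface is an assumption, not anything from the paper: `model(x)` is taken to return per-position logits, `mask_id` marks unknown slots, and `k` is a decoding budget): reveal up to k still-masked latent positions per network evaluation instead of one:

```python
# Sketch of ARDM-style parallel decoding of a flat grid of discrete VQ codes
# (assumed interface: model(x) returns (D, V) logits; mask_id marks unknown slots).
import torch

def parallel_decode(model, x, mask_id, k=8):
    x = x.clone()
    while (x == mask_id).any():
        logits = model(x)                                    # one network evaluation
        masked = (x == mask_id).nonzero(as_tuple=True)[0]    # indices still unknown
        chosen = masked[torch.randperm(len(masked))[:k]]     # pick up to k of them
        for i in chosen:
            x[i] = torch.distributions.Categorical(logits=logits[i]).sample()
        # Larger k means fewer model calls, but the k tokens in a step are sampled
        # independently of each other, which typically costs some sample quality.
    return x
```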
@thomashirtz 2 years ago
TTP 13:25 .. Just kidding, nice video :)
@matthieulin335 2 years ago
damn looks cool
@patf9770 2 years ago
Been working on a similar idea for the greater part of the last year. Gotta be faster! See the WaveFunctionCollapse procedural generation algorithm; it's simple yet incredibly powerful and works on the principle of generating, at each step, the pixel that the "model" is most "certain" about: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-2SuvO4Gi7uY.html
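For the curious, here is a tiny sketch of that "most certain first" decoding order (my own; the model interface and `mask_id` are assumptions): at each step, commit the still-masked position whose predicted distribution has the lowest entropy:

```python
# Sketch of WaveFunctionCollapse-style "most certain first" decoding
# (assumed interface: model(x) returns (D, V) logits; mask_id marks unknown slots).
import torch
import torch.nn.functional as F

def decode_most_certain_first(model, x, mask_id):
    x = x.clone()
    while (x == mask_id).any():
        probs = F.softmax(model(x), dim=-1)                  # (D, V) per-position distributions
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
        entropy[x != mask_id] = float("inf")                 # ignore already-decided slots
        i = int(entropy.argmin())                            # lowest-entropy masked position
        x[i] = int(probs[i].argmax())                        # commit its most likely value
    return x
```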
@Gogargoat 2 years ago
It kind of works similarly to how, when the universe decides that a particle exists in one position (when it is observed), it's as if that sucks 1.0 of mass out of the probability density cloud. In the back of my mind I always kind of wondered how that worked and how that consistency is achieved, and I guess this decoding method is one way.
@billykotsos4642 2 years ago
Not all languages are read from left to right
@herp_derpingson 2 years ago
You can just reverse it before feeding into the model and then reverse it back after generation.
@arturprzybysz6614 2 years ago
@herp_derpingson Is it legal?