
08L - Self-supervised learning and variational inference 

Alfredo Canziani

Course website: bit.ly/DLSP21-web
Playlist: bit.ly/DLSP21-RU-vid
Speaker: Yann LeCun
Chapters
00:00:00 - Welcome to class
00:00:45 - GANs revisited
00:17:07 - Self-supervised learning: a broader purpose
00:31:59 - Sparse modeling
00:43:25 - Amortized inference
00:51:21 - Convolutional sparse modeling (with group sparsity)
00:55:12 - Discriminant recurrent sparse AE
00:57:26 - Other self-supervised learning techniques
00:58:45 - Group sparsity
01:07:47 - Regularization through temporal consistency
01:12:09 - VAE: intuitive interpretation
01:26:13 - VAE: probabilistic variational approximation-based interpretation

Published: 17 Jul 2024

Comments: 29
@dialloibu · 2 years ago
A quick annotation of chapters after first viewing:
00:00:00 - Summary
00:01:00 - GANs
00:17:10 - How do Humans and Animals learn quickly
00:28:05 - Self-Supervised Learning
00:32:00 - Sparse Coding / Sparse Modeling
01:07:45 - Regularization Through Temporal Consistency
01:12:05 - Variational AE
@alfcnz · 2 years ago
Thanks. I haven't had the chance to create the chapter markers yet. I'll do it next week, perhaps.
@ShihgianLee · 2 years ago
Thank you, Alf, for uploading the new lecture! I finished the 2020 lectures and started reviewing the 2021 lectures. I find that a different take helps me understand the topics better!
@alfcnz · 2 years ago
Yay! 🥳🥳🥳
@ShihgianLee · 2 years ago
@alfcnz Hi Alf, at 57:27 Yann mentioned there are datasets that the NYU students can use for their SSL project. I was wondering if it is possible to release those to students outside of NYU so that we can try them out as well? 🤔
@alfcnz · 2 years ago
It's just a public data set we've reduced in size (image size and number of images). You can get any publicly available data set to run your experiments.
@anondoggo · a year ago
Timestamps:
00:00:45 - GANs revisited
00:17:07 - Self-supervised learning: a broader purpose
00:31:59 - Sparse modeling
00:43:25 - Amortized inference
00:51:21 - Convolutional sparse modeling (with group sparsity)
00:55:12 - Discriminant recurrent sparse AE
00:57:26 - Other self-supervised learning techniques
00:58:45 - Group sparsity
01:07:47 - Regularization through temporal consistency
01:12:09 - VAE: intuitive interpretation
01:26:13 - VAE: probabilistic variational approximation-based interpretation
@alfcnz · a year ago
Thanks! ❤️
@prof_shixo · 2 years ago
Thanks for this very informative lecture. Great effort and it is very much appreciated.
@alfcnz · 2 years ago
💪🏻💪🏻💪🏻
@khoaguin · 2 years ago
Thank you very much, Alfredo!
@alfcnz · 2 years ago
You're very welcome ☺️☺️☺️
@user-co6pu8zv3v · 2 years ago
Thank you, Alfredo :) This video is very helpful for me.
@alfcnz · 2 years ago
🥳🥳🥳
@bmahlbrand · 2 years ago
Suppose you take the GAN example and make it conditional. Do you sample the noise tensors with the same dimensions as before and concatenate (or otherwise condition the model on) a real condition tensor, or do you sample across the channels of the condition as well?
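For reference, the common conditional-GAN recipe keeps the noise sampling exactly as in the unconditional case and feeds the condition in as a real (non-sampled) input, typically embedded and concatenated. A minimal sketch, assuming a class-label condition; all sizes, names, and the MLP architecture below are illustrative, not the lecture's model:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Sketch of a conditional generator: noise z keeps the same
    dimensionality as in the unconditional case; the condition is
    embedded and concatenated, never sampled."""
    def __init__(self, z_dim=100, n_classes=10, cond_dim=32, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, cond_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
            nn.Tanh(),
        )

    def forward(self, z, y):
        # only z is stochastic; the condition y is a fixed, real input
        return self.net(torch.cat([z, self.embed(y)], dim=1))

z = torch.randn(16, 100)            # same noise dims as the unconditional GAN
y = torch.randint(0, 10, (16,))     # class labels as the condition
x_fake = ConditionalGenerator()(z, y)   # shape: (16, 784)
```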
@alexsht2 · a year ago
An interesting question about the variational approximation: what's inside the "log" is an average (an expectation). Expectations can be approximated by sampling from the distribution, in this case from q. So why do we need a bound? Why can't we just approximate the integral inside the log by sampling, and then take the log?
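A worked note on the point this question raises, using standard VAE notation (an assumption, not a quote from the lecture): by Jensen's inequality, the log of an expectation upper-bounds the expectation of the log, and a Monte-Carlo estimate placed inside the log inherits a downward bias.

```latex
% With z_i \sim q(z \mid x):
\log p(x)
  = \log \mathbb{E}_{q(z \mid x)}\!\left[\frac{p(x, z)}{q(z \mid x)}\right]
  \;\ge\; \mathbb{E}_{q(z \mid x)}\!\left[\log \frac{p(x, z)}{q(z \mid x)}\right]
  = \mathrm{ELBO}(x).
% The plug-in estimator
%   \log \frac{1}{N} \sum_{i=1}^{N} \frac{p(x, z_i)}{q(z_i \mid x)}
% is consistent but, again by Jensen, biased downward for any finite N,
% whereas the ELBO has a simple unbiased Monte-Carlo estimator,
%   \frac{1}{N} \sum_{i=1}^{N} \log \frac{p(x, z_i)}{q(z_i \mid x)}.
```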
@petrdvoracek670 · a year ago
Hello, thank you for sharing such insightful material! Yann frequently points out that pretraining an image classification model on an unsupervised task using GANs doesn't yield the best results (around the 14:15 mark). Could you recommend any scholarly articles that delve into this subject, particularly ones that compare the effectiveness of pretraining with GANs versus other methods, like the Siamese training scheme? Thank you!
@reinerwilhelms-tricarico344 · 4 months ago
Looks a bit like a course on alchemy - but I still feel I learned a lot, especially great tricks and acronyms. The big picture is still a bit in the dark, but I'm getting there. ;-)
@alfcnz · 4 months ago
Hahaha 😅😅😅
@my_master55 · 2 years ago
If this way of making features (58:55, 1:12:06) is so cool and more "natural" (roughly how the brain works with visual features), why wasn't research turned in that direction starting from 2010, when it was proposed? 🤔 I suspect there are some limitations Yann didn't mention? Or is the reason that the topic is still more complex than the usual convolutions? Thanks for the vid, Alfredo and Yann 🤗
@bmahlbrand · 2 years ago
Another question: is there a corresponding practicum for the sparse coding portion (LISTA in particular)?
@alfcnz · 2 years ago
No. I mostly failed my only attempt to train a sparse AE, even with target prop. I'm open to supervising anyone interested in giving it a try, though. Feel free to reach out on Discord.
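There is no official practicum, but the amortized-inference idea Yann describes (LISTA, from Gregor & LeCun, 2010) is compact enough to sketch: unroll a few ISTA iterations with learned weights so the encoder approximates sparse codes in a fixed number of steps. A minimal sketch; all dimensions and the threshold parameterization below are assumptions:

```python
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Unrolled ISTA with learned weights (Learned ISTA)."""
    def __init__(self, input_dim=256, code_dim=512, n_steps=3):
        super().__init__()
        self.We = nn.Linear(input_dim, code_dim, bias=False)   # encoder filter
        self.S = nn.Linear(code_dim, code_dim, bias=False)     # lateral connections
        self.theta = nn.Parameter(torch.full((code_dim,), 0.1))  # learned thresholds
        self.n_steps = n_steps

    def shrink(self, u):
        # soft-thresholding: the source of sparsity in the code
        return torch.sign(u) * torch.relu(u.abs() - self.theta)

    def forward(self, x):
        b = self.We(x)
        z = self.shrink(b)
        for _ in range(self.n_steps):      # fixed, small number of iterations
            z = self.shrink(b + self.S(z))
        return z

x = torch.randn(8, 256)
z = LISTA()(x)   # approximate sparse code, shape (8, 512)
```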
@jadtawil6143 · 2 years ago
At 1:11:40, how do you know which parts of z to allow to vary, and which not to, exactly? How do you know which parts represent the "objects", and which parts represent the things that are changing, like the location of the objects?
@alfcnz · 2 years ago
Hi Jad, that's a good question! You don't 🤷🏼‍♂️ If you add more inductive bias (enforce partial invariance and partial equivariance of the representation) learning will determine which part of the hidden representation represents _what_ and which _where_. Yann has a few papers on this topic. You should be able to find them online.
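One way to picture that inductive bias, in the spirit of the lecture's temporal-consistency regularization: designate part of the code as the invariant "what" and penalize only that part for changing across nearby frames, leaving the rest ("where") free to vary. A hedged sketch; the encoder, the split point, and all names are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoder; architecture and sizes are made up.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128))

def temporal_consistency_loss(x_t, x_t1, split=64):
    """Encourage the first half of z ("what") to stay invariant across
    nearby frames; the second half ("where") is free to change."""
    z_t, z_t1 = encoder(x_t), encoder(x_t1)
    what_t, what_t1 = z_t[:, :split], z_t1[:, :split]
    return F.mse_loss(what_t, what_t1)

x_t  = torch.randn(8, 1, 28, 28)   # frame at time t
x_t1 = torch.randn(8, 1, 28, 28)   # frame at time t + 1
loss = temporal_consistency_loss(x_t, x_t1)
```

Under such a constraint, learning itself sorts out which units end up encoding identity and which encode pose: only the penalized part is pushed toward invariance.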
@jadtawil6143 · 2 years ago
@alfcnz Thank you, Alfredo, and lots of gratitude for this great series.
@robinranabhat3125 · a year ago
01:35:00 is beautiful.
@alfcnz · a year ago
🤩🤩🤩
@buoyrina9669 · 2 years ago
I wonder how to get as smart as Yann
@alfcnz · 2 years ago
By gradient descent, of course.