
10P - Non-contrastive joint embedding methods (JEMs) for self-supervised learning (SSL) 

Alfredo Canziani
4.1K views

Published: 28 Aug 2024

Comments: 15
@PedroAugusto-kg1ss · 25 days ago
Just finished all videos. Really amazing. Thank you for sharing.
@alfcnz · 18 days ago
Glad you like them! 🥳🥳🥳
@lgcondados · 2 years ago
Thank you a million for sharing this amazing lecture with us! Greetings from Brazil!
@alfcnz · 2 years ago
🥳🥳🥳
@kalokng3572 · 2 years ago
Thanks Alfredo and Jiachen, the video really helps people understand VICReg. I've got a perhaps very simple question about VICReg and JEPA. I've read "A Path Towards Autonomous Machine Intelligence" by Prof. Yann LeCun and saw that VICReg is introduced in the section on the Joint Embedding Predictive Architecture (JEPA). Since JEPA as defined in the paper contains a latent variable z, while VICReg does not seem to involve a latent variable z, may I confirm whether the latent variable z is necessary in the JEPA framework, or is it just an optional component?
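For context, a minimal sketch of the VICReg objective, assuming two batches of embeddings za and zb of shape (N, D) from the two views, with the loss weights set to the defaults reported in the VICReg paper. Note that the criterion itself contains no latent variable z; in JEPA, z (when present) conditions the predictor, not the loss.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(za, zb, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Sketch of the VICReg objective for two embedding batches (N, D)."""
    N, D = za.shape

    # Invariance: mean-squared error between the two views' embeddings.
    sim_loss = F.mse_loss(za, zb)

    # Variance: hinge on each dimension's standard deviation,
    # pushing every embedding dimension to keep std >= 1.
    std_a = torch.sqrt(za.var(dim=0) + eps)
    std_b = torch.sqrt(zb.var(dim=0) + eps)
    var_loss = F.relu(1.0 - std_a).mean() + F.relu(1.0 - std_b).mean()

    # Covariance: penalize off-diagonal covariance entries,
    # decorrelating the embedding dimensions.
    za_c = za - za.mean(dim=0)
    zb_c = zb - zb.mean(dim=0)
    cov_a = (za_c.T @ za_c) / (N - 1)
    cov_b = (zb_c.T @ zb_c) / (N - 1)
    off_diag = lambda m: m - torch.diag(torch.diag(m))
    cov_loss = off_diag(cov_a).pow(2).sum() / D + off_diag(cov_b).pow(2).sum() / D

    return sim_w * sim_loss + var_w * var_loss + cov_w * cov_loss
```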
@ikechukwumichael1383 · 2 years ago
Thank you for sharing
@alfcnz · 2 years ago
🙌🏻🙌🏻🙌🏻
@kiennguyen3228 · 2 years ago
How do you measure information content, e.g. of the backbone embedding vs. the projector embedding, when you said the projector removed some information? Thanks
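One common proxy in the SSL literature (an assumption here, not necessarily what the lecture used) is the effective rank of the embedding matrix, i.e. the exponential of the entropy of its normalized singular values, in the spirit of RankMe:

```python
import torch

def effective_rank(emb, eps=1e-7):
    """Effective rank of an embedding matrix (N, D): entropy of the
    normalized singular values. Higher = more dimensions carry signal."""
    s = torch.linalg.svdvals(emb - emb.mean(dim=0))
    p = s / (s.sum() + eps)
    entropy = -(p * torch.log(p + eps)).sum()
    return torch.exp(entropy)  # lies in [1, min(N, D)]

# Usage sketch, with h = backbone(x) and z = projector(h):
# effective_rank(h) typically exceeds effective_rank(z), consistent
# with the projector discarding some of the backbone's information.
```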
@my_master55 · 1 year ago
Thanks, Alfredo 👍 But why are only 2 videos (10P and 11L) available in the DL 22' playlist? ✌
@alfcnz · 1 year ago
The rest didn't really change much from the previous edition. Still, for the fall semester we'll have at least 6 videos!
@intisarchy7059 · 2 years ago
Professor, thanks a lot for your teaching. I have a question: is SSL pre-training better than supervised pre-training in terms of the quality of the features?
@alfcnz · 2 years ago
Depends on the amount and the type of data you have. Say you want to work with radiographs. Pretraining on ImageNet is not really going to work, since the data is rather different. Then, you may want to pretrain on annotated radiographs, but those annotations are very expensive. So, in these cases SSL is going to be superior.
@intisarchy7059 · 2 years ago
@alfcnz Thanks. Actually, I understand that when annotated data is scarce, SSL is the best way to pre-train a network. OK, say for example I want to train a person-detection model, and first I want to pre-train my backbone network. Let us assume we have 1000 annotated person images. Now, assume I prepare two different backbones: 1) pre-trained on the 1000 annotated person images; 2) pre-trained using SSL on the same 1000 person images (without using the annotations). My assumption is that the SSL-trained backbone will provide better generalization (I remember seeing Grad-CAM-like heatmaps from DINO by Facebook, where they showed that SSL-based training provided high-quality pre-training). I might be wrong. In summary, my query is: given a similar amount of data, will SSL be better than SL?
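A minimal sketch of the comparison protocol this question implies, assuming a backbone that maps images to flat feat_dim features and labeled train/test DataLoaders (the function name and signature are illustrative, not from any specific library): freeze each pre-trained backbone, train an identical linear head, and compare accuracies. Running it for both the SSL-pretrained and the supervised-pretrained backbone on the same splits isolates feature quality from every other factor.

```python
import torch
import torch.nn as nn

def linear_probe_accuracy(backbone, feat_dim, train_loader, test_loader,
                          num_classes, epochs=10, lr=1e-3, device="cpu"):
    """Freeze `backbone`, train only a linear classifier on top,
    and report test accuracy as a proxy for feature quality."""
    backbone.eval().to(device)
    for p in backbone.parameters():
        p.requires_grad = False

    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.Adam(head.parameters(), lr=lr)

    # Train the linear head on frozen features.
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)  # (B, feat_dim)
            loss = nn.functional.cross_entropy(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Evaluate on the held-out split.
    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            pred = head(backbone(x)).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total
```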
@user-co6pu8zv3v · 2 years ago
Thank you for the video! How is your book going?
@alfcnz · 2 years ago
Paused. Dealing with other crap, currently.