
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence 

Yannic Kilcher
262K subscribers · 19K views
Published: 11 Sep 2024

Comments: 21
@manuelpariente2288 · 4 years ago
Thanks again :-) Loved the critique at the end. Also, it's nice of them to report these results; lots of papers would keep quiet about it to make it seem like the method brought all the gains!
@herp_derpingson · 4 years ago
78% accuracy from 1 image per class. This blew my mind. What a time to be alive.
@TeoZarkopafilis · 4 years ago
HOLD ON TO YOUR PAPERS
@meudta293 · 4 years ago
My brain matter is all over the floor right now hhh
@matthewtang1489 · 4 years ago
@TeoZarkopafilis Woah! A fellow scholar here!
@shrinathdeshpande5004 · 4 years ago
Definitely one of the best ways to explain a paper!! Kudos to you.
@hihiendru · 4 years ago
Just like UDA, the emphasis is on the way you augment. And poor UDA got rejected. P.S. LOVE your breakdowns, please keep them coming.
@sora4222 · 1 year ago
I loved the critique at the end. Thanks.
@hungdungnguyen8258 · 4 months ago
Well explained. Thank you.
@vishalahuja2502 · 3 years ago
Yannic, nice coverage of the paper. I have one question: at 15:05, you explain that the pseudo-label is used only if the confidence is above a certain threshold (which is also a hyperparameter). Where is the confidence coming from? It is well known that the confidence score coming out of softmax is not reliable. Can you please explain?
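For reference, here is a minimal sketch of the thresholding step being asked about, in PyTorch-style Python. The function and variable names are illustrative, and 0.95 is the threshold value the paper reports for CIFAR-10:

import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_weak, logits_strong, threshold=0.95):
    # The "confidence" is simply the max softmax probability on the
    # weakly-augmented view; no gradient flows through it.
    probs = torch.softmax(logits_weak.detach(), dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)
    # Only predictions clearing the threshold contribute to the loss.
    mask = (confidence >= threshold).float()
    loss = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (loss * mask).mean()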
@jurischaber6935 · 1 year ago
Thanks again... Great teacher for us students. 🙂
@AmitKumar-ts8br · 3 years ago
Really nice and concise explanation...
@reginaldanderson7218 · 4 years ago
Nice edit
@tengotooborn · 3 years ago
Something which I find weird: isn’t a constant pseudolabel always correct? It seems that there are only positive examples in the scheme which uses the unlabeled data, and so there is nothing in the loss which forces the model to not always output the same pseudolabel for everything. Yes, one can argue that this would fail the supervised loss, but then the question becomes “how is the supervised loss weighted w.r.t. the unsupervised loss”. In any case, it seems that one would also desire to have negative examples in the unsupervised case
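For context, the paper's answer to the weighting question is a fixed sum of the two terms; as a sketch in the paper's notation:

\ell = \ell_s + \lambda_u \ell_u

where \ell_s is the supervised cross-entropy on the labelled batch, \ell_u is the confidence-masked pseudo-label loss, and \lambda_u is a fixed hyperparameter (set to 1 in the paper). Low-confidence unlabelled examples are masked out entirely, so a collapsed constant pseudo-label would have to survive the supervised term on its own.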
@ramonbullock6630 · 4 years ago
I love this content :D
@NooBiNAcTioN1334 · 2 years ago
Fantastic!
@abhishekmaiti8332 · 4 years ago
In what order do they train the model: do they feed the labelled images first and then the unlabelled ones? Also, can two unlabelled images of the same class get different pseudo-labels?
@YannicKilcher · 4 years ago
I think they do everything at the same time. I guess the labelled images can also go the unlabelled way, yes. But not the other way around, obviously :)
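A rough sketch of such a combined step, assuming PyTorch; weak_aug and strong_aug stand in for the paper's augmentation pipelines, and all names here are hypothetical:

import torch
import torch.nn.functional as F

def train_step(model, optimizer, x_labeled, y_labeled, x_unlabeled,
               weak_aug, strong_aug, threshold=0.95, lambda_u=1.0):
    optimizer.zero_grad()
    # Labelled and unlabelled batches pass through the model in one step.
    logits_l = model(weak_aug(x_labeled))
    logits_w = model(weak_aug(x_unlabeled))
    logits_s = model(strong_aug(x_unlabeled))
    # Supervised term on the labelled batch.
    loss_s = F.cross_entropy(logits_l, y_labeled)
    # Pseudo-labels are computed per image from the weak view, so two
    # images of the same class can indeed get different pseudo-labels.
    probs = torch.softmax(logits_w.detach(), dim=-1)
    confidence, pseudo = probs.max(dim=-1)
    mask = (confidence >= threshold).float()
    loss_u = (F.cross_entropy(logits_s, pseudo, reduction="none") * mask).mean()
    (loss_s + lambda_u * loss_u).backward()
    optimizer.step()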
@christianleininger2954 · 4 years ago
Really good job, please keep going.
@Manu-lc4ob · 4 years ago
What software are you using to annotate papers, Yannic? I am using Margin notes but it does not seem as smooth.
@Dr.Z.Moravcik-inventor-of-AGI · 3 years ago
Google again, wow! 😂