
A New Perspective on Adversarial Perturbations 

Simons Institute
7K views

Aleksander Madry (MIT)
simons.berkeley.edu/talks/tbd-57
Frontiers of Deep Learning

Published: Aug 5, 2024

Comments: 9
@Kram1032 5 years ago
Wow, people had a lot of questions during this one. I can see why, though. This was a great talk about all-around great work!
@volotat 5 years ago
Fantastic work. I wonder if this approach can potentially beat GANs at image generation one day. Very impressive.
@TheMarcusrobbins 5 years ago
Oh god, this is fascinating. Is there some domain of perception that is completely inaccessible to us? God, find out what the features look like already!!!
@paulcurry8383 3 years ago
I feel like this parallels the universal adversarial triggers of NLP models. Those are effective because they exploit a low-level feature of the dataset the model is trained on. I wonder how you could apply "noise" to the input of an NLP model to reduce low-level feature dependence... perhaps substituting words for close synonyms?
@PaulLai 2 years ago
A token in a sentence is more analogous to a pixel in an image. Adding noise could mean adding random words that don't mislead a human but do mislead the model.
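A minimal sketch of what token-level "noise" of the two kinds suggested above might look like (synonym substitution and random-word insertion). The synonym table and filler vocabulary below are made-up toy data, not anything from the talk; a real system might draw synonyms from WordNet or embedding neighbours.

```python
import random

# Toy synonym table and filler vocabulary -- purely illustrative.
SYNONYMS = {
    "quick": ["fast", "rapid"],
    "happy": ["glad", "joyful"],
    "big": ["large", "huge"],
}
FILLERS = ["really", "quite", "basically"]

def synonym_noise(tokens, p=0.3, rng=None):
    """Replace each token that has known synonyms with probability p."""
    rng = rng or random.Random(0)
    return [
        rng.choice(SYNONYMS[t]) if t in SYNONYMS and rng.random() < p else t
        for t in tokens
    ]

def insertion_noise(tokens, p=0.2, rng=None):
    """Insert a random filler word before each token with probability p."""
    rng = rng or random.Random(0)
    out = []
    for t in tokens:
        if rng.random() < p:
            out.append(rng.choice(FILLERS))
        out.append(t)
    return out

sentence = "the quick fox is happy".split()
print(synonym_noise(sentence, p=1.0))
print(insertion_noise(sentence, p=0.5))
```

Synonym substitution roughly preserves meaning for a human reader, while insertion changes token positions, which is why the two perturbations probe different model sensitivities.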
@psd993 1 year ago
There was a paper from Ilyas et al. out of MIT that proposed that adversarial examples come from well-generalizing features in the datasets. They call these features "brittle" because they are not what humans would pick up on.
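For context, the adversarial perturbations being discussed are typically constructed by following the loss gradient. A minimal FGSM-style sketch on a toy linear score (the weights, input, and eps value are all illustrative, not from the talk):

```python
import numpy as np

# Toy linear "classifier": score = w @ x; positive score => class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.normal(size=16)

def fgsm(x, w, eps):
    """One FGSM step: perturb x against the sign of the score gradient.

    For a linear score s(x) = w @ x, the gradient w.r.t. x is just w,
    so the perturbation that most decreases the score under an L-inf
    budget eps is -eps * sign(w).
    """
    return x - eps * np.sign(w)

x_adv = fgsm(x, w, eps=0.5)
print(float(w @ x), float(w @ x_adv))  # the adversarial score is strictly lower
```

The same idea carries over to deep networks, where the gradient is obtained by backpropagation instead of being the weight vector itself.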
@aBigBadWolf 5 years ago
At 16:55, the comment is valid: the second model just learned to imitate the previous one. The fact that the classifier architecture is slightly different is irrelevant.
@aBigBadWolf 5 years ago
The presenter has no idea how humans learn.
@janzaucha7307 4 years ago
How do humans learn?