
J. Z. Kolter and A. Madry: Adversarial Robustness - Theory and Practice (NeurIPS 2018 Tutorial) 

Steven Van Vaerenbergh
3.4K subscribers
14K views

Abstract: The recent push to adopt machine learning solutions in real-world settings gives rise to a major challenge: can we develop ML solutions that, instead of merely working “most of the time”, are truly reliable and robust? This tutorial will survey some of the key challenges in this context and then focus on the topic of adversarial robustness: the widespread vulnerability of state-of-the-art deep learning models to adversarial misclassification (aka adversarial examples). We will discuss the practical as well as theoretical aspects of this phenomenon, with an emphasis on recent verification-based approaches to establishing formal robustness guarantees. Our treatment will go beyond viewing adversarial robustness solely as a security question. In particular, we will touch on the role it plays as a regularizer and its relation to generalization.
Speakers: J. Zico Kolter and Aleksander Madry
Slides: media.neurips.cc/Conferences/...

Science

Published: 12 Jul 2024

Comments: 10
@andrewstang1123 · 2 months ago
Excellent presentation ❤❤
@rainerzufall1868 · 5 years ago
Very nice, thank you for uploading!
@q44444q · 4 years ago
Excellent work and website; thank you both!
@axe863 · 1 month ago
Neurosymbolic and heterogeneous modalities are needed.
@Kram1032 · 5 years ago
This is amazing. So what would happen if you, say, repeated the Deep Dreaming experiment with a robust network like that? What I found particularly interesting about the Primate->Bird example is that it didn't just draw a bird anywhere in the picture. It mostly focused on painting out the primate and used only that as its canvas for changes; the rest of the picture was relatively unaffected (as far as I could tell, at least).
@Kram1032 · 5 years ago
What happens if, instead of optimizing for a fixed delta, you use a different randomly drawn delta as the perturbation size at each step? Maybe that would keep scores on near-zero noise intact while still being reasonably robust, possibly even against larger perturbations.
@priyamdey3298 · 2 years ago
31:09 onwards for the deeper dive
@anassaljarroudi2275 · 4 years ago
25.06