
"Differential Privacy and Machine Unlearning" by Aaron Roth (12/03)

Introduction:
Today, we have Aaron Roth. He is a professor at Penn. Before Penn, he spent a year as a postdoc at Microsoft Research New England. Before that, he received his PhD from CMU, where he was advised by Avrim Blum. Aaron works on various topics, including algorithms, machine learning, game theory, and mechanism design, with a focus on privacy and fairness. He is the recipient of several awards, including a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, and an NSF CAREER Award. Today, we will hear about his work on differential privacy and machine unlearning.
Abstract:
The problem of data deletion or "machine unlearning" is to remove the influence of a data point on a trained model, with computational cost that is substantially better than the baseline solution of fully retraining the model. Whereas differential privacy asks that the same algorithm run on different (neighboring) inputs yield nearby distributions, the data deletion problem requires that two different algorithms (full retraining vs. a sequence of deletion operations) yield nearby distributions when run on the same input. So the two goals are not the same; nevertheless, techniques from differential privacy carry over in natural ways to the data deletion problem. In this talk, I'll walk through two simple vignettes that illustrate this point. The work I will discuss is from a pair of papers that are joint with Varun Gupta, Chris Jung, Seth Neel, Saeed Sharifi, and Chris Waites.
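
To make the contrast in the abstract concrete, here is a rough formalization. The notation below (A for the training algorithm, R_A for the deletion operation, S for the dataset, z for the deleted point, and the (epsilon, delta) closeness parameters) is introduced here for illustration and is not taken verbatim from the talk.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Differential privacy: ONE algorithm $A$, TWO neighboring inputs.
% For all datasets $S, S'$ differing in a single point and all events $E$:
\[
  \Pr[A(S) \in E] \;\le\; e^{\varepsilon}\,\Pr[A(S') \in E] + \delta .
\]

% Data deletion ("machine unlearning"): TWO algorithms, ONE input.
% Applying a cheap deletion operation $R_A$ to the model already trained
% on $S$ should approximate full retraining on $S \setminus \{z\}$:
\[
  \Pr\!\big[ R_A(A(S), z) \in E \big]
  \;\le\; e^{\varepsilon}\,\Pr\!\big[ A(S \setminus \{z\}) \in E \big] + \delta ,
\]
% together with the symmetric inequality exchanging the two sides.

\end{document}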

Published: January 7, 2022