Unsupervised Deep Learning - Google DeepMind & Facebook Artificial Intelligence NeurIPS 2018 

The Artificial Intelligence Channel

Presented by Alex Graves (Google DeepMind) and Marc'Aurelio Ranzato (Facebook)
Presented December 3rd, 2018
This tutorial on Unsupervised Deep Learning covers in detail the approach of simply 'predicting everything' in the data, typically with a probabilistic model, which can be viewed through the lens of the Minimum Description Length (MDL) principle as an effort to compress the data as compactly as possible.
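As a minimal sketch of that compression view (my own illustration, not code from the tutorial): under an ideal entropy code, a model that assigns probability p(x) to the data can encode it in -log2 p(x) bits, so better prediction directly means a shorter description.

```python
import numpy as np

# Toy data: a binary sequence that is mostly zeros (~10% ones).
rng = np.random.default_rng(0)
data = (rng.random(10_000) < 0.1).astype(int)

def description_length_bits(data, p_one):
    """Bits needed to encode `data` under a Bernoulli(p_one) model
    with an ideal entropy code: the sum over symbols of -log2 p(x_i)."""
    p = np.where(data == 1, p_one, 1.0 - p_one)
    return float(-np.log2(p).sum())

# A model that predicts the data better also compresses it better.
print(description_length_bits(data, 0.5))  # ~10000 bits: no structure captured
print(description_length_bits(data, 0.1))  # ~4700 bits: matches the source
```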
Alex Graves is a research scientist at DeepMind. He did a BSc in Theoretical Physics at Edinburgh and a PhD in AI under Jürgen Schmidhuber at IDSIA, followed by postdocs at TU Munich and under Geoffrey Hinton at the University of Toronto.

Comments: 26
@mohammadkhalooei637 (5 years ago)
Thank you so much for your interesting presentation!
@zkzhao279 (5 years ago)
Slides: ranzato.github.io/
@siegmeyer995 (5 years ago)
Really useful! Thank you
@sofdff (1 month ago)
Amazing
@bingeltube (5 years ago)
Highly recommended! Talks by two very renowned researchers.
@kazz811 (5 years ago)
Great talks, but I wish Alex Graves had paced his talk better to focus on the interesting stuff instead of the more well-known ideas.
@Troyster94806 (5 years ago)
Maybe it's possible to use narrow AI to figure out the optimal method of unsupervised learning for us.
@machinistnick2859 (3 years ago)
thank god
@messapatingy (5 years ago)
What is density modelling?
@SudhirPratapYadav (2 years ago)
modelling -> finding out / predicting; basically, learning a model from the data. density -> here it means the probability density function, i.e. the probability distribution of the data/thing being modelled.
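To make the reply concrete, here is a hypothetical minimal example of density modelling: fit the parameters of a probability density function to observed samples, then query the fitted density at new points.

```python
import numpy as np

# Samples assumed to come from some unknown 1-D distribution.
rng = np.random.default_rng(42)
samples = rng.normal(loc=3.0, scale=2.0, size=1000)

# Simplest possible density model: a single Gaussian, whose
# maximum-likelihood parameters are the sample mean and std.
mu, sigma = samples.mean(), samples.std()

def density(x):
    """Estimated probability density function p(x) of the data."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

print(density(3.0))   # high density where the data concentrates
print(density(15.0))  # near zero far away from the data
```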
@reinerwilhelms-tricarico344 (3 years ago)
0.5 < P(the cat sat on the mat | google talk) < 1
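The joke refers to language modelling, one of the density-modelling examples in the talk: an autoregressive model scores a sentence via the chain rule of probability. A sketch with invented per-word conditionals (a real model would predict each one from the preceding context):

```python
import math

# Chain rule: P(w_1..w_n | context) = prod_i P(w_i | context, w_1..w_{i-1}).
# These conditional probabilities are made up for illustration; an actual
# language model (e.g. an autoregressive RNN) would compute them from the prefix.
word_probs = [("the", 0.9), ("cat", 0.7), ("sat", 0.8),
              ("on", 0.9), ("the", 0.95), ("mat", 0.6)]

# Sum log-probabilities for numerical stability, then exponentiate.
log_prob = sum(math.log(p) for _, p in word_probs)
print(math.exp(log_prob))  # ~0.26, the product of the conditionals
```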
@AnimeshSharma1977 (5 years ago)
getting the metric right seems like feature engineering...
@vsiegel (2 years ago)
He does not fully understand, I think.
@jabowery (5 years ago)
About 17 minutes in, I had to stop listening because I felt like I had lost about a standard deviation of IQ. Hasn't this guy ever heard of Solomonoff induction? Hasn't he ever talked to Shane Legg? The intrinsic motivation is lossless compression, and if the agent is active, the decision-theoretic utility determines the explore/exploit tradeoff, as in AIXI. If passive, it just compresses whatever it's given as data.
@theJACKATIC (5 years ago)
That's Alex Graves... well renowned at DeepMind. He's published papers with Shane Legg.
@webxhut (5 years ago)
Fish !
@jabowery (5 years ago)
@@theJACKATIC I listened to the rest and he did, finally, bring in compression as one would expect of someone with his background. And it does appear important. His presentation threw me off. At a meta level, he really should start with the "high level coding" of his presentation: Describe the space in terms of AIXI's unification of Solomonoff Induction and Sequential Decision Theory before breaking down into his 2x2 taxonomy. That way it would be clear that "unsupervised learning" is simply lossless compression toward Solomonoff Induction's use of the KC program's "latent representations". He appears to have his head so far into the techniques of lossless compression that he elides the "top down" definition of AGI as the start of his "high level".
@coolmechelugwu7305 (5 years ago)
@@jabowery Some people are not so advanced in this field, and moving from the known to the unknown is a great technique for passing on knowledge. Great presentation🙋
@jabowery (5 years ago)
@@coolmechelugwu7305 Solomonoff Induction is just Ockham's Razor for the Turing Age -- so there's no real challenge in coming up with an exoteric framing. Sequential Decision Theory can be framed quite simply as well: If you know the outcome of all choices available to you (provided by Solomonoff Induction), Decisions become trivial. The reason I'm hammering on this is that the failure to understand lossless compression's value as the intrinsic utility function of unsupervised learning has untold opportunity costs to society: The enormous resources poured, not only into the social sciences but social "experiments" conducted on vast populations without any serious notion of "informed consent", should be informed by the lossless compression of a wide range of longitudinal social data. Google DeepMind should be at the forefront of this given its background and Google's resources. See this question I put to Kaggle: www.kaggle.com/general/37155#post207935
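A rough way to see the thread's claim that compression measures learnable structure (my sketch with an off-the-shelf compressor, not anything from the talk): a lossless compressor upper-bounds a sequence's description length, so regular data shrinks far below its raw size while patternless data does not.

```python
import os
import zlib

structured = b"the cat sat on the mat. " * 400  # highly regular text
random_bytes = os.urandom(len(structured))      # no learnable structure

def compressed_bits(data: bytes) -> int:
    """Upper bound on the data's description length, in bits,
    given by the size of its zlib-compressed encoding."""
    return 8 * len(zlib.compress(data, 9))

print(len(structured) * 8)            # raw size in bits
print(compressed_bits(structured))    # far smaller: the regularity is exploited
print(compressed_bits(random_bytes))  # roughly as large as the raw size
```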