
GFlowNets for generative active learning | Amazon Science 

Amazon Science
10K views

In October 2021, Yoshua Bengio, one of the world's leading experts in artificial intelligence, gave a keynote presentation at Amazon's annual machine learning conference. Bengio is a professor in the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).
In his presentation, Bengio considers the following setup: an ML system can interact with an expensive oracle (the "real world") by iteratively proposing batches of candidate experiments and then obtaining a score for each experiment ("how well did it work?").
The data from all the rounds of queries and results can be used to train a proxy for the oracle, a form of world model. The world model can then be queried (much more cheaply than the oracle) in order to train, in silico, a generative model that proposes the experiments forming the next round of queries.
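To make this loop concrete, here is a minimal, runnable sketch in Python. The toy oracle, nearest-neighbor proxy, and top-k proposal step are hypothetical stand-ins chosen for brevity, not the models described in the talk.

```python
import random

def oracle(x):
    """Expensive 'real world' score (toy stand-in: a noisy quadratic)."""
    return -(x - 3.0) ** 2 + random.gauss(0, 0.1)

class WorldModel:
    """Cheap proxy trained on (experiment, score) pairs; here a 1-NN lookup."""
    def __init__(self):
        self.data = []
    def fit(self, data):
        self.data = list(data)
    def predict(self, x):
        if not self.data:
            return 0.0
        nearest = min(self.data, key=lambda d: abs(d[0] - x))
        return nearest[1]

def propose_batch(model, batch_size=4, pool_size=100):
    """Generative step (here: sample a pool, keep the proxy's top candidates)."""
    pool = [random.uniform(-10.0, 10.0) for _ in range(pool_size)]
    return sorted(pool, key=model.predict, reverse=True)[:batch_size]

data, model = [], WorldModel()
for _ in range(10):
    batch = propose_batch(model)              # propose candidate experiments
    data += [(x, oracle(x)) for x in batch]   # query the expensive oracle
    model.fit(data)                           # retrain the cheap proxy
best = max(data, key=lambda d: d[1])
print(f"best experiment so far: x={best[0]:.2f}, score={best[1]:.2f}")
```

In the setting of the talk, the proposal step would itself be a learned generative model trained against the proxy, which is the expensive part that GFlowNets are designed to handle.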
Systems that can do this well could be applied to interactive recommendation, discovering new drugs or new materials, controlling plants, or learning how to reason and build a causal model. They touch many interesting ML research threads, including active learning, reinforcement learning, representation learning, exploration, meta-learning, Bayesian optimization, and black-box optimization.
What should the training criterion for this generative model be? Why not simply use Markov chain Monte Carlo (MCMC) methods to generate these samples? Is it possible to bypass the mode-mixing limitation of MCMC? How can the generative model guess where good experiments might be before having tried them? How should the world model represent its epistemic uncertainty, i.e., where it expects to predict well or poorly?
On the path to answering these questions, he introduces a new deep learning framework called GFlowNets, which amortizes the very expensive work normally done by MCMC to convert an energy function into samples. GFlowNets open the door to fascinating possibilities for probabilistic modeling, including the ability to quickly estimate marginalized probabilities and to efficiently represent distributions over sets and graphs.
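For intuition, below is a hedged sketch of the trajectory-balance objective later proposed for training GFlowNets (Malkin et al., 2022), shown only to make the "energy function into samples" idea concrete; the tensors and the reward here are illustrative assumptions, not details from the talk.

```python
import torch

def trajectory_balance_loss(log_z, log_pf, log_pb, log_reward):
    """log_pf/log_pb: per-step forward/backward log-probs along one trajectory.

    At the optimum, Z * prod P_F(trajectory) = R(x) * prod P_B(trajectory),
    so the sampler draws x with probability proportional to R(x) = exp(-energy),
    amortizing the sampling work that MCMC would otherwise redo for each sample.
    """
    lhs = log_z + log_pf.sum()
    rhs = log_reward + log_pb.sum()
    return (lhs - rhs) ** 2

# Toy usage with a three-step trajectory:
log_z = torch.tensor(0.0, requires_grad=True)       # learnable log-partition estimate
log_pf = torch.log(torch.tensor([0.5, 0.4, 0.9]))   # forward policy log-probs
log_pb = torch.log(torch.tensor([1.0, 0.5, 1.0]))   # backward policy log-probs
loss = trajectory_balance_loss(log_z, log_pf, log_pb,
                               log_reward=torch.tensor(1.2).log())
loss.backward()  # gradients flow into log_z (and into the policy in a real model)
```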
Follow us:
Website: www.amazon.science
Twitter: / amazonscience
Facebook: / amazonscience
Instagram: / amazonscience
LinkedIn: / amazonscience
Newsletter: www.amazon.science/newsletter
#AmazonScience #MachineLearning

Science

Published: 8 Jul 2024

Comments: 7
@user-hh3nx4ds5b · 2 years ago
So cool!
@Mostafa-cv8jc · 2 years ago
Haha, the part about bats had me laugh out loud.
@ramakrishna5480 · 2 years ago
👍👍👍
@ChocolateMilkCultLeader · 2 years ago
Great video. Do you guys do collaborations and guest presentations?
@paulcurry8383 · 2 years ago
I still don't really get what's new here. It sounds like he's proposing a way to improve generative sampling by modeling a system as a neural network with flow constraints over states? I guess I'd like to see a more concrete implementation on something RL struggles with, like hand movement, to really grasp what he's proposing. Otherwise it has a similar ring to liquid neural networks, which supposedly have some great mathematical properties but still don't work in practice.
@ThePowerExcess · 2 years ago
I love your take on LSMs, even though I love them and I spent like 3 years working on them.
@maloxi1472 · 1 year ago
It's not clear what it is exactly that you're claiming/asking in the first part of your comment.