
Advanced Machine Learning Day 3: Neural Architecture Search 

Microsoft Research

How do you search over architectures?
View presentation slides and more at www.microsoft.com/en-us/resea...

Science

Published: 5 Dec 2018

Comments: 38
@leixun 3 years ago
*My takeaways:*
1. Understanding Reinforcement Learning (RL) is important for understanding Neural Architecture Search (NAS) 1:20
2. What is NAS 3:03
3. A quick introduction to RL 12:17
3.1 Partially Observable Markov Decision Processes (POMDPs) 14:27
3.2 Summary 24:30
4. Markov Decision Processes (MDPs) 25:03
5. Policy gradients 39:10
6. An overview of NAS 48:56
7. Efficient Neural Architecture Search via Parameters Sharing 1:01:10
8. Progressive Neural Architecture Search 1:10:39
9. DARTS: Differentiable Architecture Search 1:14:28
10. Open questions 1:21:28
11. Q&A 1:23:25
@arthomas73 5 years ago
48:00 NAS · 1:01:00 ENAS · 1:10:00 PNAS · 1:14:00 DARTS
@shivamkaushik6637 3 years ago
thank you
@sreelakshmimenon8 3 years ago
Thank you
@r0ckThiz 5 years ago
Oh man, that was a really great lecture! Easy to follow and understand, thanks a lot!
@jagadeeshdondeti6254 4 years ago
Thanks for giving a wonderful lecture on the topic.
@akshayshrivastava97 3 years ago
Good lecture, you have a knack for explaining. This certainly isn't an easy topic to tackle.
@dogeofvenice5624 3 years ago
Great lecture. Many thanks!
@stars-flow 3 years ago
Great lecture. Helped me a lot
@karthiksomayaji3460 1 year ago
Great lecture. Thanks!
@TheAIEpiphany 3 years ago
Nice talk, thanks! Loved the gesticulation haha.
@josephedappully1482 5 years ago
Great lecture; thank you!
@dor00012 2 years ago
Hey man, I'm 30:13 in, and I think giving a simple example of an MDP problem would be a huge help. They didn't even understand what you asked; some guy answered you "To maximize the reward" 😆
@pb25193 4 years ago
1:05:00 The head-scratching moment is mind-blowing. Had me smiling and clapping.
@user-wn1sm8dj7h 4 years ago
thanks!
@user-wn9jq3zn6u 5 years ago
drink the coffee!
@SanduUrsu 5 years ago
Would've been cool if the provided slides preserved the hyperlinks.
@arjunghosh1047 4 years ago
Sir, I am new to this field (i.e., NAS). Where should I start my basic preparation?
@arthomas73 4 years ago
At 1:22:00 he discusses his group's upcoming paper, Efficient Forward Architecture Search: arxiv.org/abs/1905.13360
@brandomiranda6703 4 years ago
There was an important question that I feel wasn't discussed properly: "What makes a problem an RL problem?" The speaker's claim is that it's when the transition function is unknown. But then he was challenged, and it wasn't made clear exactly what the disagreement was. Was anybody able to catch it more precisely? Did both sides agree that it's when the transition function is unknown, and just disagree about whether chess is an RL problem by nature, since you can enumerate all possible ways the opponent might react and thus the transition is "known"? But isn't the speaker right? We don't actually know which action the other player will take, so we need to estimate it from data (even if their possible moves are finite).
@debadeepta 4 years ago
Someone in the audience said they were familiar with methods for coming up with policies that don't use reinforcement learning algorithms, for example alpha-beta search (see this lecture for a nice overview: gki.informatik.uni-freiburg.de/teaching/ss14/gki/lectures/ai06.pdf). But note that alpha-beta search (and other search variants) make an assumption to turn an innately RL problem into a search problem: they assume the transition function is known, in that the other player(s) will play optimally in response to your move. By assuming a transition model, you can plan ahead by simulating what will happen over a (finite) horizon if you play a particular move now. This search is still difficult due to the massive game tree you need to expand, and indeed, until recent advances, most SOTA agents in chess and other board games were mostly variants of search algorithms. But the problem remains inherently an RL problem (the transitions are unknown). You can cast it as a search/planning problem by assuming a transition model, but your mileage will vary with how realistic the assumption of the opponent playing optimally is.
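For concreteness, here is a minimal alpha-beta sketch of the modeling assumption described above: the opponent's reply is simulated as optimal play, which makes the transition deterministic and "known". The `game` interface (`legal_moves`, `apply`, `is_terminal`, `evaluate`) is a hypothetical placeholder, not code from the lecture.

```python
# Minimal alpha-beta sketch. The key modeling assumption lives in the
# minimizing branch: the opponent is assumed to play optimally against us,
# which turns an unknown transition into a known, deterministic one.
# The `game` interface below is hypothetical, just to keep this self-contained.

def alpha_beta(game, state, depth, alpha, beta, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # heuristic value of the position
    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(state):
            value = max(value, alpha_beta(game, game.apply(state, move),
                                          depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will never allow this line
        return value
    else:
        # Assumed transition model: the opponent minimizes our value.
        value = float("inf")
        for move in game.legal_moves(state):
            value = min(value, alpha_beta(game, game.apply(state, move),
                                          depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value
```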
@debadeepta 4 years ago
I didn't want to spend too much time on this aspect, as I wanted to get to the NAS content instead of being bogged down in RL basics. It is difficult to teach a NAS overview to such a broad audience (the set of all Microsoft employees) with widely varying backgrounds. I ended up spending too much time on the RL part just because some NAS algorithms use some of these techniques (policy gradients and evolutionary methods). In later iterations I have abstracted the RL part away a bit more.
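To make the connection between policy gradients and NAS concrete, below is a toy REINFORCE-style controller in the spirit of the policy-gradient approaches covered in the lecture. The two-decision search space, the `train_and_evaluate` placeholder, and the hyperparameters are illustrative assumptions, not the lecture's actual setup.

```python
import math
import random

# Toy search space: choose one op for each of two layers.
OPS = ["conv3x3", "conv5x5", "maxpool"]

# Policy: independent softmax logits per layer choice.
logits = [[0.0] * len(OPS) for _ in range(2)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def sample_architecture():
    """Sample one op per layer from the current policy."""
    return [random.choices(range(len(OPS)), weights=softmax(l))[0]
            for l in logits]

def train_and_evaluate(arch):
    """Placeholder: train the child network, return validation accuracy."""
    return random.random()  # stand-in for a real training run

lr, baseline = 0.1, 0.0
for step in range(100):
    arch = sample_architecture()
    reward = train_and_evaluate(arch)
    baseline = 0.9 * baseline + 0.1 * reward     # moving-average baseline
    advantage = reward - baseline
    # REINFORCE: nudge up the log-prob of sampled ops, scaled by advantage.
    for layer, choice in enumerate(arch):
        probs = softmax(logits[layer])
        for op in range(len(OPS)):
            grad = (1.0 if op == choice else 0.0) - probs[op]  # d log pi / d logit
            logits[layer][op] += lr * advantage * grad
```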
@first-thoughtgiver-of-will2456 3 years ago
I don't think this question is that important. Having read the Sutton et al. book, I would've said reward and value functions and then pointed at the Bellman equation. Finite MDPs are chapter 2 of the RL introductory book, so I don't really understand the original question, unless the answer was "finite". They are all generally different models of the same problem sets. RL is introduced when alpha-beta pruning and other more exhaustive searches are impractical due to the scale of the domain (or for sample efficiency) and a larger heuristic must be introduced. This trend continues to this day with deep RL, where even more approximation is used to scale to domains (state-action spaces) previously unsolvable (AlphaGo, OpenAI Five, AlphaStar). I'm sure theoretical computing models would show exhaustive search (not considering observability) can do the same, but the problem, and the field of study therein, is based on finite compute resources: heuristics.
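For reference, the Bellman optimality equation the comment alludes to, in standard Sutton & Barto notation (value of a state under the optimal policy, given transition probabilities p and discount factor γ):

```latex
v_*(s) = \max_{a} \sum_{s',\, r} p(s', r \mid s, a)\,\bigl[ r + \gamma\, v_*(s') \bigr]
```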
@gokulakrishnancandassamy4995 2 years ago
@@debadeepta Great explanation, Dr. Dey! Thank you!
@dor00012 2 years ago
Didn't you come to talk about NAS? It doesn't even use that much RL anymore.
@shuphoo 3 years ago
It says "Day 3" what about the day 1 & 2 ?
@dor00012 2 years ago
But man, wow, you ask great questions.
@faruknane 5 years ago
Where was this course held? Does anyone know?
@debadeepta 5 years ago
A class taught by Microsoft Research for engineers in the rest of the company.
@pengpai812 5 years ago
@@debadeepta Thanks for your response. Do you have any idea where we could find day 1 and day 2 tutorial videos...?
@dor00012 2 years ago
38:47 Dude, next time tell a guy like this that you can talk about it later, and then disappear.
@prathameshdinkar2966 3 years ago
Nice try! But he didn't drink the coffee
@brandomiranda6703 4 years ago
Why is it handy that he’s not taking derivatives of the state distribution in the policy gradient theorem?
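For anyone puzzling over the same point, the policy gradient theorem (in Sutton & Barto's notation) states:

```latex
\nabla_\theta J(\theta) \;\propto\; \sum_{s} \mu_\pi(s) \sum_{a} q_\pi(s, a)\, \nabla_\theta \pi(a \mid s, \theta)
```

The on-policy state distribution \mu_\pi appears only as a weighting and is never differentiated. That is what makes it "handy": \mu_\pi depends on the unknown environment dynamics, so its gradient cannot be computed, while the expression above can be estimated from sampled trajectories (as in REINFORCE).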
@merlepatterson 5 years ago
*Unsuspecting elderly American woman picks up the phone* ...."Hello Madam, This is Windows Service Department, your computer is sending us errors"
@merlepatterson 5 years ago
@@GAURAVKAUL84 Rain check?