CVPR 2021 Keynote -- Pieter Abbeel -- Towards a General Solution for Robotics. 

Pieter Abbeel
20K subscribers · 29K views · Published 21 Aug 2024
Comments: 21
@minakhan1390 (3 years ago)
I love your work! Yours is the first online AI class I took ~7 years ago, even before I knew who you were! Thank you for being a constant teacher and inspiration.
@PieterAbbeel (3 years ago)
so nice to read, hopefully we'll cross paths at one of the major conferences some day
@Dannyboi91 (3 years ago)
Thank you for sharing these interesting ideas! As a junior software robotics engineer with a keen interest in RL/robot learning, your papers & materials have been an inspiration to me and a source of optimism in the field of robotics when I've questioned it. Excited and looking forward to more to come from you and Covariant! (And also thanks for the great podcast!)
@Shah_Khan (3 years ago)
Thanks, Pieter, for sharing such great material. I've been following you for a long time, and now I also want to work seriously on robotics and AI.
@PieterAbbeel (3 years ago)
wonderful, hopefully we'll cross paths at one of the major conferences some time
@Shah_Khan (3 years ago)
@@PieterAbbeel I hope your words come true.
@ProlificSwan (3 years ago)
It feels like robots are still missing some sense of "why" in all these combined approaches. We have neural networks which can conceivably handle concepts of experience/curiosity (RL) and association (transformers, image-word, etc.), but there's no model that can really handle a surprise, create a hypothesis for the surprise, and then test the hypothesis (or make new ones and test them) until it is satisfied with why that experience occurred. You could argue that a robot does not need to understand "why" something happened, but I think that until it does, it will be quite brittle and unable to handle a variety of edge cases or unexpected experiences without a considerable amount of human-directed fine-tuning. All that said, perhaps I'm just summarizing the area of continual learning. Hard to say.
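(For readers unfamiliar with the "curiosity" idea mentioned above, here is a minimal, hypothetical sketch of one standard formalisation: intrinsic reward as the prediction error of a learned forward model. The class name, sizes, and architecture are illustrative assumptions, not anything from the talk, and it deliberately does not address the hypothesis-testing "why" the comment asks for.)

```python
import torch
import torch.nn as nn

class PredictionErrorCuriosity(nn.Module):
    """Sketch: 'surprise' as an intrinsic reward, i.e. the error of a learned
    forward model predicting the next latent state (in the spirit of
    curiosity-driven exploration). All sizes here are assumptions."""

    def __init__(self, latent_dim=64, action_dim=4):
        super().__init__()
        self.forward_model = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def intrinsic_reward(self, z_t, action, z_next):
        # Predict the next latent state from the current latent and the action.
        pred = self.forward_model(torch.cat([z_t, action], dim=-1))
        # Surprise = squared prediction error; the same quantity also serves
        # as the training loss for the forward model.
        return 0.5 * (pred - z_next).pow(2).mean(dim=-1)
```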
@kenfuliang (3 years ago)
Thank you for sharing these interesting research directions.
@BlockDesignz (3 years ago)
Brilliant keynote, thank you for uploading.
@user-bw5dy8oq7c (3 years ago)
Thanks a lot for sharing!
@zhenghaopeng6633 (3 years ago)
Hi Pieter! Can I upload this video to Bilibili, a Chinese video website? I guess there are many viewers who would be interested in this! Thanks!
@PieterAbbeel (3 years ago)
sure, go ahead!
@eranjitkumar11 (3 years ago)
Thank you for this presentation. The CURL and RAD papers really show the exciting potential of unsupervised/self-supervised learning and data augmentation for robotics. A question: regarding your statement about the gap in results between image-based and state-based RL on the hard problems (16:20-17:35), do you feel that in those situations it's not possible to extract more useful information and close the gap, maybe with a different auxiliary task such as a non-contrastive one? I feel like a generative model would learn more, but I heard Aravind Srinivas mention that a reconstructive loss would potentially demand more focus (compared to the contrastive loss) at the expense of the RL task, which would in the end hurt sample efficiency. Are generative models not a good fit for this framework, as presented in the CURL paper?
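(Since the contrastive auxiliary task comes up here, below is a rough sketch of a CURL-style InfoNCE loss in PyTorch. `query_encoder`, `key_encoder`, `W`, and `augment` are assumed placeholders rather than the paper's actual code.)

```python
import torch
import torch.nn.functional as F

def curl_style_loss(query_encoder, key_encoder, W, obs, augment):
    """Sketch of an InfoNCE auxiliary loss on two random augmentations of the
    same pixel observations, in the spirit of CURL. `key_encoder` would
    typically be a momentum (EMA) copy of `query_encoder`, and `augment` a
    random crop as in CURL/RAD; all of these are assumptions here."""
    q = query_encoder(augment(obs))                   # (B, D) anchor embeddings
    with torch.no_grad():
        k = key_encoder(augment(obs))                 # (B, D) positive embeddings
    logits = q @ W @ k.t()                            # (B, B) bilinear similarities
    logits = logits - logits.max(dim=1, keepdim=True).values  # numerical stability
    labels = torch.arange(q.size(0), device=q.device) # positives on the diagonal
    return F.cross_entropy(logits, labels)
```

A reconstructive (generative) auxiliary task would roughly replace the cross-entropy term with a pixel-reconstruction error from a decoder, which is the trade-off against the RL objective that the comment asks about.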
@TheGagman2000 (3 years ago)
Thanks for the amazing talk! I wonder what your opinion is about using RL methods in the animal world? I've studied animal behavior from a neuroscience perspective (specifically bird song learning). It's a motor learning task (the bird learns to sing like a model bird). The reward function used by birds is unknown. Similarly, there are other learning problems like monkeys learning how to manipulate new objects. Can we use modern RL methods to describe goal-directed complex behavior?
@PieterAbbeel (3 years ago)
@Pedro Abreu indeed, it's likely something similar to CURL (or RAD or DrQ or SPR) could be used in audio space for this (whereas the original works are in image space); that said, there is still the other question of how these birds (and other animals/humans) come up with their own reward functions, which is also really interesting to think about
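(To make "something similar to CURL in audio space" concrete, here is one hypothetical way the augmentation step could look on log-mel spectrograms; the tensor layout and crop sizes are assumptions, and the resulting views could feed the same contrastive loss sketched above.)

```python
import torch

def random_spectrogram_crop(spec, freq_crop=96, time_crop=128):
    """Sketch: random time/frequency crop of a (B, 1, F, T) log-mel
    spectrogram batch, analogous to the random image crops used by
    CURL/RAD/DrQ, so a contrastive loss could be reused on audio."""
    _, _, n_freq, n_time = spec.shape
    assert n_freq >= freq_crop and n_time >= time_crop, "spectrogram too small to crop"
    f0 = torch.randint(0, n_freq - freq_crop + 1, (1,)).item()
    t0 = torch.randint(0, n_time - time_crop + 1, (1,)).item()
    return spec[:, :, f0:f0 + freq_crop, t0:t0 + time_crop]
```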
@MrAlextorex (3 years ago)
Unfortunately the hardware required for such a general framework for embodied robotics is not yet there. We could use 5G + centralized cloud AI hardware, which would result in some sort of costly & dangerous Skynet.
@PieterAbbeel (3 years ago)
definitely not looking for skynet; and more compute will always help; but I'd still be curious what kind of useful, fairly general systems we can build with today's compute if we were to follow the general ideas from the talk...
@hungry4001 (3 years ago)
hi, teacher~
@codelaborative7127 (3 years ago)
this is great but I just need something that is simple to use now :(
@NextFuckingLevel (3 years ago)
So, decision transformer 👐😂
@PieterAbbeel (3 years ago)
lol