
Transporter Networks: Rearranging the Visual World for Robotic Manipulation 

Andy Zeng

Learn more: transporternets.github.io/
Abstract: Robotic manipulation can be formulated as inducing a sequence of spatial displacements: where the space being moved can encompass an object, part of an object, or end effector. In this work, we propose the Transporter Network, a simple model architecture that rearranges deep features to infer spatial displacements from visual input -- which can parameterize robot actions. It makes no assumptions of objectness (e.g. canonical poses, models, or keypoints), it exploits spatial symmetries, and is orders of magnitude more sample efficient than our benchmarked alternatives in learning vision-based manipulation tasks: from stacking a pyramid of blocks, to assembling kits with unseen objects; from manipulating deformable ropes, to pushing piles of small objects with closed-loop feedback. Our method can represent complex multi-modal policy distributions and generalizes to multi-step sequential tasks, as well as 6DoF pick-and-place. Experiments on 10 simulated tasks show that it learns faster and generalizes better than a variety of end-to-end baselines, including policies that use ground-truth object poses. We validate our methods with hardware in the real world.
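The core operation the abstract describes is rearranging deep features to infer a spatial displacement: a feature crop around the pick location is cross-correlated against the full scene feature map, and the peak of the resulting heatmap scores the place location. A minimal NumPy sketch of that transport step is below; the function name, crop size, and shapes are illustrative assumptions, not the paper's actual implementation (which also searches over rotations of the crop).

```python
import numpy as np

def transport_scores(scene_feats, pick_yx, crop=8):
    """Cross-correlate a feature crop centered on the pick location
    with the full scene feature map, scoring every candidate place
    location. scene_feats: (H, W, C) dense features from a visual
    encoder; pick_yx: (row, col) of the chosen pick."""
    H, W, C = scene_feats.shape
    pad = crop // 2
    padded = np.pad(scene_feats, ((pad, pad), (pad, pad), (0, 0)))
    y, x = pick_yx
    # Crop of features around the pick, used as a correlation kernel.
    kernel = padded[y:y + crop, x:x + crop, :]
    scores = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + crop, j:j + crop, :]
            scores[i, j] = np.sum(kernel * patch)  # correlation response
    return scores  # argmax over (H, W) gives the best place location
```

In practice the correlation is computed as a single convolution on the GPU rather than with explicit loops, and repeating it for each rotated copy of the crop is what lets the model exploit spatial symmetries.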
Narration: Laura Graesser

Science

Published: 15 Jul 2024
