
DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning (Presentation) 

Robotic Systems Lab: Legged Robotics at ETH Zürich
35K subscribers
21K views

Published: 3 Oct 2024

Comments: 28
@rodrigob · 4 years ago
Looking forward to part 2!
@revimfadli4666 · 4 years ago
7:44 looks straight out of an 80's retro game where people ride robots instead of cars
@ching-anwu2410 · 4 years ago
Nice work!
@vmguerra · 4 years ago
Wow... nice results!
@leejunja · 4 years ago
Hi Vassilios!
@nunobartolo2908 · 7 months ago
Why is this better than model-based MPC with pure mathematical optimization? Is it just better because it can learn to handle noisy contacts?
@Frankx520 · 4 years ago
Love it. Thank you!
@ahmedwaly9073 · 4 years ago
Waiting for part two
@ArbazFirozKhanM22ME020 · 1 year ago
What software do you use for simulations?
@shivohcn1684 · 4 years ago
I am in love! 😍❤️
@ycyang2698 · 4 years ago
Just had a quick look at your paper; great work and thanks for sharing. Quick question: for the GP controller, is it right that you sample from the distribution of the policy until a feasible action is found? What if the probability of a feasible sample is very low in a certain situation?
@leggedrobotics · 4 years ago
Thanks, we are glad you enjoyed it. During deployment we do not need to re-sample until the action is valid; we only need to compute the mean of the policy's distribution to generate phase plans. That's the point of formulating the MDP in order to train the policy with RL: instead of using it as in sampling-based planning methods, we train the parameterized policy distribution with RL so it learns to always output valid phase transitions.
@ycyang2698 · 4 years ago
@leggedrobotics Thanks, understood.
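The reply above describes a common pattern for deploying a stochastic RL policy deterministically: sample from the action distribution during training for exploration, but take only its mean at deployment. A minimal sketch of that idea, using a hypothetical toy policy (the `policy` function below is a placeholder, not the authors' actual network), assuming a diagonal Gaussian action distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(observation):
    """Toy stand-in for a trained policy network: returns the parameters
    (mean, std) of a diagonal Gaussian over actions (e.g. phase plans)."""
    mean = np.tanh(observation)           # placeholder for a network forward pass
    std = np.full_like(mean, 0.1)         # fixed exploration noise scale
    return mean, std

obs = np.array([0.2, -0.5, 1.0])
mean, std = policy(obs)

# Training-time behaviour: sample from the distribution for exploration.
exploratory_action = mean + std * rng.standard_normal(mean.shape)

# Deployment-time behaviour (as described in the reply above): no
# re-sampling until valid, just take the distribution's mean directly.
deployed_action = mean
```

Because the mean is a deterministic function of the observation, the deployed controller needs no rejection sampling; RL training is what pushes the whole distribution toward valid outputs.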
@TheChromePoet · 2 years ago
@leggedrobotics Hi, is it possible to fast-forward the learning process so that the robot can spend 1 million years learning in only a few weeks?
@jaesungahn1603 · 3 years ago
Good work! I have a question: how do you get the terrain information? IMU, camera (vision), lidar? Thank you in advance.
@wayneyue1662 · 4 years ago
This is really impressive; the explanation goes quite deep.
@apollodong6521 · 3 years ago
This is very good. Is the code open source? Thank you very much!
@ShustrovIliya · 4 years ago
Waiting so much to see these ANYmals in action.
@manuel_ahumada · 4 years ago
Is there a way I could get access to the rviz configuration for the 80s theme? Looks very cool!
@leggedrobotics · 3 years ago
This visualization was made in raisimOgre, so unfortunately there is no easy-to-use configuration to share. Stay tuned for when we release the code, though.
@prajulp · 4 years ago
In which software are these 3D simulations done?
@leggedrobotics · 3 years ago
This work uses the RaiSim physics engine, which was developed in-house. Link: raisim.com/
@retrorobodog · 4 years ago
:)
@hamidsk2573 · 4 years ago
Is there any code to share?
@leggedrobotics · 4 years ago
Unfortunately not yet. We do plan to open-source the code later this year, though.
@wahabfiles6260 · 4 years ago
@leggedrobotics That would be a great contribution!
@williamlee7119 · 4 years ago
@leggedrobotics What is the constraint on the speed at which it walks? Does it have to go at that speed, or is that as fast as possible?
@BarkanUgurlu · 1 year ago
Great work. I hope "part 2: back with vengeance" is a reference to Last Ninja 2 (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Gfkk9BnFB7w.html)