Yunzhu Li
Comments
@Patrick-wn6uj
@Patrick-wn6uj 3 months ago
any links to the paper?
@delegatewu
@delegatewu 3 months ago
nice presentation. Thank you.
@LeoTX1
@LeoTX1 3 months ago
Good representation!
@yukuanlu6676
@yukuanlu6676 7 months ago
Excellent! I'm doing world models research and this is quite informative. Thanks Prof. Li!
@FahadRazaKhan
@FahadRazaKhan 2 years ago
Hi Li, this is very interesting work. I have a couple of questions, if you don't mind answering: 1) How do you sync the tactile and visual information? 2) Can this system predict other tasks for which it was not trained?
@yunzhuli2308
@yunzhuli2308 2 years ago
Hi Fahad, thank you for your interest in our work! 1. We record the timestamps for both the tactile and visual recordings. The stamps are then used to synchronize the collected frames from the different data sources. 2. The test set contains motion trajectories that have different initial configurations and action sequences, but they are still from the same task the model was trained on. We didn't test the model's generalization ability on unseen tasks; we would expect some level of generalization if the model were trained on a diversified set of tasks, but more experiments are needed to make concrete statements.
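A minimal sketch of the timestamp-based synchronization described in the reply above, assuming each modality is recorded as a sorted list of (timestamp, frame) pairs; the function name, data layout, and tolerance are illustrative assumptions, not the project's actual code:

```python
import bisect

def sync_streams(visual, tactile, max_gap=0.02):
    """Pair each visual frame with the nearest-in-time tactile frame.

    visual, tactile: lists of (timestamp_sec, frame), sorted by timestamp.
    max_gap: assumed tolerance (seconds); pairs farther apart are dropped.
    """
    tactile_times = [t for t, _ in tactile]
    pairs = []
    for t_v, frame_v in visual:
        i = bisect.bisect_left(tactile_times, t_v)
        # Candidate neighbors: the tactile frames just before and after t_v.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(tactile)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(tactile_times[k] - t_v))
        if abs(tactile_times[j] - t_v) <= max_gap:
            pairs.append((frame_v, tactile[j][1]))
    return pairs
```

With a tolerance on the order of one frame interval, this keeps only visual/tactile pairs that were captured close enough in time to be treated as simultaneous.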
@FahadRazaKhan
@FahadRazaKhan 2 years ago
@yunzhuli2308 Thanks.
@gowthamkanda
@gowthamkanda 3 years ago
Great work!
@ycyang2698
@ycyang2698 3 years ago
Inspiring!
@ashwinsrinivas7278
@ashwinsrinivas7278 4 years ago
Uber cool!
@justdrive5287
@justdrive5287 5 years ago
That's brilliant, sir. What tools did you use to write the code, and how exactly do the machines learn this? Would love to know.
@yunzhuli2308
@yunzhuli2308 5 years ago
You can find more information, including the paper and code, at dpi.csail.mit.edu/.
@justdrive5287
@justdrive5287 5 years ago
Thank you, Mr. Li. Appreciate what you are doing.