furukawa-ai / deeplearning_papers

AI paper-reading notes

Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping #101

Closed msrks closed 1 year ago

msrks commented 6 years ago

A study on transfer learning for robots (from a simulation environment to the real-world environment). By combining pixel-level domain adaptation with feature-level domain adaptation, the authors reduce the amount of real-world data needed by up to 50x.


Goal of the inference model

Given the image at time zero (before the hand occludes the scene), the current image, and the manipulator's motion vector, the model returns the probability that the grasp will succeed.
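The interface of that predictor can be sketched as follows. The paper's model is a deep CNN; this numpy stand-in with random linear projections only illustrates the inputs (two RGB frames plus a motion vector) and the scalar success-probability output. All shapes, dimensions, and names here are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GraspSuccessPredictor:
    """Toy stand-in for the grasp-prediction network C(I_0, I_t, v).

    The real model is a deep CNN over both images; here we use random
    linear projections just to show the input/output contract.
    """
    def __init__(self, img_shape=(64, 64, 3), motion_dim=5, feat_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        n = int(np.prod(img_shape))
        self.W_img = rng.standard_normal((feat_dim, 2 * n)) * 0.01   # both frames
        self.W_mot = rng.standard_normal((feat_dim, motion_dim)) * 0.1
        self.w_out = rng.standard_normal(feat_dim) * 0.1

    def __call__(self, img_t0, img_t, motion):
        # Concatenate the occlusion-free frame and the current frame.
        imgs = np.concatenate([img_t0.ravel(), img_t.ravel()])
        h = np.tanh(self.W_img @ imgs + self.W_mot @ motion)
        # Probability that executing `motion` leads to a successful grasp.
        return float(sigmoid(self.w_out @ h))

predictor = GraspSuccessPredictor()
p = predictor(np.zeros((64, 64, 3)), np.zeros((64, 64, 3)), np.zeros(5))
```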


Overview of the training model

1. GraspGAN

2. Details of the classifier
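The two components above combine a pixel-level GAN objective (GraspGAN: a generator that makes simulated images look real) with a feature-level domain-adversarial objective (a DANN-style domain classifier on intermediate features). The loss structure might be sketched as below; this is a hedged illustration of the kinds of terms involved, not the paper's exact formulation, and the `lam` weighting is an assumed hyperparameter.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

# Pixel-level adaptation (GraspGAN): generator G maps simulated images to
# real-looking ones; discriminator D tries to tell adapted images from real.
def generator_loss(d_on_adapted, task_loss, lam=1.0):
    # Fool the discriminator (target label 1) while keeping the grasp
    # classifier's task loss low on the adapted images.
    return bce(d_on_adapted, np.ones_like(d_on_adapted)) + lam * task_loss

def discriminator_loss(d_on_real, d_on_adapted):
    # Real images labeled 1, adapted (fake) images labeled 0.
    return 0.5 * (bce(d_on_real, np.ones_like(d_on_real))
                  + bce(d_on_adapted, np.zeros_like(d_on_adapted)))

# Feature-level adaptation (DANN-style): a domain classifier on intermediate
# features, trained through gradient reversal so features become domain-invariant.
def feature_domain_loss(domain_pred, domain_label):
    return bce(domain_pred, domain_label)

# Example values: a well-trained discriminator on easy inputs has low loss.
d_loss = discriminator_loss(np.array([0.999]), np.array([0.001]))
g_loss = generator_loss(np.array([0.5]), task_loss=0.2)
```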


Results


Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.

msrks commented 6 years ago

https://research.googleblog.com/2017/10/closing-simulation-to-reality-gap-for.html