real-stanford / universal_manipulation_interface

Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots
https://umi-gripper.github.io/
MIT License

Difference of training configs #20

Open Dingry opened 8 months ago

Dingry commented 8 months ago

Hi, thank you for sharing your inspiring work. I am wondering what the primary policy difference is between train_diffusion_unet_timm_umi_workspace and train_diffusion_unet_image_workspace. In my understanding, both condition on visual and proprioceptive observations to predict robot actions. Aside from variations in training hyperparameters, are there any specific design features intended for the UMI task?
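One way to answer this kind of question yourself is to recursively diff the two workspace config trees and inspect which keys actually differ. The sketch below is a minimal, self-contained illustration of that idea; the config keys and values (`obs_encoder`, `lr`, the encoder names) are hypothetical placeholders, not the repo's actual settings, and in practice you would load the two YAML files from the repo instead of the inline dicts.

```python
def diff_configs(a, b, prefix=""):
    """Recursively compare two nested config dicts.

    Returns {dotted_key_path: (value_in_a, value_in_b)} for every leaf
    that differs (missing keys show up as None on one side).
    """
    diffs = {}
    for key in sorted(set(a) | set(b)):
        path = f"{prefix}{key}"
        va, vb = a.get(key), b.get(key)
        if isinstance(va, dict) and isinstance(vb, dict):
            diffs.update(diff_configs(va, vb, prefix=path + "."))
        elif va != vb:
            diffs[path] = (va, vb)
    return diffs

# Hypothetical, simplified stand-ins for the two workspace configs.
timm_umi_cfg = {
    "policy": {"obs_encoder": "timm_vit", "n_obs_steps": 2},
    "training": {"lr": 3e-4},
}
image_cfg = {
    "policy": {"obs_encoder": "resnet18", "n_obs_steps": 2},
    "training": {"lr": 1e-4},
}

for path, (va, vb) in diff_configs(timm_umi_cfg, image_cfg).items():
    print(f"{path}: {va} -> {vb}")
```

Applied to the real configs (e.g. via `yaml.safe_load` on the two workspace YAML files), this surfaces exactly where the two training setups diverge beyond hyperparameters.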