This repository contains code for language-guided data generation and language-conditioned diffusion policy training for Scaling Up And Distilling Down. It has been tested on Ubuntu 18.04, 20.04 and 22.04, NVIDIA GTX 1080, NVIDIA RTX A6000, NVIDIA GeForce RTX 3080, and NVIDIA GeForce RTX 3090.
If you find this codebase useful, consider citing:
```bibtex
@inproceedings{ha2023scalingup,
    title={Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition},
    author={Huy Ha and Pete Florence and Shuran Song},
    year={2023},
    eprint={2307.14535},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}
```
If you have any questions, please contact me at huy [at] cs [dot] columbia [dot] edu.
We would like to thank Cheng Chi, Zeyi Liu, Samir Yitzhak Gadre, Mengda Xu, Zhenjia Xu, Mandi Zhao and Dominik Bauer for their helpful feedback and fruitful discussions.
This work was supported in part by a Google Research Award and NSF Awards #2143601 and #2132519. We would like to thank Google for the UR5 robot hardware. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.