Transform2Act

This repo contains the official implementation of our paper:

Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design
Ye Yuan, Yuda Song, Zhengyi Luo, Wen Sun, Kris Kitani
ICLR 2022 (Oral)
website: https://sites.google.com/view/transform2act | paper

Installation

Environment
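
A minimal setup sketch, assuming the repository ships a requirements.txt and that the MuJoCo-based environments (hopper, ant, swimmer, gap) need mujoco-py alongside PyTorch; check the repository for the actual dependency list:

git clone https://github.com/Khrylx/Transform2Act.git
cd Transform2Act
pip install -r requirements.txt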

Pretrained Models

Training

You can train your own models using the provided configs in design_opt/cfg:

python design_opt/train.py --cfg hopper --gpu 0

You can replace hopper with {ant, gap, swimmer} to train other environments. Here is the correspondence between the configs and the environments in the paper: hopper - 2D Locomotion, ant - 3D Locomotion, swimmer - Swimmer, and gap - Gap Crosser.
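
For example, the corresponding commands for the other three environments are:

python design_opt/train.py --cfg ant --gpu 0      # 3D Locomotion
python design_opt/train.py --cfg swimmer --gpu 0  # Swimmer
python design_opt/train.py --cfg gap --gpu 0      # Gap Crosser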

Visualization

If you have a display, run the following command to visualize the pretrained model for the hopper:

python design_opt/eval.py --cfg hopper

Again, you can replace hopper with {ant, gap, swimmer} to visualize other environments.
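
For example, to visualize the pretrained 3D Locomotion (ant) agent:

python design_opt/eval.py --cfg ant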

You can also save the visualization into a video by using --save_video:

python design_opt/eval.py --cfg hopper --save_video

This will produce a video at out/videos/hopper.mp4.
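
The video name presumably follows the config name, so evaluating another config should yield a correspondingly named file (an assumption based on the hopper example above), e.g.:

python design_opt/eval.py --cfg ant --save_video  # expected output: out/videos/ant.mp4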

Citation

If you find our work useful in your research, please cite our paper Transform2Act:

@inproceedings{yuan2022transform2act,
  title={Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design},
  author={Yuan, Ye and Song, Yuda and Luo, Zhengyi and Sun, Wen and Kitani, Kris},
  booktitle={International Conference on Learning Representations},
  year={2022}
}

License

This project is released under the MIT License. Please see the license for further details.