lerrel / rllab-adv

Code to train RL agents along with adversarial disturbance agents
https://arxiv.org/abs/1703.02702

Under Development

Robust Adversarial Reinforcement Learning

This repo contains code for training RL agents with adversarial disturbance agents, from our work on Robust Adversarial Reinforcement Learning (RARL). We build heavily on the OpenAI rllab repo.
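At its core, RARL trains two policies in a zero-sum game: a protagonist that maximizes the task reward, and an adversary that applies bounded disturbance forces and receives the negated reward, with the two updated in alternation. The toy below sketches that alternating optimization; the point-mass environment, linear policies, and random-search updates are illustrative stand-ins, not this repo's rllab/TRPO-based implementation.

import numpy as np

class PointMassEnv:
    """1-D point mass; both players apply forces, with the adversary's
    scaled down to keep it bounded. Protagonist reward is -x^2 per step."""
    def __init__(self, adv_strength=0.5, horizon=50):
        self.adv_strength = adv_strength
        self.horizon = horizon

    def rollout(self, protagonist, adversary):
        x, v, ret = 1.0, 0.0, 0.0
        for _ in range(self.horizon):
            obs = np.array([x, v])
            v += 0.1 * (protagonist.act(obs) + self.adv_strength * adversary.act(obs))
            x += 0.1 * v
            ret -= x ** 2
        return ret

class LinearPolicy:
    def __init__(self):
        self.w = np.zeros(2)

    def act(self, obs):
        return float(np.clip(self.w @ obs, -1.0, 1.0))

def improve(policy, objective, steps=200, scale=0.2, seed=0):
    """Greedy random search: keep weight perturbations that raise the
    objective. A stand-in for the TRPO updates used in the paper."""
    rng = np.random.RandomState(seed)
    best = objective()
    for _ in range(steps):
        delta = scale * rng.randn(*policy.w.shape)
        policy.w += delta
        score = objective()
        if score > best:
            best = score
        else:
            policy.w -= delta  # revert the perturbation

env = PointMassEnv()
protagonist, adversary = LinearPolicy(), LinearPolicy()
for i in range(5):
    # Phase 1: hold the adversary fixed, protagonist maximizes its return.
    improve(protagonist, lambda: env.rollout(protagonist, adversary))
    # Phase 2: hold the protagonist fixed, adversary minimizes it (zero-sum).
    improve(adversary, lambda: -env.rollout(protagonist, adversary))
    print("iter %d, protagonist return %.2f" % (i, env.rollout(protagonist, adversary)))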

Installation instructions

Since we build upon the rllab package for the optimizers, the installation process is similar to rllab's manual installation. Most of the packages are installed virtually in the anaconda rllab3-adv environment.

# Install scipy's build dependencies (as in rllab's manual setup)
sudo apt-get build-dep python-scipy
# Create the rllab3-adv conda environment
conda env create -f environment.yml
# Make this repo importable
export PYTHONPATH=<PATH_TO_RLLAB_ADV>:$PYTHONPATH
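
As a quick sanity check that the environment and PYTHONPATH are set up, you can verify that rllab resolves from within the activated environment (this assumes the steps above succeeded):

# Verify that rllab is importable inside the environment
source activate rllab3-adv
python -c "import rllab; print(rllab.__file__)"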

Example

# Enter the anaconda virtual environment
source activate rllab3-adv
# Train on InvertedPendulum
python adversarial/scripts/train_adversary.py --env InvertedPendulumAdv-v1 --folder ~/rllab-adv/results
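
After training, a natural follow-up is a robustness check: RARL's claim is that the protagonist degrades gracefully under disturbances it was not trained against. Below is a minimal, hypothetical sketch of such a check using plain gym; the env id, the old-style gym step API, and the placeholder policy are assumptions, not this repo's interface.

import gym
import numpy as np

def perturbed_return(env_id, policy, adv_scale, episodes=10, seed=0):
    """Average return of `policy` with bounded action-space disturbances,
    mimicking what RARL's adversary does at test time."""
    env = gym.make(env_id)
    rng = np.random.RandomState(seed)
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = np.asarray(policy(obs))
            # Stand-in disturbance; a trained adversary would pick the
            # worst case instead of uniform noise.
            action = action + adv_scale * rng.uniform(-1, 1, size=action.shape)
            obs, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return float(np.mean(returns))

# Placeholder policy; substitute the trained protagonist here.
zero_policy = lambda obs: np.zeros(1)
print(perturbed_return('InvertedPendulum-v1', zero_policy, 0.0))
print(perturbed_return('InvertedPendulum-v1', zero_policy, 0.3))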

Contact

Lerrel Pinto -- lerrelpATcsDOTcmuDOTedu.