
Learning to Synthesize Programs as Interpretable and Generalizable Policies

This project is an implementation of Learning to Synthesize Programs as Interpretable and Generalizable Policies, published at NeurIPS 2021. Please visit our project page for more information.

Neural network policies produced by deep reinforcement learning (DRL) methods are not human-interpretable and often have difficulty generalizing to novel scenarios. To address these issues, we explore learning structured, programmatic policies, synthesizing programs solely from reward signals. However, programs are difficult to synthesize purely from environment rewards. To this end, we propose Learning Embeddings for lAtent Program Synthesis (LEAPS), a framework that first learns a program embedding space that continuously parameterizes diverse behaviors in an unsupervised manner, and then searches over the learned embedding space to yield a program that maximizes the return on a given task.
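
At a high level, the second stage (CEM search, below) runs a cross-entropy method loop in the learned latent space. The following is a minimal illustrative sketch, not this repo's implementation: evaluate_return is a hypothetical callable that decodes a latent vector into a program and returns its episode return, and latent_dim=256 merely mirrors the --num_lstm_cell_units 256 flag used in training.

import numpy as np

def cem_search(evaluate_return, latent_dim=256, pop_size=64, elite_frac=0.1,
               init_sigma=0.5, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = np.zeros(latent_dim)                    # mean of the search distribution
    sigma = np.full(latent_dim, init_sigma)      # per-dimension spread
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iters):
        # Sample candidate latent vectors around the current distribution.
        candidates = mu + sigma * rng.standard_normal((pop_size, latent_dim))
        # Decode each latent into a program, roll it out, record the return.
        returns = np.array([evaluate_return(z) for z in candidates])
        # Refit the distribution to the top-performing (elite) candidates.
        elites = candidates[np.argsort(returns)[-n_elite:]]
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-3        # small floor keeps exploring
    return mu  # decode this latent to obtain the final program policy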

We evaluate our method on a set of sparse-reward Karel environments, commonly used in the program synthesis domain, specifically designed to evaluate the performance differences between our program policies and DRL baselines.

Environments

Karel environment

Getting Started

pip3 install --upgrade virtualenv
virtualenv prl
source prl/bin/activate
pip3 install -r requirements.txt
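
To confirm the environment is set up, you can check that the dependencies import cleanly; for example (this assumes PyTorch is among the pinned requirements, as the CUDA device flags in the commands below suggest):

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"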

Usage

LEAPS Training

A pre-trained LEAPS model is already included in this repo at weights/LEAPS/best_valid_params.ptp. If you want to train an additional model (e.g., with a different seed), follow the steps in Stage 1: Learning program embeddings. Otherwise, to run RL with LEAPS, go directly to the Stage 2 instructions.
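
The checkpoint is passed to Stage 2 via --net.saved_params_path. If you want to inspect it first, the sketch below assumes the .ptp file is an ordinary PyTorch checkpoint (an assumption on our part; .ptp is not a standard extension):

import torch

# Load on CPU so no GPU is needed just to inspect the file
# (assumes the .ptp file was written with torch.save).
params = torch.load("weights/LEAPS/best_valid_params.ptp", map_location="cpu")
print(type(params))  # typically a dict of tensors or nested state dicts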

Stage 1: Learning program embeddings

CUDA_VISIBLE_DEVICES="0" python3 pretrain/trainer.py -c pretrain/cfg.py -d data/karel_dataset/ --verbose --train.batch_size 256 --num_lstm_cell_units 256 --loss.latent_loss_coef 0.1 --rl.loss.latent_rl_loss_coef 0.1 --device cuda:0 --algorithm supervisedRL --optimizer.params.lr 1e-3 --prefix LEAPS
Ablation commands:

LEAPS:
python3 pretrain/trainer.py -c pretrain/cfg.py -d data/karel_dataset/ --verbose --train.batch_size 256 --num_lstm_cell_units 256 --loss.latent_loss_coef 0.1 --rl.loss.latent_rl_loss_coef 0.1 --device cuda:0 --algorithm supervisedRL --optimizer.params.lr 1e-3 --prefix LEAPS

LEAPSP:
python3 pretrain/trainer.py -c pretrain/cfg.py -d data/karel_dataset/ --verbose --train.batch_size 256 --device cuda:0 --num_lstm_cell_units 256 --loss.latent_loss_coef 0.1 --loss.condition_loss_coef 0.0 --net.condition.observations initial_state --optimizer.params.lr 1e-3 --prefix LEAPSP

LEAPSPR:
python3 pretrain/trainer.py -c pretrain/cfg.py -d data/karel_dataset/ --verbose --train.batch_size 256 --num_lstm_cell_units 256 --loss.latent_loss_coef 0.1 --rl.loss.latent_rl_loss_coef 0.1 --device cuda:0 --algorithm supervisedRL --net.condition.freeze_params True --loss.condition_loss_coef 0.0 --optimizer.params.lr 1e-3 --net.condition.observations initial_state --prefix LEAPSPR

LEAPSPL:
python3 pretrain/trainer.py -c pretrain/cfg.py -d data/karel_dataset/ --verbose --train.batch_size 256 --device cuda:0 --num_lstm_cell_units 256 --loss.latent_loss_coef 0.1 --optimizer.params.lr 1e-3 --prefix LEAPSPL

Stage 2: CEM search

python3 pretrain/trainer.py --configfile pretrain/[leaps_maze/leaps_stairclimber/leaps_topoff/leaps_harvester/leaps_fourcorners/leaps_cleanhouse].py --net.saved_params_path weights/LEAPS/best_valid_params.ptp --save_interval 10 --seed [SEED]
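
For example, to run the search on the maze task with seed 0 (substituting one of the bracketed config names and a concrete integer seed; the other tasks work the same way):

python3 pretrain/trainer.py --configfile pretrain/leaps_maze.py --net.saved_params_path weights/LEAPS/best_valid_params.ptp --save_interval 10 --seed 0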

Results

Stage 1: Learning program embeddings

Stage 2: CEM search

Cite the paper

If you find this work useful, please cite:

@inproceedings{trivedi2021learning,
  title={Learning to Synthesize Programs as Interpretable and Generalizable Policies},
  author={Dweep Trivedi and Jesse Zhang and Shao-Hua Sun and Joseph J Lim},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021},
  url={https://openreview.net/forum?id=wP9twkexC3V}
}

Authors

Dweep Trivedi, Jesse Zhang, Shao-Hua Sun