rail-berkeley / rlkit

Collection of reinforcement learning algorithms
MIT License

Cannot reproduce the results of IQL on antmaze #163

Open Shenzhi-Wang opened 2 years ago

Shenzhi-Wang commented 2 years ago

I've run examples/iql/antmaze_finetune.py, but the results are poor: the per-epoch evaluation return oscillates between 0 and 1 (as shown in the figure below), which is completely different from the result figures in examples/iql/README.md.

[screenshot "飞书20220403-174713": per-epoch evaluation returns oscillating between 0 and 1]

anair13 commented 2 years ago

I think you just need to smooth the curve (each epoch contains one rollout, which either succeeds or fails, so the raw values jump between 0 and 1). Can you average the returns over a moving window and plot it again? Our results were plotted with https://github.com/rail-berkeley/rlkit/blob/master/rlkit/visualization/plot_util.py
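The suggested smoothing can be sketched as a trailing moving average over the per-epoch returns. This is an illustrative helper, not rlkit's plot_util (the `moving_average` name and `window` size are assumptions):

```python
import numpy as np

def moving_average(returns, window=100):
    """Trailing moving average of per-epoch returns.

    Each output point averages the last `window` epochs; at the start,
    where fewer than `window` points exist, the average is taken over
    the points available, so the output length matches the input.
    """
    returns = np.asarray(returns, dtype=float)
    # Cumulative sums with a leading zero so that
    # sum(returns[a:b+1]) == csum[b + 1] - csum[a].
    csum = np.cumsum(np.insert(returns, 0, 0.0))
    idx = np.arange(len(returns))
    starts = np.maximum(idx - window + 1, 0)
    return (csum[1:] - csum[starts]) / (idx + 1 - starts)
```

Applied to a 0/1 success-per-epoch curve, this recovers a smooth success-rate estimate instead of the oscillating raw plot.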