rll / rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms, fully compatible with OpenAI Gym.

DDPG has no function of plotting? #13

Closed Alex-zhai closed 8 years ago

Alex-zhai commented 8 years ago

When I use the DDPG algorithm, I set plot=True, but the plot of the evaluation run after each iteration didn't appear. So what's the problem?

dementrock commented 8 years ago

Seems like I forgot to implement it. Fixed in https://github.com/rllab/rllab/commit/4362ad2e26dd99137cbe6e6804137bdd4034a3e3. Also added a sample script:

https://github.com/rllab/rllab/blob/master/examples/ddpg_cartpole_stub.py
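For reference, the stub wires things up roughly like this (a sketch rather than a verbatim copy of the script; the module paths follow the usual rllab layout). The part relevant to this issue is passing plot=True both to DDPG and to run_experiment_lite:

```python
from rllab.algos.ddpg import DDPG
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.misc.instrument import run_experiment_lite
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction


def run_task(*_):
    env = normalize(CartpoleEnv())

    # Small networks: fast, and good enough for cartpole balancing.
    policy = DeterministicMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
    es = OUStrategy(env_spec=env.spec)
    qf = ContinuousMLPQFunction(env_spec=env.spec)

    algo = DDPG(
        env=env,
        policy=policy,
        es=es,
        qf=qf,
        batch_size=32,
        max_path_length=100,
        epoch_length=1000,
        min_pool_size=10000,
        n_epochs=1000,
        discount=0.99,
        scale_reward=0.01,
        qf_learning_rate=1e-3,
        policy_learning_rate=1e-4,
        plot=True,  # enables the per-iteration evaluation plot
    )
    algo.train()


run_experiment_lite(
    run_task,
    n_parallel=1,
    snapshot_mode="last",
    seed=1,
    plot=True,  # plotting must also be enabled at the experiment level
)
```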

Alex-zhai commented 8 years ago

So are the hyper-parameters of the DDPG algorithm the same across all tasks? I remember the hidden sizes of the networks in the paper were {300, 400}; however, you set {32, 32}.

dementrock commented 8 years ago

I set it to a small network so that it runs faster and still gives sufficiently good results on cartpole balancing. If you'd like to reproduce the results in the paper, you should use the larger networks and keep the exact same settings.
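Concretely, reproducing the paper's setup would just mean swapping the larger networks into the stub above, something like this (assuming ContinuousMLPQFunction takes a hidden_sizes argument like the policy does):

```python
# Larger networks, as in the original DDPG paper; much slower to train.
policy = DeterministicMLPPolicy(env_spec=env.spec, hidden_sizes=(400, 300))
qf = ContinuousMLPQFunction(env_spec=env.spec, hidden_sizes=(400, 300))
```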

dementrock commented 8 years ago

Also, I'd recommend sticking with smaller networks, at least while you're tweaking the algorithm. The larger networks run much, much slower.

Alex-zhai commented 8 years ago

I used the larger networks {300, 400} and kept the exact same settings as the original paper in order to solve the Half-Cheetah task, but the agent performed poorly. These are my settings: n_epochs=200, epoch_length=1000, batch_size=32, min_pool_size=10000, replay_pool_size=1000000, eval_samples=10000, hidden_sizes=(400, 300)

Alex-zhai commented 8 years ago

So could you share your settings for the DDPG algorithm on the Half-Cheetah task? Thank you!!!

dementrock commented 8 years ago

Almost the same configuration as in the sample script, except:

Also, I used n_parallel=4 for my experiments, although this parallelization is only used when sampling trajectories for evaluation.

The whole experiment runs really slowly since it's actually using 25x more samples than the original DDPG paper, to match the settings of the other algorithms evaluated in the benchmark paper. You should be able to get pretty good results with just 100 epochs. You can also get more intermediate progress by setting n_epochs to 1000 and epoch_length to 1000 (the total number of samples = n_epochs * epoch_length).
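For reference, such a run could be wired up roughly like this (a sketch built from the settings mentioned in this thread rather than an exact configuration; the HalfCheetahEnv import path is from memory):

```python
from rllab.algos.ddpg import DDPG
from rllab.envs.mujoco.half_cheetah_env import HalfCheetahEnv
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.misc.instrument import run_experiment_lite
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction


def run_task(*_):
    env = normalize(HalfCheetahEnv())

    # Paper-sized networks for the locomotion tasks.
    policy = DeterministicMLPPolicy(env_spec=env.spec, hidden_sizes=(400, 300))
    es = OUStrategy(env_spec=env.spec)
    qf = ContinuousMLPQFunction(env_spec=env.spec, hidden_sizes=(400, 300))

    algo = DDPG(
        env=env,
        policy=policy,
        es=es,
        qf=qf,
        n_epochs=1000,
        epoch_length=1000,  # total samples = n_epochs * epoch_length
        batch_size=32,
        min_pool_size=10000,
        replay_pool_size=1000000,
        eval_samples=10000,
        plot=True,
    )
    algo.train()


run_experiment_lite(
    run_task,
    # Parallelization is only used when sampling trajectories for evaluation.
    n_parallel=4,
    snapshot_mode="last",
    seed=1,
    plot=True,
)
```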

dementrock commented 8 years ago

The reward scaling is really important. Make sure you have that.
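That's the scale_reward argument in the sketches above (the name as I remember it); the examples use 0.01, but treat it as a hyper-parameter to tune:

```python
algo = DDPG(
    env=env, policy=policy, es=es, qf=qf,
    # Rewards are multiplied by this factor before being used in the
    # Q-function and policy updates; 0.01 is the value used in the examples.
    scale_reward=0.01,
    # ... remaining arguments as above ...
)
```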

Alex-zhai commented 8 years ago

Perfect, thank you for sharing!!!

dementrock commented 8 years ago

No problem. Let me know if you have any further issues getting it to work.

Alex-zhai commented 8 years ago

Ok, no problem. Thank you!!!