Seems like I forgot to implement it. Fixed in https://github.com/rllab/rllab/commit/4362ad2e26dd99137cbe6e6804137bdd4034a3e3. Also added a sample script:
https://github.com/rllab/rllab/blob/master/examples/ddpg_cartpole_stub.py
So are the hyper-parameters of the DDPG algorithm fixed across all tasks? I remember the hidden sizes of the networks are {300,400} in the paper, but you set them to {32,32}.
I set it to a small network so that it runs faster, and gives sufficiently good results on cartpole balancing. If you'd like to reproduce the results in the paper, you should use larger networks and keep the exact same settings.
Also I'd recommend sticking with smaller networks at least when you are e.g. tweaking algorithms. The larger networks run much, much slower.
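For reference, the sample script is roughly shaped like the sketch below, assuming the usual rllab module layout (rllab.algos.ddpg, rllab.policies.deterministic_mlp_policy, etc.); the exact argument values in the committed script may differ, so treat these as illustrative:

```python
# Minimal sketch of a small-network DDPG cartpole setup in rllab.
# Values are illustrative, not necessarily those in the committed stub script.
from rllab.algos.ddpg import DDPG
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.misc.instrument import stub, run_experiment_lite
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction

stub(globals())

env = normalize(CartpoleEnv())

# Small (32, 32) networks train fast and are good enough for cartpole balancing.
policy = DeterministicMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
qf = ContinuousMLPQFunction(env_spec=env.spec, hidden_sizes=(32, 32))
es = OUStrategy(env_spec=env.spec)

algo = DDPG(
    env=env,
    policy=policy,
    es=es,
    qf=qf,
    batch_size=32,
    max_path_length=100,
    epoch_length=1000,
    min_pool_size=10000,
    n_epochs=1000,
    discount=0.99,
    scale_reward=0.01,        # reward scaling matters a lot for DDPG
    qf_learning_rate=1e-3,
    policy_learning_rate=1e-4,
)

run_experiment_lite(
    algo.train(),
    n_parallel=1,
    snapshot_mode="last",
    seed=1,
)
```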
I used larger networks {300,400} and kept the exact same settings as the original paper in order to solve the Half-Cheetah task, but the agent performed poorly. These are my settings:
n_epochs=200, epoch_length=1000, batch_size=32, min_pool_size=10000, replay_pool_size=1000000, eval_samples=10000, hidden_sizes=(400, 300)
So could you share your DDPG settings for the Half-Cheetah task? Thank you!!!
Almost the same configuration as in the sample script, except:
I also used n_parallel=4 for my experiments, although this parallelization is only used when sampling trajectories for evaluation.
The whole experiment runs really slowly since it actually uses 25x as many samples as the original DDPG paper, to match the settings of the other algorithms evaluated in the benchmark paper. You should be able to get pretty good results with just 100 epochs. You can also get more intermediate progress by setting n_epochs to 1000 and epoch_length to 1000 (the total number of samples = n_epochs * epoch_length).
The reward scaling is really important. Make sure you have that.
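Putting that advice together, a Half-Cheetah configuration might look like the hypothetical sketch below: (400, 300) networks, n_epochs=1000 with epoch_length=1000 for more frequent progress reports, and explicit reward scaling. The concrete scale_reward value is an assumption on my part (it is not quoted in this thread) and should be tuned:

```python
# Hypothetical Half-Cheetah DDPG configuration following the advice above.
# scale_reward=0.1 is a placeholder assumption, not a value given in this thread.
from rllab.algos.ddpg import DDPG
from rllab.envs.mujoco.half_cheetah_env import HalfCheetahEnv
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.misc.instrument import stub, run_experiment_lite
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction

stub(globals())

env = normalize(HalfCheetahEnv())

# Larger (400, 300) networks, as in the original DDPG paper.
policy = DeterministicMLPPolicy(env_spec=env.spec, hidden_sizes=(400, 300))
qf = ContinuousMLPQFunction(env_spec=env.spec, hidden_sizes=(400, 300))
es = OUStrategy(env_spec=env.spec)

algo = DDPG(
    env=env,
    policy=policy,
    es=es,
    qf=qf,
    batch_size=32,
    min_pool_size=10000,
    replay_pool_size=1000000,
    eval_samples=10000,
    # total samples = n_epochs * epoch_length; 1000 x 1000 reports progress
    # more often than 200 x 5000 for the same total sample count.
    n_epochs=1000,
    epoch_length=1000,
    discount=0.99,
    scale_reward=0.1,  # placeholder: reward scaling is critical, tune it
)

run_experiment_lite(
    algo.train(),
    n_parallel=4,  # only parallelizes the evaluation rollouts
    snapshot_mode="last",
    seed=1,
)
```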
Perfect, thank you for sharing!!!
No problem. Let me know if you have any further issues getting it to work.
Ok, no problem. Thank you!!!
When I use the DDPG algorithm, I set plot=True, but the evaluation run after each iteration didn't appear. What could be the problem?