The FetchReach environment has a Dict observation space (because it packages not only the arm position but also the target location into the observation), and Spinning Up does not implement support for Dict observation spaces yet. One thing you can do is add a FlattenDictWrapper from gym (for example usage see, for instance, https://github.com/openai/baselines/blob/3f2f45acef0fdfdba723f0c087c9d1408f9c45a6/baselines/common/cmd_util.py#L110). Note, however, that by default FetchReach provides sparse rewards (0 once the goal is reached, -1 otherwise), which makes it rather hard for PPO. To make learning easier you can modify the Spinning Up code a bit to initialize the environment with the reward_type='dense' kwarg, like this:
env = gym.make('FetchReach-v1', reward_type='dense')
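Putting both pieces together, a minimal sketch of an env-creating function you could pass into the algorithm (the function name and choice of keys here are just an illustration; the Fetch envs also expose 'achieved_goal', which you may or may not want in the flattened observation):
import gym

def make_flat_fetch_env():
    # Dense rewards make the task much easier for PPO than the default sparse ones.
    env = gym.make('FetchReach-v1', reward_type='dense')
    # Flatten the Dict observation into a single Box that the MLP policies can consume.
    return gym.wrappers.FlattenDictWrapper(env, ['observation', 'desired_goal'])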
Hope this helps!
Thanks for your response, @pzhokhov.
I went to the following lines in the spinningup/spinup/algos/ppo/ppo.py code file:
env = env_fn()
obs_dim = env.observation_space.shape
act_dim = env.action_space.shape
and added the wrapper, so those lines became:
env = env_fn()
env = gym.wrappers.FlattenDictWrapper(env, ['observation', 'desired_goal'])
obs_dim = env.observation_space.shape
act_dim = env.action_space.shape
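If it helps, a quick sanity check is to print the observation space before and after wrapping (just a sketch; the dimensions in the comments are what I'd expect for FetchReach, not something verified here):
import gym

env = gym.make('FetchReach-v1')
print(env.observation_space)  # a Dict with 'observation', 'achieved_goal', and 'desired_goal' entries
env = gym.wrappers.FlattenDictWrapper(env, ['observation', 'desired_goal'])
print(env.observation_space)  # now a flat Box, roughly 10 (observation) + 3 (desired_goal) dims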
Also, further down in the same file, I had:
ppo(lambda : gym.make(args.env), actor_critic=core.mlp_actor_critic,
ac_kwargs=dict(hidden_sizes=[args.hid]*args.l), gamma=args.gamma,
seed=args.seed, steps_per_epoch=args.steps, epochs=args.epochs,
logger_kwargs=logger_kwargs)
then I added reward_type='dense' as follows:
ppo(lambda : gym.make(args.env, reward_type='dense'), actor_critic=core.mlp_actor_critic,
ac_kwargs=dict(hidden_sizes=[args.hid]*args.l), gamma=args.gamma,
seed=args.seed, steps_per_epoch=args.steps, epochs=args.epochs,
logger_kwargs=logger_kwargs)
It worked really well, thank you very much.
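As a side note, the same effect can probably be achieved without touching the body of ppo() at all, by doing the flattening inside the env_fn lambda itself; a sketch of the same idea:
ppo(lambda : gym.wrappers.FlattenDictWrapper(
        gym.make(args.env, reward_type='dense'),
        ['observation', 'desired_goal']),
    actor_critic=core.mlp_actor_critic,
    ac_kwargs=dict(hidden_sizes=[args.hid]*args.l), gamma=args.gamma,
    seed=args.seed, steps_per_epoch=args.steps, epochs=args.epochs,
    logger_kwargs=logger_kwargs)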
@pzhokhov I ran the algorithms on the 'FetchReach-v1' environment, and only the on-policy algorithms [VPG, PPO, TRPO] work.
@RamiSketcher I defer to people with a proper theory background (@jachiam) to answer whether off-policy algorithms should work with FetchReach with dense rewards. I think that should be possible, but it may require some hyperparameter tuning (off-policy methods are more sensitive to hyperparameter settings).
Hi @RamiSketcher! Not sure what you mean by "only the On-Policy algorithms work"---do you mean that only those algorithms reach a level of performance you think is good? Or that the other ones experience some kind of breaking bug?
Hi @jachiam! Sorry for the late reply.
It was actually my mistake: I didn't notice that the off-policy code has a test_env alongside env, so I had to apply the same wrapper to it as well. I replaced:
env, test_env = env_fn(), env_fn()
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
with:
env, test_env = env_fn(), env_fn()
env, test_env = gym.wrappers.FlattenDictWrapper(env, ['observation', 'desired_goal']), gym.wrappers.FlattenDictWrapper(test_env, ['observation', 'desired_goal'])
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
and it worked!
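Since the same wrapping shows up in several algorithm files, a small helper (make_flat_env is just a hypothetical name) could keep the change in one place:
def make_flat_env(env_fn):
    # Build the env and flatten its Dict observation into a single Box.
    return gym.wrappers.FlattenDictWrapper(env_fn(), ['observation', 'desired_goal'])

env, test_env = make_flat_env(env_fn), make_flat_env(env_fn)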
However, now that you've mentioned performance: I trained 'FetchReach-v1' using PPO and DDPG with the following commands:
python -m spinup.run ppo --exp_name PPO_FetchReach_Long --env FetchReach-v1 --clip_ratio 0.1 0.2 --hid[h] [32,32] [64,32] --act tf.nn.tanh --seed 0 10 20
and:
python -m spinup.run ddpg --exp_name DDPG_FetchReach_Long --env FetchReach-v1 --hid[h] [32,32] [64,32] --act tf.nn.tanh --seed 0 10 20
and my results were:
but I didn't see an improvement in either of these runs (compared to the other environments [Atari, MuJoCo, etc.])!
Maybe plain PPO or DDPG doesn't work well with this type of environment, and maybe they need some additional auxiliary machinery added on top.
If you haven't done so already, I think you should check out this paper, the original tech report put out by the OpenAI robotics team about these environments. It looks like you should be able to get DDPG with dense rewards to succeed on FetchReach-v1, but you should change the hyperparameters to be as close as possible to what they used (e.g. hid [256,256,256], relu activations, and possibly various other details as well). What's more, you may want to try running for longer than 400k transitions.
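For instance, assuming your reward_type='dense' change is still in place, something along these lines (hidden sizes and activation taken from the report's DDPG settings; the epoch count is just a guess at a longer run, passed through to the ddpg kwargs):
python -m spinup.run ddpg --exp_name DDPG_FetchReach_Dense --env FetchReach-v1 --hid [256,256,256] --act tf.nn.relu --epochs 200 --seed 0 10 20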
Since this is not a code issue but is a matter of scientific exploration, I'm going to mark this closed. But feel free to continue asking questions here and I'll try to answer them when I can. (Or feel free to email me, jachiam[at]openai.com.)
Hi,
I've tried Spinning Up by running many experiments using the different algorithms in different Gym environments. It works well in most environments, like Atari, Box2D, Classic Control, and MuJoCo; however, it didn't work with the new "Robotics" gym environments.
For example, when I run the following command in the terminal:
python -m spinup.run ppo --env FetchReach-v1 --exp_name FetchReach
It shows:
Does Spinning Up support these (Robotics) environments, or is it a problem on my side?