openai / gym

A toolkit for developing and comparing reinforcement learning algorithms.
https://www.gymlibrary.dev

Why is it impossible to double wrap? #966

Closed haudren closed 6 years ago

haudren commented 6 years ago

I am trying to run a "new-style" gym environment, i.e. one that returns a dictionary with observation, desired_goal, and achieved_goal keys, with the DDPG baseline. If I enable the evaluation option, the program crashes because it tries to wrap the original environment first in FlattenDictWrapper and then in Monitor.

Why is it impossible to wrap an environment in multiple wrappers? In this case, it would be very useful to both flatten and monitor.

mpSchrader commented 6 years ago

Hi @haudren, I think it is possible to use multiple wrappers. E.g. check out: https://github.com/dgriff777/rl_a3c_pytorch/blob/master/environment.py That project stacks about 5 wrappers. ;-)
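For reference, stacking different wrappers is the normal pattern: each wrapper stores the previous one as its inner `env` and delegates calls down the chain. Here is a minimal pure-Python sketch of that pattern; the class names echo gym's FlattenDictWrapper and Monitor, but this is an illustrative mock, not the real gym API:

```python
class Env:
    """Stand-in for a base goal environment returning a dict observation."""
    def step(self, action):
        return {"observation": [0.0], "desired_goal": [1.0]}

class Wrapper(Env):
    """Minimal wrapper: holds the inner env and delegates to it."""
    def __init__(self, env):
        self.env = env
    def step(self, action):
        return self.env.step(action)

class FlattenDictWrapper(Wrapper):
    """Sketch: flattens the dict observation into a single flat list."""
    def step(self, action):
        obs = self.env.step(action)
        return [x for key in sorted(obs) for x in obs[key]]

class Monitor(Wrapper):
    """Sketch: counts steps, standing in for recording episode stats."""
    def __init__(self, env):
        super().__init__(env)
        self.steps = 0
    def step(self, action):
        self.steps += 1
        return self.env.step(action)

# Stacking two *different* wrappers works fine:
env = Monitor(FlattenDictWrapper(Env()))
print(env.step(None))  # [1.0, 0.0] (desired_goal before observation, sorted keys)
```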

haudren commented 6 years ago

You're right: the restriction is actually on wrapping an environment twice with the same wrapper, not on double-wrapping in general. It would help if the error message were more informative (basically, I had a bug and was flattening my GoalEnv twice).