openai / gym

A toolkit for developing and comparing reinforcement learning algorithms.
https://www.gymlibrary.dev

[Question] What should be done with the GoalEnv and DiscreteEnv classes? #2381

Closed jkterry1 closed 2 years ago

jkterry1 commented 3 years ago

Right now, Gym has a GoalEnv class and an Env class as base classes in core.py. The GoalEnv class was added as part of the robotics environments and imposes special requirements on the observation space. From what I can tell, this class has not been used outside of Gym's robotics environments and is largely unnecessary. Unless I'm missing something here, removing this class sounds like the way to go.
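
For context, the requirements in question can be sketched roughly as follows. This is a simplified illustration of the structure GoalEnv enforces (a dict-style observation with three fixed keys, plus an externally callable `compute_reward()`), not the actual core.py code; the class name and values below are invented for the example.

```python
# Sketch of the structure gym.GoalEnv enforces (simplified illustration,
# not the actual core.py implementation).

class GoalEnvSketch:
    """Goal-based environment: observations carry the goal explicitly."""

    REQUIRED_KEYS = ("observation", "achieved_goal", "desired_goal")

    def reset(self):
        obs = self._get_obs()
        # GoalEnv rejects observations missing any of the three keys.
        for key in self.REQUIRED_KEYS:
            if key not in obs:
                raise KeyError(f"GoalEnv observation must contain '{key}'")
        return obs

    def _get_obs(self):
        # Toy 1-D example: agent at position 0.0, goal at 1.0.
        return {"observation": [0.0], "achieved_goal": [0.0],
                "desired_goal": [1.0]}

    def compute_reward(self, achieved_goal, desired_goal, info):
        # Exposed so external code can recompute rewards for
        # substituted goals (e.g. Hindsight Experience Replay).
        return 0.0 if achieved_goal == desired_goal else -1.0
```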

jkterry1 commented 3 years ago

A related issue is what to do with the DiscreteEnv class. It's defined in the toy_text envs rather than in core.py, which needs to be fixed if we keep it.

araffin commented 3 years ago

From what I can tell, this class has not been used outside of Gym's robotics environments and is largely unnecessary.

It is quite useful for multi-goal tasks with Hindsight Experience Replay (many robotics problems can be framed as such).

jkterry1 commented 3 years ago

Perhaps I'm missing something here, but why can't the regular Gym class be used? Like, how does this solve a particular problem in this domain?

araffin commented 3 years ago

Like, how does this solve a particular problem in this domain?

It enforces a common API for that type of environment (which is the big strength of Gym): https://github.com/openai/gym/blob/ee0a56800435fc437090c6ac182ad2fa0fe7724e/gym/core.py#L182 and https://github.com/openai/gym/blob/ee0a56800435fc437090c6ac182ad2fa0fe7724e/gym/core.py#L207
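
The enforced API matters because off-policy algorithms like HER relabel stored transitions with substituted goals and need to recompute rewards offline. A minimal sketch of that idea (the function and field names here are invented for illustration; they are not Gym or SB3 API):

```python
# Why a shared compute_reward() signature matters: Hindsight Experience
# Replay relabels a failed episode's transitions with a goal that *was*
# achieved, recomputing rewards offline. All names below are made up
# for the example.

def sparse_reward(achieved_goal, desired_goal, info=None):
    # Same shape of signature GoalEnv.compute_reward() requires.
    return 0.0 if achieved_goal == desired_goal else -1.0

def relabel_with_final_goal(episode, compute_reward):
    """HER 'final' strategy: pretend the last achieved goal was desired."""
    new_goal = episode[-1]["achieved_goal"]
    return [
        {**t,
         "desired_goal": new_goal,
         "reward": compute_reward(t["achieved_goal"], new_goal)}
        for t in episode
    ]

# A failed episode: the agent never reached the desired goal (3,).
episode = [
    {"achieved_goal": (1,), "desired_goal": (3,), "reward": -1.0},
    {"achieved_goal": (2,), "desired_goal": (3,), "reward": -1.0},
]
relabeled = relabel_with_final_goal(episode, sparse_reward)
# The last transition now succeeds under the substituted goal.
```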

jkterry1 commented 3 years ago

@araffin do you have any thoughts on the utility of the discrete env?

araffin commented 3 years ago

I don't know much about DiscreteEnv, but it looks like a good example for tabular RL (so more for pedagogical purposes).

jkterry1 commented 2 years ago

So I sat down and looked through the code again. I can see the value of the GoalEnv class, but I'm of the opinion that the DiscreteEnv class should be removed (and the remaining toy_text environments should be modified to use the regular Gym base class).

brett-daley commented 2 years ago

DiscreteEnv appears to implement a small amount of boilerplate tabular RL code, like sampling the next state according to a probability table. Arguably, this is useful in that it may save a few lines of code, but it could probably also be deleted without being missed (only three environments currently use it: Cliff Walking, Frozen Lake, and Taxi).
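
That boilerplate can be sketched roughly as follows. This is a simplified stand-in for DiscreteEnv's transition-table step, not the actual gym/envs/toy_text code; the class name and toy MDP are invented.

```python
import random

# Simplified sketch of the tabular boilerplate DiscreteEnv provides.
# Transitions are stored as
#     P[state][action] = [(prob, next_state, reward, done), ...]

class TabularEnvSketch:
    def __init__(self, P, initial_state=0, seed=None):
        self.P = P
        self.s = initial_state
        self.rng = random.Random(seed)

    def step(self, action):
        transitions = self.P[self.s][action]
        # Sample one transition according to its probability.
        weights = [t[0] for t in transitions]
        prob, next_state, reward, done = self.rng.choices(
            transitions, weights=weights)[0]
        self.s = next_state
        return next_state, reward, done, {"prob": prob}

# Two-state toy MDP: action 0 moves right with p=0.8, stays with p=0.2.
P = {
    0: {0: [(0.8, 1, 1.0, True), (0.2, 0, 0.0, False)]},
    1: {0: [(1.0, 1, 0.0, True)]},
}
env = TabularEnvSketch(P, seed=0)
```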

ZhiqingXiao commented 2 years ago

@jkterry1 @carlosluis

Excuse me for my late comment, but please allow me to make a suggestion to keep the DiscreteEnv class.

We know that one of the greatest contributions of Gym is to provide unified interfaces for different environments. Without uniform interfaces, each environment would have its own API, and agents designed for one environment could not easily be migrated to others.

The unified interfaces in Gym include, in particular, the DiscreteEnv class, which is a good example of a class hierarchy that correctly reflects the taxonomy of POMDPs. It is an excellent implementation of finite MDPs that I often demo in class. Users can not only use the existing finite MDPs in Gym, but also derive their own finite-MDP environments from it. Furthermore, users can use it to implement the optimal DP agent in a uniform way.

In general, I encourage abstracting the common parts of different environments, rather than the opposite.
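
The "optimal DP agent in a uniform way" point can be made concrete: any environment that exposes a DiscreteEnv-style transition table admits generic value iteration. A sketch over an invented two-state MDP (not actual Gym code):

```python
# Generic value iteration over a DiscreteEnv-style table
#     P[s][a] = [(prob, next_state, reward, done), ...]
# The MDP below is invented for illustration.

def value_iteration(P, n_states, gamma=0.99, tol=1e-8):
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            # One Q-value per action; terminal transitions bootstrap 0.
            q_values = [
                sum(p * (r + (0.0 if done else gamma * V[ns]))
                    for p, ns, r, done in transitions)
                for transitions in P[s].values()
            ]
            best = max(q_values)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    return V

# Deterministic 2-state chain: from state 0, action 0 reaches the
# terminal state 1 with reward 1, so V[0] converges to 1.0.
P = {
    0: {0: [(1.0, 1, 1.0, True)]},
    1: {0: [(1.0, 1, 0.0, True)]},
}
V = value_iteration(P, n_states=2)
```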

michaelfeil commented 2 years ago

As we have been using gym.GoalEnv, the 0.22 release breaks compatibility with older third-party gym environments, and with anywhere else gym.GoalEnv is imported (https://github.com/search?q=gym.GoalEnv&type=code).

Is the recommended fix to replace inheritance from gym.GoalEnv with gym.Env?

jkterry1 commented 2 years ago

Could you elaborate on your problem a bit more? I asked someone else to read this issue to make sure I wasn't going crazy, but we were both slightly confused about what compatibility is broken.

masterdezign commented 2 years ago

@michaelfeil You should probably see this https://github.com/openai/gym/pull/2516

michaelfeil commented 2 years ago

Sorry, my question was not formulated very precisely.

Since gym.GoalEnv inherits from gym.Env and simply enforces a certain structure, I would like to suggest leaving an alias with a deprecation warning, advising users to inherit from gym.Env instead of gym.GoalEnv, or to integrate gym.GoalEnv into their own codebase.

masterdezign commented 2 years ago

As far as I have seen, they suggest avoiding gym.GoalEnv in favor of gym.Env.

masterdezign commented 2 years ago

See comments from https://github.com/DLR-RM/stable-baselines3/pull/780: ee71299

Quoted from the linked comment (originally a screenshot):

My intention was/is to remove the GoalEnv class from Gym in favor of the regular base class. We just kept it in gym-robotics because we didn't want to bother removing it. I would discourage other projects from using it.