StepNeverStop / RLs

Reinforcement Learning Algorithms Based on PyTorch
https://stepneverstop.github.io
Apache License 2.0

s_dim dimension. #17

Closed samuelgja closed 4 years ago

samuelgja commented 4 years ago

Hi, I don't know if this is a bug or if this library just doesn't handle it.

But first, thanks for this library. It's a really good kick starter for learning RL and has helped me a lot :)

My question is about observations with more than one dimension in s_dim (env.observation_space). For example, with shape Box(4, 8) I always get an error.

So do I have to handle it myself, or can the library handle it?

Thanks.

StepNeverStop commented 4 years ago

Hi, @samuelgjabel

For now, this library doesn't support observation shapes with 2 dimensions, like Box(4, 8) or MultiDiscrete([4, 8]). s_dim means the length of the observation vector.
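For example (a quick check against a standard Gym environment; `s_dim` here just denotes the vector length the library expects):

```python
import gym

env = gym.make('CartPole-v1')
print(env.observation_space.shape)  # (4,) -> a flat vector, so s_dim = 4

# A 2-D space such as Box(4, 8) has shape (4, 8); a single vector
# length cannot describe it, which is why the error appears.
```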

I don't think I'll ever support this type of state input, such as the Box(4, 8) you mentioned. If you need the observation to be Box(4, 8), why not reshape it to Box(32,) (maybe 4*8) or Box(12,) (maybe 4+8) before feeding it into the RL policy?

Let the policy do what it should do; the preprocessing of states and the post-processing of actions should be handled in a custom environment wrapper. What's more, I have not encountered an environment with Box(4, 8) as state input in either Gym or Unity. This form of input is not standard and is difficult to deal with, so I recommend avoiding it when designing the environment.
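A minimal sketch of such a wrapper (assuming a Gym-style environment with a 2-D Box observation space; `FlattenObs` and `YourCustomEnv` are illustrative names, not part of this library):

```python
import numpy as np
import gym
from gym import spaces

class FlattenObs(gym.ObservationWrapper):
    """Flatten a 2-D Box observation, e.g. Box(4, 8) -> Box(32,)."""

    def __init__(self, env):
        super().__init__(env)
        space = env.observation_space
        # Rebuild the space with the same bounds, flattened to one dimension.
        self.observation_space = spaces.Box(
            low=space.low.reshape(-1),
            high=space.high.reshape(-1),
            dtype=space.dtype,
        )

    def observation(self, obs):
        # Flatten each observation before it reaches the policy.
        return np.asarray(obs).reshape(-1)

# env = FlattenObs(YourCustomEnv())  # the policy then sees s_dim = 32
```

Recent Gym versions also ship a built-in `gym.wrappers.FlattenObservation` that does essentially the same thing.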

By the way, this library is currently being restructured, so it is unstable right now and bugs of varying severity may appear.

samuelgja commented 4 years ago

@StepNeverStop OK.

I'm pretty new to RL, so thanks for the explanation :)

And sorry if this is a silly issue.