Closed animesh-garg closed 5 years ago
Hi @animesh-garg
Hi @animesh-garg
In the current version (and the version we will release for the challenge), the only low-level state information we will provide is the number of keys and the time left. We do this because, for the challenge, we are particularly interested in learning from pixels. After the challenge we will release an open-source version in which it will be possible to define additional elements of the state space.
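For concreteness, here is a minimal sketch of how one might separate the pixel observation from the low-level vector (number of keys, time left). The tuple layout and shapes are assumptions for illustration, not the environment's actual API; check the environment's `observation_space` for the real structure.

```python
import numpy as np

def split_observation(obs):
    """Split an assumed (pixels, keys, time_left) tuple observation.

    This layout is hypothetical; a real env may pack these differently.
    """
    pixels, keys, time_left = obs
    return np.asarray(pixels), int(keys), float(time_left)

# Dummy data standing in for a real env step:
obs = (np.zeros((84, 84, 3), dtype=np.uint8), 2, 1500.0)
pixels, keys, time_left = split_observation(obs)
```

An agent that ignores pixels would then train only on the `(keys, time_left)` vector.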
Thanks a lot for the clarification. Testing algorithms without pixels is computationally less taxing, hence the request. We will await the open-source release.
As noted in the paper, the observation space includes both visual input and a vector-valued observation.
Does the vector-valued observation contain only the number of keys and the time? Is the low-level state exposed for use at all, for instance if we choose not to use images and instead train only on the low-level state space?
Thanks!