-
## Motivation
Thanks for this nice library, which aims to provide a general solution for speeding up various kinds of RL environment parallelization. Currently, envpool treats the python user…
-
## Motivation
AFAIK (and also per [#194](https://github.com/sail-sg/envpool/issues/194#issue-1373652231)), it is currently not possible to cherry-pick terminated envs for reset **in xla mode**, because:
1. `…
-
This has been mentioned several times: #153 #164 #61
-
## Describe the bug
Related to https://github.com/sail-sg/envpool/issues/33.
When an environment is "done", the autoreset feature in openai/gym's API will reset this environment and return the in…
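The autoreset behavior described above can be sketched with a toy env and wrapper (pure Python, not envpool's actual implementation): on `done`, the env is reset immediately and the observation returned already belongs to the next episode.

```python
class ToyEnv:
    """Toy episodic env: the observation is a step counter, episode ends after 3 steps."""
    def __init__(self):
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return self.t, 0.0, done, {}


class AutoResetWrapper:
    """On done, immediately reset and return the NEW episode's first obs."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, rew, done, info = self.env.step(action)
        if done:
            obs = self.env.reset()  # obs now belongs to the next episode
        return obs, rew, done, info


env = AutoResetWrapper(ToyEnv())
env.reset()
trace = []
for _ in range(4):
    obs, rew, done, info = env.step(0)
    trace.append((obs, done))
# on the done=True step, the returned obs (0) is already the post-reset obs
print(trace)  # [(1, False), (2, False), (0, True), (1, False)]
```

This is exactly the subtlety bug reports like this one run into: the final observation of the finished episode is not what `step` returns.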
-
### 🐛 Bug
I'm not sure if it's due to a specific version of Atari, but I remember having to add `terminal_on_life_loss: False` for PPO LSTM to prevent those hangs.
I also used the version from envpool…
-
## Describe the bug
When we call `async_reset` multiple times, we get a crash.
The use case here is that I want to run many parallel episodes with the async interface, and then, once all the episod…
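For context, the async usage pattern in question can be illustrated with a toy in-process stand-in (names `async_reset`/`recv`/`send` assumed from envpool's async API; this is not envpool code):

```python
import numpy as np

class DummyAsyncPool:
    """Toy stand-in for an async env pool: recv() hands back a batch of
    ready env ids, send() queues their next actions."""
    def __init__(self, num_envs, batch_size):
        self.num_envs, self.batch_size = num_envs, batch_size
        self.ready = list(range(num_envs))
        self.steps = np.zeros(num_envs, dtype=int)

    def async_reset(self):
        # mark every env as ready with a fresh observation
        self.ready = list(range(self.num_envs))

    def recv(self):
        # pop the next batch of envs whose observations are ready
        batch, self.ready = self.ready[:self.batch_size], self.ready[self.batch_size:]
        return np.array(batch)

    def send(self, actions, env_ids):
        self.steps[env_ids] += 1
        self.ready.extend(env_ids.tolist())

pool = DummyAsyncPool(num_envs=4, batch_size=2)
pool.async_reset()
for _ in range(6):            # step whichever envs come back first
    env_ids = pool.recv()
    pool.send(np.zeros(len(env_ids)), env_ids)
print(pool.steps.sum())  # 12 steps total across 4 envs
```

The crash being reported arises when `async_reset` is called again on a real pool mid-flight; the toy above only shows the intended recv/send loop shape.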
-
From https://github.com/sail-sg/envpool/issues/89#issuecomment-1111713904
- [ ] pistonball
- [ ] atari
-
## Describe the bug
It seems that envpool's vectorized environment is not compatible with gymnasium's `NormalizeObservation` wrapper due to missing `num_envs`, `is_vector_env` and "single_observation…
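A minimal workaround sketch for the two attributes named explicitly above (the helper name is hypothetical, and a real envpool env would replace the stand-in object):

```python
from types import SimpleNamespace

def patch_vector_attrs(env, num_envs):
    """Attach the vector-env attributes the gymnasium wrapper checks for.

    Only the two attributes named explicitly in the report are set here;
    the third is truncated in the issue text.
    """
    env.num_envs = num_envs
    env.is_vector_env = True
    return env

# usage with a stand-in object in place of a real envpool env
env = patch_vector_attrs(SimpleNamespace(), num_envs=8)
print(env.num_envs, env.is_vector_env)  # 8 True
```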
-
I guess there is no standard implementation of LSTM-based PPO. First we should focus on the training implementation.
CleanRL's implementation:
just save `initial_lstm_state`, and burn in with prefix data in the buff…
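The "save `initial_lstm_state`" idea can be sketched abstractly (toy recurrence standing in for an LSTM, not CleanRL's actual code): by storing the state the rollout started from, the update phase can replay the recurrence from the exact same point and recover identical hidden states.

```python
import numpy as np

def collect_rollout(step_fn, state0, horizon):
    """Roll a recurrent policy forward, saving the state it STARTED from.

    step_fn(state) -> (output, next_state) stands in for one LSTM step;
    returning initial_state with the data lets the update phase
    "burn in" the recurrence from the same starting point.
    """
    initial_state = state0.copy()   # saved before any steps are taken
    outputs, state = [], state0
    for _ in range(horizon):
        out, state = step_fn(state)
        outputs.append(out)
    return initial_state, np.array(outputs), state

# toy recurrence: the state decays and the output echoes its sum
step = lambda s: (s.sum(), s * 0.5)
init, outs, final = collect_rollout(step, np.ones(2), horizon=3)

# replaying from the saved initial state reproduces the outputs exactly
_, outs2, _ = collect_rollout(step, init, horizon=3)
print(np.allclose(outs, outs2))  # True
```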
-
## Describe the bug
The SB3 VecNormalize wrapper allows saving an environment. This is required, for instance, when a VecNormalize wrapper is applied to the env, so it can be retrieved at test/evaluation time. Envpo…
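Why saving matters can be shown with a minimal stand-in for what VecNormalize persists (this is not SB3 code, just a sketch): the running observation statistics learned during training must be reloaded for evaluation, or normalized observations will be on a different scale.

```python
import os
import pickle
import tempfile
import numpy as np

class RunningNorm:
    """Minimal stand-in for the state VecNormalize persists: running obs stats."""
    def __init__(self, shape):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = 1e-4

    def update(self, batch):
        # Welford-style parallel update of running mean/variance
        b_mean, b_var, b_n = batch.mean(0), batch.var(0), batch.shape[0]
        delta = b_mean - self.mean
        tot = self.count + b_n
        self.mean = self.mean + delta * b_n / tot
        m_a, m_b = self.var * self.count, b_var * b_n
        self.var = (m_a + m_b + delta**2 * self.count * b_n / tot) / tot
        self.count = tot

    def normalize(self, obs):
        return (obs - self.mean) / np.sqrt(self.var + 1e-8)

    def save(self, path):
        with open(path, "wb") as f:
            pickle.dump(self, f)

    @staticmethod
    def load(path):
        with open(path, "rb") as f:
            return pickle.load(f)

rn = RunningNorm(shape=3)
rn.update(np.random.randn(256, 3) * 5 + 2)
path = os.path.join(tempfile.gettempdir(), "norm_stats.pkl")
rn.save(path)
rn2 = RunningNorm.load(path)
print(np.allclose(rn.mean, rn2.mean))  # True
```

The feature request is for envpool-wrapped envs to support this save/restore round trip the way SB3's own VecEnvs do.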