-
Since version 0.26, gym supports `Sequence` as an observation space (https://github.com/openai/gym/pull/2968)
After https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/pull/71, I think it …
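For reference, a minimal sketch (not from the issue) of how gym 0.26's `Sequence` space can serve as an observation space; the `Box` inner space is only an illustrative choice:

```python
import gym
from gym import spaces

# Observations are variable-length tuples whose elements all belong to the
# inner space (here a 3-dimensional Box).
obs_space = spaces.Sequence(spaces.Box(low=-1.0, high=1.0, shape=(3,)))

sample = obs_space.sample()                 # a tuple of 0..N Box samples
print(len(sample), obs_space.contains(sample))
```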
-
Change the custom Space in the single-agent environment to Gymnasium's `Discrete` space to be compatible with Stable-Baselines3
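A minimal sketch of what that change could look like (the environment name and space sizes are illustrative, not from the issue): replace the custom Space with Gymnasium's `Discrete` so Stable-Baselines3 can consume the environment directly.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SingleAgentEnv(gym.Env):
    """Hypothetical single-agent env using only standard Gymnasium spaces."""

    def __init__(self):
        self.action_space = spaces.Discrete(4)  # previously a custom Space
        self.observation_space = spaces.Box(0.0, 1.0, shape=(8,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        obs = self.observation_space.sample()
        return obs, 0.0, False, False, {}
```

With standard spaces, `stable_baselines3.PPO("MlpPolicy", SingleAgentEnv())` can train on it without a custom-space shim (SB3 >= 2.0 targets Gymnasium).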
-
Dear authors,
I ran the PettingZoo examples on Colab, but I still face the following problems.
2
Using cpu device
Wrapping the env with a `Monitor` wrapper
Wrapping the env in a DummyVecEnv.
--…
-
### Platform
iPadOS 16.0
### Plugin
share_plus
### Version
6.3.0
### Flutter SDK
3.3.10
### Steps to reproduce
1. Check out the share_plus example
2. Increase the height of…
-
Hi, while training the King of Fighters agent, after printing the value of observation['P1']['oppChar'] from the observation space, I saw that the value is sometimes wrong, as shown in the…
-
# Bug Description
There seems to be a small glitch in how the current pod latencies are calculated.
## **Output of `kube-burner` version**
Version: latest
Git Commit: 8b56817e4e978dd8ae5b59e246a97b895…
-
### ❓ Question
Hey, thank you for your work on the MaskablePPO algorithm.
In an environment that I initialize with Openai_ROS, I use PPO to remove invalid actions, but during the training proc…
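For context, a hedged sketch of the usual way invalid actions are masked with sb3-contrib (CartPole stands in for the Openai_ROS environment, and `mask_fn` is a placeholder for the environment's own validity logic):

```python
import numpy as np
import gymnasium as gym
from sb3_contrib import MaskablePPO
from sb3_contrib.common.wrappers import ActionMasker


def mask_fn(env: gym.Env) -> np.ndarray:
    # Boolean array of shape (n_actions,); True marks a currently valid action.
    # Here every action is allowed -- replace with the real validity check.
    return np.ones(env.action_space.n, dtype=bool)


env = ActionMasker(gym.make("CartPole-v1"), mask_fn)
model = MaskablePPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
```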
-
### Proposal
I would very much appreciate a unified way to find out how well the agent did, on a predefined range shared by all environments. For example, a method that returns 1 on "perfect a…
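A purely hypothetical sketch of what such a method might look like (the function name and reference values are made up, nothing like this exists in the library today): map a raw episode return onto [0, 1] using per-environment reference returns, so 1 means "perfect" and 0 means "worst/random".

```python
# Illustrative per-environment reference returns: (worst_or_random, best).
REFERENCE_RETURNS = {
    "CartPole-v1": (0.0, 500.0),
}


def normalized_score(env_id: str, episode_return: float) -> float:
    """Hypothetical helper: 0.0 = reference worst/random, 1.0 = perfect."""
    low, high = REFERENCE_RETURNS[env_id]
    return (episode_return - low) / (high - low)


print(normalized_score("CartPole-v1", 500.0))  # -> 1.0
```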
-
### Describe the bug
I guess it's a bug?
(using a custom environment)
I had an environment that is compatible with the old API (e.g. `reset` returns just `obs` instead of `(obs, {})`, etc.), and registered through g…
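To make the mismatch concrete, a minimal adapter sketch (not the library's built-in compatibility layer) showing the API difference the report refers to:

```python
class OldToNewAPI:
    """Hypothetical adapter: old-API env (reset -> obs, step -> 4-tuple)
    exposed through the new API (reset -> (obs, info), step -> 5-tuple)."""

    def __init__(self, env):
        self.env = env
        self.observation_space = env.observation_space
        self.action_space = env.action_space

    def reset(self, *, seed=None, options=None):
        obs = self.env.reset()                            # old API: obs only
        return obs, {}                                    # new API: (obs, info)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)   # old API: 4-tuple
        # The new API splits `done` into terminated/truncated; assume no truncation.
        return obs, reward, done, False, info
```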
-
How can I use other DRL libraries (e.g., ElegantRL, Tianshou) with highway-env as the environment?
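A hedged sketch of the usual approach: recent highway-env versions register standard Gymnasium environments, so any DRL library that speaks the Gym/Gymnasium API can consume them. The random-action loop below only checks that the env follows the standard interface; the env id and step count are illustrative.

```python
import gymnasium as gym
import highway_env  # noqa: F401  -- importing registers the highway-env ids

env = gym.make("highway-v0")
obs, info = env.reset(seed=0)
for _ in range(10):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
```

From there, pass the env (or a factory such as `lambda: gym.make("highway-v0")`) to the chosen library's own vectorized-env wrapper, for example Tianshou's `DummyVectorEnv`.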