-
In v2 there's no mention of `seed` in the RLGym class. FWIW, PettingZoo passes it as an optional arg to the reset function.
I'm also seeing two sources of randomness in the code so far. IIRC Rocket…
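For reference, a minimal sketch of the optional-seed `reset` signature that Gymnasium and PettingZoo use and that RLGym could mirror; the class and its placeholder observation below are purely hypothetical:

```python
import numpy as np

class MyEnv:  # hypothetical stand-in, not RLGym's actual class
    def __init__(self):
        self._rng = np.random.default_rng()

    def reset(self, seed=None, options=None):
        # Optional seed in the Gymnasium/PettingZoo style: only reseed
        # the environment's RNG when a seed is explicitly passed.
        if seed is not None:
            self._rng = np.random.default_rng(seed)
        obs = self._rng.standard_normal(4)  # placeholder observation
        return obs, {}                      # (observation, info)

env = MyEnv()
obs, info = env.reset(seed=42)
```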
-
Hi,
I noticed you used SB3's VecNormalize during training. But I haven't found how to evaluate the trained agent.
In SB3's tutorial, they said that the VecNormalize statistics should be saved when s…
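In case it helps, here is a minimal sketch of the usual SB3 pattern: save the VecNormalize statistics after training, then reload them for evaluation with the running stats frozen. The env id, file names, and timestep count are placeholders:

```python
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# --- training ---
venv = DummyVecEnv([lambda: gym.make("CartPole-v1")])   # placeholder env
venv = VecNormalize(venv, norm_obs=True, norm_reward=True)
model = PPO("MlpPolicy", venv, verbose=0)
model.learn(total_timesteps=1_000)
model.save("ppo_agent")
venv.save("vec_normalize.pkl")    # the running obs/return statistics live here

# --- evaluation: reload the statistics and freeze them ---
eval_env = DummyVecEnv([lambda: gym.make("CartPole-v1")])
eval_env = VecNormalize.load("vec_normalize.pkl", eval_env)
eval_env.training = False         # do not update the running statistics
eval_env.norm_reward = False      # report raw, unnormalized rewards

model = PPO.load("ppo_agent", env=eval_env)
obs = eval_env.reset()
for _ in range(1_000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = eval_env.step(action)
    if dones[0]:
        break
```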
-
I'm part of the dev team for PettingZoo.
We were going through our CI trying to reduce the number of warnings and spotted this one:
```
test/pytest_runner_test.py: 99 warnings
test/unwrapped_test.py: 6…
```
-
In the source code, I saw some code handling discrete actions. I changed `act_space=MultiDiscrete()` to `spaces.Discrete()`, but the output actions were not the expected discrete integers, only decimals, which ma…
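For comparison, a quick sketch with plain gym spaces (outside this repo's wrappers): `Discrete` samples single integers and `MultiDiscrete` samples integer vectors, so decimal outputs usually mean the policy is still producing continuous actions that never get mapped onto the discrete space:

```python
from gym import spaces

discrete = spaces.Discrete(8)            # samples are single ints in [0, 8)
multi = spaces.MultiDiscrete([3, 3, 2])  # samples are integer vectors

print(discrete.sample())   # e.g. 5
print(multi.sample())      # e.g. [2 0 1]
```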
-
Hi, sorry if feature requests are not accepted (close this if so), but I was wondering if it would be possible to upgrade this repo from gym to gymnasium?
[Gymnasium](https://github.com/Farama-Fou…
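For context, a minimal sketch of the main call-site changes such an upgrade involves (placeholder env id): Gymnasium's `reset` takes an optional seed and returns `(obs, info)`, and `step` returns five values, with `terminated`/`truncated` replacing `done`:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")  # placeholder env id

# reset returns (obs, info) and accepts an optional seed
obs, info = env.reset(seed=0)

# step returns five values; terminated/truncated replace done
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated
env.close()
```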
-
Ideally, we eventually add support for the full PettingZoo multi-agent API. As a much simpler first step that already covers a lot of interesting environments, we could add support for multi-agent env…
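For a sense of what the full integration would eventually have to drive, here is the standard PettingZoo parallel-API loop (recent PettingZoo versions; older releases return only observations from `reset` and expose action spaces as a dict), using one of PettingZoo's bundled environments as a stand-in:

```python
from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    # random actions as a stand-in; a trained policy would act here
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```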
-
Hi, can you share the package named social-dilemma? Thanks a lot.
Another question: how can I change a single-agent environment (gym type) into a multi-agent env?
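On the second question, there is no single recipe, but a common starting point is to expose per-agent dictionaries in the PettingZoo parallel style. A purely hypothetical sketch, assuming the classic gym API (`reset` returns only obs, `step` returns a 4-tuple), that runs N independent copies of a single-agent env with no interaction between agents:

```python
import gym

class IndependentMultiAgentEnv:
    """Hypothetical wrapper: N independent copies of a single-agent gym env,
    exposed through per-agent dicts in the PettingZoo parallel style."""

    def __init__(self, env_id, n_agents=2):
        self.agents = [f"agent_{i}" for i in range(n_agents)]
        self._envs = {a: gym.make(env_id) for a in self.agents}

    def reset(self):
        return {a: env.reset() for a, env in self._envs.items()}

    def step(self, actions):
        obs, rewards, dones, infos = {}, {}, {}, {}
        for a, env in self._envs.items():
            obs[a], rewards[a], dones[a], infos[a] = env.step(actions[a])
        return obs, rewards, dones, infos

env = IndependentMultiAgentEnv("CartPole-v1")  # placeholder env id
observations = env.reset()
```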
-
I have a MacBook with an M3 and, currently, I am not able to install the package. Attaching the run below.
I had the same issue on a MacBook with an M1 chip.
Conda environment.
macOS version: **14.6 (23G80)**
…
-
```
PS E:\study\machineStudy\project\rlFrame\rl_frame> pip install multi_agent_ale_py
Looking in indexes: http://mirrors.aliyun.com/pypi/simple/
Collecting multi_agent_ale_py
Downloading http://mirr…
```