-
Hello, can you please provide an example of loading a model for backtesting and trading with RLlib?
t3ch9 updated
2 years ago
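A minimal sketch of one way to do this, assuming Ray ≥ 2.x and a checkpoint previously saved with `algo.save()`; the checkpoint path, the observation feed, and the `action_to_order` mapping are illustrative placeholders, not part of any real trading setup:

```python
def load_policy(checkpoint_dir: str):
    """Restore a trained RLlib Algorithm from a checkpoint directory."""
    # Imported lazily so the pure helpers below work without Ray installed.
    from ray.rllib.algorithms.algorithm import Algorithm  # pip install "ray[rllib]"
    return Algorithm.from_checkpoint(checkpoint_dir)


def action_to_order(action: int) -> str:
    """Hypothetical mapping from a discrete policy action to a trade order."""
    return {0: "hold", 1: "buy", 2: "sell"}[action]


def backtest(checkpoint_dir: str, observations):
    """Replay historical observations through the restored policy."""
    algo = load_policy(checkpoint_dir)
    orders = []
    for obs in observations:
        # explore=False -> deterministic (greedy) actions for evaluation.
        action = algo.compute_single_action(obs, explore=False)
        orders.append(action_to_order(action))
    return orders
```

For live trading the loop is the same, except the observations come from a live data feed and the orders are routed to a broker instead of collected in a list.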
-
### What happened + What you expected to happen
/ray/rllib/examples/action_masking.py
modify:
replace `ppo.PPOConfig()` on line 97 of action_masking.py with `dreamerv3.DreamerV3Config()`
bug:
Va…
-
I tried following this guide: https://github.com/HumanCompatibleAI/overcooked-demo/tree/master/server/static/assets/agents/RllibSelfPlay_CrampedRoom to add my pretrained rllib agent and play with it, …
-
(pongSTPN) ➜ AtariMujoco git:(master) ✗ python run-pong.py
Traceback (most recent call last):
File "/home/jz/github/STPN/AtariMujoco/run-pong.py", line 10, in
from rllib_nets import TorchML…
-
### What happened + What you expected to happen
I can’t seem to replicate the original [PPO](https://arxiv.org/pdf/1707.06347) algorithm's performance when using RLlib's PPO implementation. The hyp…
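For comparison, a hedged sketch of an RLlib `PPOConfig` (old API stack) set to my reading of the paper's Mujoco hyperparameters; the environment name is a placeholder, and the values should be double-checked against the paper before relying on them:

```python
# Sketch: mirroring the PPO paper's Mujoco settings in RLlib.
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("Hopper-v4")    # any Mujoco task; placeholder
    .training(
        lr=3e-4,                 # Adam stepsize
        gamma=0.99,              # discount factor
        lambda_=0.95,            # GAE parameter
        clip_param=0.2,          # surrogate clipping epsilon
        num_sgd_iter=10,         # SGD epochs per training batch
        sgd_minibatch_size=64,
        train_batch_size=2048,   # horizon (T) per policy update
    )
)
# algo = config.build()
# result = algo.train()
```

Divergences from the paper's numbers (batch size, minibatch size, epochs) are a common cause of the gap, since RLlib's defaults differ from these values.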
-
CI test **linux://rllib:learning_tests_multi_agent_cartpole_crashing_and_stalling_appo_old_api_stack** is flaky. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/4923#0190178e-5…
-
## Describe the bug
When training on `PettingZoo/MultiWalker-v9` with `Multi-Agent Soft Actor-Critic`, **all** losses (`loss_actor`, `loss_qvalue`, `loss_alpha`) explode after ~1M environment steps…
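Exploding losses like this are often tamed by clipping the global gradient norm before each optimizer step. A minimal PyTorch sketch of that mitigation; the `max_norm` value and the single-module setup are illustrative assumptions, not the reporter's code:

```python
import torch


def clipped_step(module: torch.nn.Module,
                 optimizer: torch.optim.Optimizer,
                 loss: torch.Tensor,
                 max_norm: float = 10.0) -> float:
    """Backprop `loss`, clip the global grad norm, then step the optimizer.

    Returns the pre-clip gradient norm so it can be logged; a norm that
    keeps hitting `max_norm` is a sign the loss itself is diverging.
    """
    optimizer.zero_grad()
    loss.backward()
    pre_clip = torch.nn.utils.clip_grad_norm_(module.parameters(), max_norm)
    optimizer.step()
    return float(pre_clip)
```

In an actor-critic setup this would be applied separately to the actor and critic updates; logging the returned pre-clip norm also helps pinpoint which of the three losses starts to diverge first.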