-
If anyone else is facing this same issue: I tried reinstalling MetaDrive to the latest version, but the latest one requires dependencies that are incompatible with older Python versions.
```
Failur…
```
-
There appears to be a problem when using a masked action space with the QMIX algorithm. I think the qmix_policy_graph expects there to be at least one valid action at all times.
Full traceback is …
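If that diagnosis is right, one defensive workaround (my sketch, not from the report) is to keep the mask from ever going all-zero by leaving a no-op action valid; the `NOOP` index is a hypothetical name:

```python
import numpy as np

NOOP = 0  # hypothetical index of a "do nothing" action in the env

def safe_action_mask(mask: np.ndarray) -> np.ndarray:
    """Return a copy of `mask` with at least one valid action, since the
    QMIX policy graph seems to assume a non-empty valid set every step."""
    mask = mask.copy()
    if not mask.any():
        mask[NOOP] = 1  # fall back to no-op rather than an empty mask
    return mask
```

The environment would run its "action_mask" observation field through this before returning it each step.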
-
I am trying to use Ray RLlib to run multiple environments that require GPU resources. My goal is to allocate a fraction of the GPU (e.g., 0.05) to the learner worker (the policy) and share the remaining fra…
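For reference, fractional GPU allocation in RLlib goes through the resource config keys; a minimal sketch using the old config-dict API (the trainer import path moved to ray.rllib.algorithms in newer releases), with the 0.05 split taken from the question and the rest of the numbers made up:

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer  # newer Ray: ray.rllib.algorithms.ppo.PPO

ray.init()

trainer = PPOTrainer(
    env="CartPole-v1",  # placeholder env for illustration
    config={
        "num_gpus": 0.05,            # GPU fraction for the learner/driver process
        "num_workers": 4,            # rollout workers sharing the remainder
        "num_gpus_per_worker": 0.2,  # 4 * 0.2 + 0.05 = 0.85 <= 1 physical GPU
        "framework": "torch",
    },
)
```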
-
Is it planned to update MARLlib to the newest versions of Ray RLlib, PyTorch, and so on? I mean, Ray RLlib 1.8 is really old... and PyTorch 2.2 is already out.
-
Hello,
The Ray example was super helpful in getting things up and running; however, when I tried to configure the PPOTrainer to use one policy per agent, the wrapper provided by VMAS could not be u…
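For what it's worth, the stock RLlib pattern for one policy per agent is the multiagent config block; a rough sketch with hypothetical agent IDs and env name (the real IDs come from the VMAS scenario, and the policy_mapping_fn signature differs between Ray versions):

```python
from ray.rllib.agents.ppo import PPOTrainer  # moved to ray.rllib.algorithms in newer Ray

agent_ids = ["agent_0", "agent_1"]  # hypothetical; taken from the env in practice

config = {
    "env": "my_vmas_env",  # placeholder: a registered VMAS-wrapped env name
    "multiagent": {
        # One entry per agent: (policy_cls, obs_space, act_space, overrides).
        # None lets RLlib infer the class and spaces from the env.
        "policies": {aid: (None, None, None, {}) for aid in agent_ids},
        # Map every agent to the policy with the same ID -> one policy per agent.
        "policy_mapping_fn": lambda agent_id, *a, **kw: agent_id,
    },
}

trainer = PPOTrainer(config=config)
```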
-
```
The conflict is caused by:
    copo 0.0.0 depends on gym==0.19.0
    ray[rllib] 2.2.0 depends on gym==0.21.0; extra == "rllib"
```
-
I met a similar problem to #963 when trying to run the last part of tutorial04_visualize.ipynb:
```
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 20 …
```
-
Hello, when I try to evaluate with Commander, I get the error below. It looks like it failed to restore the commander policy. Do I need to change some configurations before evaluating with Commander? I …
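Not sure what this project expects, but for reference, pulling a single named policy out of an RLlib checkpoint in Ray 2.x looks roughly like the sketch below; the "commander" policy ID and the path are my guesses and must match whatever was used in training:

```python
from ray.rllib.policy.policy import Policy

# For a multi-policy checkpoint this returns a dict: policy_id -> Policy.
policies = Policy.from_checkpoint("/path/to/checkpoint")  # placeholder path
commander = policies["commander"]  # hypothetical policy ID
```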
-
### What happened + What you expected to happen
I implemented a vectorized environment derived from ray.rllib.env.vector_env.VectorEnv.
However, when I use the new API stack, it says
```bash
Attrib…
```
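For context, the pre-new-stack VectorEnv contract I'm aware of looks roughly like the sketch below (method names and signatures vary across Ray versions); the new API stack handles env vectorization differently, which would explain an AttributeError on a hand-rolled subclass:

```python
import numpy as np
import gymnasium as gym
from ray.rllib.env.vector_env import VectorEnv

class MyVectorEnv(VectorEnv):
    """Sketch of the old-stack contract: reset/step all sub-envs in batch."""

    def __init__(self, num_envs: int = 4):
        super().__init__(
            observation_space=gym.spaces.Box(-1.0, 1.0, (3,), np.float32),
            action_space=gym.spaces.Discrete(2),
            num_envs=num_envs,
        )

    def vector_reset(self, *, seeds=None, options=None):
        obs = [self.observation_space.sample() for _ in range(self.num_envs)]
        infos = [{} for _ in range(self.num_envs)]
        return obs, infos

    def vector_step(self, actions):
        obs = [self.observation_space.sample() for _ in actions]
        rewards = [0.0] * len(actions)
        terminateds = [False] * len(actions)
        truncateds = [False] * len(actions)
        infos = [{} for _ in actions]
        return obs, rewards, terminateds, truncateds, infos
```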
-
I have installed gym==0.19.0, and there is an error about a conflict between copo==0.0.0 and ray[rllib]==2.2.0.
The error is like this:
```
The conflict is caused by:
    copo 0.0.0 dep…
```