-
How do you guys plan to support multi-agent environments? I came across a blog post about a multi-model Mario Kart setup and I really want to run it. Can someone help me make this work for multi-agent en…
-
- Add the ability to mix non-homogeneous observations, for example images with proprioceptive states.
- Add example task:
Cartpole balancing towards a red sphere. Observations: cartpole joint sta…
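A mixed observation of this kind could be sketched as a plain dictionary; in Gym it would typically be described by a `gym.spaces.Dict` containing an image `Box` and a state-vector `Box`. All field names below are illustrative, not an existing API:

```python
# Sketch of a non-homogeneous observation for the proposed cartpole task.
# Field names ("image", "proprio", "target") are hypothetical.
def make_observation(image, joint_state, sphere_pos):
    return {
        "image": image,          # camera frame, e.g. an H x W x 3 pixel array
        "proprio": joint_state,  # cart position/velocity, pole angle/velocity
        "target": sphere_pos,    # red-sphere location to balance towards
    }

obs = make_observation([[0]], [0.0, 0.0, 0.1, 0.0], [1.0, 0.0, 0.5])
```

A policy network would then consume the `"image"` key through a CNN branch and concatenate it with the flat `"proprio"` and `"target"` vectors.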
-
Hey, so I want to use some of the Fetch robot environments from OpenAI, but for some weird reason they continue to use MuJoCo 🙄.
I was wondering if there is any work being done on adding the robotics…
-
I need some other environments from the OpenAI Gym. Are there any guidelines to help port the rest of the environments from there?
-
Hi,
the gym environments now return the 5-tuple (observation, reward, terminated, truncated, info) from `step` instead of their previous 4-tuple setup; however, RLHive still expects their previous setup a…
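A minimal compatibility shim, assuming the consuming code only needs the old `(obs, reward, done, info)` shape (the function name is hypothetical, not an RLHive API):

```python
def to_legacy_step(step_result):
    # Collapse the new 5-tuple (obs, reward, terminated, truncated, info)
    # into the old 4-tuple (obs, reward, done, info).
    obs, reward, terminated, truncated, info = step_result
    info = dict(info)
    # Preserve the truncation signal the way older gym wrappers did.
    info["TimeLimit.truncated"] = truncated and not terminated
    return obs, reward, terminated or truncated, info

obs, reward, done, info = to_legacy_step((0, 1.0, False, True, {}))
```

Here an episode that was truncated but not terminated maps to `done=True` with `info["TimeLimit.truncated"]=True`, so old time-limit-aware code keeps working.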
-
I am using gym 0.21.0 and stable-baselines master 2.4.0a8.
The error I am facing is
```
Traceback (most recent call last):
  File "/home/aghnw/.conda/envs/RL-agent/mine-env-main/trainer_sac.py…
```
-
Hello everybody,
when I used Garage's `EpsilonGreedyStrategy` with Gym environments, I found that sampling is not deterministic, even though I set the seed via `deterministic.set_seed(seed)`.
After some inve…
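One way to make the exploration step reproducible regardless of framework-level seeding is to pass an explicit RNG into the strategy instead of relying on a global one. This is a sketch, not Garage's actual implementation:

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    # Drawing from a dedicated rng object (rather than the global random
    # module) keeps the action sequence reproducible for a given seed.
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

rng_a = random.Random(7)
rng_b = random.Random(7)
actions_a = [epsilon_greedy([0.1, 0.9, 0.3], 0.5, rng_a) for _ in range(20)]
actions_b = [epsilon_greedy([0.1, 0.9, 0.3], 0.5, rng_b) for _ in range(20)]
```

Two strategies seeded identically now produce identical action sequences; non-determinism like the one reported usually means some component still draws from an unseeded global source.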
-
Hello,
I am an engineer with access to an A1 robot and experience in RL, so I thought I'd try to apply your impressive parkour package. After having no issues with the installation process (I had to …
-
While running "example.py", it throws the error given in the title.
-
I think you are depending on an older version of gym in some of the environments you made; a common error is
```
/usr/local/lib/python3.9/site-packages/gym/wrappers/time_limit.py in step(self, ac…
```