-
Hello! I'm sorry to bother you again. I obtained a suboptimal policy when running `mpirun -oversubscribe -np 16 python -u train.py --env-name='FetchPickAndPlace-v1' 2>&1 | tee pick.log`. When the suc…
-
Hi @jonasschneider @welinder
It seems that in the HER [paper](https://arxiv.org/abs/1707.01495), the authors recommend a trick for the PickAndPlace task: "start half of the training episode w…
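For reference, that in-gripper initialization trick can be sketched in a few lines. Everything below (the function name, the table sampler, the positions) is a hypothetical illustration of the idea, not the actual baselines code:

```python
import random

def sample_initial_state(rng, gripper_pos, table_sampler, p_in_gripper=0.5):
    # With probability p_in_gripper, start the episode with the object
    # already held by the gripper (the trick described in the HER paper).
    if rng.random() < p_in_gripper:
        return {"object_pos": list(gripper_pos), "in_gripper": True}
    # Otherwise sample an object position on the table as usual.
    return {"object_pos": table_sampler(rng), "in_gripper": False}

rng = random.Random(0)
gripper = (1.34, 0.75, 0.53)  # made-up gripper position for the sketch
on_table = lambda r: [1.3 + 0.1 * r.random(), 0.7 + 0.1 * r.random(), 0.42]

starts = [sample_initial_state(rng, gripper, on_table) for _ in range(1000)]
in_gripper_frac = sum(s["in_gripper"] for s in starts) / len(starts)
```

Over many episodes, roughly half of the starts have the object in the gripper, and those starts place the object exactly at the gripper position.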
-
### Describe the bug
I am trying to use the function `gymnasium.utils.play.play` with MuJoCo-based environments. However, I get the following error:
```
gymnasium/utils/play.py:137: RuntimeWarning: inval…
-
I am trying to reproduce the HER results; here is my setup:
`mpirun -version mpirun (Open MPI) 1.10.2`
`mujoco_py.__version__ '1.50.1.56'`
`gym.__version__ '0.10.5'`
`mujoco 150`
`tensorflow both…
-
Hi, I tried to run the baselines for the OpenAI robotics Gym environments using this command:
`python -m baselines.run --alg=ppo2 --env=FetchPickAndPlace-v1`
but instead I get a ValueError:
```
Fi…
-
I want to find documentation on how to use the mujoco-py API, for example, the meaning of `model.data.com_subtree`, `model.data.qpos`, and `model.data.qvel`.
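For what it's worth, these fields follow MuJoCo's own conventions: `qpos` holds the generalized joint positions, `qvel` the generalized joint velocities, and `com_subtree` gives, per body, the center of mass of that body together with all its descendants in the kinematic tree. A toy NumPy sketch of the subtree-COM computation (the chain, masses, and helper below are made up for illustration; no MuJoCo install is needed to run it):

```python
import numpy as np

# Toy 3-body chain: body 0 -> body 1 -> body 2.
body_mass = np.array([1.0, 2.0, 1.0])       # mass of each body
body_com = np.array([[0.0, 0.0, 0.0],       # world-frame COM of each body
                     [0.5, 0.0, 0.0],
                     [1.0, 0.0, 0.0]])
children = {0: [1], 1: [2], 2: []}          # kinematic tree structure

def subtree_com(body):
    """Mass-weighted COM of `body` and all of its descendants,
    which is what MuJoCo stores per body in subtree_com."""
    bodies, stack = [body], list(children[body])
    while stack:
        b = stack.pop()
        bodies.append(b)
        stack.extend(children[b])
    m = body_mass[bodies]
    return (m[:, None] * body_com[bodies]).sum(axis=0) / m.sum()

com_root = subtree_com(0)  # COM of the entire chain
```

For the root body this is the whole-model center of mass; for a leaf body it is just that body's own COM.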
-
The following command runs fine:
```
time mpirun -np 8 python -m baselines.run --num_env=2 --alg=her --env=FetchReach-v1 --num_timesteps=100000
```
However, if I try changing the environment …
-
Hello, I have some questions about results on the _pick_and_place_ environment.
I used DDPG+HER to train the agent but got a bad result (success rate = 0). I read your paper, where you said you use *…
-
I found this, but it didn't help me:
https://github.com/openai/mujoco-py/issues/187
My device runs Ubuntu 16, and I have run FetchPickAndPlace-v1 from openai/baselines; then it can run with env(s…
-
**Describe the bug**
The gymnasium API allows users to seed the environment on each reset to yield reproducible results. Running the environment with the same seed should always give the exact same r…