-
Hi,
I've tried normalizing environments, revising reward functions, and upgrading/downgrading MuJoCo versions, but I am still not able to reproduce the performance reported in your paper on ant-dir. The ave…
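To make concrete what I mean by "normalizing environments": roughly a wrapper like the one below, which keeps a running mean/std of observations. This is only an illustrative sketch, not the code from your repo.
```python
import gym
import numpy as np

class NormalizeObservation(gym.ObservationWrapper):
    """Illustrative wrapper: normalize observations with running mean/std (Welford)."""
    def __init__(self, env, eps=1e-8):
        super().__init__(env)
        self.eps = eps
        self.count = 0
        shape = env.observation_space.shape
        self.mean = np.zeros(shape, dtype=np.float64)
        self.m2 = np.zeros(shape, dtype=np.float64)  # sum of squared deviations

    def observation(self, obs):
        # Update running statistics, then return the normalized observation.
        self.count += 1
        delta = obs - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (obs - self.mean)
        std = np.sqrt(self.m2 / self.count + self.eps)
        return (obs - self.mean) / std
```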
-
Hello, I am trying to set up the project on my own system. I am not using Docker, since my mujoco key is not valid for that. I have Ubuntu 18.04 and installed CUDA 10.0 and cuDNN 7.6.5. However I am g…
-
Hi ARISE team,
## Issue
I'm looking into your package in order to use it for reinforcement learning in robotics.
If I understood it correctly, RL is one of the main applications of robosuite.
Un…
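For context, the kind of RL interaction loop I have in mind is roughly the following. This is a minimal sketch assuming the robosuite 1.x `suite.make` API; the `Lift` task and `Panda` robot are just examples, not a prescription.
```python
import numpy as np
import robosuite as suite

# Create a manipulation environment with low-dimensional observations.
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=False,
    has_offscreen_renderer=False,
    use_camera_obs=False,
)

obs = env.reset()
low, high = env.action_spec  # per-dimension action bounds

for _ in range(100):
    action = np.random.uniform(low, high)  # stand-in for a policy's action
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```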
-
When running the example code:
```
bin/examine.py mae_envs/envs/base.py
```
I get the following error:
```
Loading env from the module: mae_envs/envs/base.py
Creating window glfw
Traceback (mo…
```
-
Currently, it is not straightforward to see how the save and load functionality for the environment works. This functionality is valuable for:
- replication of results, as just setting a random seed (``ran…
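For reference, at the mujoco-py level the raw physics state can already be snapshotted and restored; a minimal sketch of the behaviour I would like the environment to expose, assuming a mujoco-py backed simulation (the model path below is a placeholder):
```python
from mujoco_py import load_model_from_path, MjSim

model = load_model_from_path("path/to/model.xml")  # placeholder path
sim = MjSim(model)

sim.step()
snapshot = sim.get_state()   # qpos, qvel, time, act, udd_state

# ... run more steps, then restore the exact physics state
sim.set_state(snapshot)
sim.forward()                # recompute derived quantities after restoring
```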
-
**Describe the bug**
I have an OpenAI Gym library (https://github.com/p-morais/gym-cassie) that uses a custom ctypes wrapper for a C library that uses MuJoCo 1.5. The problem occurs when I import my libr…
-
Hi, thank you for the baseline code; it helps me a lot. But I have a small problem running it. I first sample data with the trained expert policy and then provide it to GAIL, but in the en…
-
Hi,
I already had to model my environments in MuJoCo because the baseline algorithms I use rely on it. As a proof of concept, I would like to use your pretrained Humanoid and let it run through my environme…
-
The absolute joint positions currently used in `graph_dataset.py` are calculated with forward kinematics for the Franka Panda robot. However, they don't necessarily match the absolute positio…
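A minimal sketch of how the absolute positions can instead be read directly from the simulator for comparison, assuming a mujoco-py model (the model path is a placeholder, not the actual file used by `graph_dataset.py`):
```python
from mujoco_py import load_model_from_path, MjSim

model = load_model_from_path("franka_panda.xml")   # placeholder model path
sim = MjSim(model)
sim.forward()                                       # compute kinematics for the current qpos

# World-frame position of each body as MuJoCo computes it.
for name in model.body_names:
    body_id = model.body_name2id(name)
    print(name, sim.data.body_xpos[body_id])
```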
-
Hello,
after installing MuJoCo I encountered an error while trying to run this basic gym script in Spyder.
I tried to load the Hopper-v2 environment, which requires MuJoCo:
> import gym
> env = gym.make(…
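
For completeness, the script is essentially the standard minimal loop; a sketch assuming the pre-0.26 gym API with mujoco-py:
```python
import gym

env = gym.make("Hopper-v2")          # requires mujoco-py and a MuJoCo install
obs = env.reset()
for _ in range(1000):
    env.render()
    action = env.action_space.sample()            # random action for testing
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```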