Hi ARISE team,

Issue

I'm looking into your package in order to use it for reinforcement learning in robotics. If I understood correctly, RL is one of the main applications of robosuite. Unfortunately, after some first tests it seems that running MuJoCo through robosuite slows down performance significantly.
On my local machine I get around 250 fps when training PPO on gym's FetchReach-v1 (also MuJoCo-based), but only around 50 fps on the Lift environment with a single Sawyer robot. It gets even worse when running multiple envs in parallel: with n_envs=8 I get around 400 fps on FetchReach-v1 but only around 60 fps with robosuite.
Do you have any idea what causes these massive performance drops? Is there any way to get close to the base MuJoCo performance (a factor of 2 slower would still be acceptable)?
To reproduce
I'm using Stable Baselines3 (+ zoo) to compare the two environments. SB3 is, as the name suggests, a very stable implementation of the most common RL algorithms and is therefore a fair tool for comparing performance.
To install SB3 and the zoo, please refer to https://github.com/DLR-RM/rl-baselines3-zoo
Running FetchReach-v1:
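For example, with the standard zoo training script (example invocation; exact flags depend on the zoo version):

# example invocation of the zoo training script
python train.py --algo ppo --env FetchReach-v1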
To use a robosuite env in the same way, I created a make function and registered the environment with the SB3 zoo.
In robosuite/environments/gym_envs/make_gym.py:
from typing import List, Union

from gym import Env
from gym.envs.registration import spec

from robosuite.environments.base import make
from robosuite.wrappers.gym_wrapper import GymWrapper


def make_gym(
        env: str,
        robots: Union[str, List[str]],
        id: str,
        **kwargs) -> Env:
    # Build the robosuite environment and expose it through the Gym API.
    gym_env = GymWrapper(env=make(env, robots=robots, **kwargs))
    # Attach the spec of the registered id so the zoo can read max_episode_steps etc.
    gym_env.spec = spec(id)
    return gym_env
In rl-baselines3-zoo/utils/import_envs.py I added:
try:
    from gym.envs.registration import register
    import robosuite.environments.gym_envs

    kwargs = {
        "env": "Lift",
        "robots": "Sawyer",               # use a Sawyer robot
        "id": "LiftSwayer-v1",
        "use_camera_obs": False,          # do not use pixel observations
        "has_offscreen_renderer": False,  # not needed since we are not using pixel obs
        "has_renderer": False,            # no on-screen rendering during training
        "reward_shaping": True,           # use dense rewards
        "control_freq": 20,               # control frequency (Hz), fast enough for smooth simulation
    }
    register(
        id="LiftSwayer-v1",
        entry_point="robosuite.environments.gym_envs.make_gym:make_gym",
        kwargs=kwargs,
        max_episode_steps=100,
    )
except Exception as e:
    print(e)
    robosuite = None
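For completeness, the wrapped env can also be stepped directly (without SB3) to measure raw simulation throughput; a rough, untested sketch (the GymWrapper step signature may differ between gym/robosuite versions):

import time

import robosuite as suite
from robosuite.wrappers.gym_wrapper import GymWrapper

# rough throughput check (untested sketch)
# same Lift + Sawyer setup as above, stepped directly without any RL on top
env = GymWrapper(suite.make(
    "Lift",
    robots="Sawyer",
    use_camera_obs=False,
    has_offscreen_renderer=False,
    has_renderer=False,
    reward_shaping=True,
    control_freq=20,
))

obs = env.reset()
n_steps = 1000
start = time.time()
for _ in range(n_steps):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
print(f"raw env throughput: {n_steps / (time.time() - start):.1f} steps/s")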
In rl-baselines3-zoo/hyperparams/ppo.yml I added hyperparameters for the new env id.
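An entry along these lines (example values; the exact hyperparameters don't matter much for the timing comparison):

# example hyperparameters for the custom robosuite env
LiftSwayer-v1:
  n_envs: 8
  n_timesteps: !!float 1e6
  policy: 'MlpPolicy'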
Finally, we can run the env using the zoo's training script.
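For example (example invocation, analogous to the FetchReach-v1 run above):

# example invocation of the zoo training script
python train.py --algo ppo --env LiftSwayer-v1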