ARISE-Initiative / robosuite-benchmark

Benchmarking Repository for robosuite + SAC

Slow performance in comparison to robogym #17


JakobThumm commented 2 years ago

Hi ARISE team,

Issue

I'm looking into your package for reinforcement learning in robotics. If I understood correctly, RL is one of the main applications of robosuite. Unfortunately, after some initial tests it looks like the robosuite layer on top of MuJoCo slows down performance significantly.

On my local machine I get around 250 fps when running robogym's FetchReach-v1 (also MuJoCo based) with PPO, but only around 50 fps when running the Lift environment with a single Sawyer robot. It gets even worse when running multiple environments in parallel: for n_envs=8 I get around 400 fps with FetchReach-v1 but only around 60 fps with robosuite.

Do you have any idea what causes these massive performance drops? Is there a way to get close to the base MuJoCo performance (a factor of 2 would still be acceptable)?
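To separate simulation cost from RL-stack overhead, raw env.step calls can be timed directly. This is a minimal sketch, assuming robosuite and gym's robotics environments (mujoco-py) are installed; random actions stand in for a policy:

import time

import gym
import numpy as np
import robosuite as suite


def steps_per_second(env, sample_action, n_steps=1000):
    """Time raw env.step calls with random actions (resets included)."""
    env.reset()
    start = time.time()
    for _ in range(n_steps):
        _, _, done, _ = env.step(sample_action())
        if done:
            env.reset()
    return n_steps / (time.time() - start)


# robosuite Lift, same flags as in the registration below
lift = suite.make("Lift", robots="Sawyer", use_camera_obs=False,
                  has_offscreen_renderer=False, has_renderer=False,
                  reward_shaping=True, control_freq=20)
low, high = lift.action_spec
print("Lift:", steps_per_second(lift, lambda: np.random.uniform(low, high)))

# gym FetchReach for reference
fetch = gym.make("FetchReach-v1")
print("FetchReach:", steps_per_second(fetch, fetch.action_space.sample))

Note that both environments advance multiple MuJoCo substeps per env.step, so this compares per-control-step cost rather than raw simulator throughput.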

To reproduce

I'm using Stable-Baselines3 (+ zoo) to compare the two environments. SB3 is, as the name suggests, a very stable implementation of the most common RL algorithms and therefore a fair tool for the comparison. To install SB3 and the zoo, see https://github.com/DLR-RM/rl-baselines3-zoo

Running FetchReach-v1:

~/rl-baselines3-zoo$ python train.py --algo ppo --env FetchReach-v1

To use a robosuite env in the same way, I created a make function and added a registration to the SB3 zoo. In robosuite/environments/gym_envs/make_gym.py:

from typing import List, Union

from gym import Env
from gym.envs.registration import spec

from robosuite.environments.base import make
from robosuite.wrappers.gym_wrapper import GymWrapper


def make_gym(
    env: str,
    robots: Union[str, List[str]],
    id: str,
    **kwargs,
) -> Env:
    """Create a robosuite environment and wrap it as a gym Env.

    The spec is looked up in the gym registry, so the id must be
    registered before this entry point is called.
    """
    gym_env = GymWrapper(env=make(env, robots=robots, **kwargs))
    gym_env.spec = spec(id)
    return gym_env
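The wrapper itself can be smoke-tested without the registry. A small sketch; the spec assignment is skipped here since no id is registered yet:

import robosuite as suite
from robosuite.wrappers.gym_wrapper import GymWrapper

# wrap a Lift env directly; no spec lookup, so no registration needed
env = GymWrapper(suite.make("Lift", robots="Sawyer", use_camera_obs=False,
                            has_offscreen_renderer=False, has_renderer=False))
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(obs.shape, reward, done)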

In rl-baselines3-zoo/utils/import_envs.py I added:

try:
    from gym.envs.registration import register

    # import so a missing robosuite fails here, not at gym.make time
    import robosuite.environments.gym_envs  # noqa: F401

    kwargs = {
        "env": "Lift",
        "robots": "Sawyer",               # use a Sawyer robot
        "id": "LiftSawyer-v1",
        "use_camera_obs": False,          # do not use pixel observations
        "has_offscreen_renderer": False,  # not needed without pixel obs
        "has_renderer": False,            # no on-screen rendering during training
        "reward_shaping": True,           # use dense rewards
        "control_freq": 20,               # actions applied at 20 Hz
    }
    register(
        id="LiftSawyer-v1",
        entry_point="robosuite.environments.gym_envs.make_gym:make_gym",
        kwargs=kwargs,
        max_episode_steps=100,
    )
except Exception as e:
    print(e)
    robosuite = None
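Once the register call above has run in the current process, the entry point can be sanity-checked outside the zoo:

import gym

# assumes the register("LiftSawyer-v1", ...) block above has already executed
env = gym.make("LiftSawyer-v1")
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(env.spec.id, env.spec.max_episode_steps)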

In rl-baselines3-zoo/hyperparams/ppo.yml I added:

LiftSawyer-v1:
    env_wrapper: sb3_contrib.common.wrappers.TimeFeatureWrapper
    normalize: true
    n_envs: 1
    n_timesteps: !!float 1e6
    policy: 'MlpPolicy'
    batch_size: 64
    n_steps: 512
    gamma: 0.99
    gae_lambda: 0.9
    n_epochs: 20
    ent_coef: 0.0
    sde_sample_freq: 4
    max_grad_norm: 0.5
    vf_coef: 0.5
    learning_rate: !!float 3e-5
    use_sde: True
    clip_range: lin_0.4
    policy_kwargs: "dict(log_std_init=-2.7,
                         ortho_init=False,
                         activation_fn=nn.ReLU,
                         net_arch=[dict(pi=[256, 256], vf=[256, 256])]
                         )"

Finally, we can train on the env with:

~/rl-baselines3-zoo$ python train.py --algo ppo --env LiftSawyer-v1

SATE001 commented 2 years ago

Hi.

> I'm looking into your package in order to use it for reinforcement learning in robotics.

What is the best option at the moment? robogym?