RLBench: Robot Learning Benchmark

(Image: grid of RLBench tasks)

RLBench is an ambitious large-scale benchmark and learning environment designed to facilitate research in a number of vision-guided manipulation areas, including reinforcement learning, imitation learning, multi-task learning, geometric computer vision and, in particular, few-shot learning. See the website (https://sites.google.com/corp/view/rlbench) and paper for more details.

Contents:

Announcements
Install
Running Headless
Getting Started
RLBench Gym
Swapping Arms
Tasks
Task Building
Gotchas!
Contributing
Acknowledgements
Citation

Announcements

11 May 2022

18 February 2022

1 July 2021

8 September 2020

1 April 2020

28 January 2020

17 December 2019

Install

RLBench is built around CoppeliaSim v4.1.0 and PyRep.

First, install CoppeliaSim:

# set env variables
export COPPELIASIM_ROOT=${HOME}/CoppeliaSim
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$COPPELIASIM_ROOT
export QT_QPA_PLATFORM_PLUGIN_PATH=$COPPELIASIM_ROOT

wget https://downloads.coppeliarobotics.com/V4_1_0/CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz
mkdir -p $COPPELIASIM_ROOT && tar -xf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz -C $COPPELIASIM_ROOT --strip-components 1
rm -rf CoppeliaSim_Edu_V4_1_0_Ubuntu20_04.tar.xz

To install the RLBench python package:

pip install git+https://github.com/stepjam/RLBench.git

And that's it!
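As a quick sanity check (a minimal sketch, assuming the CoppeliaSim environment variables above are set and a display, real or virtual, is available), the following should open and close the simulator without errors:

import rlbench
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment

# Launch the simulator once and close it again; if this runs cleanly,
# CoppeliaSim, PyRep and RLBench are wired up correctly.
env = Environment(MoveArmThenGripper(
    arm_action_mode=JointVelocity(), gripper_action_mode=Discrete()))
env.launch()
env.shutdown()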

Running Headless

If you are running on a machine without a display (e.g. cloud VMs, compute clusters), you can follow the guide below to run RLBench headless with rendering.

Initial setup

First, write the X configuration. This only needs to be done once.

sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
echo -e 'Section "ServerFlags"\n\tOption "MaxClients" "2048"\nEndSection\n' \
    | sudo tee /etc/X11/xorg.conf.d/99-maxclients.conf

Leave out --use-display-device=None if the GPU is headless, i.e. if it has no display outputs.

Running X

Then, whenever you want to run RLBench, spin up X.

# nohup and disown are important so the X server keeps running in the background
sudo nohup X :99 & disown

Test if your display works using glxgears.

DISPLAY=:99 glxgears

If you have multiple GPUs, you can select your GPU by doing the following.

DISPLAY=:99.<gpu_id> glxgears
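Once glxgears works, RLBench scripts can use the virtual display either by prefixing the command with DISPLAY=:99 (as above) or by setting it from inside Python before the simulator launches. A rough sketch of the latter:

import os

# Must be set before the simulator starts; ':99' matches the X server above
# (use ':99.<gpu_id>' to pick a specific GPU on multi-GPU machines).
os.environ['DISPLAY'] = ':99'

from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment

env = Environment(
    MoveArmThenGripper(arm_action_mode=JointVelocity(),
                       gripper_action_mode=Discrete()),
    headless=True)  # run CoppeliaSim without its GUI window
env.launch()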

Running X without sudo

To start X as a non-sudo user, edit the file '/etc/X11/Xwrapper.config' and replace the line:

allowed_users=console

with lines:

allowed_users=anybody
needs_root_rights=yes

If the file does not exist already, you can create it.

Getting Started

The benchmark places particular emphasis on few-shot learning and meta-learning due to the breadth of tasks available, though it can be used in numerous ways. Before using RLBench, check out the Gotchas section.

Few-Shot Learning and Meta Learning

We have created splits of tasks called 'Task Sets', which consist of a collection of X training tasks and 5 test tasks. Here X can be 10, 25, 50, or 95. For example, to work on the task set with 10 training tasks, we import FS10_V1:

import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import FS10_V1

action_mode = MoveArmThenGripper(
  arm_action_mode=JointVelocity(),
  gripper_action_mode=Discrete()
)
env = Environment(action_mode)
env.launch()

train_tasks = FS10_V1['train']
test_tasks = FS10_V1['test']
task_to_train = np.random.choice(train_tasks, 1)[0]
task = env.get_task(task_to_train)
task.sample_variation()  # random variation
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))

A full example can be seen in examples/few_shot_rl.py.
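The held-out tasks can then be used for few-shot evaluation. A sketch of one possible protocol (adapt_policy and policy.act are placeholders for your own method; get_demos with live_demos=True runs the scripted demonstrations in simulation):

for test_task_class in test_tasks:
    task = env.get_task(test_task_class)
    support_demos = task.get_demos(5, live_demos=True)  # few-shot support set
    policy = adapt_policy(support_demos)  # placeholder: your adaptation step
    descriptions, obs = task.reset()
    for _ in range(200):  # arbitrary episode horizon
        obs, reward, terminate = task.step(policy.act(obs))  # placeholder act()
        if terminate:
            break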

Reinforcement Learning

import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import ReachTarget

action_mode = MoveArmThenGripper(
  arm_action_mode=JointVelocity(),
  gripper_action_mode=Discrete()
)
env = Environment(action_mode)
env.launch()

task = env.get_task(ReachTarget)
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))

A full example can be seen in examples/single_task_rl.py. If you would like to bootstrap from demonstrations, then take a look at examples/single_task_rl_with_demos.py.
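Building on the snippet above, a minimal random-agent rollout loop might look like the following (a sketch; the episode length is arbitrary):

episodes = 5
episode_length = 40
for _ in range(episodes):
    descriptions, obs = task.reset()
    for _ in range(episode_length):
        action = np.random.normal(size=env.action_shape)
        obs, reward, terminate = task.step(action)
        if terminate:
            break
env.shutdown()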

Sim-to-Real

import numpy as np
from rlbench import Environment
from rlbench import RandomizeEvery
from rlbench import VisualRandomizationConfig
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.tasks import OpenDoor

# We will borrow some textures from the tests dir
rand_config = VisualRandomizationConfig(
    image_directory='../tests/unit/assets/textures')

action_mode = MoveArmThenGripper(
  arm_action_mode=JointVelocity(),
  gripper_action_mode=Discrete()
)
env = Environment(
    action_mode, randomize_every=RandomizeEvery.EPISODE, 
    frequency=1, visual_randomization_config=rand_config)

env.launch()

task = env.get_task(OpenDoor)
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))

A full example can be seen in examples/single_task_rl_domain_randomization.py.

Imitation Learning

import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import ReachTarget

# To use 'saved' demos, set the path below
DATASET = 'PATH/TO/YOUR/DATASET'

action_mode = MoveArmThenGripper(
  arm_action_mode=JointVelocity(),
  gripper_action_mode=Discrete()
)
env = Environment(action_mode, DATASET)
env.launch()

task = env.get_task(ReachTarget)

demos = task.get_demos(2)  # -> List[List[Observation]]
demos = np.array(demos).flatten()  # flatten to a 1D array of Observations

# Sample a batch of individual observations (batch size of 32 is arbitrary)
batch = np.random.choice(demos, size=32, replace=False)
batch_images = [obs.left_shoulder_rgb for obs in batch]
predicted_actions = predict_action(batch_images)  # predict_action: your policy network
ground_truth_actions = [obs.joint_velocities for obs in batch]
loss = behaviour_cloning_loss(ground_truth_actions, predicted_actions)  # your BC loss

A full example can be seen in examples/imitation_learning.py.
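If you have not generated a dataset on disk, the demonstrations can instead be collected live in simulation (slower, but requires no stored data); a minimal variant of the call above:

# Run the scripted demonstrations in simulation rather than loading from DATASET.
live_demos = task.get_demos(2, live_demos=True)  # -> List[List[Observation]]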

Multi-Task Learning

We have created splits of tasks called 'Task Sets', which consist of a collection of X training tasks. Here X can be 15, 30, 55, or 100. For example, to work on the task set with 15 training tasks, we import MT15_V1:

import numpy as np
from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.tasks import MT15_V1

action_mode = MoveArmThenGripper(
  arm_action_mode=JointVelocity(),
  gripper_action_mode=Discrete()
)
env = Environment(action_mode)
env.launch()

train_tasks = MT15_V1['train']
task_to_train = np.random.choice(train_tasks, 1)[0]
task = env.get_task(task_to_train)
task.sample_variation()  # random variation
descriptions, obs = task.reset()
obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))

A full example can be seen in examples/multi_task_learning.py.
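As a sketch, one simple way to use the set is to cycle through the training tasks in a random order, sampling a variation of each:

# One pass over the task set per 'epoch', visiting the tasks in random order.
for epoch in range(10):
    np.random.shuffle(train_tasks)
    for task_class in train_tasks:
        task = env.get_task(task_class)
        task.sample_variation()
        descriptions, obs = task.reset()
        obs, reward, terminate = task.step(np.random.normal(size=env.action_shape))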

RLBench Gym

RLBench is Gym compatible! Ensure you have gym installed (pip3 install gym).

Simply select your task of interest from rlbench/tasks/, and then load the task by using the task name (e.g. 'reach_target') followed by the observation mode: 'state' or 'vision'.

import gym
import rlbench

env = gym.make('reach_target-state-v0')
# Alternatively, for vision:
# env = gym.make('reach_target-vision-v0')

training_steps = 120
episode_length = 40
for i in range(training_steps):
    if i % episode_length == 0:
        print('Reset Episode')
        obs = env.reset()
    obs, reward, terminate, _ = env.step(env.action_space.sample())
    env.render()  # Note: rendering increases step time.

print('Done')
env.close()

A full example can be seen in examples/rlbench_gym.py.

Swapping Arms

The default Franka Panda Arm can be swapped out for another. This can be useful for those who have custom tasks or want to perform sim-to-real experiments on the tasks. However, if you swap out the arm, then we can't guarantee that the task will be solvable. For example, the Mico arm has a very small workspace in comparison to the Franka.

For benchmarking, the arm should remain as the Franka Panda.

Currently supported arms include the default Franka Panda, the Mico, and the Sawyer.

You can then swap out the arm using the robot_setup argument:

env = Environment(action_mode=action_mode, robot_setup='sawyer')

A full example (using the Sawyer) can be seen in examples/swap_arm.py.

Don't see the arm that you want to use? Your first step is to make sure it is in PyRep, and if not, you can follow the instructions for importing a new arm on the PyRep GitHub page. After that, feel free to open an issue and we can bring it into RLBench for you.

Tasks

A full list of all tasks can be found in rlbench/tasks/.

GIFs of each task are available on the project website.

Task Building

The task building tool is the interface for users who wish to create new tasks to be added to the RLBench task repository. Each task has two associated files: a CoppeliaSim model file (.ttm), which holds all of the scene information and demo waypoints, and a python (.py) file, which is responsible for wiring the scene objects to the RLBench backend, applying variations, defining success criteria, and adding other, more complex task behaviours.
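As a rough sketch of the shape of the python file (the class and object names here are hypothetical and must match the handles in the accompanying .ttm scene; the tutorials cover the real workflow):

from typing import List

from pyrep.objects.proximity_sensor import ProximitySensor
from pyrep.objects.shape import Shape
from rlbench.backend.conditions import DetectedCondition
from rlbench.backend.task import Task


class SlideBlock(Task):
    # Hypothetical task: 'block' and 'success' must match object names
    # in the .ttm scene file.

    def init_task(self) -> None:
        # Wire scene objects to the backend and define what success means.
        block = Shape('block')
        success_sensor = ProximitySensor('success')
        self.register_success_conditions(
            [DetectedCondition(block, success_sensor)])

    def init_episode(self, index: int) -> List[str]:
        # 'index' selects the variation; return natural-language descriptions.
        return ['slide the block onto the target']

    def variation_count(self) -> int:
        return 1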

A video tutorial series is available via the project website, and in-depth text tutorials can be found in the repository.

Gotchas!

Contributing

New tasks using our task building tool, in addition to bug fixes, are very welcome! When building your task, please ensure that you run the task validator in the task building tool.

A full contribution guide is coming soon!

Acknowledgements

Models were supplied from turbosquid.com, cgtrader.com, free3d.com, thingiverse.com, and cadnav.com.

Citation

@article{james2019rlbench,
  title={RLBench: The Robot Learning Benchmark \& Learning Environment},
  author={James, Stephen and Ma, Zicong and Rovick Arrojo, David and Davison, Andrew J.},
  journal={IEEE Robotics and Automation Letters},
  year={2020}
}