hsp-iit / pybullet-robot-envs

GNU Lesser General Public License v2.1

pybullet-robot-envs

⚠️ Status: Pre-alpha ⚠️

Notice: at the moment we are not actively maintaining this repository, so we may not be able to reply to issues in a timely manner.


pybullet-robot-envs is a Python package that collects robotic environments based on the PyBullet simulator, suitable to develop and test Reinforcement Learning algorithms on simulated grasping and manipulation applications.

The pybullet-robot-envs environments inherit from the OpenAI Gym interface.

The package provides environments for the iCub Humanoid robot and the Franka Emika Panda manipulator.

Overview


Motivation

This repository is part of a project which aims to develop Reinforcement Learning approaches for the accomplishment of grasp and manipulation tasks with the iCub Humanoid robot and the Franka Emika Panda robot.

A Reinforcement Learning based approach generally includes two basic modules: the environment, i.e. the world, and the agent, i.e. the algorithm. The agent sends actions to the environment, which replies with observations and rewards. This repository provides environments with an OpenAI Gym interface that interact with the PyBullet module to simulate the robotic tasks and the learned policies.

Simulators are a useful resource for implementing and testing Reinforcement Learning algorithms on a robotic system before porting them to the real-world platform, in order to avoid any risk for the robot and its surroundings. PyBullet is a Python module for physics simulation in robotics, visual effects and reinforcement learning, based on the Bullet Physics SDK. See the PyBullet Quickstart Guide for specific information.

The pybullet-robot-envs environments adopt the OpenAI Gym environment interface, which has become a de facto standard in the RL world. Through this common interface, RL agents can interact with different environments without any additional implementation effort. A Gym environment exposes a few basic methods, such as reset(), step(action), render() and close().
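To illustrate how an agent drives any Gym-style environment through these methods, here is a minimal sketch with a toy stand-in environment (the `ToyGraspEnv` class and its dynamics are invented for illustration; they are not part of this repository):

```python
class ToyGraspEnv:
    """Toy stand-in for a gym.Env: same method names, trivial dynamics."""

    def reset(self):
        # Return the initial observation (here: distance to a target).
        self.distance = 1.0
        return self.distance

    def step(self, action):
        # Apply the action and return (observation, reward, done, info),
        # the tuple shape the classic Gym interface uses.
        self.distance = max(0.0, self.distance - action)
        reward = -self.distance
        done = self.distance == 0.0
        return self.distance, reward, done, {}

    def render(self):
        print(f"distance to target: {self.distance:.2f}")

    def close(self):
        pass

# The agent-environment loop: the agent sends actions, the
# environment replies with observations and rewards.
env = ToyGraspEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(0.25)  # a fixed "agent" action
env.close()
```

Any real pybullet-robot-envs environment is driven by exactly the same loop; only the observation, action and reward contents differ.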

Prerequisites

pybullet-robot-envs requires Python 3 (>= 3.5).
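A quick way to check that the interpreter on your PATH meets this requirement (assuming the `python3` command is available):

```shell
python3 --version
python3 -c 'import sys; assert sys.version_info >= (3, 5), "Python >= 3.5 required"'
```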

Installation

  1. Before installing the required dependencies, you may want to create a virtual environment and activate it:

    $ virtualenv ve_pybullet
    $ source ve_pybullet/bin/activate
  2. Install Git LFS by following the instructions at git-lfs.github.com.

  3. Clone the repository:

    $ git clone https://github.com/robotology-playground/pybullet-robot-envs.git
    $ cd pybullet-robot-envs
  4. Install the dependencies necessary to run and test the environments with PyBullet:

    $ pip3 install -r requirements.txt

    Note: Installing the requirements will also install Stable Baselines.

  5. Install the package:

    $ pip3 install -e .

    After this step, the environments can be instantiated in any script:

      import gym
      env = gym.make('pybullet_robot_envs:iCubReach-v0')

    where iCubReach-v0 is the environment id. You can check the available environment ids in the file pybullet_robot_envs/__init__.py. If you create a new environment and want to register it as a Gym environment, you can modify this file by adding a new register(id=<id_env>, entry_point=<path_to_import_env>). See this guide for detailed instructions.

Testing

You can test your installation by running the following basic robot simulations on PyBullet:

$ python pybullet_robot_envs/examples/helloworlds/helloworld_icub.py
$ python pybullet_robot_envs/examples/helloworlds/helloworld_panda.py

Environments

The README.md file provides detailed information about the robotic environments of the repository. In general, there are three types of environments:

iCub

Run the following script to open an interactive GUI in PyBullet and test the iCub environment:

RL Examples

Run the following scripts to train and test the implemented environments with the standard DDPG algorithm from Stable Baselines.

You can find more examples in the robot-agents repository (https://github.com/eleramp/robot-agents), a Python-based framework composed of two main cores:

iCubPush-v0
PandaReach-v0