PyBullet Gymperium is an open-source implementation of the OpenAI Gym MuJoCo environments for use with the OpenAI Gym Reinforcement Learning Research Platform in support of open research.
OpenAI Gym is currently one of the most widely used toolkits for developing and comparing reinforcement learning algorithms. Unfortunately, for several challenging continuous control environments it requires the user to install MuJoCo, a commercial physics engine which requires a license to run for longer than 30 days. Such a commercial barrier hinders open research, especially since other suitable physics engines exist. This repository provides alternative implementations of the original MuJoCo environments which can be used for free. The environments have been reimplemented using BulletPhysics' Python wrapper pybullet, such that they seamlessly integrate into the OpenAI Gym framework. To show the usability of the new environments, several RL agents from the Tensorforce Reinforcement Learning Library are configured to be trainable out of the box. To simplify research with the implemented environments, each environment ships with a pretrained agent which serves as a unit test for the implementation and can also serve as a baseline for other purposes.
If you find our work useful in your research, please consider citing as follows:
@misc{benelot2018, author = {Benjamin Ellenberger}, title = {PyBullet Gymperium}, howpublished = {\url{https://github.com/benelot/pybullet-gym}}, year = {2018--2019}}
Environment Name | Implemented | Similar to Reference Implementation | Pretrained agent available |
---|---|---|---|
RoboSchool Envs | |||
InvertedPendulumPyBulletEnv-v0 | Yes | Yes | No |
InvertedDoublePendulumPyBulletEnv-v0 | Yes | Yes | No |
InvertedPendulumSwingupPyBulletEnv-v0 | Yes | Yes | No |
ReacherPyBulletEnv-v0 | Yes | Yes | No |
Walker2DPyBulletEnv-v0 | Yes | No | No |
HalfCheetahPyBulletEnv-v0 | Yes | No | No |
AntPyBulletEnv-v0 | Yes | Yes | No |
HopperPyBulletEnv-v0 | Yes | Yes | No |
HumanoidPyBulletEnv-v0 | Yes | Yes | No |
HumanoidFlagrunPyBulletEnv-v0 | Yes | Yes | No |
HumanoidFlagrunHarderPyBulletEnv-v0 | Yes | Yes | No |
AtlasPyBulletEnv-v0 | WIP | No | No |
PusherPyBulletEnv-v0 | WIP | No | No |
ThrowerPyBulletEnv-v0 | WIP | No | No |
StrikerPyBulletEnv-v0 | WIP | No | No |
MuJoCo Envs | |||
InvertedPendulumMuJoCoEnv-v0 | Yes | Yes | Yes |
InvertedDoublePendulumMuJoCoEnv-v0 | Yes | Yes | Yes |
ReacherMuJoCoEnv-v0 | No | No | No |
Walker2DMuJoCoEnv-v0 | Yes | No | No |
HalfCheetahMuJoCoEnv-v0 | Yes | No | No |
AntMuJoCoEnv-v0 | Yes | No | No |
HopperMuJoCoEnv-v0 | Yes | No | No |
HumanoidMuJoCoEnv-v0 | Yes | No | No |
PusherMuJoCoEnv-v0 | No | No | No |
ThrowerMuJoCoEnv-v0 | No | No | No |
StrikerMuJoCoEnv-v0 | No | No | No |
[See What's New section below](#whats-new)
(taken from the OpenAI Gym README)
There are two basic concepts in reinforcement learning: the environment (namely, the outside world) and the agent (namely, the algorithm you are writing). The agent sends actions to the environment, and the environment replies with observations and rewards (that is, a score).
The core gym interface is [Env](https://github.com/openai/gym/blob/master/gym/core.py), which is the unified environment interface. There is no interface for agents; that part is left to you. The following are the `Env` methods you should know:
- `reset(self)`: Reset the environment's state. Returns `observation`.
- `step(self, action)`: Step the environment by one timestep. Returns `observation`, `reward`, `done`, `info`.
- `render(self, mode='human', close=False)`: Render one frame of the environment. The default mode will do something human friendly, such as pop up a window. Passing the `close` flag signals the renderer to close any such windows.

In addition to the basic concepts of reinforcement learning, this framework extends the notion of an environment into two subconcepts: the robot (the agent's directly controllable body) and the scene (all things the agent interacts with). Implementing RL environments in this way allows us to swap parts of the environment to generate new robot-scene combinations.
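As a concrete illustration, here is a minimal sketch of the agent-environment loop using the classic Gym API described above, with a random policy standing in for the agent; the environment name is taken from the table above, and the loop itself is generic:

```python
import gym
import pybulletgym  # registers the PyBullet environments with Gym

env = gym.make('InvertedPendulumPyBulletEnv-v0')

obs = env.reset()                       # returns the initial observation
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # random agent, for illustration only
    obs, reward, done, info = env.step(action)
    total_reward += reward
print('episode return:', total_reward)
env.close()
```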
First, you can perform a minimal installation of OpenAI Gym with
git clone https://github.com/openai/gym.git
cd gym
pip install -e .
Then, the easiest way to install pybullet-gym is to clone the repository and install it locally
git clone https://github.com/benelot/pybullet-gym.git
cd pybullet-gym
pip install -e .
Important Note: Do not use `python setup.py install`, as this will not copy the assets (you might get missing SDF file errors).
Finally, to test the installation, open Python and run
import gym  # OpenAI Gym
import pybulletgym  # register PyBullet environments with OpenAI Gym
env = gym.make('HumanoidPyBulletEnv-v0')
# env.render() # call this before env.reset, if you want a window showing the environment
env.reset() # should return a state vector if everything worked
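If you want to go one step beyond a single `reset()`, the following sketch inspects the observation and action spaces and takes a few random steps; the exact space dimensions depend on the chosen environment:

```python
import gym
import pybulletgym  # register PyBullet environments with OpenAI Gym

env = gym.make('HumanoidPyBulletEnv-v0')
print(env.observation_space)  # a Box space; its shape depends on the robot
print(env.action_space)       # a Box space over the robot's actuators

obs = env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()
```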
We currently support Linux, Windows and OS X running Python 2.7 or 3.5.
To run `pip install -e '.[all]'`, you'll need a semi-recent pip. Please make sure your pip is at least at version 1.5.0. You can upgrade using the following: `pip install --ignore-installed pip`. Alternatively, you can open [setup.py](https://github.com/openai/gym/blob/master/setup.py) and install the dependencies by hand.
As a form of unit test for the environments, we provide pretrained agents for each environment. The agents for the Roboschool envs and the MuJoCo envs were trained on the original implementations of Roboschool and MuJoCo, respectively.
The code for each environment group is housed in its own subdirectory [gym/envs](https://github.com/openai/gym/blob/master/gym/envs). The specification of each task is in [`gym/envs/__init__.py`](https://github.com/openai/gym/blob/master/gym/envs/__init__.py). It's worth browsing through both.
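For orientation, environments end up in Gym's registry through `gym.envs.registration.register`, which is what makes `gym.make(...)` work after importing the package. The following is only an illustrative sketch of such a registration; the `entry_point` module path, `max_episode_steps`, and `reward_threshold` shown here are assumptions, not the actual values used by this repository:

```python
from gym.envs.registration import register

register(
    id='InvertedPendulumPyBulletEnv-v0',                        # name used with gym.make()
    entry_point='pybulletgym.envs:InvertedPendulumBulletEnv',   # assumed module path
    max_episode_steps=1000,                                     # illustrative episode limit
    reward_threshold=950.0,                                     # illustrative "solved" score
)
```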