
Gym Low Cost Robot

This repository provides comprehensive Gymnasium environments for simulated and real-world applications of the Low Cost Robot. These environments are designed to facilitate robot learning research and development while remaining accessible and cost-effective.

https://github.com/perezjln/gym-lowcostrobot/assets/45557362/cb724171-3c0e-467f-8957-97e79eb9c852

Features

Goals

The primary objective of these environments is to promote end-to-end, open-source, and affordable robot learning platforms, lowering the cost and accessibility barriers to entry.

By leveraging these open-source tools, we believe that more individuals, research institutions and schools can participate in and contribute to the growing field of robotic learning, ultimately driving forward the discipline as a whole.

Installation

To install the package (together with RL Zoo3, which is used in the training guide below), run:

pip install rl_zoo3
pip install git+https://github.com/perezjln/gym-lowcostrobot.git

Usage

Simulation Example: PickPlaceCube-v0

Here's a basic example of how to use the PickPlaceCube-v0 environment in simulation:

import gymnasium as gym
import gym_lowcostrobot # Import the low-cost robot environments

# Create the environment
env = gym.make("PickPlaceCube-v0", render_mode="human")

# Reset the environment
observation, info = env.reset()

for _ in range(1000):
    # Sample random action
    action = env.action_space.sample()

    # Step the environment
    observation, reward, terminated, truncated, info = env.step(action)

    # Reset the environment if it's done
    if terminated or truncated:
        observation, info = env.reset()

# Close the environment
env.close()
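
The environments also accept an observation_mode keyword argument (the training command further below passes observation_mode="both"); the exact set of accepted values is not listed here, so the following is only a minimal sketch for inspecting the resulting observation space:

import gymnasium as gym
import gym_lowcostrobot

# observation_mode mirrors the --env-kwargs observation_mode:'"both"' used in the training guide below
env = gym.make("PickPlaceCube-v0", observation_mode="both")
observation, info = env.reset()

# Inspect the observation space produced by this observation mode
print(env.observation_space)

env.close()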

Real-World Interface

To interface with the real-world robot, simply pass simulation=False when creating the environment:

env = gym.make("PickPlaceCube-v0", simulation=False)
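
The real-world environment exposes the same Gymnasium API, so the simulation loop above carries over unchanged; a minimal sketch (random actions are used here only for illustration and are not a sensible thing to send to physical hardware):

import gymnasium as gym
import gym_lowcostrobot

# Connect to the physical robot instead of the MuJoCo simulation
env = gym.make("PickPlaceCube-v0", simulation=False)
observation, info = env.reset()

for _ in range(100):
    # Replace with your controller or learned policy; random actions are unsafe on hardware
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()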

Environments

Several task environments are registered by the package, including PickPlaceCube-v0 and ReachCube-v0 used in the examples in this README.
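
Since the exact list can evolve, one way to see what your installed version registers is to query the Gymnasium registry; a minimal sketch:

import gymnasium as gym
import gym_lowcostrobot  # importing the package registers its environments

# Print every environment id whose entry point lives in gym_lowcostrobot
for env_id, spec in gym.envs.registry.items():
    if "gym_lowcostrobot" in str(spec.entry_point):
        print(env_id)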

Headless Mode

To run the environment on a headless machine, make sure to set the following environment variables:

export MUJOCO_GL=osmesa
export DISPLAY=:0
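
On a headless machine you can also render off-screen rather than using the human viewer; a minimal sketch, assuming the environment supports rgb_array rendering:

import gymnasium as gym
import gym_lowcostrobot

# Off-screen rendering: frames are returned as numpy arrays instead of opening a window
env = gym.make("PickPlaceCube-v0", render_mode="rgb_array")
observation, info = env.reset()
frame = env.render()  # expected to be an (H, W, 3) uint8 image
print(frame.shape)
env.close()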

Training Policies with Stable Baselines3 and RL Zoo3: A Step-by-Step Guide

To train a reinforcement learning policy using Stable Baselines3 and RL Zoo3, you need to define a configuration file and then launch the training process.

Step 1: Define a Configuration File

Create a YAML configuration file specifying the training parameters for your environment. Below is an example configuration for the ReachCube-v0 environment:

ReachCube-v0:
  n_timesteps: !!float 1e7
  policy: 'MultiInputPolicy'
  frame_stack: 3
  use_sde: True

Step 2: Launch the Training Process

After defining the configuration file, you can start training your policy with the following command:

python -u -m rl_zoo3.train --algo tqc --env ReachCube-v0 --gym-packages gym_lowcostrobot -conf examples/rl_zoo3_conf.yaml --env-kwargs observation_mode:'"both"' -f logs

For more detailed information on the available options and configurations, refer to the RL Zoo3 documentation.
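
Once training has finished, the saved policy can be reloaded and evaluated in the environment. A minimal sketch, assuming the TQC algorithm from sb3_contrib and the default RL Zoo3 log layout (the checkpoint path below is hypothetical), and re-applying the frame stacking from the configuration above:

import gymnasium as gym
import gym_lowcostrobot
from sb3_contrib import TQC
from stable_baselines3.common.vec_env import DummyVecEnv, VecFrameStack

# Hypothetical checkpoint path; adjust to wherever RL Zoo3 stored your run under logs/
model = TQC.load("logs/tqc/ReachCube-v0_1/best_model.zip")

# Rebuild the evaluation environment with the same wrappers used during training (frame_stack: 3)
env = DummyVecEnv([lambda: gym.make("ReachCube-v0", observation_mode="both")])
env = VecFrameStack(env, n_stack=3)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)  # DummyVecEnv auto-resets finished episodes
env.close()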

Contributing

We welcome contributions to the project! Please follow these general guidelines:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Commit your changes with clear messages.
  4. Push your branch to your fork.
  5. Create a pull request with a description of your changes.

Format your code with Ruff:

ruff format gym_lowcostrobot examples tests setup.py --line-length 127

and test your changes with pytest:

pytest

For significant changes, please open an issue first to discuss what you would like to change.

Currently, our to-do list covers three areas: training, real-world operation, and simulation.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.