jhu-lcsr / costar_plan

Integrating learning and task planning for robots with Keras, including simulation, real robot, and multiple dataset support.
https://sites.google.com/site/costardataset
Apache License 2.0

CoSTAR Plan

CoSTAR Plan is a project for deep learning with robots, divided into two main parts: the CoSTAR Task Planner (CTP) library and CoSTAR Hyper.

CoSTAR Task Planner (CTP)

Code for the paper Visual Robot Task Planning.

CoSTAR Hyper

Code for the paper The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints.

@article{hundt2019costar,
    title={The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints},
    author={Andrew Hundt and Varun Jain and Chia-Hung Lin and Chris Paxton and Gregory D. Hager},
    journal = {Intelligent Robots and Systems (IROS), 2019 IEEE International Conference on},
    year = 2019,
    url = {https://arxiv.org/abs/1810.11714}
}

Training Frankenstein's Creature To Stack: HyperTree Architecture Search

Code instructions are in the CoSTAR Hyper README.md.

Supported Datasets

CoSTAR Task Planner (CTP)

The CoSTAR Task Planner is part of the larger CoSTAR project. It integrates learning from demonstration and task planning capabilities into the CoSTAR framework in several ways.

Visual Task Planning

Specifically, CTP is a project for creating task and motion planning algorithms that use machine learning to solve challenging problems in a variety of domains. The code provides a testbed for complex task and motion planning search algorithms.

The goal is to describe example problems in which an actor must move around the world and plan complex interactions with other actors or with the environment, interactions that correspond to high-level symbolic states. Among these is our Visual Task Planning project, in which robots learn representations of their world, use them to imagine possible futures, and then plan over those imagined futures.
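
To make the "imagine possible futures" idea concrete, here is a minimal latent-space planning sketch in Keras. It is not the CTP model: the image size, latent dimension, discrete action set, network shapes, and the nearest-to-goal scoring rule are all illustrative assumptions, and the networks are untrained.

# Conceptual sketch only: encode an image into a latent state, roll a learned
# dynamics model forward for each candidate action, and pick the action whose
# imagined future latent lands closest to a goal latent.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG_SHAPE = (64, 64, 3)   # assumed observation size
LATENT_DIM = 32           # assumed latent size
NUM_ACTIONS = 4           # assumed discrete action set

# Encoder: image -> latent representation of the world.
obs_in = layers.Input(shape=IMG_SHAPE)
x = layers.Conv2D(16, 5, strides=2, activation="relu")(obs_in)
x = layers.Conv2D(32, 5, strides=2, activation="relu")(x)
x = layers.Flatten()(x)
latent = layers.Dense(LATENT_DIM)(x)
encoder = Model(obs_in, latent, name="encoder")

# Dynamics model: (latent, one-hot action) -> predicted next latent.
lat_in = layers.Input(shape=(LATENT_DIM,))
act_in = layers.Input(shape=(NUM_ACTIONS,))
h = layers.Concatenate()([lat_in, act_in])
h = layers.Dense(64, activation="relu")(h)
next_latent = layers.Dense(LATENT_DIM)(h)
dynamics = Model([lat_in, act_in], next_latent, name="dynamics")

def plan_one_step(obs, goal_obs):
    """Score each candidate action by how close its imagined future
    latent is to the goal latent (untrained weights, illustration only)."""
    z = encoder.predict(obs[None], verbose=0)
    z_goal = encoder.predict(goal_obs[None], verbose=0)
    best_action, best_dist = None, np.inf
    for a in range(NUM_ACTIONS):
        one_hot = np.eye(NUM_ACTIONS)[a][None]
        z_next = dynamics.predict([z, one_hot], verbose=0)
        dist = np.linalg.norm(z_next - z_goal)
        if dist < best_dist:
            best_action, best_dist = a, dist
    return best_action

obs = np.random.rand(*IMG_SHAPE).astype("float32")
goal = np.random.rand(*IMG_SHAPE).astype("float32")
print("chosen action:", plan_one_step(obs, goal))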

To run the deep learning examples, you will need TensorFlow and Keras, plus a number of Python packages. To run the robot experiments, you will also need a simulator (Gazebo or PyBullet) and ROS Indigo or Kinetic. Other versions of ROS may work but have not been tested. If you want to stick to the toy examples, you do not need to use this as a ROS package.
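
As a quick sanity check before running any examples, a short sketch like the following confirms that TensorFlow and Keras are importable and reports their versions; it does not check the ROS or simulator setup, and whether you use standalone Keras or tf.keras depends on your environment.

# Verify that the deep learning dependencies are importable.
import tensorflow as tf
import keras

print("TensorFlow version:", tf.__version__)
print("Keras version:", keras.__version__)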

About this repository: CTP is a single-repository project, so all the custom code you need should be in one place: here. There are exceptions, such as the CoSTAR Stack for real robot execution, but these are generally not necessary. The minimal installation of CTP is simply to install the costar_models package as a normal Python package, ignoring everything else.
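
For the minimal installation described above, a small check like this (a sketch, assuming the package is importable under the name costar_models as stated) verifies that the install succeeded:

# Check whether costar_models is installed as a normal Python package.
import importlib.util

if importlib.util.find_spec("costar_models") is None:
    print("costar_models is not installed; install it as a normal Python package.")
else:
    import costar_models
    print("costar_models found at:", costar_models.__file__)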

CTP Datasets

Contents

Package/folder layout

Many of these sections are a work in progress; if you have any questions shoot me an email (cpaxton@jhu.edu).

Contact

This code is maintained by:

Cite

Visual Robot Task Planning

@article{paxton2018visual,
  author    = {Chris Paxton and
               Yotam Barnoy and
               Kapil D. Katyal and
               Raman Arora and
               Gregory D. Hager},
  title     = {Visual Robot Task Planning},
  journal   = {ArXiv},
  year      = {2018},
  url       = {http://arxiv.org/abs/1804.00062},
  archivePrefix = {arXiv},
  eprint    = {1804.00062},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1804-00062},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Training Frankenstein's Creature To Stack: HyperTree Architecture Search

@article{hundt2018hypertree,
    author = {Andrew Hundt and Varun Jain and Chris Paxton and Gregory D. Hager},
    title = "{Training Frankenstein's Creature to Stack: HyperTree Architecture Search}",
    journal = {ArXiv},
    archivePrefix = {arXiv},
    eprint = {1810.11714},
    year = 2018,
    month = Oct,
    url = {https://arxiv.org/abs/1810.11714}
}