CoSTAR Plan is a project for deep learning with robots, divided into two main parts: the CoSTAR Task Planner (CTP) library and CoSTAR Hyper.
Code for the paper Visual Robot Task Planning.
Code for the paper The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints.
@article{hundt2019costar,
  title = {The CoSTAR Block Stacking Dataset: Learning with Workspace Constraints},
  author = {Andrew Hundt and Varun Jain and Chia-Hung Lin and Chris Paxton and Gregory D. Hager},
  journal = {Intelligent Robots and Systems (IROS), 2019 IEEE International Conference on},
  year = {2019},
  url = {https://arxiv.org/abs/1810.11714}
}
Code instructions are in the CoSTAR Hyper README.md.
The CoSTAR Planner is part of the larger CoSTAR project. It integrates learning-from-demonstration and task-planning capabilities into the larger CoSTAR framework.
Specifically, it is a project for creating task and motion planning algorithms that use machine learning to solve challenging problems in a variety of domains. This code provides a testbed for complex task and motion planning search algorithms.
The goal is to describe example problems where the actor must move around in the world and plan complex interactions with other actors or the environment that correspond to high-level symbolic states. Among these is our Visual Task Planning project, in which robots learn representations of their world, use them to imagine possible futures, and then plan over those imagined futures.
To run deep learning examples, you will need TensorFlow and Keras, plus a number of Python packages. To run robot experiments, you'll need a simulator (Gazebo or PyBullet), and ROS Indigo or Kinetic. Other versions of ROS may work but have not been tested. If you want to stick to the toy examples, you do not need to use this as a ROS package.
About this repository: CTP is a single-repository project. As such, all the custom code you need should be in one place: here. There are exceptions, such as the CoSTAR Stack for real robot execution, but these are generally not necessary. The minimal installation of CTP is just to install the costar_models package as a normal Python package, ignoring everything else.
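A minimal sketch of that installation, assuming a local checkout of the repository with costar_models at its top level (the checkout path and layout are assumptions based on this README, not confirmed):

```shell
# Minimal CTP installation sketch: install only the costar_models package
# as a normal Python package and ignore the rest of the repository.
# The path below is a placeholder; adjust it to your own clone.
cd costar_plan/costar_models   # assumed location of the package in the checkout
pip install -e .               # editable install; plain "pip install ." also works
```

The editable (`-e`) install lets local changes to the package take effect without reinstalling, which is convenient while experimenting with the models.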
- setup: contains setup scripts
- slurm: contains SLURM scripts for running on MARCC
- command: contains scripts with example CTP command-line calls
- docs: markdown files for information that is not specific to a particular ROS package but to all of CTP
- photos: example images
- learning_planning_msgs: ROS messages for data collection when doing learning from demonstration in ROS

Many of these sections are a work in progress; if you have any questions, shoot me an email (cpaxton@jhu.edu).
This code is maintained by the authors of:
@article{paxton2018visual,
  title = {Visual Robot Task Planning},
  author = {Chris Paxton and Yotam Barnoy and Kapil D. Katyal and Raman Arora and Gregory D. Hager},
  journal = {ArXiv},
  year = {2018},
  url = {http://arxiv.org/abs/1804.00062},
  archivePrefix = {arXiv},
  eprint = {1804.00062},
  biburl = {https://dblp.org/rec/bib/journals/corr/abs-1804-00062},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
Training Frankenstein's Creature To Stack: HyperTree Architecture Search
@article{hundt2018hypertree,
  title = {Training Frankenstein's Creature to Stack: HyperTree Architecture Search},
  author = {Andrew Hundt and Varun Jain and Chris Paxton and Gregory D. Hager},
  journal = {ArXiv},
  archivePrefix = {arXiv},
  eprint = {1810.11714},
  year = {2018},
  month = {Oct},
  url = {https://arxiv.org/abs/1810.11714}
}