CAVE [deprecated]


Deprecated

:warning: NOTE This repository is deprecated. Feel free to report bugs or ask questions in the issues, but there is no guarantee we will answer in time. Last known working versions of major dependencies can be found here.
We've released an improved successor to CAVE called DeepCAVE. To use it please go to https://github.com/automl/DeepCAVE

Configuration Assessment, Visualization and Evaluation (CAVE)


CAVE is a versatile analysis tool for automatic algorithm configurators. It generates comprehensive reports to give insights into the configured algorithm, the instance/feature set and also the configuration tool itself.

The current version works out-of-the-box with BOHB and SMAC3, but can be easily adapted to other configurators: either add a custom reader or use the CSV-Reader integrated in CAVE. You can also find a talk on CAVE online.

If you use this tool, please cite us.

If you have feature requests or encounter bugs, feel free to contact us via the issue-tracker.

OVERVIEW

CAVE is an analysis tool for algorithm configurators. The results of an algorithm configurator, e.g. SMAC or BOHB, are processed and visualized to improve the understanding of the optimization process.

It is written in Python 3 and builds on SMAC3, pyimp, and ConfigSpace.

Core features:

REQUIREMENTS

Some of the plots in the report are generated using bokeh. To automagically export them as .pngs, you need to also install phantomjs-prebuilt. CAVE will run without it, but you will need to manually export the plots if you wish to use them (which is easily done through a button in the report).
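If you want to check up front that the .png export works on your machine, you can try exporting a plain bokeh figure yourself. This is a minimal sketch using bokeh only (not CAVE); note that export_png relied on phantomjs in the bokeh versions CAVE was developed against, while newer bokeh releases use selenium-based webdrivers instead.

# Quick check that bokeh can export .pngs on this machine.
# If this fails, CAVE will still run, but plots have to be exported
# manually via the button in the report.
from bokeh.plotting import figure
from bokeh.io import export_png

p = figure(title="export test")
p.line([1, 2, 3], [1, 4, 9])
export_png(p, filename="export_test.png")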

INSTALLATION

You can install CAVE via pip:

pip install cave

or clone the repository and install requirements into your virtual environment.

git clone https://github.com/automl/CAVE.git && cd CAVE
pip install -r requirements.txt
python3 setup.py install  # (or: python3 setup.py develop)

In case you have trouble with your virtualenv+pip setup, try:

pip install -U setuptools

Optional: To have some .pngs automagically available, you also need phantomjs.

npm install phantomjs-prebuilt

USAGE

Have a look at the docs of CAVE for details. Here is a little Quickstart-Guide.

There are two ways to use CAVE: via the commandline (CLI) or in a jupyter-notebook / python script.

Jupyter-Notebooks / Python

Using CAVE in your scripts is very similar to using CAVE in a jupyter-notebook. Take a look at the demo.
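A minimal sketch of what this looks like, assuming the CAVE facade and keyword arguments as used in the demo notebook (argument names and defaults may differ between versions, so treat this as illustration and check the demo/docs for the exact interface):

# Illustrative sketch of using CAVE from a notebook or script.
# Assumes the facade in cave.cavefacade and the example data shipped with
# the repository; argument names follow the demo notebook and may differ
# between CAVE versions.
from cave.cavefacade import CAVE

cave = CAVE(
    folders=["examples/smac3/example_output/run_1",
             "examples/smac3/example_output/run_2"],
    output_dir="output/smac3_notebook_example",
    ta_exec_dir=["examples/smac3"],
    file_format="SMAC3",
)

# Build the full HTML report (individual analyzers can also be called
# directly on the facade; see the demo notebook for the available methods).
cave.analyze()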

CLI

You can analyze results of an optimizer in one or multiple folders (multiple folders assume the same scenario, i.e. parallel runs within a single optimization). CAVE generates an HTML report with all the specified analysis methods. Provide paths to all the individual parallel results.

cave /path/to/configurator/output

NOTE: CAVE supports glob-like path expansion (e.g. `output/run_*` to match multiple folders starting with `output/run`)

NOTE: the --folders flag is optional; CAVE interprets positional arguments on the command line as folders of parallel runs

Important optional flags:

Some flags provide additional fine-tuning of the analysis methods:

For a full list and further information on how to use CAVE, see: cave --help

EXAMPLE

SMAC3

Run CAVE on SMAC3-data for the spear-qcp example, skipping budget-correlation:

cave examples/smac3/example_output/* --ta_exec_dir examples/smac3/ --output output/smac3_example --skip budget_correlation

This analyzes the results located in examples/smac3, in the directories example_output/run_1 and example_output/run_2. The resulting report is written to output/smac3_example/report.html; view it in your favourite browser. --ta_exec_dir corresponds to the folder from which the optimizer was originally executed (it is used to find the files necessary for loading the scenario).

BOHB

You can also use CAVE with configurators that use budgets to estimate the quality of an algorithm (e.g. epochs in neural networks). A good example of this behaviour is BOHB. To run CAVE on the shipped example, restricted to a selection of analyzers for exemplary purposes:

cave examples/bohb --output output/bohb_example --only fanova ablation budget_correlation parallel_coordinates

CSV

All your favourite configurators can be processed using this simple CSV-format.

cave examples/csv_allinone/run_* --ta_exec_dir examples/csv_allinone/ --output output/csv_example
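For illustration, a runhistory in this CSV format looks roughly like the sketch below. The exact column specification (and the optional accompanying files, e.g. trajectory and scenario files) is described in the docs and in examples/csv_allinone; the column names and values here are assumptions for illustration only.

cost,time,status,seed,parameter_1,parameter_2
0.92,1.34,SUCCESS,42,3,low
0.85,1.61,SUCCESS,43,5,medium
0.81,1.72,SUCCESS,44,7,high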

Auto-PyTorch

While APT is still in alpha and work in progress at the time of writing, CAVE strives to support it as closely as possible. There is no unified output available right now, so we provide a notebook to showcase some exemplary analysis.

SMAC2

The legacy format of SMAC2 is still supported, though not extensively tested:

cave examples/smac2/ --ta_exec_dir examples/smac2/smac-output/aclib/state-run1/ --output output/smac2_example

LICENSE

Please refer to LICENSE

If you use our tool, please cite us:

@InProceedings{biedenkapp-lion18a,
    author    = {A. Biedenkapp and J. Marben and M. Lindauer and F. Hutter},
    title     = {{CAVE}: Configuration Assessment, Visualization and Evaluation},
    booktitle = {Proceedings of the International Conference on Learning and Intelligent Optimization (LION'18)},
    year      = {2018}
}

@article{lindauer-arxiv19a,
    author  = {M. Lindauer and K. Eggensperger and M. Feurer and A. Biedenkapp and J. Marben and P. Müller and F. Hutter},
    title   = {{BOAH}: A Tool Suite for Multi-Fidelity Bayesian Optimization \& Analysis of Hyperparameters},
    journal = {arXiv:1908.06756 {[cs.LG]}},
    year    = {2019}
}