This repository contains the source code used for evaluation in [1], a large-scale comparison of state-of-the-art superpixel algorithms.
ArXiv | Project Page | Datasets | Doxygen Documentation
This repository subsumes earlier work on comparing superpixel algorithms: davidstutz/gcpr2015-superpixels, davidstutz/superpixels-revisited.
Please cite the following work if you use this benchmark or the provided tools or implementations:
[1] D. Stutz, A. Hermans, B. Leibe.
Superpixels: An Evaluation of the State-of-the-Art.
Computer Vision and Image Understanding, 2018.
Also make sure to cite the additional papers when using the provided datasets or superpixel implementations.
Updates:

The evaluation metrics are implemented in `lib_eval/evaluation.h`, and an easy-to-use command line tool is provided; see `eval_average_cli` and the corresponding documentation and examples in Executables and Examples, respectively.

Superpixels group pixels similar in color and other low-level properties. In this respect, superpixels address two problems inherent to the processing of digital images: first, pixels are merely a result of discretization; and second, the high number of pixels in large images renders many algorithms computationally infeasible. Superpixels were introduced as more natural entities - grouping pixels which perceptually belong together while heavily reducing the number of primitives.
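As a concrete example of how such oversegmentations are evaluated, consider Boundary Recall - the fraction of ground truth boundary pixels that lie within a small tolerance of a superpixel boundary. The following minimal sketch assumes 32-bit integer label maps in OpenCV; function names and details are illustrative, not the benchmark's actual implementation in `lib_eval/evaluation.h`:

```cpp
// Illustrative sketch of Boundary Recall (Rec), assuming CV_32SC1 label
// maps: the fraction of ground truth boundary pixels that have a
// superpixel boundary pixel within a tolerance of r pixels. Not the
// benchmark's implementation; see lib_eval/evaluation.h for that.
#include <opencv2/core/core.hpp>

// A pixel is a boundary pixel if its right or bottom neighbor carries a
// different label.
cv::Mat boundaryMap(const cv::Mat &labels) {
    cv::Mat boundary = cv::Mat::zeros(labels.rows, labels.cols, CV_8UC1);
    for (int i = 0; i < labels.rows; ++i) {
        for (int j = 0; j < labels.cols; ++j) {
            const int label = labels.at<int>(i, j);
            if ((i + 1 < labels.rows && labels.at<int>(i + 1, j) != label)
                    || (j + 1 < labels.cols && labels.at<int>(i, j + 1) != label)) {
                boundary.at<uchar>(i, j) = 1;
            }
        }
    }
    return boundary;
}

float boundaryRecall(const cv::Mat &sp_labels, const cv::Mat &gt_labels, int r = 2) {
    const cv::Mat sp_boundary = boundaryMap(sp_labels);
    const cv::Mat gt_boundary = boundaryMap(gt_labels);

    int recalled = 0, total = 0;
    for (int i = 0; i < gt_boundary.rows; ++i) {
        for (int j = 0; j < gt_boundary.cols; ++j) {
            if (!gt_boundary.at<uchar>(i, j)) continue;
            ++total;

            // Search a (2r + 1) x (2r + 1) window for a superpixel boundary.
            bool hit = false;
            for (int di = -r; di <= r && !hit; ++di) {
                for (int dj = -r; dj <= r && !hit; ++dj) {
                    const int ii = i + di, jj = j + dj;
                    hit = ii >= 0 && ii < sp_boundary.rows && jj >= 0
                            && jj < sp_boundary.cols && sp_boundary.at<uchar>(ii, jj);
                }
            }
            if (hit) ++recalled;
        }
    }
    return total > 0 ? static_cast<float>(recalled) / total : 1.0f;
}
```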
This repository can be understood as supplementary material for an extensive evaluation of 28 algorithms on 5 datasets regarding visual quality, performance, runtime, implementation details and robustness - as presented in [1]. To ensure a fair comparison, parameters have been optimized on separate training sets; as the number of generated superpixels heavily influences parameter optimization, we additionally enforced connectivity. Furthermore, to evaluate superpixel algorithms independent of the number of generated superpixels, we propose to integrate commonly used metrics such as Boundary Recall, Undersegmentation Error and Explained Variation over the number of generated superpixels. Finally, we present a ranking of the superpixel algorithms considering multiple metrics and independent of the number of generated superpixels, as shown below.
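Connectivity can be enforced, for example, by decomposing each superpixel into its 4-connected components and assigning fresh labels. Below is a minimal sketch of this idea, again assuming CV_32SC1 label maps; the benchmark ships its own tooling for this, so treat the code as an illustration only:

```cpp
// Sketch of one way to enforce connectivity: relabel the segmentation so
// that every label corresponds to exactly one 4-connected region. Tiny
// leftover components could afterwards be merged into neighboring
// superpixels; that step is omitted here. Not the benchmark's code.
#include <opencv2/core/core.hpp>
#include <queue>

cv::Mat relabelConnectedComponents(const cv::Mat &labels) {
    cv::Mat relabeled(labels.rows, labels.cols, CV_32SC1, cv::Scalar(-1));
    int next_label = 0;
    const int di[] = {-1, 1, 0, 0};
    const int dj[] = {0, 0, -1, 1};

    for (int i = 0; i < labels.rows; ++i) {
        for (int j = 0; j < labels.cols; ++j) {
            if (relabeled.at<int>(i, j) >= 0) continue;

            // Breadth-first flood fill of one connected component.
            const int label = labels.at<int>(i, j);
            std::queue<cv::Point> queue;
            queue.push(cv::Point(j, i));
            relabeled.at<int>(i, j) = next_label;
            while (!queue.empty()) {
                const cv::Point p = queue.front();
                queue.pop();
                for (int d = 0; d < 4; ++d) {
                    const int ii = p.y + di[d], jj = p.x + dj[d];
                    if (ii >= 0 && ii < labels.rows && jj >= 0 && jj < labels.cols
                            && relabeled.at<int>(ii, jj) < 0
                            && labels.at<int>(ii, jj) == label) {
                        relabeled.at<int>(ii, jj) = next_label;
                        queue.push(cv::Point(jj, ii));
                    }
                }
            }
            ++next_label;
        }
    }
    return relabeled;
}
```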
The table shows the average ranks across the 5 datasets, taking into account Average Boundary Recall (ARec) and Average Undersegmentation Error (AUE); lower is better in both cases, see Benchmark. The confusion matrix shows the rank distribution of the algorithms across the datasets.
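Such average metrics amount to numerically integrating a metric curve over the number of generated superpixels. The sketch below uses the trapezoidal rule as one plausible scheme; the exact superpixel counts and normalization used in [1] are documented with `eval_average_cli`, which is the authoritative implementation:

```cpp
// Sketch of an "average" metric such as ARec or AUE: integrate the metric
// over the number of generated superpixels K with the trapezoidal rule
// and normalize by the covered K-range. The concrete K values and
// normalization used in [1] may differ; see eval_average_cli.
#include <cassert>
#include <cstddef>
#include <vector>

float averageMetric(const std::vector<int> &K, const std::vector<float> &metric) {
    // K must be strictly increasing, with one metric value per K.
    assert(K.size() == metric.size() && K.size() >= 2);
    float integral = 0.0f;
    for (std::size_t i = 1; i < K.size(); ++i) {
        integral += 0.5f * (metric[i] + metric[i - 1]) * (K[i] - K[i - 1]);
    }
    return integral / static_cast<float>(K.back() - K.front());
}
```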
The following algorithms were evaluated in [1], and most of them are included in this repository:
Included | Algorithm | Reference |
---|---|---|
:ballot_box_with_check: | CCS | Ref. & Web |
Instructions | CIS | Ref. & Web |
:ballot_box_with_check: | CRS | Ref. & Web |
:ballot_box_with_check: | CW | Ref. & Web |
:ballot_box_with_check: | DASP | Ref. & Web |
:ballot_box_with_check: | EAMS | Ref., Ref., Ref. & Web |
:ballot_box_with_check: | ERS | Ref. & Web |
:ballot_box_with_check: | FH | Ref. & Web |
:ballot_box_with_check: | MSS | Ref. |
:ballot_box_with_check: | PB | Ref. & Web |
:ballot_box_with_check: | preSLIC | Ref. & Web |
:ballot_box_with_check: | reSEEDS | Web |
:ballot_box_with_check: | SEAW | Ref. & Web |
:ballot_box_with_check: | SEEDS | Ref. & Web |
:ballot_box_with_check: | SLIC | Ref. & Web |
:ballot_box_with_check: | TP | Ref. & Web |
:ballot_box_with_check: | TPS | Ref. & Web |
:ballot_box_with_check: | vlSLIC | Web |
:ballot_box_with_check: | W | Web |
:ballot_box_with_check: | WP | Ref. & Web |
:ballot_box_with_check: | PF | Ref. & Web |
:ballot_box_with_check: | LSC | Ref. & Web |
:ballot_box_with_check: | RW | Ref. & Web |
:ballot_box_with_check: | QS | Ref. & Web |
:ballot_box_with_check: | NC | Ref. & Web |
:ballot_box_with_check: | VCCS | Ref. & Web |
:ballot_box_with_check: | POISE | Ref. & Web |
:ballot_box_with_check: | VC | Ref. & Web |
:ballot_box_with_check: | ETPS | Ref. & Web |
:ballot_box_with_check: | ERGC | Ref., Ref. & Web |
To keep the benchmark alive, we encourage authors to make their implementations publicly available and integrate them into this benchmark. We are happy to help with the integration and update the results published in [1] and on the project page. Also see the Documentation for details.
Licenses for source code corresponding to:
D. Stutz, A. Hermans, B. Leibe. Superpixels: An Evaluation of the State-of-the-Art. Computer Vision and Image Understanding, 2018.
Note that the source code/data is based on other projects for which separate licenses apply, see:
Copyright (c) 2016-2018 David Stutz, RWTH Aachen University
Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").
The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.
Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time.
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the corresponding papers (see above) in documents and papers that report on research using the Software.