Here we provide the implementation of Fully Convolutional Data Description (FCDD), an explainable approach to deep one-class classification. The implementation is based on PyTorch 1.9.1 and Python 3.8. The code is tested on Linux only. There is a Windows branch where we have fixed some errors to make the code Windows-compatible; however, there are no guarantees that it works as expected.
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training and using even a few of these (∼5) improves performance significantly. Finally, using FCDD’s explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks. The following image shows some of the FCDD explanation heatmaps for test samples of MVTec-AD:
A PDF of our ICLR 2021 paper is available at: https://openreview.net/forum?id=A5VV3UyIQz.
If you use our work, please also cite the paper:
@inproceedings{
liznerski2021explainable,
title={Explainable Deep One-Class Classification},
author={Philipp Liznerski and Lukas Ruff and Robert A. Vandermeulen and Billy Joe Franks and Marius Kloft and Klaus-Robert M{\"u}ller},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=A5VV3UyIQz}
}
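To give an idea of what the code implements: FCDD trains a fully convolutional network whose output map serves both as an anomaly score and as a low-resolution explanation heatmap, using a pseudo-Huber loss on the network output. The following is a minimal PyTorch sketch of the paper's objective; tensor names and the stabilizing epsilon are illustrative, and the actual implementation resides in the training package described further below.

import torch

def fcdd_objective(fcn_out: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # fcn_out: (n, 1, u, v) output of the fully convolutional network phi.
    # labels:  (n,) with 0 for nominal and 1 for anomalous (e.g. outlier exposure) samples.
    # Pseudo-Huber transformation A(X) = sqrt(phi(X)^2 + 1) - 1, applied elementwise.
    a = torch.sqrt(fcn_out ** 2 + 1) - 1
    # Per-sample score: mean over the output map, i.e. ||A(X)||_1 / (u * v).
    a = a.flatten(1).mean(dim=1)
    # Nominal samples minimize the score; anomalies maximize it via -log(1 - exp(-score)).
    loss = torch.where(labels == 0, a, -torch.log(-torch.expm1(-a) + 1e-12))
    return loss.mean()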
It is recommended to use a virtual environment to install FCDD.
Assuming you're in the python directory, install FCDD via pip:
virtualenv -p python3 venv
source venv/bin/activate
pip install .
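To check that the installation succeeded, you can try importing the package (a simple sanity check; we assume fcdd is the package name installed by the command above):
python -c "import fcdd"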
Log data
Log data -- i.e. heatmaps, metrics, snapshots, etc. -- is stored on disk in a log directory.
The default log directory is ../../data/results/fcdd_TIMESTAMP.
Thus, if the code is run from python/fcdd, log data is saved in the data directory in the root folder of this repository.
The same applies to the data directory, to which the datasets are downloaded.
You can change both the log directory and the data directory by setting the respective arguments (logdir and datadir).
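For example (assuming the runners expose these arguments as --logdir and --datadir flags, as described above; the paths are illustrative):
python runners/run_cifar10.py --logdir ../../data/results/my_experiment --datadir ../../data/datasets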
Virtual Environment
If you have used a virtual environment to install FCDD, make sure to activate it.
Train Scripts
We recommend training FCDD by starting one of the runners in python/fcdd/runners from the python/fcdd directory.
Each runner trains several separate, randomly initialized FCDD models, one for each class, where the respective class is considered nominal.
Fashion-MNIST
With EMNIST OE:
python runners/run_fmnist.py --noise-mode emnist
With CIFAR-100 OE:
python runners/run_fmnist.py
CIFAR-10
For full OE:
python runners/run_cifar10.py
For limited OE, change the respective parameter, e.g. to use 8 OE samples:
python runners/run_cifar10.py --oe-limit 8
ImageNet
For full OE:
python runners/run_imagenet.py
Please note that you have to manually download ImageNet1k and ImageNet22k and place them in the correct folders.
Let dsdir be your specified dataset directory (by default ../../data/datasets/).
ImageNet1k needs to be in dsdir/imagenet/, containing the devkit, train, and val splits as one tar file each, named ILSVRC2012_devkit_t12.tar.gz, ILSVRC2012_img_train.tar, and ILSVRC2012_img_val.tar.
These are the default names expected by the PyTorch loaders. You can download ImageNet1k from the official website: http://image-net.org/download. Note that registration is required beforehand.
ImageNet22k needs to be in dsdir/imagenet22k/fall11_whole_extracted/, containing all the extracted class directories with images, e.g. the folder n12267677 containing pictures of acorns.
Decompressing the downloaded archive should automatically yield this structure.
ImageNet22k, i.e. the full fall11 release, can also be downloaded from the official website: http://image-net.org/download.
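In summary, the expected layout (with the names given above) is:
dsdir/imagenet/ILSVRC2012_devkit_t12.tar.gz
dsdir/imagenet/ILSVRC2012_img_train.tar
dsdir/imagenet/ILSVRC2012_img_val.tar
dsdir/imagenet22k/fall11_whole_extracted/n12267677/
dsdir/imagenet22k/fall11_whole_extracted/<further class directories>/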
MVTec-AD
Using confetti noise:
python runners/run_mvtec.py
Using a semi-supervised setup with one true anomaly per defect class:
python runners/run_mvtec.py --supervise-mode noise --noise-mode mvtec_gt --oe-limit 1
PASCAL VOC
python runners/run_pascalvoc.py
Custom Dataset
Let dsdir again be your specified dataset directory (by default ../../data/datasets/).
Place your training data in dsdir/custom/train/classX/ and your test data in dsdir/custom/test/classX/, where classX is one of the class folders (the folders can have arbitrary names, but they need to be consistent for training and testing).
For a one-vs-rest setup (as used for CIFAR-10, etc.), place the corresponding images directly in the class folders and run:
python runners/run_custom.py -ovr
Otherwise, each class requires a separate set of nominal and anomalous test samples.
Place the corresponding images in dsdir/custom/test/classX/normal/, dsdir/custom/test/classX/anomalous/, and dsdir/custom/train/classX/normal/, and run:
python runners/run_custom.py
If you have some training anomalies in dsdir/custom/train/classX/anomalous/, you can use them in a semi-supervised setting with:
python runners/run_custom.py --supervise-mode other
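Putting it all together, a complete layout for a hypothetical class named classA would look like this, where the anomalous training folder is only needed for the semi-supervised setting:
dsdir/custom/train/classA/normal/
dsdir/custom/train/classA/anomalous/
dsdir/custom/test/classA/normal/
dsdir/custom/test/classA/anomalous/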
In general, you can adapt most training parameters via the program's arguments (see python runners/run_custom.py --help).
By default, the runner chooses parameters that are assumed to be general-purpose, such as an ImageNet-pre-trained CNN for 224x224 images and ImageNet22k outlier exposure.
To, for example, use confetti noise instead of outlier exposure, set --supervise-mode to malformed_normal and --noise-mode to confetti.
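For instance:
python runners/run_custom.py --supervise-mode malformed_normal --noise-mode confetti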
Baselines
To run the baseline experiments -- i.e. HSC with gradient-based heatmaps and AE with reconstruction-error heatmaps -- little has to be done: only a few parameters need to be adjusted, most importantly the objective and the network. For instance, for CIFAR-10 and HSC:
python runners/run_cifar10.py --objective hsc -n CNN32 --blur-heatmaps
Similarly, for the AE baseline:
python runners/run_cifar10.py --objective ae -n AE32 --supervise-mode unsupervised --blur-heatmaps
A runner saves the achieved scores, metrics, plots, snapshots, and heatmaps in a given log directory. Each log directory contains a separate subdirectory for each class that is trained as nominal. These subdirectories are named "normal_x", where x is the class number. Each class subdirectory in turn contains a subdirectory for each random seed, named "it_x", where x is the iteration number (random seed). All actual log data can be found inside the seed subdirectories. Additionally, summarized plots are created for the class subdirectories and the root log directory. For instance, a plot containing the ROC curves for each class (averaged over all seeds) can be found in the root log directory.
Visualization for 2 classes and 2 random seeds:
./log_directory
./log_directory/normal_0
./log_directory/normal_0/it_0
./log_directory/normal_0/it_1
./log_directory/normal_1
./log_directory/normal_1/it_0
./log_directory/normal_1/it_1
...
Note that the leaf nodes, i.e. the iteration subdirectories, contain completely separate training results and have no impact on each other.
The actual log data consists of the scores, metrics, plots, snapshots, and heatmaps mentioned above.
In general, the FCDD implementation is split into five packages: datasets, models, runners, training, and util.
datasets contains the base class for AD datasets, torchvision-style dataset implementations, and implementations for artificial anomaly generation.
models contains the network base class -- including receptive field upsampling and gradient-based heatmap computation (a sketch of the upsampling follows below this list) -- and implementations of all network architectures.
runners contains scripts for starting training runs, processing program arguments, and preparing all training parameters, such as creating optimizer and network instances.
training contains the implementation of the actual training and evaluation of the network.
util contains everything else, e.g. a logger that handles all I/O interactions.
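To illustrate the receptive field upsampling mentioned above: the low-resolution anomaly scores produced by the network are distributed over their receptive fields with a fixed Gaussian kernel to obtain a full-resolution heatmap. The following is a minimal, self-contained sketch of this idea, not the actual implementation in models; kernel_size, stride, and std stand in for values that would be derived from the network's receptive field.

import torch

def receptive_field_upsample(low_res: torch.Tensor, kernel_size: int, stride: int, std: float) -> torch.Tensor:
    # low_res: (n, 1, h, w) low-resolution anomaly scores from the FCN.
    # Build a fixed (non-trainable) 2D Gaussian kernel.
    ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    gauss_1d = torch.exp(-(ax ** 2) / (2 * std ** 2))
    kernel = torch.outer(gauss_1d, gauss_1d)
    kernel = (kernel / kernel.sum()).view(1, 1, kernel_size, kernel_size)
    # A strided transposed convolution spreads each score over its receptive field.
    return torch.nn.functional.conv_transpose2d(low_res, kernel, stride=stride)

# Hypothetical usage: upsample a 28x28 score map to 224x224 (parameter values are illustrative).
heatmap = receptive_field_upsample(torch.rand(1, 1, 28, 28), kernel_size=8, stride=8, std=1.0)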
In the following we give a brief tutorial on how to modify the FCDD implementation for specific requirements.
If you find any bugs, have questions, need help modifying FCDD, or want to get in touch in general, feel free to write us an email!