GNU General Public License v2.0

DenseHybrid

Official implementation of ECCV2022 paper DenseHybrid: Hybrid Anomaly Detection for Dense Open-set Recognition [arXiv].

Update April 2024: The extended version of the paper is now published in IEEE TPAMI under the title Hybrid Open-set Segmentation with Synthetic Negative Data [URL].


Abstract

Anomaly detection can be conceived either through generative modelling of regular training data or by discriminating with respect to negative training data. These two approaches exhibit different failure modes. Consequently, hybrid algorithms present an attractive research goal. Unfortunately, dense anomaly detection requires translational equivariance and very large input resolutions. These requirements disqualify all previous hybrid approaches to the best of our knowledge. We therefore design a novel hybrid algorithm based on reinterpreting discriminative logits as a logarithm of the unnormalized joint distribution p(x,y). Our model builds on a shared convolutional representation from which we recover three dense predictions: i) the closed-set class posterior P(y|x), ii) the dataset posterior P(d_in|x), iii) the unnormalized data likelihood p(x). The latter two predictions are trained both on the standard training data and on a generic negative dataset. We blend these two predictions into a hybrid anomaly score which allows dense open-set recognition on large natural images. We carefully design a custom loss for the data likelihood in order to avoid backpropagation through the intractable normalizing constant Z(θ). Experiments evaluate our contributions on standard dense anomaly detection benchmarks as well as in terms of open-mIoU, a novel metric for dense open-set performance. Our submissions achieve state-of-the-art performance despite negligible computational overhead over the standard semantic segmentation baseline.
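The hybrid score described in the abstract can be sketched numerically. This is a minimal illustration of the idea, not the repository's exact implementation; the function and variable names are ours, and the per-pixel shapes are assumptions:

```python
import numpy as np

def logsumexp(a, axis=-1):
    """Numerically stable log-sum-exp."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def hybrid_anomaly_score(class_logits, dout_logit):
    """Blend the two dense predictions into a per-pixel anomaly score.

    class_logits : (..., K) closed-set logits, reinterpreted as the log of the
                   unnormalized joint p(x, y); logsumexp over classes recovers
                   the unnormalized log p(x).
    dout_logit   : (...,) logit of the dataset posterior P(d_out | x).
    Higher score => more anomalous.
    """
    log_px = logsumexp(class_logits, axis=-1)       # unnormalized log p(x)
    log_p_dout = -np.logaddexp(0.0, -dout_logit)    # log sigmoid(dout_logit)
    return log_p_dout - log_px                      # log [ P(d_out|x) / p(x) ]
```

An anomalous pixel combines low unnormalized likelihood with a high outlier posterior, so its score exceeds that of a confidently classified inlier pixel.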

Project setup

Create a new conda environment with the provided environment.yml file:

conda env create -f environment.yml

Datasets

Cityscapes can be downloaded from here.

StreetHazards can be downloaded from here.

The COCO dataset is available at the official GitHub repo.

Fishyscapes validation subsets with the appropriate structure: FS LAF, FS Static.

ADE20k dataset (used as the negative examples) can be downloaded by running wget http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip.

Weights

ViT-B ImageNet: weights

DeepLabV3+ trained on Cityscapes by NVIDIA: weights

DeepLabV3+ fine-tuned with ADE20k negatives (Fishyscapes benchmark): weights

LDN-121 trained on StreetHazards: weights

LDN-121 fine-tuned on StreetHazards with ADE20k negatives: weights

LDN-121 fine-tuned with ADE20k negatives (SMIYC benchmark): weights

Segmenter trained on COCO20: weights

Segmenter fine-tuned on COCO20 with synthetic negatives: weights

DenseFlow pretrained on traffic scenes: weights

DenseFlow pretrained on COCO: weights

Evaluation

Dense anomaly detection

Fishyscapes LostAndFound val results:

python evaluate_ood.py --dataroot LF_DATAROOT --dataset lf --folder OUTPUT_DIR --params WEIGHTS_FILE

Fishyscapes Static val results:

python evaluate_ood.py --dataroot STATIC_DATAROOT --dataset static --folder OUTPUT_DIR --params WEIGHTS_FILE

StreetHazards results:

python evaluate_ood.py --dataroot SH_DATAROOT --dataset street-hazards --folder OUTPUT_DIR --params WEIGHTS_FILE
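The evaluation scripts report threshold-free detection metrics; average precision (AP) and the false-positive rate at 95% true-positive rate (FPR95) are the standard ones on these benchmarks. A minimal numpy sketch of how such metrics are typically computed from flat per-pixel scores and OOD masks (helper name is ours; it assumes no tied scores):

```python
import numpy as np

def ood_metrics(scores, is_ood):
    """Compute AP and FPR@95%TPR, treating OOD pixels as positives.

    scores : flat float array of anomaly scores (higher = more anomalous)
    is_ood : flat 0/1 array, 1 for OOD pixels, 0 for inliers
    """
    order = np.argsort(-scores)                 # descending by anomaly score
    labels = is_ood[order]
    tp = np.cumsum(labels)                      # true positives at each cutoff
    fp = np.cumsum(1 - labels)                  # false positives at each cutoff
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    ap = (precision * labels).sum() / labels.sum()   # mean precision at positives
    fpr = fp / (1 - is_ood).sum()
    fpr95 = fpr[np.searchsorted(recall, 0.95)]  # first cutoff reaching 95% TPR
    return ap, fpr95
```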

Open-set Segmentation

StreetHazards:

python evaluate_osr_sh.py --dataroot SH_DATAROOT --model WEIGHTS_FILE

COCO20/80:

python evaluate_osr_coco.py --dataroot COCO_DATAROOT --model WEIGHTS_FILE
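Open-set evaluation combines the closed-set prediction with a thresholded anomaly score: pixels scoring above the threshold are reassigned to an extra "unknown" class, and mIoU is then computed over the K known classes plus unknown (the open-mIoU idea from the paper). A schematic sketch under those assumptions (function name and threshold handling are ours):

```python
import numpy as np

def open_miou(pred, score, gt, threshold, num_known):
    """mIoU over num_known classes plus an 'unknown' class (index num_known).

    pred  : per-pixel argmax over the known classes
    score : per-pixel anomaly score; pixels above threshold become unknown
    gt    : ground truth, with unknown pixels labeled num_known
    """
    open_pred = np.where(score > threshold, num_known, pred)
    ious = []
    for c in range(num_known + 1):
        inter = np.logical_and(open_pred == c, gt == c).sum()
        union = np.logical_or(open_pred == c, gt == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```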

Training

Fine-tune DeepLabV3+ on Cityscapes with real negatives:

python dlv3_cityscapes_finetune.py --dataroot CITY_DATAROOT --neg_dataroot ADE_DATAROOT --exp_name EXP_NAME

Fine-tune DeepLabV3+ on Cityscapes with synthetic negatives:

python dlv3_cityscapes_finetune_flow.py --dataroot CITY_DATAROOT --flow_state FLOW_CHECKPOINT --exp_name EXP_NAME

Train LDN-121 on StreetHazards:

python ldn_streethazards.py --dataroot SH_DATAROOT --exp_name EXP_NAME

Fine-tune LDN-121 on StreetHazards with real negatives:

python ldn_streethazards_finetune.py --dataroot SH_DATAROOT --neg_dataroot ADE_DATAROOT --exp_name EXP_NAME --model MODEL_INIT

Train Segmenter on COCO20 dataset:

python segmenter_coco20.py --dataroot COCO_DATAROOT --exp_name EXP_NAME

Fine-tune Segmenter with real negatives on COCO20 dataset:

python segmenter_coco20_finetune.py --dataroot COCO_DATAROOT --model TRAINED_MODEL --neg_dataset NEG_DATAROOT --exp_name EXP_NAME

Fine-tune Segmenter with synthetic negatives on COCO20 dataset:

python segmenter_coco20_finetune_flow.py --dataroot COCO_DATAROOT --model TRAINED_MODEL --flow_state TRAINED_FLOW --exp_name EXP_NAME
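The fine-tuning scripts above share a hybrid objective: standard cross-entropy on inlier pixels, plus negative-data terms that raise the outlier posterior P(d_out|x) on negatives and push the unnormalized likelihood (logsumexp of the class logits) down on negatives and up on inliers, avoiding the normalizer Z(θ). The schematic numpy sketch below is one plausible reading of that objective; the term weighting, batching, and exact formulation are assumptions, so consult the training scripts for the real implementation:

```python
import numpy as np

def logsumexp(a, axis=-1):
    """Numerically stable log-sum-exp."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def hybrid_loss(logits_in, labels_in, dout_logit_in,
                logits_neg, dout_logit_neg, beta=0.1):
    """Schematic per-batch hybrid objective (beta weighting is assumed).

    logits_in      : (N, K) closed-set logits on inlier pixels
    labels_in      : (N,)   ground-truth classes for inlier pixels
    dout_logit_in  : (N,)   dataset-posterior logits on inlier pixels
    logits_neg     : (M, K) closed-set logits on negative (outlier) pixels
    dout_logit_neg : (M,)   dataset-posterior logits on negative pixels
    """
    n = np.arange(len(labels_in))
    # 1) closed-set cross-entropy on inlier pixels
    ce = np.mean(logsumexp(logits_in) - logits_in[n, labels_in])
    # 2) dataset posterior: inliers -> d_in, negatives -> d_out (BCE)
    bce = np.mean(np.logaddexp(0.0, dout_logit_in)) \
        + np.mean(np.logaddexp(0.0, -dout_logit_neg))
    # 3) unnormalized likelihood: raise logsumexp on inliers, lower on negatives
    lik = np.mean(logsumexp(logits_neg)) - np.mean(logsumexp(logits_in))
    return ce + beta * (bce + lik)
```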

Issues

If you encounter any issues with the code, please open an issue in this repository.

Citation

If you find this code useful in your research, please consider citing the following papers:

@inproceedings{grcic22eccv,
  author    = {Matej Grcic and
               Petra Bevandic and
               Sinisa Segvic},
  title     = {DenseHybrid: Hybrid Anomaly Detection for Dense Open-Set Recognition},
  booktitle = {17th European Conference on Computer Vision {ECCV} 2022},
  publisher = {Springer},
  year      = {2022}
}

@article{grcic24tpami,
  author={Grcić, Matej and Šegvić, Siniša},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Hybrid Open-set Segmentation with Synthetic Negative Data}, 
  year={2024}
}