Jingkang50 / OpenOOD

Benchmarking Generalized Out-of-Distribution Detection
MIT License

[wiki] Summarize Implemented Methods #62

Open Jingkang50 opened 2 years ago

Jingkang50 commented 2 years ago

We plan to write a short summary for every implemented method in our wiki space. Please follow the template below and write the draft here.

Paper Title

paper   code

Method Description

  • Overview: A one-sentence summary
  • Model Architecture: special design for network architectures
  • Training: special design for training pipeline
  • Inference: special design for inference pipeline
  • Comments: additional remarks, if any

Implementation (List all the related python files)

  • xxx.py: the function of this python file.

Script

# how to run the code
Zzitang commented 2 years ago

CutPaste: Self-Supervised Learning for Anomaly Detection and Localization

paper   code

Method Description

  • Overview: CutPaste is a data augmentation strategy that constructs anomaly patterns from normal images. The training data are first processed with the CutPaste augmentation, and a one-class classifier is trained to distinguish the augmented (anomalous) data from the normal data; a Gaussian density estimator on the learned representations then computes the anomaly score.
  • Training: Use the CutPaste preprocessor to generate anomalous images by cutting an image patch and pasting it at a random location (see the augmentation sketch after this list).
  • Inference: Use a Gaussian density estimator to compute the anomaly score.
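
For reference, a minimal sketch of the augmentation step; parameter names and ranges here are illustrative guesses, not the exact `cutpaste_preprocessor.py` implementation:

```python
# Illustrative CutPaste augmentation: cut a random patch and paste it elsewhere.
# Area/aspect-ratio ranges are assumptions, not the OpenOOD defaults.
import random
from PIL import Image

def cutpaste(img: Image.Image, area_ratio=(0.02, 0.15), aspect_range=(0.3, 3.3)) -> Image.Image:
    """Return a copy of `img` with a random patch cut out and pasted at a new location."""
    w, h = img.size
    patch_area = random.uniform(*area_ratio) * w * h
    aspect = random.uniform(*aspect_range)
    patch_w = min(int(round((patch_area * aspect) ** 0.5)), w - 1)
    patch_h = min(int(round((patch_area / aspect) ** 0.5)), h - 1)
    # source patch
    src_x, src_y = random.randint(0, w - patch_w), random.randint(0, h - patch_h)
    patch = img.crop((src_x, src_y, src_x + patch_w, src_y + patch_h))
    # paste at a (generally different) random location
    dst_x, dst_y = random.randint(0, w - patch_w), random.randint(0, h - patch_h)
    out = img.copy()
    out.paste(patch, (dst_x, dst_y))
    return out
```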

Implementation (List all the related python files)

  • train_ad_pipeline.py: train pipeline.
  • test_ad_pipeline.py: test pipeline.
  • cutpaste_trainer.py: trainer for CutPaste that trains a one-class classifier.
  • cutpaste_evaluator.py: evaluator.
  • cutpaste_preprocessor.py: preprocessor to perform CutPaste data augmentation.
  • cutpaste_postprocessor.py: postprocessor to calculate anomaly score by Gaussian density estimator.
  • cutpaste_recorder.py: recorder.

Script

# how to run the code
# cutpaste_train
sh scripts/a_anomaly/2_cutpaste_train.sh
# cutpaste_test
sh scripts/_get_started/1_mnist_test.sh
Zzitang commented 2 years ago

Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices

paper   code

Method Description

  • Overview: Gram detects OOD examples by identifying anomalies in the Gram matrices of intermediate features. Each entry of the Gram matrix is compared with the range of values observed for it on the training data: we record the minimum and maximum feature co-occurrence values for each class, layer, and matrix order, and then measure how far each test sample deviates from these ranges (see the sketch after this list).
  • Inference: Use Gram matrices to compute feature correlations and score each sample by its total deviation.
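
A simplified sketch of the per-layer deviation computation (single order and layer shown; the actual `gram_postprocessor.py` additionally normalizes the deviations and aggregates over the predicted class, all layers, and several matrix orders):

```python
# Simplified Gram-matrix deviation scoring for one layer and one order.
import torch

def gram_matrix(feat: torch.Tensor, order: int = 1) -> torch.Tensor:
    """feat: (C, H, W) feature map -> (C, C) higher-order Gram matrix."""
    f = feat.flatten(1) ** order                   # (C, H*W), element-wise power
    g = f @ f.t()                                  # channel co-occurrences
    return g.sign() * g.abs() ** (1.0 / order)     # bring values back to a common scale

def deviation(g: torch.Tensor, g_min: torch.Tensor, g_max: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Total relative deviation of `g` from the [g_min, g_max] range recorded on training data."""
    below = torch.relu(g_min - g) / (g_min.abs() + eps)
    above = torch.relu(g - g_max) / (g_max.abs() + eps)
    return (below + above).sum()
```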

Implementation

  • test_ood_pipeline.py: test pipeline.
  • ood_evaluator.py: evaluator.
  • gram_postprocessor.py: postprocessor to calculate Gram matrix deviations from the in-distribution ranges.

Script

# how to run the code
sh scripts/c_ood/7_cifar_test_ood_gram.sh
JediWarriorZou commented 2 years ago

DeepSVDD Method Description

  • Pretrain: During this stage, we pretrain a deep convolutional autoencoder (DCAE) for anomaly detection. In each epoch we compute the sum of (input - output)^2 as the reconstruction score and use its mean as the loss.
  • Train: During this stage, we first load the pretrained DCAE weights into the network and then train the model. We initialize the center c of the dataset; in "one class" mode, the loss is the mean squared distance between the embedded images and the center (see the sketch after this list).
  • Test: During this stage, we evaluate the method for OOD detection with the AUROC metric. The score is the squared distance between the embedded image and the center.
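
A rough sketch of the one-class objective and score (tensor shapes and the data-loader format are assumptions; see `dsvdd_trainer.py` for the actual implementation):

```python
# Sketch of Deep SVDD in "one class" mode: embeddings are pulled toward a fixed center c.
import torch

@torch.no_grad()
def init_center(net, loader, device, eps: float = 0.1) -> torch.Tensor:
    """Initialize the hypersphere center c as the mean embedding of the training data."""
    net.eval()
    feats = torch.cat([net(x.to(device)) for x, _ in loader])   # assumes (image, label) batches
    c = feats.mean(dim=0)
    # keep the center away from zero to avoid a trivial (collapsed) solution
    c[(c.abs() < eps) & (c < 0)] = -eps
    c[(c.abs() < eps) & (c >= 0)] = eps
    return c

def svdd_loss(z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Training loss: mean squared distance of embeddings z (B, D) to the center c (D,)."""
    return ((z - c) ** 2).sum(dim=1).mean()

def anomaly_score(z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Test score: squared distance to the center; higher means more anomalous."""
    return ((z - c) ** 2).sum(dim=1)
```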

OpenOOD Implementation

  • train_ad_pipeline.py: pretrain the DCAE
  • train_dsvdd_pipeline.py: train and then test the DSVDD model
  • dsvdd_net.py: define the DCAE and DSVDD networks
  • dsvdd_trainer.py: trainer for the DCAE and DSVDD
  • dsvdd_evaluator.py: evaluator for the DCAE and DSVDD

Script

  • pretrain DCAE: sh scripts/a_anomaly/0_dsvdd_pretrain.sh
  • train DSVDD: sh scripts/a_anomaly/0_dsvdd_train.sh

Result

  • Note: In the original DSVDD code, the training dataset is normalized with dataset-specific means and stds. For example, when the normal dataset is cifar10-3, the normalization dict is [-31.7975, -31.7975, -31.7975], [42.8907, 42.8907, 42.8907]. A global_contrast_normalization method is also used in the transform, so the ideal result is shown below.
| Method | DCAE | DSVDD |
|---|---|---|
| Normal class | 3 | 3 |
| Expected AUROC | 58.40 | 59.10 |
| AUROC | 63.43 | 60.44 |
JediWarriorZou commented 2 years ago

KDAD Method Description

  • Train: During the training stage, we introduce two VGG networks, one of which, the source network, is pretrained. In each training epoch, when ID data is input, the differences at selected critical layers between the clone network and the source network are computed to form the loss. The loss includes an MSE loss and a direction loss that measures the similarity between the activation vectors of the critical layers of the two networks (a sketch of this loss follows the file list below). The clone network is optimized with SGD.
  • Test: During the testing stage, we evaluate the anomaly detection method with AUROC. The sum of the MSE loss and the direction loss is used as the OOD score; OOD images have higher scores.

Keypoints

  • train_ad_pipeline.py: training stage
  • ad_test_pipeline.py: testing stage
  • kdad_trainer.py: trainer
  • kdad_evaluator.py: evaluator
  • vggnet.py: source and clone network
  • kdad_recorder.py: recorder
  • kdad_losses.py: define loss function
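
The loss below is a hedged sketch of the distillation objective described above (the weighting between the two terms is assumed; `kdad_losses.py` contains the actual definition):

```python
# Sketch of the KDAD loss: MSE (value) loss plus direction loss between the
# activations of matching critical layers of the source and clone networks.
import torch
import torch.nn.functional as F

def kdad_loss(src_acts, clone_acts, lamda: float = 0.5) -> torch.Tensor:
    """src_acts / clone_acts: lists of (B, ...) activations from the critical layers."""
    loss = 0.0
    for a_s, a_c in zip(src_acts, clone_acts):
        a_s, a_c = a_s.flatten(1), a_c.flatten(1)
        value_loss = F.mse_loss(a_c, a_s)                                      # match activation values
        direction_loss = (1.0 - F.cosine_similarity(a_c, a_s, dim=1)).mean()   # match activation directions
        loss = loss + value_loss + lamda * direction_loss
    return loss
```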

Script

  • kdad_train: sh scripts/a_anomaly/1_kdad_train.sh
  • kdad_detection_test: sh scripts/a_anomaly/1_kdad_test_det.sh
Result

| Normal class | Expected AUROC | AUROC |
|---|---|---|
| 3 | 77.02 | 86.08 |
JediWarriorZou commented 2 years ago

Confidence Branch Method Description

  • Train: During the training stage, the WideResNet outputs both class scores and a confidence estimate. The loss is composed of a task loss and a confidence loss: the task loss is the negative log likelihood of the softmax prediction probabilities, and the confidence loss is the negative log of the confidence. The softmax prediction probabilities are adjusted by interpolating between the original predictions and the target probability distribution, where the degree of interpolation is given by the network's confidence (see the loss sketch after this list).
  • Test: During the testing stage, we evaluate OOD detection with the AUROC, FPR95, AUPR, and detection error metrics. The confidence is used as the OOD score; OOD images have lower confidence.
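
A minimal sketch of this objective (the confidence-budget scheduling of lambda is omitted and the function name is illustrative; see `conf_esti_trainer.py` for the full version):

```python
# Sketch of the confidence-branch loss: task loss on confidence-interpolated
# predictions plus a penalty for predicting low confidence.
import torch
import torch.nn.functional as F

def confidence_branch_loss(logits, confidence, target, lmbda: float = 0.1):
    """logits: (B, K), confidence: (B, 1) values in (0, 1), target: (B,) class indices."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes=logits.size(1)).float()
    # interpolate between the prediction and the ground truth, weighted by confidence
    adjusted = confidence * probs + (1.0 - confidence) * onehot
    task_loss = F.nll_loss(torch.log(adjusted + 1e-12), target)
    conf_loss = -torch.log(confidence + 1e-12).mean()
    return task_loss + lmbda * conf_loss
```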

Keypoints

  • conf_widernet.py: WideResNet with an additional confidence output branch
  • conf_esti_trainer.py: trainer
  • conf_esti_recorder.py: recorder
  • conf_esti_evaluator.py: evaluator
  • train_pipeline.py: train pipeline for classification
  • test_ad_pipeline.py: test pipeline for OOD detection

Script

  • Train: sh scripts/e_conf/train_conf_esti.sh
  • Test: sh scripts/e_conf/test_conf_esti.sh
Result (dataset: svhn)

| Metric | Value |
|---|---|
| Eval Acc | 0.966 |
| fpr95 | 0.2516 |
| auroc | 0.9541 |
| aupr_in | 0.9824 |
| aupr_out | 0.8757 |
| detection_err | 0.1103 |
JediWarriorZou commented 2 years ago

VOS Method Description

  • Train: During the training stage, virtual outliers are generated from the penultimate-layer feature space. We assume the feature representations of object instances form class-conditional multivariate Gaussian distributions and sample virtual outliers from the low-likelihood region of these distributions, so the sampled outliers lie near the class boundary (see the sampling sketch after this list). The model loss comprises a classification loss and an uncertainty loss based on the energy function, where ID data has negative energy values and the synthesized outliers have positive energy.
  • Test: During the testing stage, we evaluate the model for OOD detection with the AUROC, FPR95, AUPR, and detection error metrics. The maximum softmax score is used as the OOD score; OOD images have lower scores.
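
A rough sketch of the virtual-outlier sampling step in feature space (shared covariance across classes; the sample counts and regularizer are illustrative, see `vos_trainer.py` for the actual procedure):

```python
# Sketch of VOS: fit class-conditional Gaussians on penultimate features and keep
# the lowest-likelihood samples as virtual outliers near the class boundary.
import torch

def sample_virtual_outliers(feats: torch.Tensor, labels: torch.Tensor,
                            num_classes: int, n_sample: int = 1000, n_keep: int = 10) -> torch.Tensor:
    """feats: (N, D) penultimate features, labels: (N,). Returns (num_classes * n_keep, D)."""
    d = feats.size(1)
    means = torch.stack([feats[labels == k].mean(0) for k in range(num_classes)])
    centered = feats - means[labels]                              # shared covariance estimate
    cov = centered.t() @ centered / feats.size(0) + 1e-4 * torch.eye(d, device=feats.device)
    outliers = []
    for k in range(num_classes):
        dist = torch.distributions.MultivariateNormal(means[k], covariance_matrix=cov)
        candidates = dist.sample((n_sample,))                     # draws from the class Gaussian
        log_prob = dist.log_prob(candidates)
        outliers.append(candidates[log_prob.argsort()[:n_keep]])  # keep low-likelihood (boundary) samples
    return torch.cat(outliers)
```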

Keypoints

  • wrn.py: WideResNet backbone
  • vos_trainer.py: trainer
  • base_recorder.py: recorder
  • vos_evaluator.py: evaluator
  • train_pipeline.py: train pipeline for classification
  • test_ad_pipeline.py: test pipeline for OOD detection

Script

  • Train: sh scripts/c_ood/13_cifar_train_vos.sh
  • Test: sh scripts/c_ood/13_cifar_test_vos.sh
Result

| Metric | Value / % |
|---|---|
| VAL_ACC | 94.55 |
| FPR95 | 47.71 |
| AUROC | 93.73 |
| AUPR_IN | 91.45 |
| AUPR_OUT | 96.40 |
| DETECTION_ERR | 11.56 |
OmegaDING commented 2 years ago

MOS: Towards Scaling Out-of-distribution Detection for Large Semantic Space

paper   code

Method Description

  • Overview: MOS partitions the large label space into semantic groups, adds an "others" class to each group, and treats a sample as OOD when every group assigns it a high "others" probability.
  • Model Architecture: BiT model with minor modifications (a group-wise classification head)
  • Training: computes the group-wise softmax loss
  • Inference: loads the group slices at inference time to compute the group-wise "others" scores (see the scoring sketch after this list)
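
A sketch of the group-wise scoring at inference (the group layout here is a hypothetical convention where column 0 of every group is its "others" class; the real `mos_evaluator.py` reads the group slices from the config):

```python
# Sketch of the MOS OOD score: softmax within each group, then take the lowest
# per-group "others" probability; ID samples should have at least one group that
# is confident the sample is not "others".
import torch
import torch.nn.functional as F

def mos_score(logits: torch.Tensor, group_slices) -> torch.Tensor:
    """logits: (B, total_logits); group_slices: list of (start, end) column ranges,
    where the first column of each group is that group's 'others' class.
    Returns one score per sample; higher means more likely in-distribution."""
    others = []
    for start, end in group_slices:
        group_prob = F.softmax(logits[:, start:end], dim=1)
        others.append(group_prob[:, 0])            # probability of 'others' within this group
    others = torch.stack(others, dim=1)            # (B, num_groups)
    return -others.min(dim=1).values
```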

Implementation (List all the related python files)

  • mos_evaluator.py: evaluator
  • mos_network.py: network
  • test_mos_pipeline.py: test pipeline
  • train_pipeline.py: train pipeline
  • mos_trainer.py: trainer

Script

# train
sh scripts/c_ood/mos.sh
# test
sh scripts/c_ood/test_mos.sh
OmegaDING commented 2 years ago

Towards Total Recall in Industrial Anomaly Detection

paper   code

Method Description

  • Overview: Extract patch features from two intermediate layers of a pretrained network, subsample them into a memory bank via coreset selection, and judge whether a test image is anomalous by how far its patches are from the bank (see the scoring sketch after this list).
  • Training: No gradient training; patch features of normal training images are extracted with the pretrained model and subsampled into the memory bank.
  • Inference: Search for each test patch's nearest neighbor in the memory bank of nominal features; a large distance indicates an anomaly.
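
A condensed sketch of the image-level scoring against a precomputed memory bank (coreset subsampling and the reweighting term from the paper are omitted; see `patchcore_postprocessor.py` for the actual version):

```python
# Sketch of PatchCore scoring: the anomaly score of an image is the largest
# nearest-neighbor distance of its patch features to the bank of nominal features.
import torch

def patchcore_image_score(patch_feats: torch.Tensor, memory_bank: torch.Tensor) -> torch.Tensor:
    """patch_feats: (P, D) patch features of one test image.
    memory_bank: (M, D) coreset of patch features collected from normal training images."""
    dists = torch.cdist(patch_feats, memory_bank)  # (P, M) pairwise distances
    nn_dist, _ = dists.min(dim=1)                  # each patch's distance to its nearest nominal patch
    return nn_dist.max()                           # the most anomalous patch dominates the image score
```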

Implementation (List all the related python files)

  • patchcore_evaluator.py: evaluator
  • patchcore_net.py: network
  • test_ad_pipeline.py: pipeline
  • patchcore_postprocessor.py: postprocessor

Script

# test
sh scripts/a_anomaly/3_patchcore.sh
OmegaDING commented 2 years ago

Towards Open Set Deep Networks (OpenMax)

paper   code

Method Description

  • Training: Train a standard closed-set classifier, then compute the mean activation vector (MAV) of each class from its correctly classified training samples and fit a per-class Weibull model on the largest distances to the MAV.
  • Inference: Recalibrate the top activations with the Weibull models and redistribute part of their mass to an extra "unknown" class (see the sketch after this list); the resulting unknown probability is used to reject open-set samples.
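
A very condensed sketch of the recalibration for the single top class (the actual OpenMax revises the top-alpha classes and typically uses libMR for the Weibull fit; the function names and the use of scipy here are assumptions, see `openmax_evaluator.py` for the real implementation):

```python
# Sketch of OpenMax: fit a Weibull tail model on distances to the class MAV,
# then shift part of the top activation's mass to an extra "unknown" class.
import numpy as np
from scipy.stats import weibull_min

def fit_weibull_tail(dists_to_mav: np.ndarray, tail_size: int = 20):
    """Fit a Weibull model on the largest MAV distances of correctly classified training samples."""
    tail = np.sort(dists_to_mav)[-tail_size:]
    return weibull_min.fit(tail, floc=0)                 # (shape, loc, scale)

def openmax_probs(activations: np.ndarray, top_class_mav: np.ndarray, weibull_params) -> np.ndarray:
    """Recalibrate the top activation and append an 'unknown' class, then softmax."""
    top = int(np.argmax(activations))
    dist = np.linalg.norm(activations - top_class_mav)   # distance to the predicted class's MAV
    w = weibull_min.cdf(dist, *weibull_params)           # how extreme this distance is
    revised = activations.astype(float).copy()
    revised[top] = activations[top] * (1.0 - w)          # keep only the non-extreme part
    unknown = activations[top] * w                       # mass moved to the 'unknown' class
    scores = np.append(revised, unknown)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()                               # last entry = P(unknown)
```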

Implementation (List all the related python files)

  • openmax_evaluator.py: evaluator
  • openmax_network.py: network
  • openmax_trainer.py: trainer
  • train_pipeline.py: train pipeline
  • test_ood_pipeline.py: test pipeline

Script

# train
sh scripts/b_osr/0_openmax.sh
# test
sh scripts/b_osr/-1_test_openmax.sh