openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

[Bug]: EfficientAd strange anomaly map results #2139

Open CarlosNacher opened 3 weeks ago

CarlosNacher commented 3 weeks ago

Describe the bug

I know there was a PR that supposedly fixed general performance problems with the EfficientAd model (#2015). However, I am trying (with the latest anomalib version) to train EfficientAd on MVTec bottle and I have found several problems.

I describe them separately below:

Dataset

MVTec

Model

Other (please specify in the field below)

Steps to reproduce the behavior

Train EfficientAd on MVTec bottle with default configurations.
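
For reference, a minimal sketch of that run via the Python API (this assumes the anomalib v1.x interface; paths and batch sizes mirror the full YAML config further down):

# Minimal sketch, assuming the anomalib v1.x Python API; paths and
# batch sizes mirror the YAML config listed below.
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import EfficientAd

datamodule = MVTec(
    root="data/external/MVTec",
    category="bottle",
    train_batch_size=1,  # EfficientAd is trained with batch size 1
    eval_batch_size=1,
)
model = EfficientAd()
engine = Engine(task="segmentation", max_epochs=1000, max_steps=70000)
engine.fit(model=model, datamodule=datamodule)
engine.test(model=model, datamodule=datamodule)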

OS information

OS information:

Expected behavior

EfficientAd to work well on MVTec.

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

1.2.0dev

Configuration YAML

data:
  class_path: anomalib.data.MVTec
  init_args:
    root: data/external/MVTec
    category: bottle
    train_batch_size: 1
    eval_batch_size: 1
    num_workers: 8
    task: segmentation
    transform: null
    train_transform: null
    eval_transform: null
    test_split_mode: from_dir
    test_split_ratio: 0.2
    val_split_mode: same_as_test
    val_split_ratio: 0.5
    seed: null
model:
  class_path: anomalib.models.EfficientAd
  init_args:
    imagenet_dir: data/external/imagenette
    teacher_out_channels: 384
    model_size: S
    lr: 0.0001
    weight_decay: 1.0e-05
    padding: false
    pad_maps: true
normalization:
  normalization_method: none
metrics:
  image:
  - F1Score
  - AUROC
  pixel:
  - F1Score
  - AUROC
  threshold:
    class_path: anomalib.metrics.F1AdaptiveThreshold
    init_args:
      default_value: 0.5
logging:
  log_graph: false
seed_everything: 6120
task: segmentation
default_root_dir: results
ckpt_path: null
trainer:
  accelerator: auto
  strategy: auto
  devices: 1
  num_nodes: 1
  precision: 32
  logger:
  - class_path: anomalib.loggers.AnomalibWandbLogger
    init_args:
      project: mvtec-bottle
  - class_path: anomalib.loggers.AnomalibMLFlowLogger
    init_args:
      experiment_name: mvtec-bottle
  callbacks:
  - class_path: anomalib.callbacks.checkpoint.ModelCheckpoint
    init_args:
      dirpath: weights/lightning
      filename: best_model-{epoch}-{image_F1Score:.2f}
      monitor: image_F1Score
      save_last: true
      mode: max
      auto_insert_metric_name: true
  - class_path: lightning.pytorch.callbacks.EarlyStopping
    init_args:
      patience: 5
      monitor: image_F1Score
      mode: max
  fast_dev_run: false
  max_epochs: 1000
  min_epochs: null
  max_steps: 70000
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  overfit_batches: 0.0
  val_check_interval: 1.0
  check_val_every_n_epoch: 1
  num_sanity_val_steps: 0
  log_every_n_steps: 50
  enable_checkpointing: true
  enable_progress_bar: true
  enable_model_summary: true
  accumulate_grad_batches: 1
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  deterministic: false
  benchmark: false
  inference_mode: true
  use_distributed_sampler: true
  profiler: null
  detect_anomaly: false
  barebones: false
  plugins: null
  sync_batchnorm: false
  reload_dataloaders_every_n_epochs: 0
  default_root_dir: null
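
For completeness, a config like this is typically passed to the anomalib CLI; assuming it is saved locally as config.yaml (hypothetical filename), the run would look roughly like:

anomalib train --config config.yaml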

Logs

No

Code of Conduct

abc-125 commented 1 week ago

Hello, I will number the problems you have mentioned so it is easier to answer:

  1. False positive "circles": I will look into this; it seems like a corner case.

  2. Sparse distribution of scores: As I understand it, the score distribution for EfficientAD can be very sparse because the internal normalization of the anomaly maps is computed on good images only, so some defects can produce very high scores.

  3. Heat maps look strange: I tried running it with normalization_method: none and got an anomaly map that looks as expected (trained for 5 epochs); see the attached example.

With normalization_method: minmax, the maximum score for EfficientAD can be very high, as shown in the plot for point 2, so normalizing by such a high value can produce that strange anomaly map (see the illustrative sketch at the end of this comment).

  4. pad_maps and padding hyperparameters: This is normal; please use the default values.
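
As a rough, illustrative sketch (not anomalib's actual normalization code): when the maximum observed raw score is very large, min-max scaling pushes most pixel scores toward zero, which visually flattens the heat map.

import numpy as np

# Illustrative only: min-max scaling with one very large maximum value
# compresses all typical scores toward zero (hypothetical numbers).
raw = np.array([0.1, 0.3, 0.5, 2.0, 250.0])
normalized = (raw - raw.min()) / (raw.max() - raw.min())
print(normalized)  # -> [0. 0.0008 0.0016 0.0076 1.], most values near 0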