Describe the bug
Hello,
First of all, thank you for the great repository and all the work that was put into it.
I've been using the GANomaly model on a custom dataset (sixray) and I'm having a hard time analysing the results through the visualizations.
As far as I understand, there are two options for how the output scores are set.
Without normalization: the model may predict scores greater than 1 when min_max normalization is disabled. However, I always get "abnormal" labels on my results, probably because all of the images (normal and abnormal) score above the adaptive threshold, which doesn't make sense to me.
With normalization: if I use "min_max" normalization, the threshold automatically becomes 0.5 and the adaptive threshold no longer seems to have any effect. With this method I get F1 Score = 0 and all of the output images display "abnormal" labels. The "normal" images do in fact have lower scores (near 50%), but they are still above 0.5.
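For context, here is a small numerical sketch of what I believe the min_max normalization is doing (my own reproduction and assumption, not anomalib's actual code): the adaptive threshold appears to be shifted onto 0.5, so as soon as every raw score is above the raw threshold, every normalized score ends up above 0.5.

import numpy as np

# My own sketch of the normalization I think is applied (an assumption,
# not anomalib code): shift the scores so the adaptive threshold maps to
# 0.5, then clamp to [0, 1].
def normalize_scores(scores: np.ndarray, threshold: float) -> np.ndarray:
    score_min, score_max = scores.min(), scores.max()
    normalized = (scores - threshold) / (score_max - score_min) + 0.5
    return np.clip(normalized, 0.0, 1.0)

raw_scores = np.array([0.62, 0.71, 1.35, 1.80])  # made-up raw GANomaly scores
adaptive_threshold = 0.58                        # made-up adaptive threshold
print(normalize_scores(raw_scores, adaptive_threshold))
# prints roughly [0.53 0.61 1.0 1.0]: every raw score above the raw threshold
# ends up above 0.5, which matches the "always abnormal" labels I am seeing.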
Dataset
Other (please specify in the text field below)
Model
GANomaly
Steps to reproduce the behavior
.
OS information
OS information:
OS: [e.g. Ubuntu 20.04]
Python version: [e.g. 3.8.10]
Anomalib version: [e.g. 0.3.6]
PyTorch version: [e.g. 1.9.0]
CUDA/cuDNN version: [e.g. 11.1]
GPU models and configuration: [e.g. 2x GeForce RTX 3090]
Any other relevant information: [e.g. I'm using a custom dataset]
Expected behavior
Ideally, I would like to see in the image a result between 0 and 1 (0% and 100%), but with an adaptive threshold as well. If the image was considered normal, it would show "Normal": 'score lower than threshold'. If the image was considered anomalous, it would show "Anomalous": 'score higher than threshold'.
In the paper "GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training", in Figure 6a, the results are between 0 and 1 and the threshold is not 0.5.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
dataset:
  name: sixray
  format: folder
  path: ./datasets/sixray
  normal_dir: normal
  abnormal_dir: abnormal
  task: classification
  mask_dir: null
  normal_test_dir: real
  extensions: null
  split_ratio: 0.2 # normal images ratio to create a test split
  seed: 0
  train_batch_size: 32
  eval_batch_size: 32
  inference_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: true
    tile_size: 64
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: ganomaly
  latent_vec_size: 100
  n_features: 64
  extra_layers: 0
  add_final_conv: true
  early_stopping:
    patience: 3
    metric: image_AUROC
    mode: max
  lr: 0.0002
  beta1: 0.5
  beta2: 0.999
  wadv: 1
  wcon: 50
  wenc: 1
  normalization_method: min_max

metrics:
  image:
    - F1Score
    - AUROC
    - AUPRO
  threshold:
    method: adaptive # options: [adaptive, manual]
    manual_image: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: true # Logs the model graph to respective logger.

optimization:
  export_mode: null

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 2 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 100
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
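For completeness, this is roughly how I launch training with the config above (a minimal sketch assuming the standard anomalib 0.3.x tools/train.py flow; the config path below is a placeholder, not my real path):

from pytorch_lightning import Trainer, seed_everything

from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import get_callbacks

# Load the YAML shown above (path is a placeholder for illustration).
config = get_configurable_parameters(model_name="ganomaly", config_path="path/to/config.yaml")
if config.project.seed is not None:
    seed_everything(config.project.seed)

datamodule = get_datamodule(config)  # folder datamodule for the sixray images
model = get_model(config)            # GANomaly lightning module
callbacks = get_callbacks(config)    # anomalib callbacks (thresholding/normalization, as far as I understand)

trainer = Trainer(**config.trainer, callbacks=callbacks)
trainer.fit(model=model, datamodule=datamodule)
trainer.test(model=model, datamodule=datamodule)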
Logs
.
Code of Conduct
[X] I agree to follow this project's Code of Conduct