Describe the bug
I am attempting to reproduce the experimental results of PatchCore, but I don't know how to train all of the MVTec classes together. When I set "category" in config.yaml to "all", it starts downloading the whole MVTec dataset. Could you please give me some suggestions on how to handle this?
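For context, the workaround I was about to try is a per-category loop, since each MVTec class lives in its own folder. The sketch below is only my assumption of how this could look with the Python API used by tools/train.py (get_configurable_parameters, get_datamodule, get_model, get_callbacks); please correct me if there is an intended way to train every class in one go (perhaps tools/benchmarking?).

from pytorch_lightning import Trainer, seed_everything

from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import get_callbacks

# The 15 MVTec AD categories; "all" is not itself a category folder.
CATEGORIES = [
    "bottle", "cable", "capsule", "carpet", "grid", "hazelnut", "leather",
    "metal_nut", "pill", "screw", "tile", "toothbrush", "transistor",
    "wood", "zipper",
]

for category in CATEGORIES:
    # Load the default PatchCore config and override the category for this run.
    config = get_configurable_parameters(model_name="patchcore")
    config.dataset.category = category
    config.project.path = f"./results/patchcore/mvtec/{category}"
    seed_everything(config.project.seed)

    datamodule = get_datamodule(config)
    model = get_model(config)
    callbacks = get_callbacks(config)

    trainer = Trainer(**config.trainer, callbacks=callbacks)
    trainer.fit(model=model, datamodule=datamodule)
    trainer.test(model=model, datamodule=datamodule)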
Dataset
MVTec
Model
PatchCore
Steps to reproduce the behavior
No question
OS information
OS information:
OS: Windows 10
Python version: 3.8.16
Anomalib version: main
PyTorch version: 1.12.1
CUDA/cuDNN version: 11.3
GPU models and configuration: GeForce RTX 4080
Any other relevant information: I'm using the MVTec AD dataset
Expected behavior
To train all classes of MVTec using PatchCore.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
dataset:
  name: mvtec
  format: mvtec
  path: ./datasets/MVTec
  task: segmentation
  category: all
  train_batch_size: 32
  test_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: 224 # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
    - layer2
    - layer3
  coreset_sampling_ratio: 0.1
  num_neighbors: 9
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive # options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 0
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: gpu # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
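If it helps with triage, this is the pre-flight check I added on my side to catch the invalid value before launching training. The file name config.yaml and the hard-coded class list are my own assumptions, not anything provided by anomalib:

import yaml

# Hypothetical pre-flight check: make sure dataset.category names a real
# MVTec AD class, since "all" is not a folder in the dataset.
MVTEC_CATEGORIES = {
    "bottle", "cable", "capsule", "carpet", "grid", "hazelnut", "leather",
    "metal_nut", "pill", "screw", "tile", "toothbrush", "transistor",
    "wood", "zipper",
}

with open("config.yaml", encoding="utf-8") as f:
    config = yaml.safe_load(f)

category = config["dataset"]["category"]
if category not in MVTEC_CATEGORIES:
    raise ValueError(
        f"'{category}' is not an MVTec AD category; "
        f"expected one of {sorted(MVTEC_CATEGORIES)}"
    )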
Logs
2023-04-10 15:54:55,310 - anomalib.data - INFO - Loading the datamodule
2023-04-10 15:54:55,311 - anomalib.data.utils.transform - INFO - No config file has been provided. Using default transforms.
2023-04-10 15:54:55,311 - anomalib.data.utils.transform - INFO - No config file has been provided. Using default transforms.
2023-04-10 15:54:55,311 - anomalib.models - INFO - Loading the model.
2023-04-10 15:54:55,311 - anomalib.models.components.base.anomaly_module - INFO - Initializing PatchcoreLightning model.
2023-04-10 15:54:56,171 - pytorch_lightning.utilities.rank_zero - INFO - GPU available: True (cuda), used: True
2023-04-10 15:54:56,171 - pytorch_lightning.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores
2023-04-10 15:54:56,171 - pytorch_lightning.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs
2023-04-10 15:54:56,171 - pytorch_lightning.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs
2023-04-10 15:54:56,172 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
2023-04-10 15:54:56,172 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used..
2023-04-10 15:54:56,172 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..
2023-04-10 15:54:56,172 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_predict_batches=1.0)` was configured so 100% of the batches will be used..
2023-04-10 15:54:56,172 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..
2023-04-10 15:54:56,172 - anomalib - INFO - Training the model.
2023-04-10 15:54:56,176 - anomalib.data.utils.download - INFO - Downloading the mvtec dataset.
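My guess (and it is only a guess) is that the datamodule looks for the configured category folder under dataset.path and falls back to downloading MVTec when it is missing; since datasets/MVTec/all never exists, the download starts on every run. Purely as an illustration of that presumed check, not anomalib's actual code:

from pathlib import Path

# Illustration of the presumed trigger: if the expected category folder is
# missing, the datamodule's prepare step falls back to downloading the dataset.
root = Path("./datasets/MVTec")
category = "all"  # value taken from config.yaml

if not (root / category).is_dir():
    print(f"{root / category} not found -> the MVTec download would start here")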
Code of Conduct
[X] I agree to follow this project's Code of Conduct