An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
When I use the dataset format below to change the config for a custom dataset with the PatchCore model, as described in the README.md:
dataset:
  name: sample_anomaly
  format: folder
  path: ./datasets/sample/abc
  normal_dir: normal # name of the folder containing normal images.
  abnormal_dir: abnormal # name of the folder containing abnormal images.
  normal_test_dir: test # name of the folder containing normal test images.
  task: segmentation # classification or segmentation
  category: parts
  mask: null #<path/to/mask/annotations> #optional
  extensions: null
  split_ratio: 0.2 # ratio of the normal images that will be used to create a test split
  image_size: 224
  train_batch_size: 32
  test_batch_size: 32
  num_workers: 8
  transform_config:
    train: null
    val: null
  create_validation_set: true
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16
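For reference, this is roughly the directory layout the config above points at (the folder names come from normal_dir, abnormal_dir and normal_test_dir; there is no mask folder since mask is null):

./datasets/sample/abc
├── normal      # normal training images
├── abnormal    # abnormal images
└── test        # normal test images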
I got the error:
2022-07-12 06:03:10,143 - anomalib.data - INFO - Loading the datamodule
2022-07-12 06:03:10,144 - anomalib.models - INFO - Loading the model.
2022-07-12 06:03:10,148 - torch.distributed.nn.jit.instantiator - INFO - Created a temporary directory at /tmp/tmpjty8xwky
2022-07-12 06:03:10,148 - torch.distributed.nn.jit.instantiator - INFO - Writing /tmp/tmpjty8xwky/_remote_module_non_sriptable.py
2022-07-12 06:03:10,157 - anomalib.models.components.base.anomaly_module - INFO - Initializing PatchcoreLightning model.
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called `full_state_update` that has
not been set for this class (AdaptiveThreshold). The property determines if `update` by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to `False`.
We provide an checking function
`from torchmetrics.utilities import check_forward_no_full_state`
that can be used to check if the `full_state_update=True` (old and potential slower behaviour,
default for now) or if `full_state_update=False` can be used safely.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `PrecisionRecallCurve` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called `full_state_update` that has
not been set for this class (AnomalyScoreDistribution). The property determines if `update` by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to `False`.
We provide an checking function
`from torchmetrics.utilities import check_forward_no_full_state`
that can be used to check if the `full_state_update=True` (old and potential slower behaviour,
default for now) or if `full_state_update=False` can be used safely.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Torchmetrics v0.9 introduced a new argument class property called `full_state_update` that has
not been set for this class (MinMax). The property determines if `update` by
default needs access to the full metric state. If this is not the case, significant speedups can be
achieved and we recommend setting this to `False`.
We provide an checking function
`from torchmetrics.utilities import check_forward_no_full_state`
that can be used to check if the `full_state_update=True` (old and potential slower behaviour,
default for now) or if `full_state_update=False` can be used safely.
warnings.warn(*args, **kwargs)
2022-07-12 06:03:11,272 - anomalib.utils.loggers - INFO - Loading the experiment logger(s)
2022-07-12 06:03:11,272 - anomalib.utils.callbacks - INFO - Loading the callbacks
2022-07-12 06:03:11,305 - pytorch_lightning.utilities.rank_zero - INFO - GPU available: True, used: True
2022-07-12 06:03:11,305 - pytorch_lightning.utilities.rank_zero - INFO - TPU available: False, using: 0 TPU cores
2022-07-12 06:03:11,305 - pytorch_lightning.utilities.rank_zero - INFO - IPU available: False, using: 0 IPUs
2022-07-12 06:03:11,305 - pytorch_lightning.utilities.rank_zero - INFO - HPU available: False, using: 0 HPUs
2022-07-12 06:03:11,306 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
2022-07-12 06:03:11,306 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_val_batches=1.0)` was configured so 100% of the batches will be used..
2022-07-12 06:03:11,306 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_test_batches=1.0)` was configured so 100% of the batches will be used..
2022-07-12 06:03:11,306 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(limit_predict_batches=1.0)` was configured so 100% of the batches will be used..
2022-07-12 06:03:11,306 - pytorch_lightning.utilities.rank_zero - INFO - `Trainer(val_check_interval=1.0)` was configured so validation will run at the end of the training epoch..
2022-07-12 06:03:11,306 - anomalib - INFO - Training the model.
2022-07-12 06:03:11,313 - anomalib.data.folder - INFO - Setting up train, validation, test and prediction datasets.
/usr/local/lib/python3.7/dist-packages/anomalib/data/folder.py:228: UserWarning: Segmentation task is requested, but mask directory is not provided. Classification is to be chosen if mask directory is not provided.
"Segmentation task is requested, but mask directory is not provided. "
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
2022-07-12 06:03:15,144 - pytorch_lightning.accelerators.gpu - INFO - LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/optimizer.py:184: UserWarning: `LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer
"`LightningModule.configure_optimizers` returned `None`, this fit will run with no optimizer",
2022-07-12 06:03:15,150 - pytorch_lightning.callbacks.model_summary - INFO -
| Name | Type | Params
-------------------------------------------------------------------
0 | image_threshold | AdaptiveThreshold | 0
1 | pixel_threshold | AdaptiveThreshold | 0
2 | training_distribution | AnomalyScoreDistribution | 0
3 | min_max | MinMax | 0
4 | model | PatchcoreModel | 68.9 M
5 | image_metrics | AnomalibMetricCollection | 0
6 | pixel_metrics | AnomalibMetricCollection | 0
-------------------------------------------------------------------
68.9 M Trainable params
0 Non-trainable params
68.9 M Total params
275.533 Total estimated model params size (MB)
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:490: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
cpuset_checked))
Epoch 0: 0% 0/2 [00:00<?, ?it/s] /usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/optimization/optimizer_loop.py:137: UserWarning: `training_step` returned `None`. If this was on purpose, ignore this warning...
self.warning_cache.warn("`training_step` returned `None`. If this was on purpose, ignore this warning...")
Validation: 0it [00:00, ?it/s]2022-07-12 06:03:15,619 - anomalib.models.patchcore.lightning_model - INFO - Aggregating the embedding extracted from the training set.
2022-07-12 06:03:15,619 - anomalib.models.patchcore.lightning_model - INFO - Applying core-set subsampling to get the embedding.
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 0: 100% 2/2 [00:00<00:00, 2.02it/s]Traceback (most recent call last):
File "tools/train.py", line 82, in <module>
train()
File "tools/train.py", line 71, in train
trainer.fit(model=model, datamodule=datamodule)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit
self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
results = self._run_stage()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage
return self._run_train()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train
self.fit_loop.run()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 205, in run
self.on_advance_end()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 255, in on_advance_end
self._run_validation()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 309, in _run_validation
self.val_loop.run()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py", line 211, in run
output = self.on_run_end()
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 188, in on_run_end
self._evaluation_epoch_end(self._outputs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 315, in _evaluation_epoch_end
self.trainer._call_lightning_module_hook("validation_epoch_end", output_or_outputs)
File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/anomalib/models/components/base/anomaly_module.py", line 133, in validation_epoch_end
self._collect_outputs(self.image_metrics, self.pixel_metrics, outputs)
File "/usr/local/lib/python3.7/dist-packages/anomalib/models/components/base/anomaly_module.py", line 159, in _collect_outputs
image_metric.update(output["pred_scores"], output["label"].int())
File "/usr/local/lib/python3.7/dist-packages/anomalib/utils/metrics/collection.py", line 37, in update
super().update(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/collections.py", line 183, in update
m.update(*args, **m_kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/metric.py", line 383, in wrapped_func
update(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/classification/stat_scores.py", line 189, in update
ignore_index=self.ignore_index,
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/functional/classification/stat_scores.py", line 161, in _stat_scores_update
ignore_index=ignore_index,
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/checks.py", line 417, in _input_format_classification
ignore_index=ignore_index,
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/checks.py", line 271, in _check_classification_inputs
case, implied_classes = _check_shape_and_type_consistency(preds, target)
File "/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/checks.py", line 118, in _check_shape_and_type_consistency
"Either `preds` and `target` both should have the (same) shape (N, ...), or `target` should be (N, ...)"
ValueError: Either `preds` and `target` both should have the (same) shape (N, ...), or `target` should be (N, ...) and `preds` should be (N, C, ...).
Epoch 0: 100%|██████████| 2/2 [00:01<00:00, 1.04it/s]
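For what it's worth, the shape rule that the final ValueError refers to can be reproduced outside anomalib with a plain torchmetrics call. F1Score and the tensor shapes below are just stand-ins I picked for illustration, not necessarily what anomalib passes internally:

import torch
from torchmetrics import F1Score

metric = F1Score()

# preds and target with matching (N,) shapes update fine (binary case).
preds = torch.rand(4)                 # e.g. per-image anomaly scores
target = torch.tensor([0, 1, 0, 1])   # per-image labels
metric.update(preds, target)          # OK

# If preds carries extra dimensions that target does not have (and is not
# simply (N, C, ...)), torchmetrics raises the same ValueError as in the
# traceback above.
bad_preds = torch.rand(4, 224, 224)   # e.g. a pixel map instead of per-image scores
try:
    metric.update(bad_preds, target)
except ValueError as err:
    print(err)  # "Either `preds` and `target` both should have the (same) shape (N, ...) ..."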
Am I doing something wrong? It still works on the MVTec dataset, but it doesn't work on the custom dataset.