Closed kumarsyamala closed 1 year ago
I am unable to reproduce this. When I change the category to `leather` in the original config file, I can train a model. Here is the config file I used:
```yaml
dataset:
  name: mvtec
  format: mvtec
  path: ./datasets/MVTec
  category: leather
  task: segmentation
  train_batch_size: 32
  eval_batch_size: 32
  num_workers: 8
  image_size: 256 # dimensions to which images are resized (mandatory)
  center_crop: null # dimensions to which images are center-cropped after resizing (optional)
  normalization: imagenet # data distribution to which the images will be normalized: [none, imagenet]
  transform_config:
    train: null
    eval: null
  test_split_mode: from_dir # options: [from_dir, synthetic]
  test_split_ratio: 0.2 # fraction of train images held out for testing (usage depends on test_split_mode)
  val_split_mode: same_as_test # options: [same_as_test, from_test, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: padim
  backbone: resnet18
  pre_trained: true
  layers:
    - layer1
    - layer2
    - layer3
  normalization_method: min_max # options: [none, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive # options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: torch, onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
```
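As a quick sanity check that the file above is valid YAML, it can be loaded with plain PyYAML. This is a minimal sketch only (anomalib's own config loader adds defaults and validation on top); the excerpt below repeats just the `dataset` and `metrics` fields relevant to this issue:

```python
import yaml

# Minimal excerpt of the config above; in practice you would load the full
# file, e.g. yaml.safe_load(open("config.yaml")).
config_text = """
dataset:
  category: leather
metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
"""

config = yaml.safe_load(config_text)
print(config["dataset"]["category"])  # leather
print(config["metrics"]["image"])     # ['F1Score', 'AUROC']
```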
hey hi
I am not sure why this is happening. I tried different dataset formats and different models as well, and I get the same error.
You can refer to this discussion I started, where I am trying to train the model with a custom dataset: https://github.com/openvinotoolkit/anomalib/discussions/1117#discussion-5281049
@kumarsyamala, I cannot reproduce this. Using your configuration file, I can train a model. Can you confirm that you installed anomalib in a freshly created virtual environment, to ensure you are using the right anomalib version?
For the time being, I'm converting this to a Q&A since this is not an issue.
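One stdlib-only way to follow up on the suggestion above is to check which anomalib installation Python actually picks up; this helps spot cases where a stale copy in `site-packages` shadows the editable install from the cloned repo:

```python
import importlib.util

# Locate the module Python would import; spec.origin is the file path
# the import system resolves "anomalib" to.
spec = importlib.util.find_spec("anomalib")
if spec is None:
    print("anomalib is not installed in this environment")
else:
    print("anomalib resolves to:", spec.origin)
```

If the printed path does not point inside the cloned repository, the environment is importing a different installation than the one built with `pip install -e .`.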
Describe the bug
I am trying to train the model on the leather dataset, but I am facing this error:

```
AttributeError: 'AnomalibMetricCollection' object has no attribute 'image_F1Score'
```

Any suggestions on how I can tackle this problem? I have searched and could not find a solution.
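For context on the error message itself: anomalib registers image- and pixel-level metrics under prefixed names, so `image_F1Score` is the expected attribute for the image-level F1 score. The toy class below only illustrates that lookup pattern (it is not anomalib's actual implementation): the `AttributeError` means the prefixed metric attribute was never registered on the collection.

```python
# Illustrative sketch only, not anomalib's implementation: metrics are
# stored as attributes with a prefix such as "image_".
class MetricCollectionSketch:
    def __init__(self, prefix, metric_names):
        for name in metric_names:
            # Each configured metric becomes e.g. image_F1Score.
            setattr(self, f"{prefix}{name}", object())

populated = MetricCollectionSketch("image_", ["F1Score", "AUROC"])
print(hasattr(populated, "image_F1Score"))  # True

# If metric registration silently fails (e.g. a version mismatch),
# accessing the attribute raises the AttributeError seen above.
empty = MetricCollectionSketch("image_", [])
print(hasattr(empty, "image_F1Score"))  # False
```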
Dataset
MVTec
Model
PADiM
Steps to reproduce the behavior
```
!git clone https://github.com/openvinotoolkit/anomalib.git
%cd anomalib
!pip install -e .
!python tools/train.py --config C:\anomalib.yaml
```
OS information
Expected behavior
It should have run and produced results. I already trained the model on the bottle category and got results, but when I try a different category like leather or wood, I get the same error.
Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
Logs
Code of Conduct