openvinotoolkit / anomalib

An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
https://anomalib.readthedocs.io/en/latest/
Apache License 2.0

[Bug]: Error when trying to train on a custom dataset #1461

Open anewworl opened 12 months ago

anewworl commented 12 months ago

Describe the bug

Something goes wrong in config.py (see the attached screenshot).

Dataset

Folder

Model

Efficient_ad

Steps to reproduce the behavior

1. Install anomalib.
2. Modify the config.yaml file.
3. Run tools/train.py (a typical invocation is sketched below).
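A minimal reproduction along those lines, assuming the v0.x tools/train.py flags (--model selects the model, --config an edited config file; the config path is a placeholder):

  pip install anomalib
  python tools/train.py --model efficient_ad --config path/to/config.yaml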

OS information

Not provided.

Expected behavior

Training should run and complete normally.

Screenshots

(screenshot of the error attached in the original issue)

Pip/GitHub

pip

What version/branch did you use?

latest version

Configuration YAML

dataset:
  name: sample
  format: folder
  path: ./datasets/sample
  category: laser_lid
  task: segmentation # classification or segmentation
  normal_dir: good # name of the folder containing normal images.
  abnormal_dir: bad # name of the folder containing abnormal images.
  mask_dir: null # optional
  normal_test_dir: null # optional
  extensions: null
  split_ratio: 0.2 # ratio of normal images held out to create the test split
  image_size: 960
  train_batch_size: 32
  eval_batch_size: 32
  num_workers: 12
  normalization: imagenet # data distribution to which the images will be normalized
  test_split_mode: from_dir # options: [from_dir, synthetic]
  val_split_ratio: 0.5 # fraction of train/test images held out for validation (usage depends on val_split_mode)
  transform_config:
    train: null
    eval: null
  val_split_mode: from_test # determines how the validation set is created, options: [same_as_test, from_test]

model:
  name: padim
  backbone: resnet18
  pre_trained: true
  layers:
    - layer1
    - layer2
    - layer3
  normalization_method: min_max # options: [none, min_max, cdf]

metrics:
  image:
    - F1Score
    - AUROC
  pixel:
    - F1Score
    - AUROC
  threshold:
    method: adaptive #options: [adaptive, manual]
    manual_image: null
    manual_pixel: null

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 42
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: torch, onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  enable_checkpointing: true
  default_root_dir: null
  gradient_clip_val: 0
  gradient_clip_algorithm: norm
  num_nodes: 1
  devices: 1
  enable_progress_bar: true
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1 # Don't validate before extracting features.
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1
  min_epochs: null
  max_steps: -1
  min_steps: null
  max_time: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  limit_predict_batches: 1.0
  val_check_interval: 1.0 # Don't validate before extracting features.
  log_every_n_steps: 50
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  strategy: null
  sync_batchnorm: false
  precision: 32
  enable_model_summary: true
  num_sanity_val_steps: 0
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_n_epochs: 0
  auto_lr_find: false
  replace_sampler_ddp: true
  detect_anomaly: false
  auto_scale_batch_size: false
  plugins: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
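For reference, tools/train.py consumes this YAML roughly as follows; a minimal sketch based on the v0.x entry point (the config path is a placeholder, and exact import locations can shift between releases):

  from anomalib.config import get_configurable_parameters
  from anomalib.data import get_datamodule
  from anomalib.models import get_model
  from pytorch_lightning import Trainer

  # model_name must match model.name in the YAML above.
  config = get_configurable_parameters(model_name="padim", config_path="path/to/config.yaml")
  datamodule = get_datamodule(config)  # built from the dataset section
  model = get_model(config)            # built from the model section
  trainer = Trainer(**config.trainer)  # the trainer section maps onto PL Trainer args
  trainer.fit(model=model, datamodule=datamodule)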

Logs

C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\torchmetrics\utilities\prints.py:36: UserWarning: Metric `ROC` will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
  warnings.warn(*args, **kwargs)
Traceback (most recent call last):
  File "C:\Users\phanvann\VSCode\anomalib\tools\train.py", line 79, in <module>
    train(args)
  File "C:\Users\phanvann\VSCode\anomalib\tools\train.py", line 64, in train
    trainer.fit(model=model, datamodule=datamodule)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 700, in fit
    self._call_and_handle_interrupt(
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 654, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 741, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1147, in _run
    self.strategy.setup(self)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\strategies\single_device.py", line 74, in setup
    super().setup(trainer)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 153, in setup
    self.setup_optimizers(trainer)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 141, in setup_optimizers
    self.optimizers, self.lr_scheduler_configs, self.optimizer_frequencies = _init_optimizers_and_lr_schedulers(
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\core\optimizer.py", line 194, in _init_optimizers_and_lr_schedulers
    _validate_scheduler_api(lr_scheduler_configs, model)
  File "C:\Users\phanvann\AppData\Local\Programs\Python\Python39\Lib\site-packages\pytorch_lightning\core\optimizer.py", line 351, in _validate_scheduler_api
    raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler `StepLR` doesn't follow PyTorch's LRScheduler API. You should override the `LightningModule.lr_scheduler_step` hook with your own logic if you are using a custom LR scheduler.
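For context on the exception: before fitting, Lightning checks that each scheduler returned from configure_optimizers is a recognized PyTorch LRScheduler, unless the module overrides lr_scheduler_step. Below is a minimal sketch of the pattern the error message refers to, using a stock StepLR; this is illustrative only, not anomalib's actual EfficientAd code:

  import torch
  from torch import nn
  import pytorch_lightning as pl

  class DemoModule(pl.LightningModule):
      """Toy module; only configure_optimizers matters here."""

      def __init__(self) -> None:
          super().__init__()
          self.layer = nn.Linear(4, 1)

      def training_step(self, batch, batch_idx):
          x, y = batch
          return nn.functional.mse_loss(self.layer(x), y)

      def configure_optimizers(self):
          optimizer = torch.optim.Adam(self.parameters(), lr=1e-4)
          # StepLR ships with torch and follows the standard scheduler API,
          # so Lightning's validation normally accepts it.
          scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
          return {"optimizer": optimizer, "lr_scheduler": scheduler}

Since StepLR itself is a standard torch scheduler, seeing this error for it can also point to mismatched torch / pytorch_lightning versions rather than to the scheduler code.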


jeaho322 commented 11 months ago

I think you used the wrong config file: the YAML above sets name: padim under model, even though the model you want to train is EfficientAd. Go to the EfficientAd page and get its config.
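In the 0.x layout each model ships a default config next to its implementation (e.g. src/anomalib/models/efficient_ad/config.yaml; the exact path is an assumption and varies by release), so running the trainer with only the model flag should pick that default up without a hand-edited --config:

  python tools/train.py --model efficient_ad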