Closed — haimat closed this issue 3 weeks ago
Hi @haimat, this is because PADIM requires only 1 epoch to go through the entire dataset once. Increasing the number of epochs would only repeat the same process and wouldn't improve performance. That's why we hardcode the number of epochs to 1.
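For context on the answer above: PaDiM's "training" is a single statistics-gathering pass over frozen backbone features rather than gradient descent. The toy sketch below (plain Python, not anomalib's actual code; `fit_gaussian` and the embedding values are made up for illustration) shows why running a second epoch over the same data cannot change the result:

```python
# Toy sketch (NOT anomalib code): PaDiM-style training accumulates feature
# statistics (here a 1-D mean/variance; PaDiM fits a multivariate Gaussian
# per patch position). One pass over the dataset determines the statistics
# exactly, so extra epochs just recompute the same numbers.

def fit_gaussian(features):
    """One 'epoch': estimate mean and variance from all feature values."""
    n = len(features)
    mean = sum(features) / n
    var = sum((x - mean) ** 2 for x in features) / n
    return mean, var

# Hypothetical per-patch embedding values from a frozen backbone.
embeddings = [0.1, 0.4, 0.35, 0.2, 0.5]

one_epoch = fit_gaussian(embeddings)
# A second "epoch" re-reads the same data, so the statistics are identical.
two_epochs = fit_gaussian(embeddings)

assert one_epoch == two_epochs
```

This is why anomalib pins `max_epochs=1` for PaDiM regardless of what is passed to the trainer.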
Describe the bug
I want to train a PADiM model via anomalib, however it always stops after the first epoch, even though I pass max_epochs=100 when creating the Engine() object (see below).

Dataset
Other (please specify in the text field below)
Model
PADiM
Steps to reproduce the behavior
I use the following training script:
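(The original script was not captured in this issue. A minimal reproduction, assuming anomalib v1.x's `Folder`/`Padim`/`Engine` API, might look like the following; the dataset name and paths are placeholders.)

```python
# Hypothetical reconstruction of the missing training script.
# Assumes anomalib v1.x; paths and names below are placeholders.
from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import Padim

# Custom dataset loaded via the Folder datamodule, as described above.
datamodule = Folder(
    name="my_dataset",             # placeholder
    root="./datasets/my_dataset",  # placeholder
    normal_dir="good",
    abnormal_dir="defect",
)
model = Padim()

# max_epochs=100 is requested here, but Padim's own trainer arguments
# override it to 1, producing the "max_epochs=1 reached" message.
engine = Engine(max_epochs=100)
engine.fit(model=model, datamodule=datamodule)
```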
OS information

The dataset is a custom one, loaded via anomalib.data.Folder.
Expected behavior

Since I pass max_epochs=100, I would expect the training not to stop after the first epoch with the message "max_epochs=1 reached."

Screenshots
No response
Pip/GitHub
pip
What version/branch did you use?
No response
Configuration YAML
Logs
Code of Conduct