gudovskiy / cflow-ad

Official PyTorch code for WACV 2022 paper "CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows"
https://arxiv.org/abs/2107.12571
BSD 3-Clause "New" or "Revised" License

Tune parameters to get best results. #31

Open Tekno-H opened 1 year ago

Tekno-H commented 1 year ago

Hello Denis, I have been studying and adjusting this great repository to fit my needs in defect detection. I have trained several models successfully (using the mobilenetv3_large backbone) and achieved good results. However, I have removed some parts of your code, including the snippets that calculate the "seg_threshold" parameter from the ground truth. Therefore, I am choosing it by hand (through trial and error), and although the results are OK, I think they can be further improved.

My questions are:

  1. How can I choose a value for "seg_threshold" reliably in my case?
  2. What parameters do you recommend fine-tuning when training a new model (given that I have a good, balanced dataset of the same product in different colours)?
  3. My last question concerns exporting the model to ONNX format: do you have any comments on how to achieve that? Do you plan on adding that capability?
  4. When should I stop training?

Thank you in advance, your work is truly inspiring.

gudovskiy commented 1 year ago
  1. Could you take some part of your training data to calculate the optimal threshold? Or do cross-validation?
  2. I didn't try ONNX conversion. Did you look at https://github.com/openvinotoolkit/anomalib ?
  3. Overfitting can happen when training for a long time. Maybe it is related to #1, where you can select a subset of your data for validation to avoid overfitting.
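For the threshold question, one common recipe (a sketch, not code from this repo; `scikit-learn` and `numpy` are assumed available, and `f1_optimal_threshold` is a hypothetical helper name) is to sweep thresholds over a held-out labeled split and keep the one that maximizes pixel-level F1:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def f1_optimal_threshold(scores, labels):
    """Return the score threshold that maximizes F1 on held-out data.

    scores: 1-D array of per-pixel (or per-image) anomaly scores
    labels: 1-D binary array, 1 = anomalous
    """
    precision, recall, thresholds = precision_recall_curve(labels, scores)
    # Avoid division by zero where precision + recall == 0
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    # precision_recall_curve returns one more (precision, recall) pair
    # than thresholds, so drop the last F1 value to align the arrays
    return thresholds[np.argmax(f1[:-1])]
```

The same held-out split (or cross-validation folds, as suggested above) can then be reused to sanity-check the chosen threshold before deployment.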
Tekno-H commented 1 year ago

Thank you for your reply,

  1. Could you take some part of your training data to calculate the optimal threshold? Or do cross-validation?

This is exactly what I ended up doing.

  2. I didn't try ONNX conversion. Did you look at https://github.com/openvinotoolkit/anomalib ?

I checked it and replied on the other thread.

  3. Overfitting can happen when training for a long time. Maybe it is related to #1 ("where is checkpoint? pretrained weights to reproduce your results?"), where you can select a subset of your data for validation to avoid overfitting.

Thanks, I will check it.
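On the "when to stop training" question, the validation-subset advice above is usually implemented as patience-based early stopping. Below is a generic sketch (not code from this repo; `train_epoch` and `evaluate` are hypothetical callbacks, where `evaluate` returns a higher-is-better metric such as validation AUROC):

```python
def train_with_early_stopping(train_epoch, evaluate, max_epochs=100, patience=5):
    """Train until the validation metric stops improving.

    train_epoch(epoch): runs one training epoch
    evaluate(epoch): returns a higher-is-better validation metric
    Returns (best_epoch, best_score).
    """
    best_score, best_epoch = float("-inf"), -1
    for epoch in range(max_epochs):
        train_epoch(epoch)
        score = evaluate(epoch)
        if score > best_score:
            best_score, best_epoch = score, epoch
            # In practice, checkpoint the model weights here
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs
    return best_epoch, best_score
```

Restoring the checkpoint from `best_epoch` rather than the final epoch is what actually guards against the overfitting mentioned above.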