An anomaly detection library comprising state-of-the-art algorithms and features such as experiment management, hyper-parameter optimization, and edge inference.
## ✨ Changes
Select what type of change your PR is:
- [x] 🐞 Bug fix (non-breaking change which fixes an issue)
- [ ] 🔨 Refactor (non-breaking change which refactors the code base)
- [ ] 🚀 New feature (non-breaking change which adds functionality)
- [ ] 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] 📚 Documentation update
- [ ] 🔒 Security update
## ✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
- [ ] 📋 I have summarized my changes in the CHANGELOG and followed the guidelines for my type of change (skip for minor changes, documentation updates, and test enhancements).
- [ ] 📚 I have made the necessary updates to the documentation (if applicable).
- [ ] 🧪 I have written tests that support my changes and prove that my fix is effective or my feature works (if applicable).
## 📝 Description
Fixes: https://github.com/openvinotoolkit/anomalib/discussions/2122#discussioncomment-9740488 and https://github.com/openvinotoolkit/anomalib/issues/2027 and https://github.com/openvinotoolkit/anomalib/issues/2139#issue-2352819822
The normalization metrics are now reset before every validation run, so that models whose anomaly map and prediction score magnitudes differ significantly across training epochs are normalized against up-to-date statistics rather than stale ones. Is there any flaw in this logic, or does it make sense?
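The reset-before-validation behaviour described above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (`MinMaxNormalizer`, `reset`, `update`, `normalize`), not anomalib's actual normalization callback API:

```python
import torch


class MinMaxNormalizer:
    """Tracks running min/max of anomaly scores for min-max normalization.

    Calling ``reset()`` before each validation run discards statistics
    from earlier epochs, so normalization always reflects the current
    model's score magnitudes.
    """

    def __init__(self) -> None:
        self.reset()

    def reset(self) -> None:
        # Sentinel extremes so the first update always overwrites them.
        self.min = float("inf")
        self.max = float("-inf")

    def update(self, scores: torch.Tensor) -> None:
        # Fold a new batch of anomaly scores into the running stats.
        self.min = min(self.min, scores.min().item())
        self.max = max(self.max, scores.max().item())

    def normalize(self, scores: torch.Tensor) -> torch.Tensor:
        # Map raw scores into [0, 1] using this epoch's stats only.
        return (scores - self.min) / (self.max - self.min + 1e-8)
```

Without the reset, a model whose raw scores shrink over training would be normalized against an inflated historical maximum, compressing its outputs well below 1.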
For more information about code review checklists, see the Code Review Checklist.