@nepomnyi - your compare-algorithms notebook triggered some thinking about the need for some sort of benchmark for openpiv-python algorithms. Could you please check how this is done in other projects and which of the following would be the best way:
1. Running your comparison over all the tests we have and comparing statistics, rather than a single case.
2. Creating some sort of standard benchmark test, a subset of images with ground truth, that we always compare against?
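For option 2, a minimal sketch of what I have in mind: a small harness that runs any PIV implementation over a fixed directory of image pairs with known ground-truth displacement fields and reports error statistics over the whole suite rather than a single case. The directory layout, the file names (`frame_a.png`, `truth.npz`), and the `piv_func` signature are all assumptions for illustration, not an existing openpiv-python API.

```python
# Rough sketch of a benchmark harness: run a PIV function over every
# test case and aggregate RMS errors against ground truth.
# Assumed layout (hypothetical): each case directory contains
# frame_a.png, frame_b.png, and truth.npz with arrays "u" and "v".
from pathlib import Path

import numpy as np
from imageio.v3 import imread


def benchmark(piv_func, case_dir):
    """Run `piv_func` on every case in `case_dir` and collect RMS errors.

    `piv_func(frame_a, frame_b)` is assumed to return (u, v) on the
    same grid as the stored ground truth.
    """
    errors = []
    for case in sorted(p for p in Path(case_dir).iterdir() if p.is_dir()):
        frame_a = imread(case / "frame_a.png")
        frame_b = imread(case / "frame_b.png")
        truth = np.load(case / "truth.npz")
        u, v = piv_func(frame_a, frame_b)
        rms = np.sqrt(np.mean((u - truth["u"]) ** 2 + (v - truth["v"]) ** 2))
        errors.append(rms)
    errors = np.asarray(errors)
    # Statistics over the whole suite, not a single image pair.
    return {
        "n_cases": len(errors),
        "mean_rms": errors.mean(),
        "median_rms": np.median(errors),
        "worst_rms": errors.max(),
    }
```

Each openpiv-python algorithm would then only need a thin adapter matching the `piv_func` signature, so comparing two algorithms becomes comparing two dictionaries of suite-wide statistics.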
Thanks,
Alex