dependabot[bot] closed this pull request 1 month ago
@SkafteNicki seems we have some compatibility issues, mind checking:
FAILED unittests/detection/test_map.py::TestMapProperties::test_segm_iou_empty_gt_mask[faster_coco_eval] - KeyError: 0
FAILED unittests/detection/test_map.py::TestMapProperties::test_average_argument[True-faster_coco_eval] - assert False
+ where False = <built-in method allclose of type object at 0x7fcd9ddc45c0>(tensor([ 0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5660]), tensor([ 0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5505]))
+ where <built-in method allclose of type object at 0x7fcd9ddc45c0> = torch.allclose
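As a side note (not part of the original comment): the two tensors differ only in their final element, and the gap of about 0.0155 is far beyond `torch.allclose`'s default tolerances, so this is a genuine mismatch rather than a rounding artifact. A minimal reproduction:

```python
import torch

result = torch.tensor([0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5660])
expected = torch.tensor([0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5505])

# |0.5660 - 0.5505| = 0.0155, far beyond the default tolerances
# (rtol=1e-5, atol=1e-8), so allclose correctly reports a mismatch
print(torch.allclose(result, expected))  # False
print((result - expected).abs().max())   # tensor(0.0155)
```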
For my own sake, to remember what I did:
- faster-coco-eval just leads to an error, whereas older versions (and pycocotools) simply return -1 for all scores.
- faster-coco-eval injects some attributes, which messes with repeated evaluations (see the sketch below).
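A hypothetical sketch of one way to guard against the second point, assuming a pycocotools-style evaluator (a `COCOeval`-like class with `evaluate`/`accumulate` methods; the helper name `evaluate_fresh` is illustrative, not torchmetrics or faster-coco-eval API): deep-copy the inputs so a repeated evaluation never sees attributes injected by a previous run.

```python
import copy

def evaluate_fresh(coco_gt, coco_dt, evaluator_cls, iou_type="bbox"):
    """Run one evaluation on deep copies of the inputs (illustrative only)."""
    # Deep-copy so any attributes the evaluator injects into the
    # ground-truth/detection objects cannot leak into later evaluations.
    gt, dt = copy.deepcopy(coco_gt), copy.deepcopy(coco_dt)
    evaluator = evaluator_cls(gt, dt, iou_type)
    evaluator.evaluate()    # per-image evaluation
    evaluator.accumulate()  # aggregate precision/recall
    return evaluator
```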
Attention: Patch coverage is 93.93939% with 2 lines in your changes missing coverage. Please review.
Project coverage is 36%. Comparing base (151fef1) to head (e022592). Report is 1 commit behind head on master.
:exclamation: There is a different number of reports uploaded between BASE (151fef1) and HEAD (e022592). Click for more details.
HEAD has 80 uploads less than BASE

| Flag | BASE (151fef1) | HEAD (e022592) |
|------|------|------|
| Windows | 6 | 3 |
| python3.8 | 6 | 3 |
| cpu | 40 | 20 |
| torch1.13.1+cpu | 6 | 3 |
| macOS | 8 | 4 |
| python3.10 | 10 | 5 |
| torch2.0.1 | 4 | 2 |
| torch2.4.0+cpu | 2 | 1 |
| python3.11 | 6 | 3 |
| torch2.4.0 | 2 | 1 |
| torch1.12.1+cpu | 2 | 1 |
| Linux | 26 | 13 |
| python3.9 | 18 | 9 |
| torch2.0.1+cpu | 6 | 3 |
| torch1.10.2+cpu | 2 | 1 |
| torch1.11.0+cpu | 2 | 1 |
| torch2.3.1+cpu | 4 | 2 |
| torch2.2.2+cpu | 4 | 2 |
| torch2.1.2+cpu | 2 | 1 |
| torch2.4.0+cu121 | 2 | 1 |
| torch1.13.1 | 2 | 1 |
Installing the recent version of faster-coco-eval does not seem to work on Windows and macOS.
I have opened an issue asking for help: https://github.com/MiXaiLL76/faster_coco_eval/issues/39
> @SkafteNicki seems we have some compatibility issues, mind checking:
>
> FAILED unittests/detection/test_map.py::TestMapProperties::test_segm_iou_empty_gt_mask[faster_coco_eval] - KeyError: 0
> FAILED unittests/detection/test_map.py::TestMapProperties::test_average_argument[True-faster_coco_eval] - assert False
> + where False = <built-in method allclose of type object at 0x7fcd9ddc45c0>(tensor([ 0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5660]), tensor([ 0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5505]))
> + where <built-in method allclose of type object at 0x7fcd9ddc45c0> = torch.allclose
In my tests of version 1.6.2 I solved the problem with test_segm_iou_empty_gt_mask, but I couldn't solve the problem with test_average_argument in a simple way.
It lies in how scores are sorted on the Python/C++ side; I couldn't figure it out quickly.
https://github.com/Lightning-AI/torchmetrics/blob/master/tests/unittests/detection/test_map.py#L315 In this test the score values for elements 1 and 7 are identical, which causes confusion at the sorting stage. After adding 1e-8 to element 7 (to get 0.204 + 1e-8) the error went away (see the sketch below).
I will try to figure it out, but I am not sure yet whether that is possible.
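A minimal sketch of the tie-breaking problem described above (the score list is hypothetical apart from the tied 0.204 value; indices 1 and 7 mirror the elements mentioned in the comment):

```python
# Hypothetical scores: indices 1 and 7 tie at 0.204, as in the test above.
scores = [0.9, 0.204, 0.7, 0.5, 0.3, 0.25, 0.22, 0.204]

# Rank detections by descending score. With a tie, equally valid sort
# implementations may order indices 1 and 7 differently (Python's sort
# is stable; an unstable C++ sort may swap them), which changes how
# precision/recall accumulates and hence the final mAP.
print(sorted(range(len(scores)), key=lambda i: -scores[i]))
# -> [0, 2, 3, 4, 5, 6, 1, 7]

# The epsilon workaround: nudging one tied score makes the ranking
# unambiguous for every implementation.
scores[7] += 1e-8
print(sorted(range(len(scores)), key=lambda i: -scores[i]))
# -> [0, 2, 3, 4, 5, 6, 7, 1]
```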
@MiXaiLL76 thanks for the new release of faster-coco-eval. I had already found workarounds for both failing tests on our side, so do not worry about that. Hopefully this PR will now just pass without further problems.
Thanks @MiXaiLL76 for the fast response!
@SkafteNicki I have additionally fixed support for Windows & macOS. I do not think it should cause problems, but it would be good if you could check it on your side as well. The new version 1.6.4 now includes wheel (whl) builds for Windows & macOS (earlier I only built Ubuntu wheels).
Hi @MiXaiLL76, thanks for letting me know. I can see that we set our requirements file to
faster-coco-eval >=1.6.3, <1.7.0
which means that the new version should be automatically tested on new commits. The latest commit to master seems to have passed with no problems, so everything should be good. Thanks again for letting us know!
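For reference, a quick way to sanity-check that this pin admits the new release (a sketch using the third-party `packaging` library; not code from the torchmetrics repo):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# the range from the requirements file quoted above
spec = SpecifierSet(">=1.6.3,<1.7.0")

print(Version("1.6.4") in spec)  # True: picked up and tested automatically
print(Version("1.7.0") in spec)  # False: would require a manual bump
```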
Updates the requirements on faster-coco-eval to permit the latest version.
Release notes
Sourced from faster-coco-eval's releases.
Changelog
Sourced from faster-coco-eval's changelog.
... (truncated)
Commits
- `af55180` fix
- `61b086e` build test
- `dff15bf` Update test_boundary.py
- `fd737d3` Update test_basic.py
- `db9c6dc` Update build.yml
- `f3009a4` Update build.yml
- `56f6bcc` Update test_basic.py
- `0aaed08` Update build.yml
- `2f792e4` Update setup.py deprecated 3.6
- `e9fcbab` Update build.yml deprecated 3.6

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show
📚 Documentation preview 📚: https://torchmetrics--2750.org.readthedocs.build/en/2750/