Lightning-AI / torchmetrics

Machine learning metrics for distributed, scalable PyTorch applications.
https://lightning.ai/docs/torchmetrics/
Apache License 2.0

build(deps): update faster-coco-eval requirement from ==1.5.* to ==1.6.* in /requirements #2750

Closed. dependabot[bot] closed this pull request 1 month ago.

dependabot[bot] commented 2 months ago

Updates the requirements on faster-coco-eval to permit the latest version.

Changelog

Sourced from faster-coco-eval's changelog.

v1.6.0

  • [x] Reworked mask_api with pybind11 C++.
  • [x] Reworked RLE support.
  • [x] Created test files for all components.
  • [x] Reworked the math_matches function, with an emphasis on using C++ code.
  • [x] Added more documentation for functions and optimized existing ones.
  • [x] Added rleToBoundary func with 2 backends ["mask_api", "opencv"] (see the sketch below)
  • [x] IoU type boundary support (further testing is needed)
  • [x] Opened a discussion on async RLE and boundary computation
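As a rough, hypothetical illustration of what the "opencv" backend of rleToBoundary conceptually computes (this is not the library's actual implementation, and mask_to_boundary is a made-up helper name), a binary mask can be reduced to its boundary band by subtracting an eroded copy, as in the standard Boundary IoU formulation:

```python
# Hypothetical sketch of an opencv-style mask-to-boundary conversion;
# rleToBoundary's real signature and implementation may differ.
import cv2
import numpy as np

def mask_to_boundary(mask: np.ndarray, dilation_ratio: float = 0.02) -> np.ndarray:
    """Return a binary mask that is 1 only on the boundary band of `mask`."""
    h, w = mask.shape
    # Band width proportional to the image diagonal, at least 1 px.
    width = max(1, round(dilation_ratio * np.sqrt(h ** 2 + w ** 2)))
    kernel = np.ones((3, 3), dtype=np.uint8)
    eroded = cv2.erode(mask.astype(np.uint8), kernel, iterations=width)
    return mask.astype(np.uint8) - eroded
```

Boundary IoU is then ordinary IoU computed on such boundary bands, which is presumably what the "IoU type boundary" item above builds on.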

v1.5.7

  • [x] Compare COCOEval bug fix.

v1.5.6

  • [x] Replace CED MSE curve with MAE (px) curve
  • [x] Add CED examples
  • [x] Display IoU and MAE for keypoints
  • [x] Reworked eval._prepare to clear up the return flow
  • [x] Reworked the C++ part of COCOevalEvaluateImages and COCOevalAccumulate
    • [x] Added a new COCOevalEvaluateAccumulate that combines these two calls; the old-style separate calls are still available via separate_eval==True (default=False)
    • [x] COCOevalAccumulate & COCOevalEvaluateAccumulate -> COCOeval_faster.eval is now correctly created as numpy arrays
  • [x] Added LVIS dataset support via lvis_style=True in COCOeval_faster:

      cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType, lvis_style=True, print_function=print)
      cocoEval.params.maxDets = [300]
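For context, here is a minimal sketch of how that snippet might sit in a full evaluation, assuming COCOeval_faster follows the familiar pycocotools COCOeval workflow (the file paths are placeholders):

```python
# A minimal sketch, assuming COCOeval_faster mirrors pycocotools'
# COCOeval evaluate/accumulate/summarize workflow; paths are placeholders.
from faster_coco_eval import COCO, COCOeval_faster

cocoGt = COCO("annotations/instances_val.json")  # ground-truth annotations
cocoDt = cocoGt.loadRes("predictions.json")      # detection results

cocoEval = COCOeval_faster(cocoGt, cocoDt, iouType="bbox",
                           lvis_style=True, print_function=print)
cocoEval.params.maxDets = [300]  # LVIS-style single 300-detections cap

cocoEval.evaluate()    # per-image matching
cocoEval.accumulate()  # aggregate precision/recall
cocoEval.summarize()   # print mAP/mAR summary
```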

v1.5.5

  • [x] Add CED MSE curve
  • [x] Review tests
  • [x] Review COCOeval_faster.math_matches function and COCOeval_faster.compute_mIoU function

v1.5.3 - v1.5.4

  • [x] Worked out the ability to work with skeletons and various keypoints
  • [x] eval.state_as_dict now works for keypoints

v1.5.2

  • [x] Changed comparison to colab_example
  • [x] Appended opencv conver_mask_to_poly to utils (extra)
  • [x] Appended drop_cocodt_by_score for extra eval

v1.5.1

... (truncated)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

📚 Documentation preview 📚: https://torchmetrics--2750.org.readthedocs.build/en/2750/

Borda commented 2 months ago

@SkafteNicki seems we have some compatibility issues, mind checking:

FAILED unittests/detection/test_map.py::TestMapProperties::test_segm_iou_empty_gt_mask[faster_coco_eval] - KeyError: 0
FAILED unittests/detection/test_map.py::TestMapProperties::test_average_argument[True-faster_coco_eval] - assert False
 +  where False = <built-in method allclose of type object at 0x7fcd9ddc45c0>(tensor([ 0.7228,  0.8000,  0.4545, -1.0000,  0.6505,  0.5660]), tensor([ 0.7228,  0.8000,  0.4545, -1.0000,  0.6505,  0.5505]))
 +    where <built-in method allclose of type object at 0x7fcd9ddc45c0> = torch.allclose
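For reference (an illustrative snippet, not part of the CI logs): the mismatch is in the last element, 0.5660 vs 0.5505, which is far outside torch.allclose's default tolerances.

```python
import torch

# Tensors from the failing assertion above.
expected = torch.tensor([0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5660])
actual = torch.tensor([0.7228, 0.8000, 0.4545, -1.0000, 0.6505, 0.5505])

# allclose checks |expected - actual| <= atol + rtol * |actual|,
# with defaults rtol=1e-05 and atol=1e-08; a 0.0155 gap is far larger.
print(torch.allclose(expected, actual))  # False
print((expected - actual).abs().max())   # tensor(0.0155)
```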
SkafteNicki commented 1 month ago

For my own sake, to remember what I did:

codecov[bot] commented 1 month ago

Codecov Report

Attention: Patch coverage is 93.93939% with 2 lines in your changes missing coverage. Please review.

Project coverage is 36%. Comparing base (151fef1) to head (e022592). Report is 1 commit behind head on master.

:exclamation: There is a different number of reports uploaded between BASE (151fef1) and HEAD (e022592).

HEAD has 80 fewer uploads than BASE:

| Flag | BASE (151fef1) | HEAD (e022592) |
|------|------|------|
| Windows | 6 | 3 |
| python3.8 | 6 | 3 |
| cpu | 40 | 20 |
| torch1.13.1+cpu | 6 | 3 |
| macOS | 8 | 4 |
| python3.10 | 10 | 5 |
| torch2.0.1 | 4 | 2 |
| torch2.4.0+cpu | 2 | 1 |
| python3.11 | 6 | 3 |
| torch2.4.0 | 2 | 1 |
| torch1.12.1+cpu | 2 | 1 |
| Linux | 26 | 13 |
| python3.9 | 18 | 9 |
| torch2.0.1+cpu | 6 | 3 |
| torch1.10.2+cpu | 2 | 1 |
| torch1.11.0+cpu | 2 | 1 |
| torch2.3.1+cpu | 4 | 2 |
| torch2.2.2+cpu | 4 | 2 |
| torch2.1.2+cpu | 2 | 1 |
| torch2.4.0+cu121 | 2 | 1 |
| torch1.13.1 | 2 | 1 |
Additional details and impacted files

```diff
@@           Coverage Diff            @@
##           master   #2750     +/-   ##
========================================
- Coverage      69%     36%    -33%
========================================
  Files         329     329
  Lines       18085   18086      +1
========================================
- Hits        12506    6567   -5939
- Misses       5579   11519   +5940
```
SkafteNicki commented 1 month ago

The most recent version of faster-coco-eval does not seem to install on Windows and macOS. I have opened https://github.com/MiXaiLL76/faster_coco_eval/issues/39 to ask for help.

MiXaiLL76 commented 1 month ago

> @SkafteNicki seems we have some compatibility issues, mind checking:
>
> FAILED unittests/detection/test_map.py::TestMapProperties::test_segm_iou_empty_gt_mask[faster_coco_eval] - KeyError: 0
> FAILED unittests/detection/test_map.py::TestMapProperties::test_average_argument[True-faster_coco_eval] - assert False
>  +  where False = <built-in method allclose of type object at 0x7fcd9ddc45c0>(tensor([ 0.7228,  0.8000,  0.4545, -1.0000,  0.6505,  0.5660]), tensor([ 0.7228,  0.8000,  0.4545, -1.0000,  0.6505,  0.5505]))
>  +    where <built-in method allclose of type object at 0x7fcd9ddc45c0> = torch.allclose

In my tests of version 1.6.2 I solved the problem with test_segm_iou_empty_gt_mask, but I couldn't solve the problem with test_average_argument in a simple way.

The issue lies in how the scores are sorted on the Python/C++ side; I couldn't figure it out quickly.

https://github.com/Lightning-AI/torchmetrics/blob/master/tests/unittests/detection/test_map.py#L315 Here in the test the score values for elements 1 and 7 are identical, so there is confusion at the sorting stage. After adding 1e-8 to element 7 (to get 0.204 + 1e-8), the error went away.

I will try to figure it out, but I am not sure if it is possible yet.
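To illustrate the kind of tie-breaking ambiguity described above (a standalone sketch, not torchmetrics or faster-coco-eval code): two detections with identical scores have no unique descending order, so different sort implementations can match them in different orders, while a tiny nudge such as the 1e-8 above makes the order deterministic.

```python
import numpy as np

# Detections 1 and 3 have identical scores, so a descending sort has
# no unique answer; unstable sorts may order the tie either way.
scores = np.array([0.300, 0.204, 0.500, 0.204])

order_quick = np.argsort(-scores, kind="quicksort")  # unstable
order_stable = np.argsort(-scores, kind="stable")    # preserves input order
print(order_quick, order_stable)  # tied indices 1 and 3 may swap

# Nudging one of the tied scores (0.204 + 1e-8, as described above)
# breaks the tie, so every sort implementation agrees on the order.
scores[1] += 1e-8
print(np.argsort(-scores))  # now deterministic: [2, 0, 1, 3]
```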

SkafteNicki commented 1 month ago


@MiXaiLL76 thanks for the new release of faster-coco-eval. I had already found workarounds for both failing tests on our side, so do not worry about that. Hopefully this PR will now pass without further problems.

SkafteNicki commented 1 month ago

Thanks @MiXaiLL76 for the fast response!

MiXaiLL76 commented 1 month ago

@SkafteNicki I have additionally fixed support for Windows and macOS. I don't think it should cause problems, but it would be good if you could check it on your side as well. The new version 1.6.4 now ships Windows and macOS wheel (whl) packages (earlier I only built Ubuntu wheels).

SkafteNicki commented 4 weeks ago


Hi @MiXaiLL76, thanks for letting me know. I can see that we set our requirements file to

faster-coco-eval >=1.6.3, <1.7.0

which means the new version is automatically tested on new commits. The latest commit to master seems to have passed with no problems, so everything should be good. Thanks again for letting us know!