Justin900429 closed this 1 month ago
@Justin900429 thanks a lot! py-motmetrics can cache intermediate results. If you look here https://github.com/cheind/py-motmetrics/blob/develop/motmetrics/metrics.py#L204, a cache is set up for all intermediate metrics that are computed, which is then populated here https://github.com/cheind/py-motmetrics/blob/develop/motmetrics/metrics.py#L354. So as long as your intermediate values become metrics themselves, you should be able to cache their results.
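A simplified illustration of that caching pattern (the class and method names below are hypothetical, not the actual py-motmetrics code): each metric is computed at most once per evaluation, and metrics that depend on intermediate metrics read them from the cache.

```python
# Hedged sketch of a metric cache (hypothetical names, not py-motmetrics):
# every registered metric is computed at most once; dependent metrics
# resolve their inputs through the same cache.

class MetricCache:
    def __init__(self):
        self._registry = {}  # metric name -> (compute_fn, dependency names)
        self._cache = {}     # metric name -> cached result

    def register(self, name, fn, deps=()):
        self._registry[name] = (fn, tuple(deps))

    def compute(self, name):
        # Recursively resolve dependencies, reusing cached values.
        if name not in self._cache:
            fn, deps = self._registry[name]
            self._cache[name] = fn(*(self.compute(d) for d in deps))
        return self._cache[name]

mc = MetricCache()
mc.register("num_matches", lambda: 8)
mc.register("num_objects", lambda: 10)
mc.register("recall", lambda m, n: m / n, deps=("num_matches", "num_objects"))
```

Calling `mc.compute("recall")` computes and caches `num_matches` and `num_objects` on the way; a second call for any of the three returns the cached value without recomputation.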
@cheind thanks for pointing that out, but we need to compute results at different thresholds to obtain $\text{HOTA}_\alpha$ before computing the final $\text{HOTA}$. Is this possible with the cache?
@Justin900429 that is a good question. I haven't looked at HOTA yet, but if the computation is similar to the Average Precision (AP) metric, one could instead pre-compute a 'tensor' of values for which the individual AP thresholds become simple sums over individual axes.
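A small NumPy sketch of that idea (illustrative only, not pycocotools or py-motmetrics code, and with toy data): compare all detections against all thresholds at once, so each per-threshold statistic becomes a reduction along one axis instead of a Python loop.

```python
import numpy as np

# Illustrative sketch (toy data, not actual pycocotools/py-motmetrics code):
# precompute a (num_thresholds, num_detections) boolean tensor once;
# each threshold's statistic is then a mean/sum along axis 1.

ious = np.array([0.9, 0.6, 0.3, 0.8])     # IoUs of matched detections (toy data)
thresholds = np.linspace(0.05, 0.95, 19)  # fixed thresholds 0.05, 0.10, ..., 0.95
matched = ious[None, :] >= thresholds[:, None]  # shape (19, 4) via broadcasting
frac_matched = matched.mean(axis=1)       # one value per threshold, no loop
```

With the tensor in place, changing the threshold grid or adding new per-threshold statistics costs only another vectorized reduction.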
Thresholds are fixed and should be [0.05, 0.1, ..., 0.95]. The current PR supports all the needed predefined functions (added to `metrics.py`). However, the current implementation is inefficient and not user-friendly; here are some issues: `update` for the accumulator along with HOTA is not compatible with "online" tracking, because we need all the preds and gts in advance to compute the global alignment score.
@Justin900429 thanks for your update! Would it be feasible to implement HOTA like AP is implemented for COCO datasets? See here https://github.com/ppwwyyxx/cocoapi/blob/4670067b35e7b65d618c9746c456fe2c1128049c/PythonAPI/pycocotools/cocoeval.py#L315, where in `accumulate` a tensor with multiple axes is defined that carries all the information to compute AP(th) quickly, as done in `summarize`.
One question regarding the current state: does it make sense to merge now and optimize later? Should we update the docs?
@cheind I tried using the idea from COCO, but it might take more time to make sure it won't break everything else. It is OK to merge, but the docs should be well prepared (I'll finish this later). The usage will be totally different from every other metric: at the very least, it should be used only after all the predictions are done, and it is not compatible with `update`.
Any news?
Sorry for the late reply. I have updated the README for HOTA. The "for-loop" is currently a little bit annoying; I will merge it into `mh.compute_many` in a future commit.
Thanks for the update @Justin900429! Really looking forward to this getting merged.
I figured out how to remove the for-loop :) It looks simple now. Efficiency is the next step.
Hi @cheind, it can be merged now once the workflow passes. I will come back to this in a new PR to improve efficiency. Thanks a lot.
@Justin900429 amazing work - thanks a lot for your time and effort! Just merged into develop :)
I have been waiting for this PR for years @Justin900429. We can all now finally ditch the trackeval repo in favor of this package. Thanks to everybody involved in the process of getting this merged.
As I'm not actively researching MOT topics anymore, I wonder if we should let any MOT challenge sites know about this particular addition?
Following #183, this PR supports the HOTA metrics and matches the official result.
Currently, only a single threshold can be specified at a time. To compute the whole metric, try the following sample:
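As a rough pure-NumPy sketch of the aggregation HOTA performs (the numbers and variable names below are illustrative assumptions, not the actual py-motmetrics call sequence): HOTA at each threshold $\alpha$ is the geometric mean of detection accuracy (DetA) and association accuracy (AssA), and the final score averages over all 19 thresholds.

```python
import numpy as np

# Hedged illustration only: DetA/AssA values are made up, and the names are
# not the py-motmetrics API. This shows the standard HOTA aggregation:
# HOTA_alpha = sqrt(DetA_alpha * AssA_alpha), then average over alpha.

alphas = np.linspace(0.05, 0.95, 19)  # the fixed thresholds 0.05, ..., 0.95
det_a = np.full(19, 0.64)             # toy per-threshold detection accuracy
ass_a = np.full(19, 0.49)             # toy per-threshold association accuracy

hota_per_alpha = np.sqrt(det_a * ass_a)  # HOTA at each threshold
hota = hota_per_alpha.mean()             # final HOTA: mean over thresholds
```

In the real evaluation the per-threshold DetA/AssA values come from matching predictions against ground truth at each $\alpha$, which is why all predictions must be available before computing the final score.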
The results will be (this is the same as the official one):
Checklist
Note
The current implementation is NOT efficient enough yet; help from the community to improve it is welcome.