JunqiaoLi opened this issue 1 year ago
@JunqiaoLi Hi, I haven't encountered this before. Perhaps you could run a sanity check to see where these two functions differ? From memory, they should perform the same on a short video clip (e.g., 3 frames) under the setup of multi-frame detection (multi-frame detection as in here).
Hi, let me describe my operation in detail:
From my understanding, forward_test treats the model as a pure detection model, meaning it calls generate_empty_instance at every frame. In contrast, forward_track only calls generate_empty_instance at the first frame, then keeps some track instances and passes them on to the next frame.
Is there any error in the steps described above? Also, could you please tell me how you checked that "they should perform the same on a short video clip"?
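To make the distinction above concrete, here is a minimal sketch of the two control flows. The dict-based "instances" and the carry-over logic are stand-ins I made up to mirror the description; the real model's generate_empty_instance and track-instance classes differ.

```python
# Sketch of the control-flow difference described above, using plain dicts
# as stand-ins for the model's instance objects (hypothetical, not the
# repo's actual API).

def generate_empty_instance():
    """Stand-in for the model's fresh-query initialization."""
    return {"queries": "fresh", "track_ids": None}

def forward_test(frames):
    # Detection-style: every frame starts from empty instances.
    results = []
    for frame in frames:
        instances = generate_empty_instance()
        results.append({"frame": frame, "instances": instances})
    return results

def forward_track(frames):
    # Tracking-style: empty instances only at the first frame; surviving
    # track instances are carried over and passed to the next frame.
    results = []
    instances = generate_empty_instance()
    for frame in frames:
        results.append({"frame": frame, "instances": instances})
        instances = {"queries": "kept", "track_ids": "propagated"}
    return results
```

On a one-frame clip the two paths coincide (both see only fresh instances), which is consistent with the maintainer's remark that they should behave the same on very short clips.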
@JunqiaoLi I see. If you are feeding the tracking output into the detection evaluation, the phenomenon described above makes more sense. (Actually, I don't recommend this; please check out the reasons here.)
OK, now for potential solutions. There are a handful of differences between forward_test and forward_track. I cannot remember everything after several months, but here are two examples: (1) test_tracking filters out the categories not used in the tracking evaluation; (2) a more complicated update of active tracks is used for tracking. Perhaps aligning these behaviors is the key to getting good detection results.
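Difference (1) alone can depress mAP when tracking output is scored as detection, since any detection-only classes simply vanish from the predictions. A small sketch of that filtering effect, with hypothetical nuScenes-style class names (the repo's actual category lists may differ):

```python
# Sketch of difference (1): the tracking path keeps only the classes that
# the tracking benchmark evaluates; the detection path keeps all classes.
# Class names below are illustrative assumptions.

TRACKING_CLASSES = {"car", "pedestrian", "bicycle"}  # assumed subset

def filter_for_tracking(detections):
    """Drop boxes whose class is not evaluated by the tracking benchmark."""
    return [d for d in detections if d["label"] in TRACKING_CLASSES]

dets = [
    {"label": "car", "score": 0.9},
    {"label": "traffic_cone", "score": 0.8},  # detection-only class
    {"label": "pedestrian", "score": 0.7},
]
kept = filter_for_tracking(dets)
# The detection evaluation would see all 3 boxes, but the tracking path
# forwards only 2; the dropped classes contribute zero AP downstream.
```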
Furthermore, how about getting the bounding boxes from the tracking results via tools/test_track.py and then evaluating the mAP? You might need to align the format with detection, though.
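The format-alignment step suggested above could look roughly like the following: strip the tracking-specific fields from each result so it matches a detection-style record. The field names here are assumptions for illustration, not the repo's actual schema.

```python
# Sketch of converting tracking output into a detection-style result for
# mAP evaluation. Field names are hypothetical; adapt them to the actual
# output of the tracking script.

def tracks_to_detections(track_results):
    """Drop tracking-specific fields (e.g. track_id) so the records match
    the detection result format expected by the mAP evaluation."""
    detections = []
    for item in track_results:
        detections.append({
            "sample_token": item["sample_token"],
            "box": item["box"],
            "score": item["score"],
            "label": item["label"],
            # item["track_id"] is intentionally dropped: detection
            # evaluation does not use identities.
        })
    return detections
```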
@JunqiaoLi It seems this issue is no longer active. Thanks for the discussion! Would you mind closing this issue?
Hi, sorry to bother you again, but I have run into another problem. I use the forward_track function to get the result and then evaluate the detection performance; however, the mAP is poor (around 0.0232). When I use the same model but call forward_test to get the result and then evaluate the detection performance, the mAP is about 0.3191. (But forward_test does not output information related to track_ids.)
Have you ever seen this before? Is it within expectations?