buxiangzhiren / ContextLoc

Code for the paper "Enriching Local and Global Contexts for Temporal Action Localization", ICCV 2021
Apache License 2.0

the mAP #6

Open NooneOnlyOne opened 3 years ago

NooneOnlyOne commented 3 years ago

Hi, I got the mAP (%) at tIoU thresholds from 0.1 to 0.9: the Flow result [screenshot], the RGB result [screenshot], and, after running test_two_stream.sh, the fused result [screenshot].

Why is there such a big gap between my results and the results in your paper, especially for the Flow result and the final result?
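
For reference, my understanding is that test_two_stream.sh does a late fusion along these lines (the function and weight below are my own sketch, not the repo's actual code):

```python
# Illustrative late fusion of two-stream detection scores (a sketch, not
# the repo's actual test_two_stream.sh logic): per-proposal class scores
# from the RGB and Flow models are combined with a fixed weight before
# NMS and evaluation.
import numpy as np

def fuse_two_stream(rgb_scores: np.ndarray,
                    flow_scores: np.ndarray,
                    flow_weight: float = 1.5) -> np.ndarray:
    """Weighted sum of per-proposal class scores. Flow is often weighted
    higher than RGB on THUMOS14; flow_weight here is an assumed value."""
    return rgb_scores + flow_weight * flow_scores
```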

buxiangzhiren commented 3 years ago

I think it is a problem with the epoch. You can try the checkpoint from the 45th epoch.

NooneOnlyOne commented 3 years ago

> I think it is a problem with the epoch. You can try the checkpoint from the 45th epoch.

I did use the checkpoint from the 45th epoch for both Flow and RGB, and during Flow and RGB training and testing I changed the ft_path in the PGCNDataSet class accordingly. That is how I got the results above.

buxiangzhiren commented 3 years ago

Yeah, your path is right. You can try the best checkpoint or other checkpoints. By the way, are your parameter settings correct, such as the PyTorch version and the learning rate? If so, you can run the code again. Others who have run the code do not see such a big performance gap.

NooneOnlyOne commented 3 years ago

Thanks, I think I have found the problem: I changed the batch_size to 128, which explains the big gap from your paper. However, I still have a question: why does running test.sh multiple times with the same 45th-epoch Flow checkpoint give different results? [screenshot]

buxiangzhiren commented 3 years ago

I have also seen this problem; it may come from the proposal sampling in PGCN, and I think the variance is within a normal range.
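
If you want to check whether the variance really comes from the sampling, a minimal sketch (standard PyTorch seeding, not something specific to this repo) is to pin all the random seeds before testing:

```python
# Pin all the usual sources of randomness in a PyTorch pipeline. If the
# test-time variance disappears with fixed seeds, it is indeed coming from
# random proposal sampling rather than from the model weights.
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN convolutions.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(0)
```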

NooneOnlyOne commented 3 years ago

> I have also seen this problem; it may come from the proposal sampling in PGCN, and I think the variance is within a normal range.

I think your work is well done, and I am very interested in your method. Could you give me the code for the Anet1.3 dataset? I want to build on your work. If you can, my email is 15163881521@163.com. Thanks!

buxiangzhiren commented 3 years ago

Thank you for your attention to our work. Since some of the Anet1.3 files involve my follow-up work, I cannot share them. However, you can reproduce the code fairly easily based on PGCN and a suitable video-level classification network.
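
Roughly, the recipe I mean is the usual ActivityNet one: take class-agnostic proposal confidences from a PGCN-style detector and multiply them by video-level classification scores. A minimal sketch (the names and shapes here are assumptions, not my Anet1.3 code):

```python
# Illustrative ActivityNet-style score fusion (an assumed recipe): a
# proposal network gives class-agnostic confidences, and an external
# video-level classifier supplies the class distribution.
import numpy as np

def fuse_with_video_classifier(proposal_conf: np.ndarray,     # (N,) per-proposal confidences
                               video_cls_scores: np.ndarray,  # (C,) video-level class probs
                               top_k: int = 2) -> list:
    """Assign each proposal the top-k video-level classes, scored as
    proposal confidence times class probability."""
    top_classes = np.argsort(video_cls_scores)[::-1][:top_k]
    detections = []
    for i, conf in enumerate(proposal_conf):
        for c in top_classes:
            detections.append((i, int(c), float(conf * video_cls_scores[c])))
    return detections  # (proposal_idx, class_id, fused_score) triples
```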

buxiangzhiren commented 3 years ago

In addition, you can also get a stronger baseline by adding the L-Net and G-Net to any open-source two-stage method.
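
Schematically, that kind of plug-in context enrichment looks something like the sketch below (a deliberate simplification for illustration, not the actual L-Net/G-Net from the paper):

```python
# Schematic context-enrichment module: each proposal feature is fused with
# pooled features from its temporal neighborhood (local context) and with
# a video-level pooled feature (global context) before classification.
import torch
import torch.nn as nn

class ContextEnrichment(nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.local_fc = nn.Linear(2 * dim, dim)   # proposal + neighborhood
        self.global_fc = nn.Linear(2 * dim, dim)  # local-enriched + video-level

    def forward(self, prop_feat, neigh_feat, snippet_feats):
        # prop_feat:     (N, D) per-proposal features
        # neigh_feat:    (N, D) pooled features of each proposal's extended span
        # snippet_feats: (T, D) snippet-level features of the whole video
        local = torch.relu(self.local_fc(torch.cat([prop_feat, neigh_feat], dim=1)))
        g = snippet_feats.mean(dim=0, keepdim=True).expand_as(prop_feat)
        return torch.relu(self.global_fc(torch.cat([local, g], dim=1)))
```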

NooneOnlyOne commented 3 years ago

> Thank you for your attention to our work. Since some of the Anet1.3 files involve my follow-up work, I cannot share them. However, you can reproduce the code fairly easily based on PGCN and a suitable video-level classification network.

OK, thanks!