Closed: hero-y closed this issue 2 years ago
Hey, sorry for the delayed response. One thing we advocate in the paper is to run multiple times and compare the average performance with a 95% confidence interval, due to the high variance in few-shot settings. We introduced a new benchmark with repeated runs and the variance interval. The complete results on both base and novel classes on Pascal VOC and COCO can be found in Table 7 and Table 8 in the appendix of the arXiv version: https://arxiv.org/pdf/2003.06957.pdf
Hope it helps!
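To make the averaging concrete, here is a minimal sketch of reporting a mean with a 95% confidence interval over repeated runs. The nAP50 values below are made-up illustrative numbers, not results from the paper, and the CI uses the simple normal approximation:

```python
import statistics

# Hypothetical nAP50 scores from repeated runs with different seeds/shots.
runs = [48.4, 45.6, 47.1, 46.8, 47.9]

mean = statistics.mean(runs)
# 95% CI half-width via the normal approximation: 1.96 * s / sqrt(n).
stderr = statistics.stdev(runs) / len(runs) ** 0.5
half_width = 1.96 * stderr
print(f"nAP50: {mean:.1f} +/- {half_width:.1f}")
```

Reporting the interval alongside the mean makes it clear whether two configurations actually differ or are within run-to-run noise.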
Thank you for your reply. In Tables 7 and 8, you ran multiple times with differently sampled training shots, so those fluctuations are normal. But I used the same training shots without changing the config and still saw a lot of volatility. I think this is abnormal.
Different random seeds might affect the results as well. That was the motivation for us to introduce a new evaluation benchmark for more reliable evaluation.
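To see how seed-driven variance shows up even with identical data and config, here is a toy sketch (the "training run" is a stand-in function, not the repo's training loop):

```python
import random

def noisy_eval(seed: int) -> float:
    """Stand-in for one training run: a base score plus seed-dependent noise."""
    rng = random.Random(seed)
    return 47.0 + rng.gauss(0, 1.5)

# Same seed -> identical result; different seeds -> different results,
# even though nothing else about the "configuration" changed.
print(noisy_eval(0), noisy_eval(0), noisy_eval(1))
```

In a real run the noise comes from weight initialization, data-loader shuffling, and nondeterministic GPU kernels rather than an explicit `gauss` call, but the effect on repeated evaluations is the same.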
Hello, I used a single GPU for training on VOC split 1, 3-shot. I reduced the learning rate by 8x and increased the number of training iterations by 8x. I trained twice with the same configuration, but the two results are very different (48.4 nAP50 vs. 45.6 nAP50). I think this is abnormal; do you know the reason? Thanks!
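The 8x adjustment above follows the usual linear scaling rule when going from 8 GPUs to 1 with the same per-GPU batch size: the effective batch size drops by 8, so the learning rate shrinks and the schedule stretches by the same factor. A minimal sketch (the base values and names are illustrative, not the repo's actual config keys):

```python
def rescale(base_lr: float, max_iter: int, base_gpus: int, gpus: int):
    """Linear scaling rule: smaller effective batch -> smaller LR, longer schedule."""
    factor = base_gpus // gpus
    return base_lr / factor, max_iter * factor

# Hypothetical 8-GPU baseline rescaled for a single GPU.
lr, iters = rescale(base_lr=0.02, max_iter=6000, base_gpus=8, gpus=1)
print(lr, iters)  # 0.0025 48000
```

Note that even with a correctly rescaled schedule, run-to-run variance from random seeds can still produce gaps of a few nAP50 points in few-shot settings, which is why the averaged-runs protocol above exists.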