Closed · wml666666 closed this 4 months ago
Hello, we're glad to hear that you are interested in continuing this work. In the fine-tuning stage, we train more layers (NLF, FFA) than other methods do (they tune only the last layer). This makes the process unstable given the limited number of training samples. Please evaluate more frequently to check whether the model is overfitting. In addition, base training also affects fine-tuning performance; you can either use the pretrained weights or retrain the base model. It may take a few more experiments to reach the reported performance.
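To evaluate more frequently as suggested, the evaluation interval can be shortened in the fine-tuning config. The snippet below is a sketch only: it assumes the repo follows MMFewShot/MMDetection 2.x config conventions (the `evaluation` and `checkpoint_config` dicts and the `class_splits` option), and the interval value of 400 is an illustrative choice, not a setting taken from the repo.

```python
# Hedged sketch, assuming MMFewShot/MMDetection 2.x-style configs.
# Shorten the eval interval so an overfitting peak between the default
# checkpoints is not missed, and save a checkpoint at each evaluation.
evaluation = dict(
    interval=400,  # evaluate every 400 iters instead of e.g. every 2000
    metric='mAP',
    class_splits=['BASE_CLASSES_SPLIT1', 'NOVEL_CLASSES_SPLIT1'])
checkpoint_config = dict(interval=400)
```

With a shorter interval you can then pick the checkpoint with the best novel-class mAP rather than relying on the final iteration.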
Okay, thank you for your prompt reply!
I want to continue building on your work, but I ran into some problems while reproducing the code. For fpd_r101_c4_2xb4_voc-split_10shot fine-tuning, I ran the experiment with the code you provided. The log file is attached as 10shot.log; the best accuracy, NOVEL_CLASSES_SPLIT1 mAP: 0.672, was reached at the 2400-iteration evaluation. However, in the log file you provided, only 2000 iterations were run, reaching an accuracy of 0.684. What is the main reason for this difference in experimental results? 10shot.log