botaoye / OSTrack

[ECCV 2022] Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework
MIT License

Hyperparameter tuning #76

Open goutamyg opened 1 year ago

goutamyg commented 1 year ago

Hi! Thank you for publishing your code.

In your paper, you have mentioned various training-related choices/hyperparameters (e.g., learning rate, number of epochs, keep ratio for the candidate elimination module).

Could you please clarify which dataset was used to tune these hyperparameter values? Were they tuned on the test set itself?
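
For context, here is my understanding of what the keep-ratio hyperparameter controls, as a minimal sketch: the candidate elimination module keeps only the top-scoring fraction of search-region tokens, ranked by their attention to the template. All names below are illustrative, not OSTrack's actual API.

```python
import torch

def eliminate_candidates(search_tokens, attn_scores, keep_ratio=0.7):
    """Keep the top-scoring fraction of search-region tokens.

    search_tokens: (B, N, C) search-region token embeddings
    attn_scores:   (B, N) relevance of each search token to the template,
                   e.g. averaged template-to-search attention weights
    keep_ratio:    fraction of tokens to keep (the hyperparameter in question)
    """
    B, N, C = search_tokens.shape
    num_keep = max(1, int(N * keep_ratio))
    # Indices of the most template-relevant tokens per batch element
    topk_idx = attn_scores.topk(num_keep, dim=1).indices  # (B, num_keep)
    # Gather the kept tokens; the remainder are treated as background and dropped
    kept = torch.gather(
        search_tokens, 1, topk_idx.unsqueeze(-1).expand(-1, -1, C)
    )
    return kept, topk_idx
```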

botaoye commented 1 year ago

Hi, these hyperparameters were tuned on the LaSOT validation set. Also, Table A5 in the appendix shows the effect of different keeping ratios.
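
For example, the keep ratio can be selected with a simple sweep over a held-out split, along the lines of the sketch below. `evaluate_tracker` and the sequence names are placeholders, not functions or data from this repo.

```python
import random
from typing import Sequence

def evaluate_tracker(sequences: Sequence[str], keep_ratio: float) -> float:
    """Placeholder for the real evaluation pipeline: returns a mock AUC.
    In practice this would run the tracker over `sequences` and score it."""
    random.seed(int(keep_ratio * 10))  # deterministic mock score
    return random.uniform(0.60, 0.70)

# Hypothetical validation split; sequence names are illustrative only.
val_sequences = ["airplane-1", "basketball-2"]

results = {r: evaluate_tracker(val_sequences, keep_ratio=r)
           for r in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0)}
best = max(results, key=results.get)
print(f"best keep_ratio on the validation split: {best}")
```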

goutamyg commented 1 year ago

Thank you for your response.

I see that the authors of LaSOT do not provide a dedicated validation set, and Table A5 in the appendix of OSTrack chooses the keeping ratio based on LaSOT test-set results. Therefore, it seems that the best parameters were chosen based on the performance of OSTrack on the LaSOT test set. Please correct me if I am wrong. Thank you.

botaoye commented 1 year ago

> Thank you for your response.
>
> I see that the authors of LaSOT do not provide a dedicated validation set, and Table A5 in the appendix of OSTrack chooses the keeping ratio based on LaSOT test-set results. Therefore, it seems that the best parameters were chosen based on the performance of OSTrack on the LaSOT test set. Please correct me if I am wrong. Thank you.

Yes, you are right. This is a common practice in recent tracking papers, although it might be better to use a validation set.