Hi, these hyperparameters are tuned on the LaSOT validation set. In addition, Table A5 in the appendix shows the effect of different keeping ratios.
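(For context on what the keeping ratio controls: below is a minimal sketch of attention-based candidate elimination, assuming PyTorch-style tensors. The function name `eliminate_candidates` and the mean-attention scoring are illustrative assumptions, not OSTrack's actual implementation.)

```python
import torch

def eliminate_candidates(attn, search_tokens, keep_ratio=0.7):
    """Keep the top-k search-region tokens, ranked by how much
    attention they receive from the template tokens.

    attn:          (B, num_template_tokens, num_search_tokens)
                   template-to-search attention weights
    search_tokens: (B, num_search_tokens, C) token embeddings
    keep_ratio:    fraction of search tokens to keep (the tuned hyperparameter)
    """
    B, _, N = attn.shape
    k = max(1, int(N * keep_ratio))
    # Score each search token by its mean attention from the template,
    # then keep only the k highest-scoring tokens.
    scores = attn.mean(dim=1)                                # (B, N)
    topk_idx = scores.topk(k, dim=1).indices                 # (B, k)
    idx = topk_idx.unsqueeze(-1).expand(-1, -1, search_tokens.size(-1))
    return torch.gather(search_tokens, 1, idx)               # (B, k, C)
```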
Thank you for your response.
I see that there is no dedicated validation set provided by the authors of LaSOT, and Table A5 in the appendix of OSTrack chooses the keeping ratio based on LaSOT test-set results. Therefore, it seems that the best parameters are chosen based on the performance of OSTrack on the LaSOT test set. Please correct me if I am wrong. Thank you.
Yes, you are right. This is a common practice in recent tracking papers, although it might be better to use a validation set.
Hi! Thank you for publishing your code.
In your paper, you mention various training-related choices/hyperparameters (e.g., learning rate, number of epochs, keeping ratio for the candidate elimination module).
Could you please indicate which dataset was used to tune these hyperparameter values? Were they tuned on the test set itself?