Closed sbelharbi closed 1 year ago
Yes, you are right. It is better to use a validation set to select the best model and then report performance on a separate test set. However, the RAF-DB dataset does not provide a validation set, so the community uses the test set for validation, and we follow their settings:
On the other hand, we adopt a cosine decay schedule for the learning rate, which makes the model weights update slowly in the final iterations, so the best model is usually produced near the end of training.
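For reference, a minimal sketch of what such a cosine decay schedule computes (the function name and parameters here are illustrative, not the repo's actual implementation; in practice this is typically delegated to the training framework's built-in scheduler):

```python
import math

def cosine_lr(step, total_steps, base_lr=0.1, min_lr=0.0):
    """Cosine-decayed learning rate: starts at base_lr, reaches min_lr at the end.

    Near the final iterations the cosine curve flattens, so the learning
    rate (and hence the weight updates) changes very slowly, which is why
    the best checkpoint tends to appear late in training.
    """
    t = step / max(1, total_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))
```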
Thanks. They could have held out some samples from the training set as a validation set. Closing. Thanks.
Hi, you set your validation set as the samples of the test set: https://github.com/youqingxiaozhua/APViT/blob/6c7b57614b8b81dcfd6939db0bcbb28a4e823e10/configs/_base_/datasets/RAF.py#L57
Wouldn't this corrupt the measured performance on the test set, since you are directly picking the model with the best performance on the test set?
Usually, the validation set is separate and is used for model selection; the best model picked on the validation set is then used to report performance on the test set.
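The standard protocol described above can be sketched as follows (a hypothetical helper, not code from the repo): carve a held-out validation split from the training data, select the checkpoint by validation accuracy, and only then evaluate once on the test set.

```python
import random

def split_train_val(samples, val_fraction=0.1, seed=0):
    """Hold out a fraction of the training samples as a validation set.

    Returns (train, val) with no overlap, so model selection on `val`
    never touches the test set.
    """
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]
```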
Thanks.