tobyperrett / trx

Temporal-Relational CrossTransformers (CVPR 2021)

How could we use the validation set? #3

Closed · ycbilge closed this 3 years ago

ycbilge commented 3 years ago

Thanks for sharing the code. Could you please tell us how we could use the validation set with this code? It only reads the train and test lists. I would like to select the best model using the validation set and then evaluate that model on the test list. It would be greatly appreciated.

Anirudh257 commented 3 years ago

@ycbilge I have the same doubt. I have seen different works combine Train + Val and validate directly on the test set (which is not correct). I think we could either use Train + Val as the new training data, or train on the Train split alone, select the model on Val, and run a single evaluation on the Test data.

tobyperrett commented 3 years ago

Hi everyone. The best way to do it is to have a separate split where you rename the validation split to testlistXX.txt. You can specify the iterations at which to evaluate on this validation set. You can then retrain using the same training set and test on the test set, or use a saved checkpoint and resume so you evaluate that exact model. For example, if you decided on 20,000 iterations using the validation set, you could rename checkpoint-20000.pt to checkpoint.pt, set the number of test iterations to 20,000 and resume (by passing -r).
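For concreteness, here is a minimal sketch of that two-phase workflow in Python. The directory paths, the vallist01.txt filename, and the backup copy of the real test list are placeholders rather than names from the repo, so adjust them to your own setup:

    import os
    import shutil

    splits_dir = "splits"        # placeholder: wherever your list files live
    ckpt_dir = "checkpoint_dir"  # placeholder: wherever checkpoints are saved

    # Phase 1: model selection. Keep the real test list safe, then give the
    # val list the filename the loader expects for the test split, so the
    # periodic evaluations during training report validation accuracy.
    shutil.copyfile(os.path.join(splits_dir, "testlist01.txt"),
                    os.path.join(splits_dir, "testlist01.txt.bak"))
    shutil.copyfile(os.path.join(splits_dir, "vallist01.txt"),
                    os.path.join(splits_dir, "testlist01.txt"))
    # ... train, and note the iteration with the best validation accuracy,
    # e.g. 20,000.

    # Phase 2: final evaluation. Restore the real test list and make the
    # chosen checkpoint the one that resuming will load.
    shutil.copyfile(os.path.join(splits_dir, "testlist01.txt.bak"),
                    os.path.join(splits_dir, "testlist01.txt"))
    shutil.copyfile(os.path.join(ckpt_dir, "checkpoint-20000.pt"),
                    os.path.join(ckpt_dir, "checkpoint.pt"))
    # Relaunch with -r and the number of test iterations set to 20,000,
    # so exactly that model is evaluated on the real test set.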

I have seen some image-based works that find the best number of iterations using the validation set, then retrain on train+val for the same number of iterations before testing, but I'm not aware of any video works that state they do this. Hope that helps!

lovelyczli commented 3 years ago

I understand what you're saying.

But in your code, the test set is used repeatedly during training:

        iteration = self.start_iteration
        for task_dict in self.video_loader:
            if iteration >= total_iterations:
                break
            iteration += 1
            task_loss, task_accuracy = self.train_task(task_dict)

            accuracy_dict = self.test(session)  # <-- the test set is evaluated inside the training loop

            print(accuracy_dict)

Meanwhile, the val set does not appear in the code at all, not even in your data loader:

        def _select_fold(self):
            lists = {}
            for name in ["train", "test"]:  # <-- only "train" and "test"; no "val"
                fname = "{}list{:02d}.txt".format(name, self.args.split)
                f = os.path.join(self.annotation_path, fname)
                # ... (file reading elided)
            self.train_test_lists = lists

Without an explanation of the intended usage, it is easy to conclude that you do not use the val set at all.

So it would be more reasonable to use all three splits: the val set for model selection, and the test set only for the final results (a sketch of a loader that reads all three follows below). That would also make it clear whether the model really does perform well on the val set at the 20,000th iteration.
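For what it's worth, here is a minimal sketch of how _select_fold could read all three splits, assuming vallistXX.txt files were added alongside the existing lists; the file parsing shown is illustrative, not the repo's actual logic:

        def _select_fold(self):
            lists = {}
            # "val" added, assuming vallist{:02d}.txt files exist (hypothetical).
            for name in ["train", "val", "test"]:
                fname = "{}list{:02d}.txt".format(name, self.args.split)
                f = os.path.join(self.annotation_path, fname)
                with open(f) as fid:
                    # Illustrative parsing: one video entry per line.
                    lists[name] = [line.strip() for line in fid if line.strip()]
            self.train_test_lists = lists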

tobyperrett commented 3 years ago

You rename the val split to test and ignore the test set completely during training. Once you know the hyperparameters, you can then use the original test set. I like doing it this way as it ensures none of the test data is touched during training, even accidentally, and it lets me load less data into RAM. If you want to change it to suit your preferences, go ahead.