Closed fxl-fxl closed 3 years ago
Our online refinement uses the same settings as other works, just as you described.
I think you are right: they (https://arxiv.org/abs/2104.14540) fine-tune the model for every sample in the test set, which means one model per sample, so in the end they have 697 models to evaluate. In this work, after TTR, there is a single model. These are two different TTR techniques.
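A minimal sketch of the two protocols, assuming a PyTorch model and a caller-supplied self-supervised `loss_fn`; the names `per_sample_ttr`, `single_model_ttr`, `optimizer_fn`, and `loss_fn` are hypothetical and not from either codebase:

```python
import copy
import torch

def per_sample_ttr(base_model, test_samples, optimizer_fn, loss_fn, steps=20):
    """Per-sample refinement (the https://arxiv.org/abs/2104.14540 style):
    every test sample is refined starting from the same pretrained checkpoint,
    so conceptually there is one adapted model per sample (697 models in total)."""
    predictions = []
    for sample in test_samples:
        model = copy.deepcopy(base_model)            # reset to pretrained weights
        optimizer = optimizer_fn(model.parameters())
        for _ in range(steps):                       # e.g. 20 refinement iterations
            optimizer.zero_grad()
            loss_fn(model, sample).backward()
            optimizer.step()
        with torch.no_grad():
            predictions.append(model(sample))
    return predictions

def single_model_ttr(model, test_samples, optimizer_fn, loss_fn, epochs=20):
    """Online refinement with one shared model: refine on the whole test set
    for several epochs, then evaluate that single refined model."""
    optimizer = optimizer_fn(model.parameters())
    for _ in range(epochs):
        for sample in test_samples:
            optimizer.zero_grad()
            loss_fn(model, sample).backward()
            optimizer.step()
    with torch.no_grad():
        return [model(sample) for sample in test_samples]
```

The key difference is where the weights are reset: `per_sample_ttr` deep-copies the pretrained checkpoint before each sample, while `single_model_ttr` keeps updating one model across the entire test set.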
Thank you for your excellent work. I have a question. Does this online refinement train on the 697 test samples for 20 epochs and then test the accuracy on the same 697 samples? In other work (https://arxiv.org/abs/2104.14540), they performed 20 iterations on each sample of the test set one by one, loading the same pretrained model before refining each sample. Do you have any suggestions?
You can reopen this issue if you have further questions.