In TFA/FSRW, the detector is fine-tuned on both base and novel classes, using the same few-shot training examples provided by FSRW. In G-FSOD, training is repeated for multiple runs on different random samples and the results are averaged. Can you explain this? Comparing results obtained by fine-tuning only on novel classes against these methods does not seem fair.
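For concreteness, here is a minimal sketch of the multi-run averaging I mean; `finetune_and_eval`, the seed count, and the placeholder metric are hypothetical stand-ins, not this repo's actual API:

```python
import random

def finetune_and_eval(seed: int, shots: int) -> float:
    """Hypothetical stand-in: sample `shots` examples per class with the
    given seed, fine-tune the detector, and return novel-class AP."""
    random.seed(seed)
    # ... sampling, fine-tuning, and evaluation would happen here ...
    return random.uniform(0.3, 0.4)  # placeholder metric, not a real result

# G-FSOD-style protocol: repeat over several random seed groups
# (e.g., 10, as in TFA) and report the mean instead of a single run.
seeds = range(10)
ap_runs = [finetune_and_eval(s, shots=10) for s in seeds]
mean_ap = sum(ap_runs) / len(ap_runs)
print(f"novel AP averaged over {len(seeds)} runs: {mean_ap:.3f}")
```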