Maybe my explanations in the paper weren’t enough, so let me go into more detail here. For few-shot learning tasks, ensuring that the training and testing datasets have no overlapping classes is essential because the goal is to evaluate the model’s ability to generalize to unseen classes, and class overlap disrupts that goal.
In our setup, since many classes in Pascal are also present in COCO, training on COCO without modifications would create overlap with Pascal classes, invalidating the few-shot learning setup. To address this, we first remove any classes in COCO that are also in Pascal, maintaining a clear separation between the training and testing classes.
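For concreteness, here is a minimal sketch of what that filtering step could look like. The class list, the alias mapping, and the function names below are illustrative assumptions for this reply, not the exact code used in the repository:

```python
# Sketch: build a COCO training category list that excludes the 20 PASCAL VOC classes.
# Assumes COCO-style category dicts like {"id": 5, "name": "airplane"}.

# The 20 PASCAL VOC categories.
PASCAL_CLASSES = {
    "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
    "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor",
}

# COCO spells a few of these differently, so normalise names before comparing.
COCO_TO_PASCAL_ALIASES = {
    "airplane": "aeroplane",
    "motorcycle": "motorbike",
    "dining table": "diningtable",
    "potted plant": "pottedplant",
    "couch": "sofa",
    "tv": "tvmonitor",
}

def filter_coco_categories(coco_categories):
    """Keep only COCO categories whose (normalised) names are not PASCAL VOC classes."""
    kept = []
    for cat in coco_categories:
        name = COCO_TO_PASCAL_ALIASES.get(cat["name"], cat["name"])
        if name not in PASCAL_CLASSES:
            kept.append(cat)
    return kept

def filter_coco_annotations(coco_annotations, kept_category_ids):
    """Drop annotations whose category was removed from the training split."""
    return [ann for ann in coco_annotations if ann["category_id"] in kept_category_ids]
```

The same idea applies however the repo actually loads COCO: the point is simply that no training sample may carry a label belonging to one of the Pascal test classes.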
It's essential to apply this filtering consistently to the training dataset in any cross-domain few-shot learning project. Without this separation, the model would see these classes during training, leading to artificially high IoU scores during testing. The IoU might look impressive, but it wouldn't accurately reflect the model's performance on unseen classes, making the evaluation unfair.
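As a rough illustration of why this matters for the reported numbers, one way to keep the evaluation honest is to average IoU only over classes that never appeared in the training split. The dictionary layout and names below are assumptions for illustration, not the repository's evaluation code:

```python
def mean_iou_on_unseen(per_class_iou, train_classes):
    """Average IoU over only the classes that were never seen during training.

    per_class_iou: assumed dict mapping class name -> IoU on the Pascal test set.
    train_classes: set of class names that appeared in the COCO training split.
    """
    unseen = [iou for cls, iou in per_class_iou.items() if cls not in train_classes]
    return sum(unseen) / len(unseen) if unseen else 0.0
```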
Hello, I have a question regarding cross-domain research: how is the dataset processed? Are only specific classes from the COCO dataset selected to avoid overlap with classes in the Pascal dataset? Is this processing applied only to the COCO dataset? And why does the paper also say that, when testing on Pascal, we need to filter out any classes that were encountered during training?
Thank you for your reply.