fmcarlucci / JigenDG

Repository for the CVPR19 oral paper "Domain Generalization by Solving Jigsaw Puzzles"
GNU Affero General Public License v3.0
248 stars 45 forks

Doubt about experimental results #28

Closed hamanhbui closed 4 years ago

hamanhbui commented 4 years ago

Dear authors, thank you for your contribution; this is a good baseline for DG. However, I have two questions about your reported results. First, the dataset publishers have warned that AGG results will differ if you do not keep their train/val split meta-files (http://www.eecs.qmul.ac.uk/~dl307/project_iccv2017), but your training code does not appear to follow this instruction. Could this make your results look better than those of other papers? Second, I cannot see that the jigsaw classifier is effective in this task: when I set its lambda to 0, the results did not change at all. Does this mean the self-supervised learning contributes nothing, and that your gains come only from the data augmentation introduced by the permutations?
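To make the second point concrete, here is a minimal sketch of the joint objective I am referring to, L = L_cls + lambda * L_jig, as described in the paper. The names (`model`, `jig_weight`, `training_step`) are hypothetical and not the repository's actual API:

```python
# Hypothetical sketch of the paper's joint objective, not the repo's code.
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def training_step(model, images, class_labels, jig_labels, jig_weight):
    # The network has two heads: one predicts the object class,
    # the other predicts the index of the jigsaw permutation
    # that was applied to the input image.
    class_logits, jig_logits = model(images)
    cls_loss = criterion(class_logits, class_labels)
    jig_loss = criterion(jig_logits, jig_labels)
    # With jig_weight = 0 the jigsaw term vanishes, but shuffled
    # images may still pass through the network, so a pure
    # augmentation effect could remain even without the loss.
    return cls_loss + jig_weight * jig_loss
```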

fmcarlucci commented 4 years ago

Hello, the reason we report our own baseline is exactly to offer a fair comparison. Regardless of the specific data split, both the baseline and our method run under the same conditions, so we can observe the relative improvement brought by the self-supervised task.

Regarding your second point, did you run multiple splits? There is always a lot of variability in these experiments, so to get meaningful results you need to repeat each experiment several times and consider the average. The impact of the self-supervised task varies by dataset, but you should be able to see the difference.
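As a concrete illustration of the protocol I mean, here is a minimal sketch that repeats a run over several seeds and reports the mean and standard deviation; `run_experiment` is a hypothetical stand-in for one full train/test cycle of the training script:

```python
# Hedged sketch of the multi-run evaluation protocol suggested above.
import statistics

def evaluate_over_seeds(run_experiment, seeds=(0, 1, 2, 3, 4)):
    # run_experiment(seed) is assumed to train once and return
    # the final test accuracy for that random seed/split.
    accuracies = [run_experiment(seed=s) for s in seeds]
    mean = statistics.mean(accuracies)
    std = statistics.stdev(accuracies)
    print(f"accuracy: {mean:.2f} +/- {std:.2f} over {len(seeds)} runs")
    return mean, std
```

Comparing the averaged numbers with and without the jigsaw loss (rather than a single run of each) should make the contribution of the self-supervised task visible.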