heartInsert opened 5 years ago
I can't say for sure. I haven't tried similar settings. It's worth a try.
Thanks, I will try them both later. There are two more questions.
1: In your paper, Section 3.4 (SCALABILITY TEST ON THE IMAGENET DATASET), when using UDA on the full ImageNet you selected 1.3M images from JFT. Sorry, English is not my mother tongue: does 1.3M stand for 1.3e+6?
2: I think I have read your paper carefully, but you didn't mention whether, when using UDA on the full ImageNet, the transformations for the supervised data are also just simple augmentations (cropping and flipping), like on CIFAR-10? Thanks.
Sorry for my noob question. I noticed from the training graph in the paper's Figure 1 that the labeled images are not transformed by RandAugment. I think maybe that is because, with so little labeled data, we have to maintain its fidelity. But now I have about 9k labeled images and 7k unlabeled images, and there is also very little gap between the labeled and unlabeled data, so I want to confirm whether it will cause side effects if I use RandAugment on the labeled data. Thanks.
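For what it's worth, the asymmetry being asked about can be sketched in a few lines: the supervised branch gets a weak transform (crop/flip) while the unlabeled branch gets a RandAugment-style chain of randomly chosen ops. This is only a toy illustration, not the paper's code: `flip_horizontal`, `invert`, and the 2D-list "images" are hypothetical stand-ins for real image ops (e.g. from torchvision), and the op list and `n=2` are arbitrary choices.

```python
import random

# Toy stand-ins for image operations, acting on a 2D list of pixel values.
# A real pipeline would use PIL / torchvision transforms instead.
def flip_horizontal(img):
    return [row[::-1] for row in img]

def invert(img):
    return [[255 - px for px in row] for row in img]

def identity(img):
    return [row[:] for row in img]

OPS = [flip_horizontal, invert, identity]

def rand_augment(img, n=2, rng=random):
    # RandAugment-style: apply n ops sampled uniformly (with replacement).
    for op in rng.choices(OPS, k=n):
        img = op(img)
    return img

def simple_augment(img, rng=random):
    # Weak supervised-branch augmentation: random horizontal flip only,
    # a stand-in for the paper's "cropping and flipping".
    return flip_horizontal(img) if rng.random() < 0.5 else img

labeled = [[0, 50], [100, 150]]
unlabeled = [[10, 20], [30, 40]]

x_sup = simple_augment(labeled)        # weak augmentation for labeled data
x_unsup_aug = rand_augment(unlabeled)  # strong augmentation for the UDA consistency loss
```

The point of the split is that the consistency loss only needs the strongly augmented view on the unlabeled branch; whether the labeled branch tolerates RandAugment too is exactly the question above.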