kleinzcy opened this issue 3 years ago
Hi, thanks for your interest in our work.
It is hard to say why, since your FixMatch run is not based on our implementation. One possible cause is the batch sizes of the labeled and unlabeled data: we use a smaller batch size for unlabeled data to reduce training cost. This may explain the improved results you got, though I am not confident.
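For context, FixMatch-style trainers usually couple the two batch sizes through a ratio `mu` (unlabeled batch = `mu` × labeled batch), so a smaller `mu` directly shrinks the per-step cost. A minimal sketch of that bookkeeping, with illustrative values (not the repo's actual code):

```python
# Sketch: how FixMatch-style trainers derive the unlabeled batch size
# from the labeled batch size and the ratio mu. Values are illustrative.

def batch_sizes(labeled_bs: int, mu: int) -> tuple:
    """Return (labeled, unlabeled) per-step batch sizes."""
    return labeled_bs, labeled_bs * mu

# The original FixMatch paper uses mu = 7; a smaller mu (e.g. 2) shrinks
# the unlabeled batch and hence the per-step training cost.
print(batch_sizes(64, 7))  # -> (64, 448)
print(batch_sizes(64, 2))  # -> (64, 128)
```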
Hi,
I also have the same issue reproducing the FixMatch results in the table. Using the code provided in this repo, the error rate is 19.82% and the AUROC using softmax is 30.44% on CIFAR-10 (50 labels), which differs significantly from the reported numbers. @ksaito-ut Could you please share the code and the command used to produce the FixMatch numbers in your table?
In my reproduction, I removed the OOD filtering of `exclude_dataset`, the three detector losses (`L_o`, `L_oem`, `L_socr`), and the warm-up for `L_fix`. The AUROC is the `roc_soft` value computed in `test`.
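With those pieces removed, what remains is essentially the plain FixMatch objective: supervised cross-entropy plus a confidence-thresholded pseudo-label loss on unlabeled data. A minimal NumPy sketch of that unlabeled term (my own helper names, not the repo's code):

```python
import numpy as np

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fixmatch_unlabeled_loss(logits_weak, logits_strong, threshold=0.95):
    """Cross-entropy on strong-augmentation logits against pseudo-labels
    taken from weak-augmentation logits, masked by prediction confidence."""
    probs = softmax(logits_weak)
    conf = probs.max(axis=-1)             # confidence of each pseudo-label
    pseudo = probs.argmax(axis=-1)        # hard pseudo-label
    mask = (conf >= threshold).astype(float)
    log_p = np.log(softmax(logits_strong) + 1e-12)
    ce = -log_p[np.arange(len(pseudo)), pseudo]
    return (ce * mask).mean()             # unconfident samples contribute 0
```

With `threshold=0.95` (as in the command in this thread), only confidently pseudo-labeled unlabeled samples contribute to the loss.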
This is the command I used: `CUDA_VISIBLE_DEVICES=0 python main.py --dataset cifar10 --num-labeled 50 --out ./result --arch wideresnet --lambda_oem 0 --lambda_socr 0 --batch-size 64 --mu 2 --lr 0.03 --expand-labels --seed 0 --opt_level O2 --no-progress --threshold 0.95`
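For anyone comparing AUROC numbers: the usual softmax baseline scores each test sample by its maximum softmax probability and computes AUROC over in-distribution vs. OOD labels. A self-contained sketch under that assumption (my own function names, not the `roc_soft` code itself):

```python
import numpy as np

def softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def auroc(scores, is_id):
    """AUROC via the rank-sum (Mann-Whitney U) formula.
    scores: higher = more in-distribution; is_id: 1 for ID, 0 for OOD.
    Ties are ignored to keep the sketch short."""
    scores, is_id = np.asarray(scores), np.asarray(is_id)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = is_id.sum()
    n_neg = len(is_id) - n_pos
    return (ranks[is_id == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Max softmax probability as the in-distribution score:
logits = np.array([[4.0, 0.0], [3.0, 0.5], [0.2, 0.0], [0.0, 0.1]])
scores = softmax(logits).max(axis=-1)
print(auroc(scores, np.array([1, 1, 0, 0])))  # -> 1.0
```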
Hi, this is great work. @ksaito-ut When I try to reproduce the FixMatch result with this code, removing only `L_socr`, the error rate is 27% on CIFAR-10 (50 labels). Could you please share your baseline settings for FixMatch in Table 1?
Thanks for your great work!
I ran into some problems when reimplementing the FixMatch results in your table. The error rate of FixMatch is 7.4% on CIFAR-10 (known/unknown classes 6/4, 50 labeled images), which is lower than your reported result. I have not figured out why. Do you have any ideas?
I used the FixMatch code from https://github.com/TorchSSL/TorchSSL