zju-vipa / CMI

[IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation

Failed to reproduce. #2

Closed · Sharpiless closed this 3 years ago

Sharpiless commented 3 years ago

I reran the code starting from pre-training and got the following results:

| Model | Method | Dataset | Top-1 accuracy |
|---|---|---|---|
| ResNet34* | Teacher | CIFAR-10 | 93.94 (95.70) |
| wrn_40_2 | Teacher | CIFAR-10 | 92.01 (94.87) |

(Numbers in parentheses are the results reported in the README.)

For the DFQ algorithm, the results are as follows:

| Model | Data-Free Method | Student Loss | Generative Loss | Dataset | Top-1 accuracy |
|---|---|---|---|---|---|
| ResNet34-ResNet18 | DFQ (baseline) | KL | adv+bn+oh | CIFAR-10 | 88.89 (94.61) |
| ResNet34-ResNet18 | DFQ (baseline) | KL | adv+bn+oh | CIFAR-100 | 1.89 (77.01) |

For the above results, I reproduced DFQ based on your code, but for some reason the results are poor, especially on CIFAR-100. I would be grateful if you could kindly give me some advice.
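
For clarity, this is roughly how I understand the "KL" student loss and the "adv+bn+oh" generative loss (a minimal sketch in plain PyTorch; the temperature, weights, and exact terms are my assumptions, not your implementation):

```python
# Minimal sketch of the loss terms above (my assumptions, not the repo's code).
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T=20.0):
    # "KL" student loss: KL divergence between temperature-softened outputs.
    p_s = F.log_softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(p_s, p_t, reduction="batchmean") * (T * T)

def generator_loss(teacher_logits, student_logits, bn_stat_loss,
                   w_adv=1.0, w_bn=1.0, w_oh=1.0):
    # "adv": push synthetic images toward teacher/student disagreement.
    l_adv = -kd_kl_loss(student_logits, teacher_logits, T=1.0)
    # "bn": bn_stat_loss should match the teacher's running BatchNorm
    # statistics, typically collected with forward hooks on BN layers.
    # "oh": one-hot term encouraging confident teacher predictions.
    l_oh = F.cross_entropy(teacher_logits, teacher_logits.argmax(dim=1))
    return w_adv * l_adv + w_bn * bn_stat_loss + w_oh * l_oh
```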

Sharpiless commented 3 years ago

A question about the pretrained model: how did you obtain the pretrained ResNet-34 on CIFAR-10? If I train the teacher the way you describe, the accuracy does not exceed 94%, yet your released model reaches 95.7%. I tried the FMix method and obtained weights with 95.7% accuracy, but their knowledge-distillation result was very poor (85.47%), far below the result you report. I am now wondering whether something was done to the teacher model that is not mentioned in the code or the paper.
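
For reference, this is roughly the standard CIFAR-10 recipe I assumed when retraining the teacher (a minimal sketch; the 200-epoch SGD schedule and the torchvision ResNet-34 are placeholders, not necessarily what produced the released 95.7% checkpoint):

```python
# Sketch of a standard CIFAR-10 teacher training loop (assumed recipe only).
import torch
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True,
                                     num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
# Placeholder: the repo's CIFAR-style ResNet-34 should be used instead.
model = torchvision.models.resnet34(num_classes=10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```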

Sharpiless commented 3 years ago
Some of the other experimental results:

| Dataset | Method | ZSKT | DAFL | DFQ | DeepInv |
|---|---|---|---|---|---|
| CIFAR-10 | ResNet34-ResNet18 | 89.84 | 84.98 | 90.37 | 90.67 |
| CIFAR-10 | wrn40-2-wrn16-1 | 80.47 | 72.99 | | |
| CIFAR-100 | ResNet34-ResNet18 | 61.58 | 68.02 | 71.12 | |
| CIFAR-100 | wrn40-2-wrn16-1 | 16.19 | | | |

That's a big difference from the README.

liuhe1305 commented 2 years ago

Hi, did you solve the problem?

CHENBIN99 commented 1 year ago

Same question