LINs-lab / RDED

[CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm
https://arxiv.org/abs/2312.03526
Apache License 2.0

About the acc #4

Closed Jiacheng8 closed 2 months ago

Jiacheng8 commented 2 months ago

Hi, first of all, thanks for the great work you have done!

I have been working with the CIFAR-100 dataset using your script with the following command:

python ./main.py \
--subset "cifar100" \
--arch-name "resnet18_modified" \
--factor 1 \
--num-crop 5 \
--mipc 300 \
--ipc 50 \
--stud-name "resnet18_modified" \
--re-epochs 300

However, I noticed that the accuracy I obtained on the test set is 54.25%, which is significantly lower than the 62.6 ± 0.1% reported in your paper. I would greatly appreciate it if you could help clarify any potential reasons for this discrepancy.

Additionally, I wanted to ask whether the labels for the CIFAR-100 dataset in your experiment were the same as the original dataset. Also, could you confirm if the teacher model's performance on the training set and test set was similar to the following:

Train set: loss = 0.603734, Top-1 error = 17.14%, Top-5 error = 3.16%
Test set: loss = 1.890325, Top-1 error = 41.41%, Top-5 error = 16.46%

Thank you very much for your assistance, and I look forward to your response!

Jiacheng8 commented 2 months ago

It turned out to be a label-ordering issue on my side; the authors' reported result is correct.
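For anyone hitting the same discrepancy: a label-order mismatch between the distilled dataset and the evaluation set silently deflates accuracy, since even perfect predictions are scored against permuted class indices. The following minimal sketch (my own illustration, not code from this repository) shows the effect with synthetic data:

```python
import numpy as np

def accuracy(preds, labels):
    """Top-1 accuracy as a fraction in [0, 1]."""
    return float(np.mean(preds == labels))

rng = np.random.default_rng(0)
num_classes = 100  # CIFAR-100

# Ground-truth labels and "perfect" predictions from a model.
true = rng.integers(0, num_classes, size=10_000)
preds = true.copy()

# Simulate a dataset whose class indices were stored in a different
# order: remap every label through a random permutation.
perm = rng.permutation(num_classes)
mismatched = perm[true]

print(accuracy(preds, true))        # 1.0 with consistent labels
print(accuracy(preds, mismatched))  # collapses to roughly chance level
```

Checking that the class-index mapping used when building the distilled dataset matches the one used at evaluation time (e.g. torchvision's default CIFAR-100 ordering) rules this out quickly.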