GeorgeCazenavette / mtt-distillation

Official code for our CVPR '22 paper "Dataset Distillation by Matching Training Trajectories"
https://georgecazenavette.github.io/mtt-distillation/

Reproduce cross-architecture performance #18

Closed: NiaLiu closed this issue 2 years ago

NiaLiu commented 2 years ago

Hi George, thanks for your inspiring and great work.

I would like to reproduce the cross-architecture accuracy, but I'm having difficulty reaching an accuracy comparable to the numbers listed in the paper. I think I might be missing some details. Could you please post the command you used to produce the cross-architecture performance on CIFAR-10 with 10 img/cls?

Here are the commands I used.

First step:

```bash
python buffer.py --dataset=CIFAR10 --model=ConvNet --train_epochs=50 --num_experts=100 --zca --buffer_path=buffer --data_path=data
```

Second step:

```bash
python3 distill.py --dataset=CIFAR10 --ipc=10 --syn_steps=30 --expert_epochs=2 --max_start_epoch=15 --zca --lr_img=1000 --lr_lr=1e-05 --lr_teacher=0.01 --buffer_path=buffer --data_path=data --eval_mode='M' --eval_it=1 --Iteration=300
```

Is there something I'm missing here? In addition, did you change the parser argument epoch_eval_train when you produced the SOTA cross-architecture results?
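For concreteness, here is my rough understanding of what the cross-architecture evaluation does: train an architecture that was not seen during distillation on the synthetic images, then test it on the real CIFAR-10 test set. This is only a minimal sketch under my own assumptions, not the repo's actual evaluation code; the synthetic-set filenames, the ResNet-18 choice, and the optimizer settings are all placeholders:

```python
# Minimal sketch of cross-architecture evaluation (my assumptions, not the
# repo's code): train an unseen architecture on the distilled set, then test
# on the real test set. "image_syn.pt" / "label_syn.pt" are placeholder names.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical saved synthetic set: [100, 3, 32, 32] images
# (10 ipc x 10 classes) and their integer labels.
image_syn = torch.load("image_syn.pt").to(device)
label_syn = torch.load("label_syn.pt").to(device)

# An architecture NOT used during distillation, e.g. ResNet-18.
model = torchvision.models.resnet18(num_classes=10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Train on the synthetic images only (the phase that a setting like
# epoch_eval_train would control).
model.train()
for step in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(image_syn), label_syn)
    loss.backward()
    opt.step()

# Evaluate on the real CIFAR-10 test set. NOTE: if the synthetic images were
# learned in ZCA-whitened space, the same whitening must be applied here too.
test_set = torchvision.datasets.CIFAR10(root="data", train=False,
                                        download=True,
                                        transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(test_set, batch_size=256)

model.eval()
correct = 0
with torch.no_grad():
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
print(f"cross-architecture test accuracy: {correct / len(test_set):.4f}")
```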

Thank you! Looking forward to your reply!

wxxhahaha commented 2 years ago

Hi, I'm not the author, but I ran into the same issue. When I got rid of --zca, the accuracy went up quickly; maybe you can give it a try:

```bash
python buffer.py --dataset=CIFAR10 --model=ConvNet --train_epochs=50 --num_experts=100 --buffer_path=buffer --data_path=data
```
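In case it helps to see what the flag changes: as far as I understand, --zca applies ZCA whitening to the images before distillation, so the synthetic images are learned in whitened space rather than raw pixel space, and any evaluation has to apply exactly the same transform. A toy sketch of ZCA whitening for illustration only (my own version, not the repo's implementation):

```python
# Toy ZCA whitening sketch (illustration only, not the repo's implementation).
import torch

def zca_whiten(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Whiten a batch of images x of shape [N, C, H, W]."""
    n = x.shape[0]
    flat = x.reshape(n, -1)
    flat = flat - flat.mean(dim=0, keepdim=True)   # center each feature
    cov = flat.T @ flat / (n - 1)                  # feature covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)      # cov = U diag(L) U^T
    # ZCA matrix: W = U diag(1 / sqrt(L + eps)) U^T
    w = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.T
    return (flat @ w).reshape(x.shape)
```

If the distilled images look correct but the transfer accuracy is low, a mismatch between the whitening used at distillation time and at evaluation time would be my first guess.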