GeorgeCazenavette / mtt-distillation
Official code for our CVPR '22 paper "Dataset Distillation by Matching Training Trajectories"
https://georgecazenavette.github.io/mtt-distillation/
License: Other · 395 stars · 55 forks
Issues
- #42 (open) Why does grand_loss gradually increase and become NaN? · ManZhao123 · 3 months ago · 1 comment
- #41 (closed) Quick question regarding class index mapping for the custom ImageNet subsets · meghbhalerao · 1 year ago · 4 comments
- #40 (closed) Buffer2 · xxxx-Bella · 1 year ago · 0 comments
- #39 (closed) Buffer2 · xxxx-Bella · 1 year ago · 1 comment
- #38 (open) Checkpoints of models · DeepOceanDeep · 1 year ago · 1 comment
- #37 (closed) Normalization of dataset · f-amerehi · 1 year ago · 4 comments
- #36 (open) Running this project with PCAM instead of CIFAR · wojo501 · 1 year ago · 0 comments
- #35 (closed) Details about the full real subsets of ImageNet · zeyuanyin · 1 year ago · 2 comments
- #34 (open) Could you please share the variance of the performance in the KIP-to-NN column? · NiaLiu · 1 year ago · 0 comments
- #33 (open) How to do distillation for a model other than VGG? · ghost · 1 year ago · 2 comments
- #32 (open) About hyper-parameters · maple-zhou · 1 year ago · 1 comment
- #31 (open) CIFAR-10 ipc=10 hyperparameters · WeizhiGao · 1 year ago · 3 comments
- #30 (closed) ReparamModule usage · Sinp17 · 1 year ago · 2 comments
- #29 (open) Model training with the released synthetic CIFAR-10 (ipc=50) · LeavesLei · 1 year ago · 4 comments
- #28 (closed) N << M or N >> M · HaowenGuan · 1 year ago · 7 comments
- #27 (open) Grand loss curve · XuyangZhong-29 · 1 year ago · 1 comment
- #26 (closed) What is ZCA? · youdutaidi · 1 year ago · 2 comments
- #25 (closed) Question on Tiny ImageNet ipc=50 · ghost · 1 year ago · 2 comments
- #24 (closed) Question about learning rate · harrylee999 · 1 year ago · 2 comments
- #23 (closed) How did you get x̄ ± s in Table 1? · NiaLiu · 1 year ago · 2 comments
- #22 (closed) Question about Imagenette · yaolu-zjut · 1 year ago · 1 comment
- #21 (closed) GPU requirement · rave78 · 1 year ago · 2 comments
- #20 (closed) Trouble distilling with VGG networks · ArmandXiao · 1 year ago · 4 comments
- #19 (closed) Update distill.py · zhaoguangxiang · 2 years ago · 0 comments
- #18 (closed) Reproduce cross-architecture performance · NiaLiu · 2 years ago · 1 comment
- #17 (closed) Unrolled optimization · vadimkantorov · 1 year ago · 1 comment
- #16 (closed) Negative LR · cliangyu · 2 years ago · 3 comments
- #15 (closed) A question about backbone networks · imesu2378 · 2 years ago · 3 comments
- #14 (closed) Experience on hyper-parameters · Huage001 · 2 years ago · 1 comment
- #13 (closed) The clip value · tao-bai · 2 years ago · 2 comments
- #12 (closed) A question about the paper · alittleCVer · 2 years ago · 2 comments
- #11 (closed) How to use the images? · Fduxiaozhige · 2 years ago · 7 comments
- #10 (closed) Where did you get the 36.1% accuracy from the paper "Dataset Distillation with Infinitely Wide Convolutional Networks"? · NiaLiu · 2 years ago · 3 comments
- #9 (closed) args.mom in buffer.py · liuyugeng · 2 years ago · 2 comments
- #8 (closed) Expert trajectory performance · 1215481871 · 2 years ago · 3 comments
- #6 (closed) distill.py loss = nan · yangyangtiaoguo · 2 years ago · 1 comment
- #5 (closed) buffer.py has no args.texture · liuyugeng · 2 years ago · 1 comment
- #4 (closed) Values for max_start_epoch · ankanbhunia · 1 year ago · 8 comments
- #3 (closed) About hyperparameter: learning rate for updating condensed samples · Alan-Qin · 2 years ago · 5 comments
- #2 (closed) How does it work on a new model arch? · lucasjinreal · 2 years ago · 1 comment
- #1 (closed) Great work! What is the training time for the distillation stage? · ankanbhunia · 2 years ago · 2 comments