MIV-XJTU / SPEED

PyTorch implementation of paper "Sparse Parameterization for Epitomic Dataset Distillation" in NeurIPS 2023.

Questions about the distilled images? #4


fofilix commented 3 weeks ago

Thank you for open-sourcing the code. I still have a few questions:

  1. When I checked the saved distilled images, I found that you save them into vis.pdf as a single sprite-like grid containing many small distilled images. The number of these small distilled images exceeds the default IPC setting (CIFAR10_IPC10_freenet_patch4_dec96d1b3h, i.e., IPC is 10). Why?
  2. I'm using SPEED on my own dataset. If I want to save the distilled images into folders instead of the *.pdf format, I think I should change the code at https://github.com/MIV-XJTU/SPEED/blob/64c0fc4f0d677544e85107897be7806a0b8b7954/distill.py#L222-L231. But I'm still confused about how to change it because I'm new to the dataset distillation field. Can you show me how to accomplish this?
CAOANJIA commented 3 weeks ago
  1. Since SPEED is a synthetic data parameterization framework, more synthetic images can be produced under the same storage budget (i.e., more than 10 synthetic images per class under IPC 10); see the first sketch after this list for the budget arithmetic.

  2. By default, we save SAET, SCM, and FReeNet, which conforms to Eq. (7). You can use the three to synthesize images; please refer to the synthesis process in 'distill.py' or 'eval.py', and to the forward function of FReeNet in 'networks.py'. If you want to save the synthetic images directly, you can use 'torch.save()' to store the synthesized tensor (e.g., in '.pth' format). This should be easy to implement; see the second sketch below.
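
To make the budget argument in point 1 concrete, here is a minimal back-of-the-envelope sketch. The latent code size and decoder cost below are purely illustrative assumptions, not SPEED's actual parameter counts; the point is only that compact per-image codes plus a shared decoder can afford more images than raw pixels under the same budget:

```python
# Illustrative storage-budget arithmetic (numbers are hypothetical,
# not SPEED's actual parameterization).
pixels_per_image = 3 * 32 * 32     # CIFAR-10 image size: 3,072 floats
budget = 10 * pixels_per_image     # IPC-10 raw-pixel budget per class: 30,720 floats

code_dim = 256                     # hypothetical per-image latent code size
decoder_params = 20_000            # hypothetical shared decoder cost, amortized per class

# How many images a parameterized form can afford within the same budget:
n_images = (budget - decoder_params) // code_dim
print(n_images)  # 41 > 10, so more than IPC images per class fit in the budget
```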
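
And here is a minimal sketch of saving the synthesized images to class folders, assuming a tensor `images` of shape (N, C, H, W) and a matching `labels` tensor have already been produced by the synthesis step in 'distill.py' (the variable names and output directory are hypothetical; adapt them to the actual code):

```python
import os
import torch
from torchvision.utils import save_image

# Hypothetical placeholders: `images` (N, C, H, W) and `labels` (N,) are
# assumed to come from the FReeNet synthesis step in distill.py.
out_dir = "distilled_images"
os.makedirs(out_dir, exist_ok=True)

# Option 1: save the raw tensor for later reuse.
torch.save(images.detach().cpu(), os.path.join(out_dir, "synthetic_images.pth"))

# Option 2: save each synthetic image as an individual PNG, grouped by class.
for i, (img, label) in enumerate(zip(images, labels)):
    class_dir = os.path.join(out_dir, f"class_{int(label)}")
    os.makedirs(class_dir, exist_ok=True)
    # save_image expects values in [0, 1]; rescale first if your
    # pipeline keeps images normalized differently.
    save_image(img.detach().cpu(), os.path.join(class_dir, f"img_{i}.png"))
```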