
DataDAM: Efficient Dataset Distillation with Attention Matching

Official implementation of "DataDAM: Efficient Dataset Distillation with Attention Matching", published as a conference paper at ICCV 2023.

File Tree

This folder contains all necessary code files and supplementary material for the main paper.

.
├── main_DataDAM.py         # Source code for reproducing DataDAM results on benchmark datasets and IPCs
├── networks.py             # Defines all relevant network architectures, including cross-arch models
├── utils.py                # Defines all utility functions required for the tasks and ablations in the main paper, including our attention module (sketched after this tree)
├── distill_test_model.py   # Script to test the frozen models
├── requirements.txt        # Lists all Python packages necessary for reproducing our model results
├── Supplementary.pdf       # Supplementary PDF for our main paper -- DataDAM
└── README.md
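
For orientation, the attention module in utils.py reduces intermediate feature maps to spatial attention maps, which are then matched between real and synthetic batches. The snippet below is a minimal sketch of that idea, not the repository's exact implementation: the exponent p, the batch-averaging, and the layer-wise MSE are assumptions based on the paper's description; check utils.py for the precise formulation.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat: torch.Tensor, p: int = 4) -> torch.Tensor:
    """Collapse a feature map of shape (B, C, H, W) into a normalized
    spatial attention map of shape (B, H*W) by pooling |activation|**p
    over the channel dimension. The exponent p is an assumption here;
    see the attention module in utils.py for the exact choice."""
    attn = feat.abs().pow(p).mean(dim=1)   # (B, H, W)
    attn = attn.flatten(start_dim=1)       # (B, H*W)
    return F.normalize(attn, dim=1)        # L2-normalize per sample

def attention_matching_loss(real_feats, syn_feats, p: int = 4):
    """MSE between batch-averaged attention maps of real and synthetic
    data, summed over the matched layers. A sketch of the idea, not
    the repository's exact loss."""
    loss = 0.0
    for fr, fs in zip(real_feats, syn_feats):
        loss = loss + F.mse_loss(spatial_attention(fr, p).mean(dim=0),
                                 spatial_attention(fs, p).mean(dim=0))
    return loss
```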

Hyperparameter Table

For reproducibility, we outline our associated hyperparameters below:

Distilled Datasets & Frozen Evaluation Models

We provide saved tensors of the distilled datasets, along with frozen evaluation models trained on the respective distilled datasets, on our HuggingFace page: https://huggingface.co/datasets/uoft-dsp-lab/DataDAM

Additionally, these frozen models can be tested with distill_test_model.py.
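
As a starting point, the snippet below shows one way to pull an artifact from that HuggingFace dataset repository and load it with PyTorch. The filename distilled_data.pt is a placeholder, not a confirmed file in the repository; browse the HuggingFace page for the actual file names, and use distill_test_model.py for the supported evaluation path.

```python
import torch
from huggingface_hub import hf_hub_download

# Download one artifact from the DataDAM dataset repository.
# NOTE: "distilled_data.pt" is a hypothetical filename used for
# illustration; check the HuggingFace page for the real file names.
path = hf_hub_download(
    repo_id="uoft-dsp-lab/DataDAM",
    filename="distilled_data.pt",
    repo_type="dataset",
)

# The saved tensors are assumed to be loadable with torch.load; the
# exact structure (e.g. image/label keys vs. a raw tensor) may differ.
payload = torch.load(path, map_location="cpu")
print(type(payload))
```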