ssnl / dataset-distillation

Open-source code for paper "Dataset Distillation"
https://ssnl.github.io/dataset_distillation
MIT License

Compare to training on randomly selected samples #27

Closed bennyguo closed 5 years ago

bennyguo commented 5 years ago

Did you guys compare the results of training on distilled data to training on randomly selected samples from the dataset? For example, if I randomly select 10 images from MNIST (1 per category) and train the network on them, how would the results compare? I think it's a fundamental baseline to compare against.

Very interesting work by the way!
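For reference, a minimal sketch of the random-sample baseline described above: pick one random MNIST image per class (10 images total), train a small network on them, and evaluate on the full test set. The simple MLP, optimizer, and training schedule here are placeholder assumptions for illustration, not the architecture or hyperparameters used in the paper or in this repo.

```python
import random

import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Load MNIST (downloads to ./data on first run).
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
test_set = datasets.MNIST("data", train=False, download=True,
                          transform=transforms.ToTensor())

# Pick one random training index per class (10 images total).
targets = train_set.targets
indices = [random.choice((targets == c).nonzero().flatten().tolist())
           for c in range(10)]
subset = torch.utils.data.Subset(train_set, indices)
loader = torch.utils.data.DataLoader(subset, batch_size=10)

# Placeholder classifier; the paper uses LeNet-style networks instead.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train on just the 10 randomly sampled images.
for epoch in range(100):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# Evaluate on the full MNIST test set.
test_loader = torch.utils.data.DataLoader(test_set, batch_size=1000)
correct = 0
with torch.no_grad():
    for x, y in test_loader:
        correct += (model(x).argmax(1) == y).sum().item()
print(f"test accuracy: {correct / len(test_set):.3f}")
```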

ssnl commented 5 years ago

This comparison is in the paper. Thanks for your interest in our work. :)

bennyguo commented 5 years ago

Oh I must have missed it. Thanks.