pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

Add tests for datasets #963

Closed fmassa closed 3 years ago

fmassa commented 5 years ago

We should add unit tests for datasets. This will make the codebase more reliable, and I expect that adding new datasets to torchvision will become a more straightforward procedure for the reviewer, as they will be able to rely on CI for much of the code checking.

We need to define how to structure the tests for the datasets so that we avoid having to download the full datasets, which would probably make the CI flaky due to network issues.

Here are a few alternatives:

1 - Add dummy test files for each dataset

Every dataset that has a download option should use a (newly introduced) function that performs the downloading and the extraction. The dummy test files should have the same folder structure as the original files after extraction. This way, any postprocessing of the extracted files can be tested by the testing suite.

The dummy files would be fake images (e.g., of 2x3 pixels), but they enable testing both the __init__ and the __getitem__ of the dataset in a straightforward way, which are the only methods that we guarantee should be implemented for datasets.
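
For illustration, here is a rough sketch of this approach. It generates the tiny fake files in a temporary directory rather than committing them, and uses ImageFolder as a stand-in for a real dataset (a dataset under test would have its own folder layout):

```python
# Sketch of approach 1: lay out tiny fake files in the structure the dataset
# expects after extraction, then exercise __init__ and __getitem__.
import os
import tempfile
import unittest

from PIL import Image
from torchvision import datasets


class FakeImageFolderTest(unittest.TestCase):
    def test_init_and_getitem(self):
        with tempfile.TemporaryDirectory() as root:
            # Two classes with two tiny (3x2) images each, mimicking the
            # folder structure ImageFolder expects.
            for cls in ("cat", "dog"):
                os.makedirs(os.path.join(root, cls))
                for i in range(2):
                    Image.new("RGB", (3, 2)).save(
                        os.path.join(root, cls, f"{i}.png")
                    )

            dataset = datasets.ImageFolder(root)
            self.assertEqual(len(dataset), 4)

            img, target = dataset[0]
            self.assertEqual(img.size, (3, 2))
            self.assertIn(target, (0, 1))


if __name__ == "__main__":
    unittest.main()
```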

Drawback

While this is a very flexible alternative, adding a new dataset carries an extra burden, as the contributor now also needs to create a set of fake data that has to be committed into the repo.

2 - Add an option to load images / data from a buffer

There is a common pattern in torchvision datasets:

1 - download / extract the original dataset
2 - get a mapping of image_id -> image_path / class_id / etc
3 - load the image (either from memory or from the image_path) and the annotations

The download / extract parts should be factored out into a common function which handles zip / tar / etc.

If we add an extra method to the datasets, get_paths_for_idx, that gives the mapping image_id -> image_path / class_id / etc, we could then patch this method during testing so that instead of returning an image_path, it returns a file-like object, which can be read by PIL, scipy.io.loadmat, xml.etree.ElementTree and other Python libs without problems. This will require that we specify somewhere the type of values that get_paths_for_idx returns, so that we know how to generate random data that fits what the dataset expects.
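
A rough sketch of how this could look; get_paths_for_idx is the proposed method, not an existing torchvision API, and ToyDataset is purely illustrative:

```python
# Sketch of approach 2: patch the path-returning method so it yields an
# in-memory file-like object instead of a real path.
import io
import unittest
from unittest import mock

from PIL import Image
from torch.utils.data import Dataset


class ToyDataset(Dataset):
    """Illustrative dataset following the proposed split."""

    def get_paths_for_idx(self, idx):
        # In real use this would return (image_path, class_id).
        return f"/data/img_{idx}.png", idx % 2

    def __getitem__(self, idx):
        path_or_file, target = self.get_paths_for_idx(idx)
        # PIL accepts either a path or a file-like object.
        img = Image.open(path_or_file)
        return img, target

    def __len__(self):
        return 4


class ToyDatasetTest(unittest.TestCase):
    def test_getitem_from_buffer(self):
        buf = io.BytesIO()
        Image.new("RGB", (3, 2)).save(buf, format="PNG")
        buf.seek(0)

        with mock.patch.object(ToyDataset, "get_paths_for_idx",
                               return_value=(buf, 1)):
            img, target = ToyDataset()[0]
            self.assertEqual(img.size, (3, 2))
            self.assertEqual(target, 1)


if __name__ == "__main__":
    unittest.main()
```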

Drawback

We might end up adding extra complexity to the definition of a dataset, and we might not even test all possible code paths.

3 - Factor out dataset-specific functions and test only those

This would be indirect testing, where we would split the dataset into standalone functions and test each function with some dummy inputs. For example, in https://github.com/pytorch/vision/blob/d4a126b6e9bbf7e86082eb8760e12e15c09831ac/torchvision/datasets/mnist.py#L311-L328 we could generate on-the-fly a dummy object to test those functions, without having to test all of MNIST itself.
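
A sketch of this style of test; parse_idx_labels below is a simplified stand-in for the MNIST readers linked above, not torchvision's actual code:

```python
# Sketch of approach 3: pull dataset-specific parsing into a standalone
# function and test it with a dummy, on-the-fly generated input.
import struct
import unittest


def parse_idx_labels(data: bytes):
    """Parse a big-endian IDX1 label buffer into a list of ints."""
    magic, count = struct.unpack(">ii", data[:8])
    if magic != 2049:
        raise ValueError(f"unexpected magic number {magic}")
    labels = list(data[8:8 + count])
    if len(labels) != count:
        raise ValueError("truncated label buffer")
    return labels


class ParseIdxLabelsTest(unittest.TestCase):
    def test_roundtrip(self):
        labels = [0, 1, 2, 7, 9]
        buf = struct.pack(">ii", 2049, len(labels)) + bytes(labels)
        self.assertEqual(parse_idx_labels(buf), labels)

    def test_bad_magic(self):
        with self.assertRaises(ValueError):
            parse_idx_labels(struct.pack(">ii", 1234, 0))


if __name__ == "__main__":
    unittest.main()
```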

Drawback

We wouldn't have end-to-end tests, but would only be testing parts of the functionality.

Wrapping up

There are other potential ways of implementing unit tests for the datasets that I'm potentially missing in this list.

One thing that is pretty clear to me is that we should provide a function to perform the download + extract for zip / tar / etc that is tested, and use it everywhere in the codebase.
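
A minimal sketch of what such a shared helper could look like (not torchvision's actual API); a real version would also handle downloading and checksum verification:

```python
# Sketch of a single, tested extract helper that every dataset's download
# path could reuse instead of re-implementing zip / tar handling.
import os
import tarfile
import zipfile


def extract_archive(archive_path: str, dest_dir: str) -> None:
    """Extract a .zip, .tar, .tar.gz or .tgz archive into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    if archive_path.endswith(".zip"):
        with zipfile.ZipFile(archive_path, "r") as zf:
            zf.extractall(dest_dir)
    elif archive_path.endswith((".tar", ".tar.gz", ".tgz")):
        with tarfile.open(archive_path, "r:*") as tf:
            tf.extractall(dest_dir)
    else:
        raise ValueError(f"unsupported archive type: {archive_path}")
```

Utilities along these lines (e.g. extract_archive and download_and_extract_archive in torchvision.datasets.utils) were later added to the library.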

I'm currently inclined towards going for approach number 1, but it will involve adding a number of (dummy and very small) files to the repo. It's currently the only one that tests end-to-end all the functionality that is required by the dataset API, and thus is the most robust IMO.

Thoughts?

soumith commented 5 years ago

you can do things like mocking, where a fake OS / fake filesystem / fake files are read and processed, and we assert that the right things happened when interacting with this fake OS.

See https://www.toptal.com/python/an-introduction-to-mocking-in-python for an example; look for rm and how the file removal is tested.
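
In the spirit of the linked article's rm example, a minimal sketch of that style of test:

```python
# Patch out the real filesystem call and assert that the code under test
# interacted with it in the expected way.
import os
import unittest
from unittest import mock


def rm(filename):
    """Toy function under test: removes a file."""
    os.remove(filename)


class RmTest(unittest.TestCase):
    @mock.patch("os.remove")
    def test_rm_calls_os_remove(self, mock_remove):
        rm("any/path/file.txt")
        mock_remove.assert_called_once_with("any/path/file.txt")


if __name__ == "__main__":
    unittest.main()
```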

soumith commented 5 years ago

alternatively,

  1. for small datasets, full download and test
  2. for large datasets, tiny fake dataset

I think that's a reasonable compromise.

pmeier commented 5 years ago

@soumith Do you have a specific size in mind, up to which we would consider a dataset to be small?

fmassa commented 5 years ago

I'm currently implementing the "small" dataset downloading locally, and for MNIST / EMNIST / KMNIST / FashionMNIST, it takes 3 min to run a trivial test on my laptop.

I think that the small-dataset size might need to be smaller than EMNIST for tests to run reasonably fast.

I'll soon push a PR to test how long it takes CI to download it.

fmassa commented 5 years ago

We are going for an approach that generates random dataset files on-the-fly, so that they can be tested more easily.

Here is an example: https://github.com/pytorch/vision/blob/master/test/test_datasets.py

We have context managers that generate datasets on the fly for the testing, and they help us catch bugs: https://github.com/pytorch/vision/blob/4886ccc84b3ce5b9595abb993747e99801521bea/test/fakedata_generation.py#L18-L46
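
A hedged sketch of that pattern (not the actual helpers in fakedata_generation.py): a context manager builds a throwaway dataset root, yields it to the test, and cleans up afterwards. ImageFolder stands in for a real dataset here:

```python
# Sketch of a fake-data context manager that tests can reuse.
import contextlib
import os
import tempfile

from PIL import Image
from torchvision import datasets


@contextlib.contextmanager
def imagefolder_root(num_classes=2, images_per_class=3):
    """Create a temporary ImageFolder-style root filled with tiny images."""
    with tempfile.TemporaryDirectory() as root:
        for c in range(num_classes):
            cls_dir = os.path.join(root, f"class_{c}")
            os.makedirs(cls_dir)
            for i in range(images_per_class):
                Image.new("RGB", (3, 2)).save(os.path.join(cls_dir, f"{i}.png"))
        yield root


def test_imagefolder_with_fakedata():
    with imagefolder_root(num_classes=2, images_per_class=3) as root:
        dataset = datasets.ImageFolder(root)
        assert len(dataset) == 6
        img, target = dataset[0]
        assert img.size == (3, 2) and target == 0
```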

wolterlw commented 4 years ago

Have you reached consensus on how to test contributed datasets yet?

I would like to contribute wrappers for several hand pose estimation datasets, like RHD, but haven't found a definitive guide on how to structure a dataset wrapper or a transform.
In practice I usually structure data access as follows:

  1. a Dataset subclass that basically takes care of indexing; its __getitem__() returns {'img_': Path, 'anno': annotation}
  2. Transforms that take a sample and load the necessary files, be they images, videos or files in any other format

This approach has the following benefits:

Writing tests would also be split into dataset tests that interact with the directory structure and separate transform tests that only need a single dummy sample (a minimal sketch of this split follows).
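
A rough sketch of the split described above; the class and key names are illustrative, not an established torchvision convention:

```python
# The Dataset only indexes paths; a separate transform does the file I/O.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class IndexOnlyDataset(Dataset):
    """Returns paths and raw annotations; does no file I/O in __getitem__."""

    def __init__(self, root, transform=None):
        self.samples = sorted(Path(root).glob("*.png"))
        self.transform = transform

    def __getitem__(self, idx):
        sample = {"img_": self.samples[idx], "anno": {"index": idx}}
        if self.transform is not None:
            sample = self.transform(sample)
        return sample

    def __len__(self):
        return len(self.samples)


class LoadImage:
    """Transform that replaces the path with the loaded PIL image."""

    def __call__(self, sample):
        sample["img_"] = Image.open(sample["img_"]).convert("RGB")
        return sample


# Dataset tests can target IndexOnlyDataset with a fake directory of paths,
# while LoadImage can be tested with a single dummy sample.
```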

wolterlw commented 4 years ago

resumed in #1080

fmassa commented 4 years ago

@andfoy could you work on this? The first dataset that would be good to have is UCF101 for https://github.com/pytorch/vision/pull/2475

Let me know if you have questions

pmeier commented 3 years ago

- Tests for datasets and extraction logic with mocked archives
- Download tests