Preface
The goal of this issue is to discuss ideas for making it easier to specify
custom datasets, or to build up documentation that explains why the existing
process is already minimal.
I find myself frustrated that I can't just point VISSL at a custom dataset on
the command line. I think I have ideas on how to make this process easier, but
at the same time I only have shallow familiarity with this project, and if
I'm missing something (perhaps related to distributed processing, with which
I'm less familiar), I'd like to help build docs that make the rationale for the
existing process clearer.
🚀 Feature
Currently, using VISSL on a custom dataset requires 3 steps, including
manually editing a registry file [1, 2, 3].
The proposed feature is to allow specification of custom datasets in a single
step (not counting the creation of the data manifest file itself) by pointing
to the path of a data manifest file.
A data manifest is a single file on disk that contains all information about a
specific dataset split. In its simplest form, a manifest may simply contain a
list of image file paths (or URIs) --- it contains pointers to the data it
describes. However, it can be more complex, and also contain information about
annotations within each image and category labels relevant to the dataset.
A data manifest file has several advantages over predefined folder structures
and npy files. It is a single file that contains both paths to images and
labels, so instead of having to shuffle your data around on disk, you can write
a human-readable file that contains exactly the subset of images you want (so
your train and validation images can live in the same folder).
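For instance, the simplest possible manifest could be a plain text file with one image path (or URI) per line; the file name and paths below are hypothetical:

# my_dataset.train.txt -- one image path or URI per line
/data/my_dataset/images/img_00001.png
/data/my_dataset/images/img_00002.png
s3://my-bucket/my_dataset/images/img_00003.png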
Given some data manifest file, it should be possible to run SSL without any
extra preprocessing, registration, or code modification. For example:
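Something along these lines (a hypothetical sketch of the proposed workflow; the kwcoco data source and the manifest path are assumptions rather than existing VISSL behavior, though tools/run_distributed_engines.py and the simclr config are real):

# Hypothetical: point an existing VISSL recipe directly at a kwcoco manifest
python tools/run_distributed_engines.py \
    config=pretrain/simclr/simclr_8node_resnet.yaml \
    config.DATA.TRAIN.DATA_SOURCES=[kwcoco] \
    config.DATA.TRAIN.DATA_PATHS=[/path/to/my_train_bundle/data.kwcoco.json]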
Choice of Manifest Format
The specific format of a data manifest file is an important design choice.
The MS-COCO [4] format is a well-known and natural choice for a basic dataset
manifest because it provides a single file that points to a list of images, and
an optional set of annotations and categories associated with those images.
However, there are shortcomings in the MS-COCO format, including lack of
support for polygons with holes and for keypoints that do not conform to a
class skeleton. Furthermore, the official pycocotools API is difficult to
install and lacks support for dynamic modification of the dataset, among other
issues. I've been working on a backwards-compatible extension called KW-COCO
(KW stands for Kitware, which is my employer) [5]. This addresses
the aforementioned issues, adds extensions for multispectral imagery and for
grouping sequences of images into videos, and can be combined with a secondary
library, ndsampler [6], to easily create data loaders that operate on
subregions within images and videos.
It is important to note that VISSL does support the coco2014 dataset, but the
way it is specified ignores the fact that the annotation file already contains
paths to the images [7]. Instead of simply pointing to an
annotation/manifest file and assuming that the images/assets are relative to
the manifest file itself (unless otherwise specified), VISSL assumes that there
is a folder structure with very specific and inflexible path names.
Admittedly, this is also a problem with the way the COCO-2014 dataset is
distributed: the "file_name" in each image dictionary does not specify a
relative path to the expected file. In contrast, the kwcoco spec does specify
that the "file_name" attribute should be a relative or absolute path to an
image file. It also provides a method and a command-line tool, kwcoco reroot
[8], to help ensure relative or absolute paths are correct before you pass the
data to an algorithm. This works similarly to REMOVE_IMG_PATH_PREFIX and
NEW_IMG_PATH_PREFIX in the vissl config [8], but using kwcoco allows this step
to be decoupled from training and prediction rather than being coupled to the
SSL training run.
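For illustration, fixing up the paths before training might look roughly like this (a sketch assuming the current kwcoco CLI; the file names are placeholders):

# Rewrite the image paths in a manifest to be absolute before handing it to a trainer
kwcoco reroot --src data.kwcoco.json --dst data.abs.kwcoco.json --absolute=True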
Proposal
I would like to propose adding either an mscoco or kwcoco data source to
vissl. I would recommend kwcoco over mscoco because it is easier to install,
has a command line tool and Python API that let developers do preprocessing in
a way that is decoupled from the training code it will be fed to, and it comes
with its own synthetic toydata, which I think makes it convenient for tests and
examples.
What I would like to do is use kwcoco for generating toydata:
# Autogenerates a toy train and test dataset
pip install kwcoco
kwcoco toydata --key=shapes32 --bundle_dpath=my_train_bundle
kwcoco toydata --key=shapes16 --bundle_dpath=my_test_bundle
This results in the following data structure:
my_train_bundle
├── _assets
│   └── images
│       ├── img_00001.png
│       ├── img_00002.png
│       ...
│       ├── img_00029.png
│       ├── img_00030.png
│       ├── img_00031.png
│       └── img_00032.png
└── data.kwcoco.json
And the data manifest file (my_train_bundle/data.kwcoco.json) looks like this (truncated for clarity):
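Something roughly like the following (an illustrative, heavily truncated sketch; the exact fields and values produced by the toydata generator will differ):

{
    "categories": [
        {"id": 1, "name": "star"},
        {"id": 2, "name": "superstar"},
        ...
    ],
    "images": [
        {"id": 1, "file_name": "_assets/images/img_00001.png", "width": 600, "height": 600},
        {"id": 2, "file_name": "_assets/images/img_00002.png", "width": 600, "height": 600},
        ...
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [234, 108, 45, 62]},
        ...
    ]
}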
Because all the information about my dataset is contained in the manifest file,
there should be a way to just point whatever learning algorithm I want to use
at the manifest (assuming the information required by the learning algorithm
exists in the dataset).
I've forked VISSL (https://github.com/Erotemic/vissl) to work on a proof of
concept and added a kwcoco data source. I would think I could do something as
simple as pointing the config at the kwcoco file, but the current structure
seems to force me to add these datasets to a registry file, and I don't see the
rationale for why that would be necessary. Without registering the dataset I
get:
KeyError: "Dataset 'kwcoco' is not registered! Available datasets are: airstore_imagenet, ... google-imagenet1k-per10"
I have to add a line to dataset_catalog.json and then give extra information
like a dataset name and source:
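Concretely, that means config overrides along these lines (a sketch; DATA_SOURCES, DATASET_NAMES, and DATA_PATHS are real VISSL options, but the kwcoco source and the dataset name come from my proof-of-concept fork):

config.DATA.TRAIN.DATA_SOURCES=[kwcoco] \
config.DATA.TRAIN.DATASET_NAMES=[my_kwcoco_dataset] \
config.DATA.TRAIN.DATA_PATHS=[/path/to/my_train_bundle/data.kwcoco.json]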
I would think it would be possible to make something like DATASET_NAMES and
DATA_SOURCES coercible for certain types of data sources. For instance,
because the path specifies a kwcoco file, it is possible for the program to
realize it is pointing to a kwcoco source, and because there does exist the
concept of a kwcoco dataset name, it could pull that from the file if it wasn't
specified.
Note, if the dataset paths were passed to kwcoco.CocoDataset.coerce instead
of kwcoco.CocoDataset itself, it would also be possible to specify a path
like special:shapes8, which kwcoco coerce interprets as a command to generate
(or reuse previously cached) toy data. This would be an extension of the
currently existing "synthetic" argument used by vissl.
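As a small sketch of how that coercion behaves in Python (assuming the current kwcoco API and the toy bundle generated above):

import kwcoco

# An on-disk manifest and a "special" toydata key are handled the same way
dset_real = kwcoco.CocoDataset.coerce('my_train_bundle/data.kwcoco.json')
dset_toy = kwcoco.CocoDataset.coerce('special:shapes8')
print(len(dset_toy.dataset['images']))  # 8 generated toy images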
Also note, I'm aware of the memory-leak issues (e.g.
https://github.com/pytorch/pytorch/issues/13246) and pickle overhead that can
stem from loading a very large manifest of images into memory as an attribute
of a dataloader, which is why kwcoco supports a (currently experimental, but
functional) SQLite view of the underlying dataset. This caches a read-only
sqlite3 view of the data, so when the torch dataset is pickled, only the path
to the database file is actually copied between processes; a new connection is
opened in each forked process.
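A rough sketch of how that is used (view_sql is the experimental API I'm referring to, so treat the exact call as an assumption):

import kwcoco

dset = kwcoco.CocoDataset('my_train_bundle/data.kwcoco.json')
# Builds (or reuses) a read-only sqlite3 cache of the manifest. Pickling an
# object that holds this view only copies the database path, and each forked
# dataloader worker reopens its own connection.
sql_dset = dset.view_sql()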
Required Changes
I think the general idea in this proposal is probably too big for a single
reviewable PR, and I also think it might require some design forethought.
For basic support of kwcoco, it is clear from the docs how I can write a
kwcoco data source and how it can be registered as a data source.
What is not clear is what the dataset_catalog.json is doing here and what
role it would play, if any at all. Currently, just to make the simclr example
run on my data I added an item:
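The item looks roughly like this (a sketch following the existing catalog format; the dataset name and paths are mine, and the second, label-path slot is unused for SSL pretraining):

"my_kwcoco_dataset": {
    "train": ["/path/to/my_train_bundle/data.kwcoco.json", "<unused>"],
    "val": ["/path/to/my_test_bundle/data.kwcoco.json", "<unused>"]
}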
But I really don't have a good understanding of what this is trying to do. Is
this registering a specific dataset or a dataset format? The docs seem to
indicate it is a dataset name (which seems not extensible if every new dataset
needs a new entry even when it is in a known format), but the docs I've seen
[3] aren't very clear on this.
Ultimately, it would be really cool if I could slim down the monster of a
command I've written up there by 4 lines and write something like:
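(A hypothetical sketch; the name and source-type introspection shown here is the proposal, not current VISSL behavior.)

python tools/run_distributed_engines.py \
    config=pretrain/simclr/simclr_8node_resnet.yaml \
    config.DATA.TRAIN.DATA_PATHS=[/path/to/my_train_bundle/data.kwcoco.json] \
    config.DATA.TEST.DATA_PATHS=[/path/to/my_test_bundle/data.kwcoco.json]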
where the only dataset lines are simply specifying the paths, and the "source
type" and "name" are introspected if not specified.
Summary
In short, I'm looking to improve the data loading for VISSL, ideally using the
kwcoco library and manifest format I've been developing. Given something in
this format it should be possible to "just run" self-supervised learning by
pointing a command line tool at the dataset. Thus the onus is no longer to
"conform to a structure specified by VISSL to use your data"; instead it
becomes "VISSL supports datasets that conform to this standardized structure".
I'm wondering what the thoughts on this proposal are, and what challenges I
might be overlooking.
References:
Also working on this with me is @cfunk1210