I added a dataloader for the OmniPrint dataset, modeled on the existing Omniglot dataloader. Does it conform sufficiently to the pytorch-meta code structure? For now, I host the dataset on my personal Google Drive rather than pulling it from Kaggle, where it is officially hosted, to avoid adding a Kaggle dependency. The training split is the same across all print splits (meta1, meta2, ...), matching the OmniPrint source code.