vijaydwivedi75 / lrgb

Long Range Graph Benchmark, NeurIPS 2022 Track on D&B
MIT License

Adding LRGB to the HuggingFace hub #10

Open clefourrier opened 1 year ago

clefourrier commented 1 year ago

Hi! @migalkin suggested on Twitter adding your datasets to the HuggingFace hub, which I think is a super cool idea, so I'm opening this issue to see if you need any help with that!

Here is the step-by-step tutorial on how to do so. Ping me if you need anything in the process :hugs:

SauravMaheshkar commented 1 year ago

PascalVOC-SP

clefourrier commented 1 year ago

Amazing! Do you want me to create an LRGB org on the hub so all datasets can be under the same namespace?

SauravMaheshkar commented 1 year ago

> Amazing! Do you want me to create an LRGB org on the hub so all datasets can be under the same namespace?

Yes, thank you, that would be great. I also realized I uploaded .pickle files without processing. Will preprocess and update the dataset.

My HF username is: SauravMaheshkar

clefourrier commented 1 year ago

I did and I added you to it! Once the datasets are correctly processed, feel free to transfer them to the org namespace!

@rampasek @vijaydwivedi75 Would one of you want to be an admin of that? (If yes I would need your HuggingFace hub username)

SauravMaheshkar commented 1 year ago

I pre-processed and added all the PascalVOC datasets to the organization.

vijaydwivedi75 commented 1 year ago

Thanks a lot @SauravMaheshkar @clefourrier!

@clefourrier, sure. My username is vijaypradwi. [I will check the steps linked in the comments above for the HF datasets, as I haven't used them before :')]

clefourrier commented 1 year ago

@vijaydwivedi75 Added you as admin!

Feel free to ask any questions you need here :)

SauravMaheshkar commented 1 year ago

I pre-processed and added all the COCO-SP datasets to the organization.

SauravMaheshkar commented 1 year ago

I pre-processed and added the peptides-functional dataset to the organization.

SauravMaheshkar commented 1 year ago

I pre-processed and added the peptides-structural dataset to the organization.

SauravMaheshkar commented 1 year ago

I pre-processed and added the PCQM-Contact dataset to the organization.

That's all the datasets done ✅.

@clefourrier @vijaydwivedi75 can you folks go through the datasets and make sure they look good? Maybe then we can close this issue.

clefourrier commented 1 year ago

Thank you very much for your work! I think we're very close to being good, just two last points: 1) checking that the uploaded files are the pre-processed versions rather than the raw files, and 2) deciding how to handle a loading script.

SauravMaheshkar commented 1 year ago

1) I ran the pre-processing scripts on all the datasets and they output *.pt files. I assumed we wanted to upload the pre-processed datasets instead of the raw files, right?

2) How would you propose we work on the loading script? Are there any pre-existing scripts in the lrgb repository that can be used as a reference?

clefourrier commented 1 year ago

Regarding 1, I might be missing context, since I don't know LRGB that well: which preprocessing scripts did you use? (We do usually want the pre-processed datasets.)

Regarding 2, it depends on 1, I'd need to understand better what the preprocessing does to give you a hand :)

SauravMaheshkar commented 1 year ago

All the datasets have a process function in their respective classes within the dataset/ dir. I simply ran that and uploaded the processed datasets it produced.
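
For the VOC superpixels, for example, it was roughly the following (the import path, class name, and constructor arguments here are from memory, so treat this as a sketch rather than exact code):

```python
# Rough sketch, not exact code: the import path, class name, and
# constructor arguments are assumptions about the repo layout.
from dataset.voc_superpixels import VOCSuperpixels

# Instantiating a PyG-style dataset runs its process() method once and
# caches each processed split as a .pt file under the root directory.
for split in ["train", "val", "test"]:
    VOCSuperpixels(root="data/voc_superpixels", split=split)
```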

For 2, I'll refer to @vijaydwivedi75 for more context.

clefourrier commented 1 year ago

Ok, that's exactly what I needed, thank you!

I'm a bit in a rush today but I'll take the time to look at this in more depth on Monday (CET).

clefourrier commented 1 year ago

Hi @SauravMaheshkar! After talking a bit internally, the simplest way to convert the files will be to apply the following to the PyTorch files (it will upload them as Datasets objects with similar properties automatically):

```python
from datasets import Dataset
import torch

# A torch dataset file is a tuple which first describes the contents' shape,
# then stores the contents - we want the actual contents.
torch_dataset_info, torch_dataset = torch.load(<local path to pt file>)
hf_dataset = Dataset.from_dict(torch_dataset)

# This command will require you to be logged in, but will send the dataset automatically.
hf_dataset.push_to_hub("LRGB/<dataset name>", split=<dataset split>)
```

I'm very sorry I did not notice earlier that the files were saved as PyTorch objects. We could also develop loading scripts, but that is not the preferred solution in this case, as it would 1) require people wanting to use the datasets to have PyTorch installed and 2) make them unpickle files on their machines.
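
For what it's worth, once a dataset has been pushed this way, anyone should be able to load it back with only the `datasets` library installed, along these lines (the repo name here is just illustrative):

```python
from datasets import load_dataset

# The hub stores the converted data as plain Arrow/Parquet, so loading it
# back needs only `datasets` - no PyTorch and no unpickling.
dataset = load_dataset("LRGB/PascalVOC-SP", split="train")
print(dataset)
```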

clefourrier commented 1 year ago

Do you want to split the work: you convert half of them, and I convert the other half?

SauravMaheshkar commented 1 year ago

Sure, thanks a lot! I can take up the VOC superpixels, and maybe you can take up the COCO superpixels.

clefourrier commented 1 year ago

Perfect!

SauravMaheshkar commented 1 year ago
> hf_dataset.push_to_hub("LRGB/<dataset name>", split=<dataset split>)

Ran into the following error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/saurav/github/data/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub
    repo_info = dataset_infos[next(iter(dataset_infos))]
StopIteration
```

clefourrier commented 1 year ago

Hi @SauravMaheshkar, could you provide the full stack trace of the error, tell me which dataset this occurs on, and maybe print the hf_dataset object?

SauravMaheshkar commented 1 year ago

> Hi @SauravMaheshkar, could you provide the full stack trace of the error, tell me which dataset this occurs on, and maybe print the hf_dataset object?

Sadly, that is the entire stack trace (apart from the progress bar):

```
Pushing dataset shards to the dataset hub: 100%|████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4609.13it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/saurav/github/data/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub
    repo_info = dataset_infos[next(iter(dataset_infos))]
StopIteration
```

clefourrier commented 1 year ago

Hi again! I pinged people working on datasets, and your error message allowed us to identify a corner case when pushing to an already existing repo without dataset_info in the YAML tags, so thank you! :hugs:

A fix is being merged; once it's in datasets, you'll just have to update the lib and try again, and it should work seamlessly.

SauravMaheshkar commented 1 year ago

> Hi again! I pinged people working on datasets, and your error message allowed us to identify a corner case when pushing to an already existing repo without dataset_info in the YAML tags, so thank you! 🤗
>
> A fix is being merged; once it's in datasets, you'll just have to update the lib and try again, and it should work seamlessly.

Oh great, glad to help I guess 😅

clefourrier commented 1 year ago

Coming back to this! @SauravMaheshkar if you want to try again, the conversion script works now :smiley:

I've converted the COCO datasets with datasets 2.11.0, using:

```python
from datasets import Dataset
import torch

dataset_names = [your dataset names]

for dataset in dataset_names:
    for split in ["train", "val", "test"]:
        # Each .pt file stores (info, contents); we only need the contents.
        torch_dataset_info, torch_dataset = torch.load(
            f"/{path_to_your_folder}/{dataset}/{split}.pt"
        )
        hf_dataset = Dataset.from_dict(torch_dataset)
        hf_dataset.push_to_hub(f"LRGB/{dataset}", split=split)
```
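
If you want a quick round-trip check after pushing (reusing the same assumed names as above), each split can be reloaded and inspected:

```python
from datasets import load_dataset

# Hedged sanity check, reusing the assumed dataset_names from the snippet
# above: reload each pushed split and print its size.
for dataset in dataset_names:
    for split in ["train", "val", "test"]:
        hf_dataset = load_dataset(f"LRGB/{dataset}", split=split)
        print(dataset, split, hf_dataset.num_rows)
```
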
clefourrier commented 1 year ago

Hi @SauravMaheshkar, did you have the time to look at this?