-
Thanks for sharing your excellent work!
I am a newcomer to the field of multi-label text classification.
I don’t know where to download train_data.json and test_data.json for Reuters-21578, and also …
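In case the files are not provided anywhere, the expected layout can often be inferred from the repo's loading code; a minimal sketch, assuming one JSON record per document with a multi-label `labels` list (the file layout and field names here are guesses, not this repo's confirmed schema):

```python
import json

# Assumed schema: a list of records, each holding the document text and its
# topic labels. Reuters-21578 is multi-label, so "labels" is a list.
train_records = [
    {"text": "Cocoa prices rose on continued showers in Bahia ...",
     "labels": ["cocoa"]},
    {"text": "U.S. grain exports climbed last week ...",
     "labels": ["grain", "wheat", "corn"]},
]

with open("train_data.json", "w", encoding="utf-8") as f:
    json.dump(train_records, f, indent=2)
```

Comparing a file like this against whatever the repo's dataloader parses is usually the quickest way to confirm the real format.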
-
Hi,
I have carefully read your paper and am highly interested in your excellent work. I have a question regarding the dataset preparation.
In the data preparation step, I downloaded the Sceneve…
-
Hi, there:
Can I ask how you processed the AID dataset from 600px down to 512px, and how you generated its LR data?
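For reference, a common recipe for this kind of preparation is a center crop from 600 to 512 followed by bicubic downsampling to produce the LR pair; the sketch below assumes that recipe with a ×4 scale, and may well differ from the authors' actual pipeline:

```python
from PIL import Image  # Pillow

def make_hr_lr(img, hr_size=512, scale=4):
    """Center-crop a 600x600 AID image to hr_size (HR), then bicubic-downsample
    by `scale` to produce the LR counterpart. Crop position, interpolation
    filter, and scale factor are all assumptions, not confirmed settings."""
    w, h = img.size
    left, top = (w - hr_size) // 2, (h - hr_size) // 2
    hr = img.crop((left, top, left + hr_size, top + hr_size))
    lr = hr.resize((hr_size // scale, hr_size // scale), Image.BICUBIC)
    return hr, lr

# Example on a blank 600x600 image
hr, lr = make_hr_lr(Image.new("RGB", (600, 600)))
print(hr.size, lr.size)  # (512, 512) (128, 128)
```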
-
Hello, your work is very nice and I'd like to reproduce it with the provided Jupyter notebooks.
Currently, I've downloaded the Lakh Pianoroll dataset (cleaned-5), cleansed_ids.txt, the LMD-matched MIDI files and …
-
First of all, congrats on your excellent work!
Since you did not provide detailed instructions for In The Wild Videos Data Preparation/Inference, could you please confirm whether the repo co…
-
I have news articles and their abstracts as strings in Python (without the `, and ` tags). How do I convert this dataset into the format needed to train the model?
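For what it's worth, many summarization repos expect one JSON object per line pairing the article with its abstract; the sketch below assumes that layout, and the key names ("article", "abstract") are hypothetical, so check the model's data loader for the exact schema it expects:

```python
import json

def write_jsonl(articles, abstracts, path):
    """Pair up article/abstract strings and write them as JSON Lines.
    Field names are hypothetical; match them to the loader's expectations."""
    with open(path, "w", encoding="utf-8") as f:
        for article, abstract in zip(articles, abstracts):
            f.write(json.dumps({"article": article, "abstract": abstract}) + "\n")

write_jsonl(["Full story text ..."], ["One-line summary."], "train.jsonl")
```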
-
Hi,
How are labels created for the datasets used in this work? The data generation code seems to cover only input preparation; some scripts include code to test labels, but none to create them.
-
Thanks a lot for your excellent work.
Can you upload your datasets and the corresponding preparation code?
Thanks in advance.
-
I just realized that of the 20,000 datasets we advertise, only about 2,500 are available; the rest are "in preparation".
It's unclear what that means, given that they are mostly months- or years-old uploa…
-
@walzimmer
Hi, thank you for your great work.
I want to write data preparation code for the Intersection dataset.
I succeeded in splitting the Intersection dataset into train and val as described in the README in tum-tra…
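As a sanity check for that step, a deterministic train/val split can be sketched like this; the 80/20 ratio and the seed are placeholders, not the repo's actual settings:

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Shuffle with a fixed seed and carve off a validation slice.
    Ratio and seed are illustrative defaults only."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

train, val = train_val_split([f"frame_{i:04d}" for i in range(100)])
print(len(train), len(val))  # 80 20
```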