-
**Is your feature request related to a problem? Please describe.**
I have been working with wandb artifacts to manage my data for some time now. I highly appreciate the efficient handling of large datasets (x10Gbs), AP…
-
For inclusion:
* [ ] CDIAC 1995-2020: [Zenodo](https://zenodo.org/records/10607765) Check if CRF data are used as input for the sectoral data. Also find out how this relates to the 1751-2020 data ava…
-
Hi,
I was reading your paper and would be interested in running experiments on the other datasets. In the provided link, I can find only the IMDBc dataset. Could you please provide the other datasets…
-
I saw the code below:

```python
train_file = 'Summary_tacred/tacred_train_augmented.csv'
test_file = 'Summary_tacred/tacred_test_augmented.csv'
```

The problem is that these two CSV files have not been uploaded…
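Until the files are uploaded, a quick existence check gives a clearer error than a traceback deep inside the loader. A minimal standard-library sketch (the path spellings are taken from the quoted snippet and may need adjusting):

```python
import os

def check_data_files(paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not os.path.exists(p)]

# Paths as quoted from the training script (spelling assumed)
TRAIN_FILE = 'Summary_tacred/tacred_train_augmented.csv'
TEST_FILE = 'Summary_tacred/tacred_test_augmented.csv'

missing = check_data_files([TRAIN_FILE, TEST_FILE])
if missing:
    print(f"Missing data files: {missing}")
```

Running this before training makes it obvious whether the CSVs are simply absent from the checkout.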
-
Hi, I have a few questions on the workflow combining [llm-recipes](https://github.com/Nicolas-BZRD/llm-recipes) with [llm-distillation](https://github.com/Nicolas-BZRD/llm-distillation) to c…
-
This would be an easy way to let the user create a sort of template dataset containing all of the information that is usually common to all datasets created in an organisation or of one b…
-
Hi, thank you for your contribution.
I recently tried to reproduce the results of your code. Could you please publish the data used in your experiments? It would help me a lot if you made the datasets public. T…
-
How can I train it with my own datasets?
Do I need to split my data into separate folders like in GoPro Large?
In the GoPro Large folders, there is a text file `frames n offset 0.txt`; is this fi…
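For reference, a minimal sketch of how a GoPro-Large-style tree can be enumerated into training pairs. The `blur`/`sharp` subfolder names are an assumption based on the public GoPro dataset layout, not this repo's loader; adapt as needed:

```python
from pathlib import Path

def list_blur_sharp_pairs(root):
    """Collect (blur, sharp) frame pairs from a GoPro-Large-style tree:
    root/<sequence>/blur/*.png paired with root/<sequence>/sharp/*.png
    (subfolder names assumed; adjust to match the actual loader)."""
    root = Path(root)
    pairs = []
    for seq in sorted(p for p in root.iterdir() if p.is_dir()):
        blur_dir, sharp_dir = seq / "blur", seq / "sharp"
        if not (blur_dir.is_dir() and sharp_dir.is_dir()):
            continue  # skip sequences without both subfolders
        for blur in sorted(blur_dir.glob("*.png")):
            sharp = sharp_dir / blur.name  # same frame name in sharp/
            if sharp.exists():
                pairs.append((blur, sharp))
    return pairs
```

Pointing this at a folder of your own sequences, organised the same way, would show whether the pairing matches what the training script expects.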