[Open] mam10eks opened 4 months ago
Dear all, I would be open to making a first proposal for an implementation here.
Dear all, I started a draft pull request (only to indicate that there is some progress): https://github.com/allenai/ir_datasets/pull/269
Mainly documentation TODOs are pending, but since the deadline is close, this might already be useful for others even though the documentation is not yet finalized.
That is, the main functionality for iterating over documents should already work (e.g., as covered in the unit tests):
```python
import ir_datasets

for doc in ir_datasets.load('msmarco-document-v2.1/segmented').docs_iter():
    print(doc)
    break
```
Awesome, thanks! I'll take a look at it tomorrow and see if I can tick some of the other tasks :)
Dataset Information:
It would be awesome to have the document corpus (and its segmented counterpart) used in TREC RAG 2024 integrated into ir_datasets. Judging from the description on the web page, adding it should be no problem; random access to documents should also be very efficient, since the file and byte offset are already encoded in the document identifiers.
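For illustration, a small sketch of how the encoded location could be recovered from an identifier. The id format shown (e.g. `msmarco_v2.1_doc_17_1234`) is an assumption based on the MS MARCO v2 convention, and the helper is hypothetical, not an existing ir_datasets API:

```python
def parse_doc_id(doc_id: str) -> tuple[str, int]:
    """Split a document id into its bundle file name and starting byte offset.

    Assumption: the component after the last underscore is the byte offset
    within the bundle file, and everything before it names that file.
    """
    bundle, _, offset = doc_id.rpartition("_")
    return bundle, int(offset)

# e.g. parse_doc_id("msmarco_v2.1_doc_17_1234") -> ("msmarco_v2.1_doc_17", 1234)
```

With the offset in hand, no separate lookup index would be needed for random access.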
The only question I have: since the document identifiers contain the offset in the file where a document starts (but not where it ends), is there perhaps already functionality that seeks to the start and reads the JSON entry until the closing bracket? If not, I could add this as well, with unit tests; that should be no problem.
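A minimal sketch of the seek-and-read idea, assuming the bundles are JSONL files in which each document is one JSON object per line starting at the encoded offset (the function name and signature are illustrative, not an existing ir_datasets API):

```python
import json

def read_doc_at(path: str, byte_offset: int) -> dict:
    """Seek to the byte offset where a document starts and parse one JSON entry.

    Assumes JSONL layout: each document is a single JSON object on its own
    line, so reading up to the newline also reaches the closing bracket.
    """
    with open(path, "rb") as f:
        f.seek(byte_offset)
        return json.loads(f.readline())
```

If documents could span multiple lines, this would instead need an incremental JSON parser that stops at the matching closing bracket.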
Links to Resources:
Dataset ID(s) & supported entities:
- `msmarco-document-v2.1`: for the original documents
- `msmarco-document-v2.1/segmented`: for the segmented documents

Checklist
Mark each task once completed. All should be checked prior to merging a new dataset.
- [ ] Dataset definition (in `ir_datasets/datasets/[topid].py`)
- [ ] Tests (in `tests/integration/[topid].py`)
- [ ] Metadata generated (using the `ir_datasets generate_metadata` command, should appear in `ir_datasets/etc/metadata.json`)
- [ ] Documentation (in `ir_datasets/etc/[topid].yaml`)
- [ ] Downloadable content registered (in `ir_datasets/etc/downloads.json`)
- [ ] Download verification action (in `.github/workflows/verify_downloads.yml`). Only one needed per `topid`.
- [ ] … `downloads.json`.

Additional comments/concerns/ideas/etc.