jon-bell opened this issue 5 years ago
In my opinion, if authors do not submit the complete dataset, they should be required to justify that decision. For very large datasets, they can always upload them to archive.org and additionally provide a subset that allows reviewers to test the provided scripts. We used the following formulation in the MSR 2019 Mining Challenge CfP:
Already upon submission, authors can privately share their anonymized data and software on preserved archives such as Zenodo or Figshare (tutorial available here). Zenodo accepts up to 50GB per dataset (more upon request). There is no need to use Dropbox or Google Drive. After acceptance, data and software should be made public so that they receive a DOI and become citable. Zenodo and Figshare accounts can easily be linked with GitHub repositories to automatically archive software releases. In the unlikely case that authors need to upload terabytes of data, Archive.org may be used.
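To make the Zenodo route concrete, here is a minimal sketch of how a private deposit could be scripted against the Zenodo REST deposit API (see developers.zenodo.org). The access token, file name, and metadata values are placeholders, not part of the original discussion:

```python
import requests

ZENODO_API = "https://zenodo.org/api"
TOKEN = "YOUR_ZENODO_TOKEN"  # personal access token with deposit scope (placeholder)

# 1. Create a new (still private) deposition.
r = requests.post(f"{ZENODO_API}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
deposition = r.json()

# 2. Upload the dataset archive to the deposition's file bucket.
bucket_url = deposition["links"]["bucket"]
with open("dataset.tar.gz", "rb") as fp:  # placeholder archive name
    requests.put(f"{bucket_url}/dataset.tar.gz",
                 data=fp, params={"access_token": TOKEN}).raise_for_status()

# 3. Attach minimal metadata; publishing (which mints the DOI) can wait until acceptance.
metadata = {"metadata": {
    "title": "Replication package (anonymized)",
    "upload_type": "dataset",
    "description": "Dataset and scripts submitted for double-blind review.",
    "creators": [{"name": "Anonymous"}],
}}
requests.put(f"{ZENODO_API}/deposit/depositions/{deposition['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()
```

After acceptance, the deposition can be published from the Zenodo web interface (or via the API), at which point it becomes public and citable with a DOI.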
tutorial available here
It’s here on my website: https://ineed.coffee/5205/how-to-disclose-data-for-double-blind-review-and-make-it-archived-open-data-upon-acceptance/
Thanks Daniel, forgot to copy the link.
Must the artifact provide data and tools to replicate ALL experiments in a paper, or is it allowable to scope an artifact to consider only part of the claims?
By default, yes, unless sharing is prevented by other constraints (e.g., IP restrictions). The burden should be on the authors to justify why some parts of the artefact were not made public.
Who decides (authors, reviewers or chairs) what claims in a given paper should be supported by the artifact?
All claims in a paper must be supported by the artefact. The authors are responsible for either ensuring this or explaining why they cannot. There is no need for anyone else to decide.
What should we consider "too much data" or "too long of an experiment" that can’t be submitted in full for artifact evaluation? For instance, one researcher might consider a 2GB dataset too large to submit in full, while another might submit a 2TB dataset.
We could set a round maximum artefact size of 2^32 bytes :-) Seriously, that should be up to the authors.
Whatever the criterion for "too big" is, what process should authors follow to submit some subset of their artefact for evaluation when the full artefact is too large to submit?
A representative sample must be extracted from the full dataset. The sampling technique must be documented by the authors and be subject to evaluation during artefact review. The tools that compose the artefact must be able to work with the sample and produce similar results, within ranges that the authors describe and explain.
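As an illustration only, here is a sketch of drawing a reproducible stratified sample with pandas. The file names, the "project" stratification column, and the 1% fraction are illustrative assumptions, not something this thread prescribes:

```python
import pandas as pd

SEED = 20190501   # fixed seed so the sample itself is reproducible
FRACTION = 0.01   # sampling fraction; document and justify the choice

# Load the full dataset (placeholder path) and draw a stratified sample,
# keeping the same proportion from each stratum (here, per project).
full = pd.read_csv("full_dataset.csv")
sample = (full.groupby("project", group_keys=False)
              .apply(lambda g: g.sample(frac=FRACTION, random_state=SEED)))
sample.to_csv("review_sample.csv", index=False)

# The artefact's analysis scripts should then be re-run on review_sample.csv,
# reporting how far the results deviate from those on the full data.
```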