davclark opened 8 years ago
I tend to think of the broad class of computational research as spanning everything from Simulation & Modeling to Data Analysis.
Maybe we could change the phrase to "(ideally applied to solve at least part of some real-world research problem)"?
On the other issue: verification is always going to be a problem when some resource needed as part of the processing is hard to access. I think all we can do is verify what we can, report what we have verified, and perhaps add any opinions we have on the parts we can't verify.
Thanks @danielskatz - I like simple.
You're proposing one end of a spectrum on the verification issue. An intermediate option would be to submit a video of the functioning software. A harder one would be to ask another person to build and use your software, and attest on the issue that they have done so.
I'd lean towards the "low bar" as well, particularly because a corrigendum can be added in the (vanishingly unlikely) case of misrepresentation.
openjournals/joss-reviews#47 provides another example of where the guidelines may be misleading, even if there's not as much of a "problem" with them as in openjournals/joss-reviews#40. In the first case, a potential reviewer got the impression that they'd need the hardware in order to fully evaluate the submission.
Also, to be clear, openjournals/joss-reviews#40 was resolved using the "low bar" criterion. So, we're at least developing a body of case law even if there's no formal policy change ;)
Generally, I think we can accept a publication when the author states that their contribution is significant and valuable to research. Reviewers do not always understand the value of a work, and time will weed out the useless contributions. With JOSS's open review model, I think it is sufficient to have the author confirm that the work is of value, and then let others cite it. We have a truly open review model here, and the main and essential job of reviewers is to check whether the claims that are made are true. Any improvements to code, deployment, and licensing are also very valuable.
I brought this point up in openjournals/joss-reviews#40, and @arfon suggested I open an issue here.
Currently, the review guidelines include the phrase "(ideally to solve real-world analysis problems)". More broadly, the guidelines are clearly tuned to analysis software. There are definitely high-impact software systems for data collection: ODK, Purple Robot, funf... in addition to the novel software in the above issue.
Most problematic is that full verification of functioning software may not be possible in these cases. This calls for some consideration of policy, and more specifically and immediately, a decision about how to handle the above issue.
I would propose a (partially) separate set of guidelines for analysis vs. "physical world" data collection software. I'll hold off on writing more specifics until I hear at least some buy-in on this idea.