openjournals / joss

The Journal of Open Source Software
https://joss.theoj.org
MIT License

Reviewer checklist vs. JOSS acceptance criteria | (When) Can reviewers recommend rejection? #1395

Open dataspider opened 2 days ago

dataspider commented 2 days ago

I am currently reviewing two submissions in the computational social sciences and digital humanities, edited by @sappelhoff and @sbenthall (adding you here because I wasn't sure where to ask the question), which led me to a more general question.

The JOSS homepage includes the following as submission requirements:

- Be feature-complete (no half-baked solutions) and be designed for maintainable extension (not one-off modifications).
- Minor 'utility' packages, including 'thin' API clients, and single-function packages are not acceptable.

I am also aware that the JOSS review is checklist-based, and that authors are given as much time as they need to implement the improvements required to satisfy the checklist.

But how should reviewers handle submissions that match one of the exclusion criteria stated above? Where are these exclusion criteria reflected in the reviewer checklist?

Perhaps the best-matching criterion in the current checklist is "Substantial scholarly effort" (although that does not even appear in the current checklist on the JOSS website?!), but that criterion discounts the fact that some software might be easy to write for some people but hard for others (especially those without any training in computer science or software development). At the same time, software that fails the requirements quoted above probably should not receive the approval stamp of a software journal, and I am a bit concerned that we will inadvertently lower the software-quality bar, incentivize "cheap" publications, and hinder the emergence of better engineering practices in fields where computational methods are still new.

I am generally happy to help people improve their software and their software-development practices, and I want to be supportive of researchers who are just picking up computational methods, but I also feel that we should be able to draw a line somewhere. (For example, the code I am currently reviewing strongly reminds me of the "throwaway" code I regularly write as part of my own research [dressed up with some documentation], and which is published in reproducibility packages for individual papers, but which I would never consider submitting to JOSS, precisely because it meets the exclusion criteria highlighted above.) Any guidance or thoughts on this are much appreciated!

sappelhoff commented 15 hours ago

Thanks for raising your concerns @dataspider.

But how should reviewers handle submissions that match one of the exclusion criteria stated above?

Reviewers should voice their concerns as you have done (either on GitHub directly, or privately to the editor); the handling editor would then confer with the track editor, and together they may trigger a "scope review". Usually a scope review happens before the submission is passed on to peer reviewers, but since further details can surface during review, it may also be triggered later on. In a scope review, other JOSS editors look at the project and vote "in scope" or "out of scope" (optionally providing reasons).

If a submission is voted "in scope", it cannot be rejected. Reviewers and editors should instead work with the author(s) on which steps need to be taken to check off the items on the reviewer checklist.

Submissions that are voted "out of scope" are usually closed.

For the submission that I am editing (and that you are likely referring to), @samhforbes is the track editor. I have also reached out to them on another channel, and hopefully we can soon get back to you with more information on how to proceed.