Open kjappelbaum opened 2 years ago
Here's a tool that might already address some of the checkmarks https://github.com/fair-software/howfairis
I saw that. It doesn’t do what I want, and I don’t like how it does it. With this kind of stuff I would much rather rebuild it from scratch.
I feel it could be similar to checkcif, i.e. highlight potential issues for an expert reviewer (and required by journals).
I think one could check off many of the items on the JOSS reviewer checklist automatically.
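To make this concrete, here is a minimal sketch of what a few automated checklist heuristics could look like. The file names and directory conventions checked here are assumptions for illustration, not part of the JOSS tooling; real checks would need to be far more robust (parsing packaging metadata, actually running the test suite, etc.).

```python
from pathlib import Path

def check_repo(repo: Path) -> dict:
    """Hypothetical heuristics for a few JOSS-style checklist items.

    Returns a dict mapping a check name to a boolean. These are crude
    presence checks only, meant as a sketch of the idea.
    """
    files = {p.name.lower() for p in repo.iterdir() if p.is_file()}
    return {
        # "Does the repository contain a plain-text LICENSE file?"
        "license": any(f.startswith("license") for f in files),
        # "Is there a README with basic documentation?"
        "readme": any(f.startswith("readme") for f in files),
        # "Are there automated tests?" (assumes a tests/ directory)
        "tests": (repo / "tests").is_dir(),
        # "Is CI configured?" (assumes GitHub Actions workflows)
        "ci": (repo / ".github" / "workflows").is_dir(),
    }
```

Each check would map onto one checklist item, so an expert reviewer could see at a glance which boxes are plausibly satisfied and which need a closer look.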
Additionally, one might try:
As you suggested, one might also want to add subject-specific checks. For ML, a quite domain-specific check would be whether the train and test sets are indeed independent.
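A first sanity check along these lines could simply look for rows that appear verbatim in both splits. This sketch (my own illustration, not an existing implementation) only catches exact duplicates; a real leakage check would also need to handle near-duplicates and group leakage.

```python
def overlap_fraction(train, test):
    """Fraction of test rows that also appear verbatim in the train set.

    Rows are sequences of hashable values (e.g. lists of features).
    Exact-duplicate detection only: a cheap first sanity check, not a
    full train/test independence analysis.
    """
    train_rows = {tuple(row) for row in train}
    return sum(tuple(row) in train_rows for row in test) / len(test)

# Example: one of the two test rows also occurs in the train set.
train = [[1, 4], [2, 5], [3, 6]]
test = [[1, 4], [9, 9]]
# overlap_fraction(train, test) -> 0.5
```

A nonzero overlap would not prove the splits are dependent in a statistical sense, but it is exactly the kind of red flag such a tool could surface for an expert reviewer.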