Closed · maelle closed this issue 3 years ago
I think we may want to keep a mention of it in the reviewing guide, but only as an optional step: especially if the maintainer/submitter makes changes, the reviewer may want to run `gp` again to see how the results change, yes?
Running and deploying `goodpractice` on Travis is good to go: https://github.com/ropensci/travis-goodpractice. Next is updating a Heroku bot to run the checks and ping the results back; I will be asking editors for feedback.
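For context, a minimal Travis setup along these lines might look like the sketch below (this is an assumption about the shape of such a config, not the actual contents of ropensci/travis-goodpractice):

```yaml
# .travis.yml — hedged sketch, not the actual travis-goodpractice config.
# Uses Travis's built-in R support to install the package's dependencies,
# run the standard build, and then print a goodpractice report.
language: r
cache: packages

r_packages:
  - goodpractice

after_success:
  - Rscript -e 'print(goodpractice::gp("."))'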
Alternative approach to running `goodpractice` on Travis in Locke Data's pRojects: `goodpractice` and `devtools::spell_check` are run on Travis for pRojects. Pros: easier than using AWS I think? Cons: maybe it's a bit too much to have the report built at every commit to master, and besides, would we ask authors to set this up before submitting?
> maybe it's a bit too much to have the report built at every commit to master

Yes, I think that's a bit too much.

> would we ask authors to set this up before submitting?

Seems a bit cumbersome. I'd like it if we could just run it automatically on submission, or at some point in the submission process, e.g., when we apply the editor checks label or similar.
We should stop asking reviewers to run goodpractice.
fixed in dev
Currently we recommend the use of `goodpractice` in the reviewing guide. If we start having `goodpractice` reports from Travis hooked to the submission, do we expect reviewers to run `goodpractice::gp`? Moreover, isn't it already a duplication of effort, since we post `goodpractice` results in the submission thread?

I understand that we want reviewers to use e.g. `devtools::check` because of different OS/locale/etc., but does this apply to `goodpractice`? It takes a while to run and might be useless here?
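For reference, what the reviewing guide currently asks of reviewers amounts to something like the following (a minimal sketch, assuming the package source has been checked out into the current working directory):

```r
# Hedged sketch of a reviewer running goodpractice locally.
# install.packages("goodpractice")
library(goodpractice)

g <- gp(".")   # runs R CMD check plus lintr, covr, cyclocomp, etc.
g              # prints the advice list for the reviewer to act on
```

This is the slow, duplicated step in question: `gp()` re-runs the full check suite locally, which is exactly what a Travis-generated report posted to the submission thread would make redundant.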