ropensci / unconf18

http://unconf18.ropensci.org/

Discussion: Expanding peer review of code #37

Open noamross opened 6 years ago

noamross commented 6 years ago

rOpenSci has long been interested in incubating projects that adopt our approach to open peer review of code in areas outside our scope, such as in other languages or, especially, for implementations of statistical algorithms. A few unconf attendees (inc. @mmulvahill, @dynamicwebpaige, @jenniferthompson) have expressed interest in this, so it would be good to set aside time to discuss prospects for new code review projects. I suggest this would be a second-day, 60-90 minute lunch discussion rather than a full two-day project, but depending on people's interest, some of us could run with it!

goldingn commented 6 years ago

🙋 I also am very interested in this, for software review at Methods in Ecology and Evolution and other domain-specific journals publishing software descriptors. The main problem I've faced handling these papers at MEE is that the pool of potential reviewers has very little experience reviewing code and usability.

Lunch discussion sounds like a great idea!

jenniferthompson commented 6 years ago

Sounds like a plan! 🎉

mmulvahill commented 6 years ago

A colleague and I have been looking into ways to improve the availability/quality/dissemination of new biostats tools & methods. We started with a series of 30+ interviews ~2 yrs ago with biostatisticians and others with related experience (including @karthik). The overall outcome was that there's room for biostats-trained software devs within departments with some forethought (if funding is included in grants), and that rOpenSci had already made great progress on the curation and quality issues we also came across.

We've been working on the software dev side but not the curation/quality, so hearing you're interested in incubating other domains made my ears perk up.

Our focus has been on methods developed within CTSAs and the Biostats, Epi, & Research Design (BERD) arms of these institutional grants, for no other reason than that it's been our funding source. Combining forces with others from other domains would be 👍 👍 👍

Look forward to talking more -- lunch would be great!

seaaan commented 6 years ago

I would be interested in this too.

lauracion commented 6 years ago

I am also interested in this discussion - mostly as a listener I believe.

jhollist commented 6 years ago

While I won't be in Seattle, I am very interested in the outcome of this discussion.

We are currently working on implementing some level of code review in our internal EPA review process. My plan was to borrow heavily from rOpenSci onboarding. Will anxiously watch this issue over the next few weeks!

maurolepore commented 6 years ago

Count me in. I would love to learn more about the current review process and its potential for expansion.

boshek commented 6 years ago

@goldingn: I think this is quite a good problem statement.

The main problem I've faced handling these papers at MEE is that the pool of potential reviewers has very little experience reviewing code and usability.

I get the feeling that this is a bigger task than I realize, but I wonder about the feasibility of a suite of code review tools. Off the top of my head, I can think of the following packages that would be useful when evaluating code:

I wonder if it would be possible to develop a sort of devtools for reviewers (revtools?) whereby someone approaching a potential review project could expand on the tools available in pkgreviewr. If one could develop a clear API, it could help a possible reviewer by giving them the flexibility to ask some questions of the code itself. For example, lobstr::cst() comes to mind as a useful function to provide a reviewer when there is function call upon function call. The idea would be to provide flexible, rather than prescriptive, tools.
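To make the idea concrete, here is a minimal, hedged sketch of what one entry point in such a suite could look like. The function name review_snapshot() and its shape are hypothetical; the wrapped packages (covr, lintr, goodpractice) are existing tools a reviewer might draw on.

```r
# Hypothetical sketch only: review_snapshot() is invented for illustration;
# covr, lintr, and goodpractice are real packages a reviewer might wrap.
review_snapshot <- function(pkg_path = ".") {
  list(
    coverage      = covr::package_coverage(pkg_path),  # test coverage by file and line
    lints         = lintr::lint_package(pkg_path),     # style and common-bug checks
    good_practice = goodpractice::gp(pkg_path)         # broader package health checks
  )
}

# Usage: snap <- review_snapshot("~/reviews/somepkg"); snap$coverage
```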

maelle commented 6 years ago

@stephlocke started working on something similar in https://github.com/lockedata/PackageReviewR

stefaniebutland commented 6 years ago

@annakrystalli pkgreviewr mentioned 👆

goldingn commented 6 years ago

@boshek +1 for the name revtools!

Yeah, something like that would be super helpful (though even just a written guide/rubric to get people started would help in our case).

I had checking out pkgreviewr on my to-do list, but PackageReviewR looks awesome too!

noamross commented 6 years ago

Just bookmarking this: we had a conversation a bit ago with an organization interested in our review process. It wasn't public, but they asked good questions about initiating the process, so I just (very roughly) edited the notes down to our one-sided responses: https://docs.google.com/document/d/14m1Rkp4WKPpGn585r21g3xcb0mVDHy3Ov9N2qUdqetA/edit

noamross commented 6 years ago

And, many might have already seen these, but some relevant posts:

@goldingn we feel you; building and maintaining the reviewer base is one of the biggest challenges, and one I'm happy to chat more about. From the second post:

One of the core challenges and rewards of our work has been developing a community of reviewers. Reviewing is a high-skill activity. Reviewers need expertise in the programming methods used in a software package and also the scientific field of its application. (β€œFind me someone who knows sensory ecology and sparse data structures!”) They need good communications skills and the time and willingness to volunteer. Thankfully, the open-science and open-source worlds are filled with generous, expert people. We have been able to expand our reviewer pool as the pace of submissions and the domains of their applications have grown.

Developing the reviewer pool requires constant recruitment. Our editors actively and broadly engage with developer and research communities to find new reviewers. We recruit from authors of previous submissions, co-workers and colleagues, at conferences, through our other collaborative work and on social media. In the open-source software ecosystem, one can often identify people with particular expertise by looking at their published software or contribution to other projects, and we often will cold-email potential reviewers whose published work suggests they would be a good match for a submission.

We cultivate our reviewer pool as well as expand it. We bring back reviewers so that they may develop reviewing as a skill, but not so often as to overburden them. We provide guidance and feedback to new recruits. When assigning reviewers to a submission, we aim to pair experienced reviewers with new ones, or reviewers with expertise on a package’s programming methods with those experienced in its field of application. These reviewers learn from each other, and diversity in perspectives is an advantage; less experienced developers often provide insight that more experienced ones do not on software usability, API design, and documentation. More experienced developers will more often identify inefficiencies in code, pitfalls due to edge-cases, or suggest alternate implementation approaches.

I remember when JOSS launched, we both wanted to help and were worried about cannibalizing our own reviewer pool, but it's grown organically, and I think we could definitely use our own reviewer pool to help incubate a new organization again.

noamross commented 6 years ago

Expanding pkgreviewr and PackageReviewR, and perhaps integrating them, might be a full-blown unconf project if folks want to take it up.

stephlocke commented 6 years ago

I'm happy for PackageReviewR to be cannibalised into ropensci or worked on standalone. I'm not at the unconf this year but can facilitate/assist/mentor remotely.

annakrystalli commented 6 years ago

Sorry to arrive late to this party. Very happy for pkgreviewr to be integrated into a more comprehensive suite of tools. And absolutely love revtools as a name! 💜💯

noamross commented 6 years ago

I've been working on a tool to create a package diagnostic report. I had originally thought of it as giving us rOpenSci editors a quick scan and also providing a standardized build environment, but I've expanded its scope to try to cover numerous things that reviewers could use. I'm working on this here (draft report here), and I 100% intend to fold it into pkgreviewr or its successors. The thing I've gotten into recently is trying to provide a report about package functions: their relative complexity and their relationships to each other, using static code analysis.
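This isn't the tool itself, but as a rough sketch of the kind of static analysis involved (assuming codetools as the backend; the helper name below is invented for illustration), one could map which of a package's functions call which others:

```r
# Sketch (not the actual diagnostic tool): build a simple within-package call
# graph by statically inspecting each function's globals with codetools.
library(codetools)

package_call_graph <- function(pkg) {
  ns  <- asNamespace(pkg)
  fns <- Filter(function(nm) is.function(get(nm, envir = ns)), ls(ns))
  edges <- lapply(fns, function(nm) {
    callees <- findGlobals(get(nm, envir = ns), merge = FALSE)$functions
    intersect(callees, fns)  # keep only calls to functions defined in this package
  })
  names(edges) <- fns
  edges
}

# e.g. str(package_call_graph("stats"))  # which stats functions call which others
```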

Anyway, it sounds like we have two potential ideas here:

- a lunch discussion on expanding open peer review of code to new domains and organizations
- a possible unconf project on reviewer tooling (expanding and perhaps integrating pkgreviewr, PackageReviewR, and the diagnostic report above)

I'll open a new issue for the 2nd bullet.

laderast commented 6 years ago

I did mention the rOpenSci reviewing guidelines to the software working group in our Center for Data to Health (CD2H) project. This is a group that is encouraging software best practices in CTSA (Clinical & Translational Sciences Award) centers, and they are looking for a similar kind of review process.

They seemed very interested, and I'm happy to facilitate any conversations rOpenSci people might want to have with them.

maelle commented 6 years ago

First rOpenSci review thread ever https://github.com/ropensci/onboarding/issues/6

maelle commented 6 years ago

Package categories by @ldecicco https://owi.usgs.gov/R/packages.html

maelle commented 6 years ago

Bookdown (WIP) repo, cc @jenniferthompson: https://github.com/ropenscilabs/dev_guide. Suggestions welcome as issues (or in person).

lauracion commented 6 years ago

Hey folks, I totally missed this discussion during the unconf :cry: Is there any summary of what was discussed or any follow-up for this? I am still interested in learning more and being in the loop for this type of discussion. Thank you!