openplantpathology / Reproducibility_in_Plant_Pathology

A systematic, quantitative review of articles that provides a basis for identifying what has been done so far regarding reproducibility in plant pathology research, with suggestions for ways to improve it.
https://openplantpathology.github.io/Reproducibility_in_Plant_Pathology

Dividing up the work #3

Closed: adamhsparks closed this issue 4 years ago

adamhsparks commented 8 years ago

How to organise our efforts for writing this manuscript?

adamhsparks commented 8 years ago

I've started the outline (finally, sorry for the lengthy delay) and added a few issues for comment.

To start with, can everyone comment on the document structure and proposed sections? https://github.com/adamhsparks/Reproducible-Research-in-Plant-Pathology/issues/1

Then, once we've agreed on that, let's divide the tasks for writing from there, using this Issue #3.

adamhsparks commented 7 years ago

How would you guys like to divide up the work?

I'd like to just assign a category to each of us and then we can get started looking for articles. We should keep a list so that, where categories overlap, we're sure we're capturing all aspects.

emdelponte commented 7 years ago

@adamhsparks yes, good idea. Which ones are mine? How many do we have? We need to move this forward. We need a list of other variables to capture from the journals (IF, country, page charges, open/restricted access, issues per year, presence/absence of instructions encouraging reproducibility, presence/absence of a supplementary material section, etc.) and from the articles. Are we going to focus on the reproducibility aspect of the analysis (data, code, etc.) or on the technical aspects of the methodological framework (the field/lab methods described earlier by @zachary-foster)? I would vote for leaving the latter out of this work.
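(For illustration only, a minimal sketch of how those journal-level variables could be captured in a tidy table; the column names and placeholder rows below are hypothetical, not a final coding scheme.)

```r
library(tibble)

# Illustrative schema only; values are placeholders, not real journal data
journal_attributes <- tibble(
  journal               = c("Journal A", "Journal B"),
  impact_factor         = c(NA_real_, NA_real_),
  country               = c(NA_character_, NA_character_),
  page_charges          = c(NA, NA),             # TRUE/FALSE
  open_access           = c(NA_character_, NA_character_),  # e.g. "open", "hybrid", "restricted"
  issues_per_year       = c(NA_integer_, NA_integer_),
  repro_instructions    = c(NA, NA),             # journal instructions encourage reproducibility?
  supplementary_section = c(NA, NA)              # supplementary material section offered?
)
```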

zachary-foster commented 7 years ago

@adamhsparks

I'd like to just assign a category to each of us and then we can get started looking for articles.

I liked how you randomly selected articles here. With a few modifications, I think this is the best way to select articles. I think we should then randomly assign papers to each of us; if we each looked at a single category, the reviewer would be confounded with the category.
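(A minimal sketch of what that random assignment could look like; the reviewer names, article IDs, and counts below are placeholders, not the actual assignment.)

```r
set.seed(42)  # fixed seed so the assignment itself is reproducible

# Hypothetical inputs: 200 selected articles and four reviewers
articles  <- paste0("article_", seq_len(200))
reviewers <- c("reviewer_1", "reviewer_2", "reviewer_3", "reviewer_4")

# Shuffle the articles, then deal them out evenly so no reviewer
# is tied to a particular journal or category
assignments <- data.frame(
  article  = sample(articles),
  reviewer = rep(reviewers, length.out = length(articles))
)

table(assignments$reviewer)  # check each person gets ~50 articles
```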

@emdelponte

how many do we have?

I think we decided on 200.

Need a list of other variables to capture from the journals

Yea, this is the main thing we need to decide on to move forward.

I would vote for leaving the latter out of this work.

Fine with me. This will not be applicable to all studies anyway. So, are we specifically focusing on computational reproducibility then?

Before we can divide things up, I think we need to make sure we agree on a procedure for randomly sampling articles. @adamhsparks had some code for that here that is a great start. If we are not trying to characterize specific journals (issue #2) and want to consider a wider range of IFs, maybe we should not take the top 20 journals but instead a random selection of journals, or the top 100 if we want to leave out journals with very low IFs.

I have added issues that address these specific goals. Let's discuss this further there. Once they are decided, we can assign each of us our set of articles here.

Paper/journal attributes: issue #5

Paper selection methods: issue #6

grunwald commented 7 years ago

I think we should select a random subset of 200 papers that include the term 'plant pathology'?

emdelponte commented 7 years ago

@grunwald the original idea was to get a sample (20 articles) from all journals that focus on plant pathogens/diseases, assuming we will be targeting Phytopathology. When using Google Scholar, how do we decide what to pick given the huge number of results we will get? Results are returned and ordered by Google's algorithm and are therefore biased towards impact, relevance, etc. The 20 journals are representative of geographic regions, impact, specialized/general scope, etc. Some correlations could perhaps be explored better with a fixed set of journals.
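(A minimal sketch of that fixed-journal sampling idea, drawing the same number of articles from each journal; the journal list, pool of candidate articles, and per-journal count below are placeholders, not the actual selection.)

```r
set.seed(2017)

# Placeholder journal list; the real list is the fixed set of
# plant pathology journals discussed above
journals <- paste0("journal_", 1:20)

# Hypothetical pool of candidate articles per journal
article_pool <- expand.grid(
  journal = journals,
  article = paste0("article_", 1:100),
  stringsAsFactors = FALSE
)

# Draw a fixed number of articles from each journal so every journal
# contributes equally, regardless of how many papers it publishes
per_journal <- 10
sampled <- do.call(
  rbind,
  lapply(split(article_pool, article_pool$journal),
         function(x) x[sample(nrow(x), per_journal), ])
)

nrow(sampled)  # 20 journals x 10 articles = 200
```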

grunwald commented 7 years ago

@emdelponte, I like your approach: focusing heavily on traditional plant pathology journals will cover our community of peers and provide impetus for them to become more open. My approach would dilute this. Let's go with your list of journals ...

adamhsparks commented 7 years ago

See Article Selection Methods and Evaluator Assignments

adamhsparks commented 4 years ago

Need to revise this section (https://github.com/openplantpathology/Reproducibility_in_Plant_Pathology/blob/83e631f28c89b03b3039d7166f581cdfd66c275b/vignettes/b1_assigning_articles.Rmd#L14) to cover the articles from two additional years from @emdelponte and @AlvesKS that I've integrated into the spreadsheet.

adamhsparks commented 4 years ago

Oops, I already covered that here: https://github.com/openplantpathology/Reproducibility_in_Plant_Pathology/blob/master/vignettes/b2_assigning_articles.Rmd. I do need to update this document, though.