codecheckers / paper

Manuscript about CODECHECK
https://codecheck.org.uk

Later works that are related #10

Open nuest opened 2 years ago

nuest commented 2 years ago

https://www.nature.com/articles/s41597-022-01143-6

A large-scale study on research code quality and execution. Ana Trisovic, Matthew K. Lau, Thomas Pasquier & Mercè Crosas. Scientific Data (2022).

sje30 commented 2 years ago

oo, great find!


nuest commented 2 years ago

Peer review: a flawed process at the heart of science and journals. Richard Smith, Journal of the Royal Society of Medicine (2006).


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/

nuest commented 2 years ago

The Peer-Review Crisis, by Colleen Flaherty, June 13, 2022:

The peer-review system, which relies on unpaid volunteers, has long been stressed. COVID-19 is making it worse—a lot worse. Possible solutions include paying reviewers or limiting revise-and-resubmits, but are these just Band-Aids on bigger structural problems?

https://www.insidehighered.com/news/2022/06/13/peer-review-crisis-creates-problems-journals-and-scholars

nuest commented 2 years ago


Reproducibility standards for machine learning in the life sciences. Benjamin J. Heil, Michael M. Hoffman, Florian Markowetz, Su-In Lee, Casey S. Greene & Stephanie C. Hicks. Nature Methods (2021).

To make machine-learning analyses in the life sciences more computationally reproducible, we propose standards based on data, model and code publication, programming best practices and workflow automation. By meeting these standards, the community of researchers applying machine-learning methods in the life sciences can ensure that their analyses are worthy of trust.

https://www.nature.com/articles/s41592-021-01256-7

Comment on this by Ben M.:

The authors propose three standards (gold, silver, and bronze) based on data, code, and model availability. Looks like a useful tool to help with peer review of papers using ML:

IMO: the classification could be useful to clearly communicate what a CODECHECK did (not) check.
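
A minimal sketch of how such a tiered result could be recorded alongside a check, assuming simplified criteria and hypothetical field names (neither Heil et al. nor the CODECHECK spec define this structure):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReproStandard(Enum):
    """Tier names from Heil et al. (2021); the criteria here are simplified."""
    BRONZE = "bronze"  # data, models, and code publicly available
    SILVER = "silver"  # bronze, plus dependencies installable in one step
    GOLD = "gold"      # silver, plus full rerun via a single command

@dataclass
class CheckOutcome:
    """Hypothetical record of what a CODECHECK did (not) verify."""
    data_available: bool
    code_available: bool
    model_available: bool
    deps_one_step: bool         # stand-in for the silver criteria (assumption)
    single_command_rerun: bool  # stand-in for the gold criteria (assumption)

    def standard(self) -> Optional[ReproStandard]:
        """Return the highest tier met, or None if below bronze."""
        if not (self.data_available and self.code_available and self.model_available):
            return None
        if self.deps_one_step and self.single_command_rerun:
            return ReproStandard.GOLD
        if self.deps_one_step:
            return ReproStandard.SILVER
        return ReproStandard.BRONZE

# Example: artefacts published and dependencies scripted, but no one-command rerun
print(CheckOutcome(True, True, True, True, False).standard())  # ReproStandard.SILVER
```

A field like this could then travel with the check's metadata (e.g. in codecheck.yml) so readers see at a glance which tier was actually verified, and which was not.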

nuest commented 1 year ago

https://www.brainiacsjournal.org/arc/pub/Craig2022MMEEPR


Scientists who engage in science and the scientific endeavor should seek truth with conviction of morals and commitment to ethics. While the number of publications continues to increase, the number of retractions has increased at a faster rate. Journals publish fraudulent research papers despite claims of peer review and adherence to publishing ethics. Nevertheless, appropriate ethical peer review will remain a gatekeeper when selecting research manuscripts in scholarly publishing and approving research applications for grant funding.

However, this peer review must become more open, fair, transparent, equitable, and just with new recommendations and guidelines for reproducible and accountable reviews that support and promote fair citation and citational justice. We should engineer this new peer-review process with modern informatics technology and information science to provide and defend better safeguards for truth and integrity, to clarify and maintain the provenance of information and ideas, and to rebuild and restore trust in scholarly research institutions. Indeed, this new approach will be necessary in the current post-truth era to counter the ease and speed with which mis-information, dis-information, anti-information, caco-information, and mal-information spread through the internet, web, news, and social media.

The most important question for application of new peer-review methods to these information wars should be ‘Who does what when?’ in support of reproducible and accountable reviews. Who refers to the authors, reviewers, editors, and publishers as participants in the review process. What refers to disclosure of the participants' identities, the material content of author manuscripts and reviewer commentaries, and other communications between authors and reviewers. When refers to tracking the sequential points in time for which disclosure of whose identity, which content, and which communication at which step of the peer-review process for which audience of readers and reviewers.

We believe that quality peer review, and peer review of peer review, must be motivated and maintained by elevating their status and prestige to an art and a science. Both peer review itself and peer review analyses of peer reviews should be incentivised by publishing peer reviews as citable references separately from the research report reviewed while crossreferenced and crosslinked to the report reviewed.

nuest commented 1 year ago

https://www.biorxiv.org/content/10.1101/2022.10.11.511695v2.full


A workflow reproducibility scale for automatic validation of biological interpretation results. Hirotaka Suetake, Tsukasa Fukusato, Takeo Igarashi, Tazro Ohta. https://doi.org/10.1101/2022.10.11.511695

nuest commented 1 year ago

https://www.nature.com/articles/s43588-022-00388-w


nuest commented 1 year ago

https://blogs.lse.ac.uk/impactofsocialsciences/2022/03/24/there-are-four-schools-of-thought-on-reforming-peer-review-can-they-co-exist/



https://researchonresearch.org/tpost/4vn3fxo0d1-the-future-of-peer-review-is-being-pulle


https://doi.org/10.31235/osf.io/8hdxu


Kaltenbrunner, W., Pinfield, S., Waltman, L., Woods, H. B., & Brumberg, J. (2022, January 22). Innovating peer review, reconfiguring scholarly communication: An analytical overview of ongoing peer review innovation activities. https://doi.org/10.31235/osf.io/8hdxu

nuest commented 1 year ago

https://www.nature.com/articles/d41586-022-03791-5


nuest commented 1 year ago

https://doi.org/10.1101/2022.08.08.503174


nuest commented 1 year ago

https://theplosblog.plos.org/2021/04/the-importance-of-early-career-researchers-for-promoting-open-research/

Another key finding was differences in acceptance of a mandatory code-sharing policy across research fields and career stages (as determined by the number of previous publications respondents had published). Medical researchers reported being less likely to submit to the journal if it had a mandatory code-sharing policy, as did researchers with more than 100 publications, whereas researchers with fewer than 20 published papers responded more positively towards submitting to the journal if it implemented a code-sharing policy. Other studies have found greater affinity for open research amongst early career researchers, including a 2021 peer-reviewed survey of Early Career Researchers (ECRs) within the Max Planck Society, which concluded ECRs seem to hold a generally positive view toward open research practices.

nuest commented 1 year ago

When should data and code be made available? Rachel Heyard, Leonhard Held. Significance (2022). https://doi.org/10.1111/1740-9713.01623


nuest commented 1 year ago


Crüwell, S., Apthorp, D., Baker, B. J., Colling, L., Elson, M., Geiger, S. J., Lobentanzer, S., Monéger, J., Patterson, A., Schwarzkopf, D. S., Zaneva, M., & Brown, N. J. L. (2023). What’s in a Badge? A Computational Reproducibility Investigation of the Open Data Badge Policy in One Issue of Psychological Science. Psychological Science, 0(0). https://doi.org/10.1177/09567976221140828

nuest commented 8 months ago

https://doi.org/10.1162/dint_a_00133


Limor Peer, Claudia Biniossek, Dirk Betz, Thu-Mai Christian: Reproducible Research Publication Workflow: A Canonical Workflow Framework and FAIR Digital Object Approach to Quality Research Output. Data Intelligence 2022; 4(2): 306–319. https://doi.org/10.1162/dint_a_00133