I came across an interesting discussion: What are some common things you look at while reviewing Pull requests?

While some of the suggestions aren't necessarily relevant to EMCRs in academia, we could list some of these ideas as things to consider asking a peer reviewer to assess, in particular high-level questions such as "are there any opportunities here to teach/learn?"
For example, the top-rated comment (by iaalaughlin) suggests:
- Does the code do what it's supposed to do?
- Are the tests accurate and do they work?
- Are the assumptions that are made accurate?
- Is the code commented and formatted to standard?
- Is the logic sound?
- Does the code (and test cases) cover the variety of use cases?
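To make the last two questions more concrete, here is a minimal sketch (the function, its assumption about negative readings, and the values are all invented for illustration) of what stating assumptions explicitly and covering more than the "happy path" in tests might look like:

```python
import math
import unittest


def mean_concentration(measurements):
    """Return the arithmetic mean of a list of concentration readings.

    Assumption (hypothetical): negative readings indicate sensor error
    and are excluded; an empty or all-invalid list returns NaN rather
    than raising, so downstream code can skip the point.
    """
    valid = [m for m in measurements if m >= 0]
    if not valid:
        return math.nan
    return sum(valid) / len(valid)


class TestMeanConcentration(unittest.TestCase):
    def test_typical_values(self):
        self.assertAlmostEqual(mean_concentration([1.0, 2.0, 3.0]), 2.0)

    def test_negative_readings_are_excluded(self):
        # Checks the stated assumption, not just the typical case.
        self.assertAlmostEqual(mean_concentration([2.0, -1.0, 4.0]), 3.0)

    def test_empty_input_returns_nan(self):
        self.assertTrue(math.isnan(mean_concentration([])))


if __name__ == "__main__":
    unittest.main()
```

A reviewer working through the checklist can then ask whether the docstring's assumptions match what the project actually needs, and whether each assumption has a corresponding test.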
Other suggestions include:
- How readable is the code? Is it readily understandable? Are variable and function names helpful? Are there useful comments?
- Is there a better way (e.g., more efficient, less code, use an existing module) to solve the problem?
- What happens when the code doesn't work?
- Are there written (and successful) unit tests? If the code can't be tested, can it be refactored to make it testable?
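The testability question often comes down to structure. Below is a sketch, using a hypothetical file-summarising function, of the kind of refactor a reviewer might suggest: separating the pure computation from the file I/O so the logic can be unit tested without touching the filesystem.

```python
from pathlib import Path


# Hard to test: reading, computing, and writing are tangled together,
# so any test would need real files on disk.
def summarise_file_untestable(in_path, out_path):
    values = [float(line) for line in Path(in_path).read_text().splitlines()]
    Path(out_path).write_text(str(sum(values) / len(values)))


# Easier to test: the computation is a pure function, and the I/O is a
# thin wrapper around it.
def mean_of_values(values):
    """Pure logic: can be unit tested without the filesystem."""
    return sum(values) / len(values)


def summarise_file(in_path, out_path):
    values = [float(line) for line in Path(in_path).read_text().splitlines()]
    Path(out_path).write_text(str(mean_of_values(values)))


def test_mean_of_values():
    # A reviewer (or CI) can run this with pytest; no files required.
    assert mean_of_values([1.0, 2.0, 3.0]) == 2.0
```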
We can also expand this to reproducible/open science concerns:
- Can the reviewer easily set up a compatible environment and run the code?
- Are the necessary instructions/details included, and are they explained clearly?
- Can the reviewer reproduce key results or verify that test cases pass? (A sketch of such a check follows this list.)
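One lightweight way to support the last point is to ship a small "reproducibility test" alongside the analysis code. The sketch below is entirely hypothetical (the expected value, column names, and data are invented), but it shows the general idea: re-run a key calculation and compare it to the reported result within a tolerance.

```python
import csv
import io
import math

# Hypothetical "key result" reported in the manuscript: a mean reaction
# time of 412.5 ms.
EXPECTED_MEAN_RT_MS = 412.5


def mean_reaction_time(csv_file):
    """Compute the mean of the rt_ms column from an open CSV file."""
    rows = csv.DictReader(csv_file)
    times = [float(row["rt_ms"]) for row in rows]
    return sum(times) / len(times)


def test_key_result_reproduces():
    # In a real project this would open the raw data file shipped with
    # the repository; here an in-memory CSV stands in for it.
    raw_data = io.StringIO("participant,rt_ms\np1,400\np2,410\np3,427.5\n")
    # A tolerance guards against tiny floating-point differences across
    # platforms or library versions.
    assert math.isclose(mean_reaction_time(raw_data), EXPECTED_MEAN_RT_MS)
```

A check like this doubles as documentation: it tells the reviewer which result matters and exactly how it was computed.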
As we flesh out other sections of the book, we can link these suggestions to the relevant section(s).