whedon closed this issue 2 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @cmbiwer, @mikapfl it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Software report (experimental):
```
github.com/AlDanial/cloc v 1.88  T=0.22 s (59.3 files/s, 31563.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                           9            275            220           5642
Qt                               1              0              0            732
Markdown                         1             10              0             27
DOS Batch                        2              0              0             19
-------------------------------------------------------------------------------
SUM:                            13            285            220           6420
-------------------------------------------------------------------------------
```

Statistical information for the repository '7cb6826d67a65cd13483005c' was gathered on 2021/07/29.

The following historical commit information, by author, was found:

```
Author                 Commits   Insertions   Deletions   % of changes
Frederik                     1            0        6702          14.27
Frederik Holm Gjørup        17        11786       13655          54.18
fgjorup                      6        14761          53          31.55
```

Below are the number of rows from each author that have survived and are still intact in the current revision:

```
Author                  Rows   Stability   Age   % in comments
Frederik Holm Gjørup    6137        52.1   1.5            2.70
```
PDF failed to compile for issue #3546 with the following error:
Can't find any papers to compile :-(
@fgjorup Could you describe how your co-authors contributed to the work? The git history doesn't show contributions from other people than you, but git histories of course only show one side of a larger work.
@whedon generate pdf from branch JOSS-submission
Attempting PDF compilation from custom branch JOSS-submission. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@fgjorup, reminder to answer @mikapfl's question when you get a chance, as well as the issue he opened on your repository: https://github.com/fgjorup/Reel/issues/3
@cmbiwer, please reach out if you have any questions about getting your review started.
> @fgjorup Could you describe how your co-authors contributed to the work? The git history doesn't show contributions from other people than you, but git histories of course only show one side of a larger work.
@mikapfl of course.
Professor Mogens Christensen is the supervisor of the project and has primarily contributed with scientific background, funding, and proof-reading of the paper.
Mathias Mørch has taken part in designing the layout of the interface and has been the primary tester of the program. He has also taken part in defining the format of the .xyy input files and has provided the test files available in the repository.
Naturally, the above contributions have not been logged by GitHub.
I hope that answers your question, otherwise I will gladly elaborate.
@rkurchin, sorry for the delay, I am finally back in the office from my vacation. Please let me know if there is anything else I need to address.
Thanks for checking in! I think you're all set, we're just waiting for @mikapfl and @cmbiwer to continue their reviews, so do be on the lookout for any further comments/questions from them either here or as issues/PR's in your repo.
I will take a deeper look this Friday.
:wave: @mikapfl, please update us on how your review is going (this is an automated reminder).
:wave: @cmbiwer, please update us on how your review is going (this is an automated reminder).
@fgjorup Could you provide a set of example files which I could use to run the program? I unfortunately don't have a set of XRD data files which would lend itself to an analysis with Reel.
@mikapfl There should already be a test_files folder in the Joss-submission repo, I hope those will do the trick.
I have finished taking a look at the software and can type up the checklist and my comments about the functionality, documentation, paper, and software sometime this week.
Though, there were a few crashes/errors I encountered. Either I or @mikapfl have already opened issues on the GitHub repo. I'd suggest those are addressed (i.e., issues #4 and #6).
Hi there @fgjorup, just checking in on this. Looks like @cmbiwer and @mikapfl have opened some issues in your repository. On one (https://github.com/fgjorup/Reel/issues/3), you responded but it doesn't look like that change has been made, and the others have no replies as yet...
Hi all, I see some issues have been closed. @cmbiwer and @mikapfl, are those resolved to your satisfaction? Don't forget to progress through the reviewer checklists!
@fgjorup, there's at least one issue still open as of now.
Review checklist for @cmbiwer
Conflict of interest
- [X] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.
Code of Conduct
- [X] I confirm that I read and will adhere to the JOSS code of conduct.
General checks
- [X] Repository: Is the source code for this software available at the repository url?
- [X] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
- [X] Contribution and authorship: Has the submitting author (@fgjorup) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
- [X] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
Functionality
- [X] Installation: Does installation proceed as outlined in the documentation?
- [X] Functionality: Have the functional claims of the software been confirmed?
- [X] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
Documentation
- [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
- [X] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
- [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
- [X] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
- [X] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
- [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Software paper
- [X] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
- [x] A statement of need: Does the paper have a section titled 'Statement of Need' that clearly states what problems the software is designed to solve and who the target audience is?
- [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
- [X] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
- [X] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
I've filled out the checklist above. Overall, I believe the software fills a need, and it is easy to imagine the software being useful to scientists. Several other projects have developed or are developing similar 2D heatmap visualizations in some fashion, so I would say the functionality is already recognized as an important development by the community. I liked the user interface of Reel, and I would say the author has created a nice-looking software product. However, this project is still missing some key items that are explicitly outlined in the checklist. Notably, there are no community guidelines, and the paper could give a clearer picture of Reel to the reader.
There are no community guidelines for contribution. The lack of community guidelines, together with the absence of code documentation and inline comments, will probably not invite many open-source contributions.
I would say the JOSS paper manuscript itself doesn't give an unfamiliar reader a clear image of what Reel does. I think the text in the summary is very nice. However, an image of the GUI (like in their documentation) would give a reader of the actual paper a much better idea of what Reel is in its statement of need. There also isn't a discussion of example usage or what real problems could be solved (for example, identifying where a phase change happens, or more generally, what unique thing Reel allows the user to see that they couldn't otherwise). The authors mention refinement programs but don't really state much about the current state of visualization in the field, which is the intent of Reel.
The instructions do show users how to manually load up the software and verify its functionality, so it does meet the JOSS requirements. However, there were already a number of issues, and perhaps some kind of automated testing would be beneficial to maintaining this project.
I'd say Reel is a nice program, but it's still missing some of the review checklist items. The author has already addressed several issues in the repo; perhaps they could address these final missing items as well.
EDIT: One inline edit above.
Thanks @cmbiwer for these detailed comments. @fgjorup, please do feel free to respond in a comment here to discuss anything, and let us know what changes you make in response.
@mikapfl, let us know your thoughts on any of this and also reminder to work through the rest of your own checklist when you have some time!
Hi @rkurchin and @cmbiwer, I have added a line about contributions and contact to the README.md; I hope that satisfies the requirements.
Regarding the manuscript, I would be happy to include an image of the GUI, perhaps with a comparison to other common visualization tools in the community. I will also expand a bit on the statement of need, how I expect the community to use the software, and how it compares to what is already available.
Should I already now revise the manuscript, or should I wait for feedback from @mikapfl ?
Up to you! @mikapfl, I think it would be useful for @fgjorup to have some of your feedback as well, here...any thoughts?
Hi,
regarding the manuscript, I think the summary is already good, and for me there is no need to replicate things already found in the docs. However, if I understand correctly, Reel is rather one building block in a "pipeline": while it is possible to view raw measurements, it really shines when viewing refinement results. In the docs, there is a list of supported file formats; maybe you could explain the ecosystem more in the paper, in particular which refinement software is "supported" by Reel and which gap Reel fills in the whole pipeline.
I will write a bit more regarding the other (non-paper) parts of the checklist and/or open more issues later.
Cheers,
Mika
Regarding your changes in the JOSS-submission branch, do you intend to merge this into master? At the moment, people just landing on Reel's GitHub presence don't get the various enhancements you already implemented.
Hi,
I've updated my checklist and reported some installation problems that should be rather straight-forward to fix.
I think Reel 1.0 is a fairly intuitively usable software which can be useful for analysis of preliminary results at the beamline as well as later on in the refinement process. However, I think to make Reel a solid foundation for future updates and maintenance, it needs automated tests. I don't agree with @cmbiwer that the instructions provided in the documentation fulfil the criteria for either automated tests or even clearly described steps for manual testing. A key functionality of Reel is that, according to the documentation, it is possible to analyse data supplied in 6 different formats, but at the moment, only the Reel custom format can be tested using the given instructions and data. Even there, expected test results are not clearly defined, so that more subtle errors can't be detected. As proper reading of file formats is often a source of errors, but also crucial for users, I think that testing should be done (preferably automatically) using all supported file formats. This is even possible with standard python tools for unit testing by testing the reading functions directly, so that more advanced testing tools for graphical user interfaces are not necessary.
Additionally, either automated tests or clear descriptions how to perform manual tests should be provided for testing Reel in a defined python environment. Some libraries Reel uses are under quite active development, and issues like https://github.com/fgjorup/Reel/issues/11 can be caught and fixed in time using testing strategies with clearly defined python environments.
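One common way to get such a defined Python environment is a pinned requirements file that the tests are always run against. A minimal sketch (the version numbers below are illustrative placeholders, not Reel's actual tested dependency versions):

```text
# requirements-test.txt -- pinned versions (placeholders for illustration)
numpy==1.21.2
matplotlib==3.4.3
PyQt5==5.15.4
pytest==6.2.5
```

Installing with `pip install -r requirements-test.txt` in a fresh virtual environment and then running the tests means a breaking change in a dependency shows up as a test failure in a known configuration, rather than as a user-facing crash.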
Cheers,
Mika
👋 @fgjorup, reminder to respond to these when you have some time!
I will be out of country most of next week, but I will try to find some spare time to look at it!
@fgjorup, hope you had a nice trip. Here's a little reminder to respond to the comments above! :)
Thank you @rkurchin , we had a very productive trip to Sweden, where we managed to do some very exciting science at the new DanMAX beamline.
Inspired by both @cmbiwer and @mikapfl, I have updated the paper.md, so it now includes a figure of the interface, a flowchart-figure, and a figure comparing other visualization options in the field. I have also expanded on the Statement of Need and added a State of the Field section.
I will try to get around to addressing the remaining issues and look in to testing, as @mikapfl points out.
> Regarding your changes in the JOSS-submission branch, do you intend to merge this into master? At the moment, people just landing on Reel's github presence don't get the various enhancements you already implemented.
I intend to merge all the changes related to the program once the review process is over, however, I am considering removing all files related to the JOSS paper (paper.md, paper.bib, figures, etc.).
I have added several sets of test files with various formats in the __testfiles folder. I have also added an option to open all test files as a "debug" function, by running `Refinement_evaluator_ver1.0.py -debug`. I have added a line about the new option in the readme.md. I believe @mikapfl had a bit more elaborate testing strategy in mind, but I am not sure how to implement that.
I have also chosen to remove .par files as a visible option, as I am unable to find sufficient documentation about the format from MAUD.
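For readers curious how a `-debug` flag like the one described above could be detected, here is a minimal, hypothetical sketch; Reel's actual argument handling may well differ, and `parse_debug` is an illustrative name, not a function from the repository.

```python
import sys


def parse_debug(argv):
    """Return True if a '-debug' flag is present in the arguments.

    Hypothetical sketch: checks everything after the program name
    (argv[0]) for the literal flag.
    """
    return "-debug" in argv[1:]


if __name__ == "__main__":
    if parse_debug(sys.argv):
        print("debug mode: opening all test files")
```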
Hi Frederik,
Thank you for your revisions, which tick all left-over checkmarks in this review for me. I have updated the checklist accordingly. :+1:
Regarding testing, I think you have a good way to exercise the full suite of reading functions now. For the future, you might want to look into pytest to also define the expected outputs, which will guard against many more errors than just seeing if it runs.
Let me elaborate with an example:
If I add the following in the Reel1.0 folder in a file named `test_reading.py`:

```python
import pytest
import os
import numpy as np

from _lib.ReelRead import readCSV


def test_read_csv():
    """Test that an example file can be read correctly."""
    tth, im = readCSV(os.path.join('_test_files', 'csv', 'neutron_powder_diffraction.csv'))
    assert len(tth) == 1151
    # first element, some middle element, and last element
    np.testing.assert_allclose(tth[[0, 100, -1]], [9.97, 19.97, 124.97])
    assert im.shape == (1151, 50)
    # first element, some middle element, and last element
    np.testing.assert_allclose(im[[0, 100, -1], [0, 10, -1]], [742.43, 789., 828.])


def test_read_csv_wrong():
    """Ensure that non-CSV files throw an error instead of reading garbage."""
    with pytest.raises(ValueError):
        readCSV(os.path.join('_test_files', 'dat', 'neutron_powder_diffraction_0001.dat'))
```
I can now run the two tests I defined using `pytest` (remember to `pip install pytest` beforehand):
```
$ pytest
=========================================== test session starts ===========================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/pflueger/work/Reel/Reel1.0
collected 2 items

test_reading.py ..                                                                                  [100%]

============================================ 2 passed in 0.16s ============================================
```
Of course, this looks like busy-work at first, especially considering that I used the output of `readCSV` to find out the right values to `assert` in the test in the first place. However, consider that you might be interested in later re-working the `readCSV` function. Let's say we want to take advantage of numpy's `loadtxt` function, so I change `readCSV`:
```diff
 def readCSV(fname):
-    with open(fname,'r') as f:
-        content = f.read()
-    c = content.strip().split('\n')
-    rows = len(c)
-    tth = np.array(c[0].strip().split(','), dtype=float)
-    im = np.array(','.join(c[1:]).strip().split(','), dtype = float).reshape(rows-1,len(tth))
-    im = np.rot90(im,k=-1)
+    raw = np.loadtxt(fname, delimiter=',')
+    tth = raw[0]
+    im = raw[1:]
     return tth, im
```
However, I forgot to rotate `im`. This is something which might be easy to miss! Fortunately, running `pytest` again highlights the problem rather well:
```
$ pytest
=========================================== test session starts ===========================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/pflueger/work/Reel/Reel1.0
collected 2 items

test_reading.py F.                                                                                  [100%]

================================================ FAILURES =================================================
______________________________________________ test_read_csv ______________________________________________

    def test_read_csv():
        tth, im = readCSV(os.path.join('_test_files', 'csv', 'neutron_powder_diffraction.csv'))
        assert len(tth) == 1151
        # first element, some middle element, and last element
        np.testing.assert_allclose(tth[[0, 100, -1]], [9.97, 19.97, 124.97])
>       assert im.shape == (1151, 50)
E       assert (50, 1151) == (1151, 50)
E         At index 0 diff: 50 != 1151
E         Use -v to get the full diff

test_reading.py:15: AssertionError
========================================= short test summary info =========================================
FAILED test_reading.py::test_read_csv - assert (50, 1151) == (1151, 50)
======================================= 1 failed, 1 passed in 0.21s =======================================
```
With the correct patch:
```diff
 def readCSV(fname):
-    with open(fname,'r') as f:
-        content = f.read()
-    c = content.strip().split('\n')
-    rows = len(c)
-    tth = np.array(c[0].strip().split(','), dtype=float)
-    im = np.array(','.join(c[1:]).strip().split(','), dtype = float).reshape(rows-1,len(tth))
-    im = np.rot90(im,k=-1)
+    raw = np.loadtxt(fname, delimiter=',')
+    tth = raw[0]
+    im = np.rot90(raw[1:], k=-1)
     return tth, im
```
the tests will run fine, and I can be reasonably sure that I didn't break reading CSV files with my changes.
I hope that motivates somewhat why I think Reel would be more stable in the longer term with automated tests for the most important functionality. For me, adding rather comprehensive tests to a project really changed my mood when making changes from "I hope I don't screw anything up" :grimacing: to being much more relaxed and focused on the failing tests. Of course, errors can still slip through, but many errors can be caught.
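As a side note, covering all supported file formats need not mean copy-pasting one test per format: pytest's `parametrize` runs the same test body over a table of cases. The sketch below is self-contained, so a toy reader stands in for Reel's actual reading functions (in Reel, the real functions from `_lib.ReelRead` and the files in the test folder would be used instead; the names and sample data here are illustrative only):

```python
import io

import numpy as np
import pytest


def toy_reader(text):
    """Stand-in reader: first row is the 2-theta axis, remaining rows are frames."""
    raw = np.loadtxt(io.StringIO(text), delimiter=",")
    return raw[0], raw[1:]


CASES = [
    # (format label, sample content, expected axis length, expected frame count)
    ("csv", "1.0,2.0,3.0\n10,20,30\n40,50,60\n", 3, 2),
    ("xyy", "1.0,2.0,3.0\n7,8,9\n", 3, 1),
]


@pytest.mark.parametrize("fmt,content,n_tth,n_frames", CASES)
def test_reader(fmt, content, n_tth, n_frames):
    tth, im = toy_reader(content)
    assert len(tth) == n_tth
    assert im.shape == (n_frames, n_tth)
```

With one reader function and one expected-shape entry per supported format in the `CASES` table, all six formats get exercised by a single test function.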
Cheers,
Mika
> The aesthetics of the visualizations are of course subjective; however, it has been shown that certain colormaps, such as "rainbow" maps, can be misleading and, in the worst case, result in scientifically wrong conclusions. [@nunez2018]
Also, :+1: for spreading the word about scientifically correct colormaps.
Thanks @mikapfl for your thorough and thoughtful review! Just to explicitly confirm, are you now happy to accept this submission as it currently stands?
@cmbiwer, how are things looking on your end?
> Thanks @mikapfl for your thorough and thoughtful review! Just to explicitly confirm, are you now happy to accept this submission as it currently stands?
yes
Thank you @mikapfl for the great suggestion for a testing strategy and for taking the time to elaborate! I will definitely add your suggestion to my Reel to-do list.
The recent changes have addressed my comments. I've updated my checklist above to check the items, and I would accept the submission.
@whedon check references
@whedon generate pdf from branch JOSS-submission
Attempting PDF compilation from custom branch JOSS-submission. Reticulating splines etc...
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon check references
@whedon check references from branch JOSS-submission
Attempting to check references... from custom branch JOSS-submission
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1107/S1600576718000183 is OK
- 10.1016/0921-4526(93)90108-I is OK
- 10.1524/9783486992540-020 is OK
- 10.1371/journal.pone.0199239 is OK
MISSING DOIs
- None
INVALID DOIs
- None
🎉 Congrats @fgjorup, we're almost ready to accept this! Putting my editorial hat on to make a couple of comments on the manuscript itself 🤠 :
Next steps here will be to make these tweaks, regenerate the proof (you should be able to do this yourself in a comment starting with `@whedon generate pdf from branch JOSS-submission`), and check that everything looks okay. Then we'll make sure there's an archival version of the software and a tagged release and be just about ready to go!
Do you intend to merge in the JOSS-submission branch prior to final acceptance?
Great news @rkurchin ! My colleague, @moerch, should have taken care of the "editorial feedback", so the manuscript should be ready. Regarding merging, which makes the most sense? I would like the archived and tagged release to include the changes in the JOSS branch, so would you prefer I merged the branch before acceptance?
@whedon generate pdf from branch JOSS-submission
Attempting PDF compilation from custom branch JOSS-submission. Reticulating splines etc...
Submitting author: @fgjorup (Frederik Holm Gjørup) Repository: https://github.com/fgjorup/Reel/ Version: v1.2.0 Editor: @rkurchin Reviewer: @cmbiwer, @mikapfl Archive: 10.6084/m9.figshare.16817929
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@cmbiwer & @mikapfl, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @rkurchin know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨