Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.27 s (320.4 files/s, 45870.5 lines/s)
| Language         | files | blank | comment | code |
|------------------|------:|------:|--------:|-----:|
| Jupyter Notebook |    14 |     0 |    3896 | 5081 |
| Python           |    44 |   467 |     721 | 1256 |
| reStructuredText |    10 |    88 |      40 |  221 |
| Markdown         |     3 |    62 |       0 |  115 |
| TeX              |     1 |    12 |       0 |   95 |
| TOML             |     1 |     6 |       0 |   45 |
| DOS Batch        |     1 |     8 |       1 |   26 |
| JSON             |    10 |     0 |       0 |   10 |
| make             |     1 |     4 |       7 |    9 |
| SUM              |    85 |   647 |    4665 | 6858 |
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1409
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1145/1081870.1081891 is OK
- 10.25080/Majora-7b98e3ed-003 is OK
- 10.1038/s41746-022-00643-4 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1145/3371425.3371427 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋 @lucask07, @jodemaey
Thank you both for volunteering to review this paper! Please type and comment `@editorialbot generate my checklist`; this will generate a checklist with each of the aspects for you to review regarding the software, documentation, and the paper. Please let me know if you have any questions and feel free to post them directly in this issue.
I have concerns regarding substantial scholarly effort and whether the contributions are accurately communicated. The JOSS submission requirements state:
> Minor 'utility' packages, including 'thin' API clients, and single-function packages are not acceptable.
The functions I have checked seem to form a 'thin' API client. For example, `audio_sample_rate` is 2 lines of code.
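To illustrate the concern, here is a hypothetical sketch of what a two-line wrapper of this kind typically looks like; the body below and the use of librosa are my assumptions, not QPrism's actual code:

```python
# Hypothetical sketch of a thin wrapper; not QPrism's actual implementation.
import librosa

def audio_sample_rate(path):
    # librosa.load returns (signal, sample_rate); sr=None keeps the native rate
    _, sr = librosa.load(path, sr=None)
    return sr
```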
Other functions are unfamiliar, and their mathematics are not defined in the documentation, for example the SCR calculation. I understand from the paper that the acronym stands for sensor channel ratio; the mathematics and purpose of the sensor channel ratio need to be defined.
Metrics make unsubstantiated claims. For example, the video artifacts metric claims to detect frames containing artifacts such as motion blur, excessive grain, and static. However, the software only calculates the frame histogram and then checks whether more than 50% of the pixels in a frame lie within 5% to 100% of the histogram span. This is not sufficient to determine whether artifacts are present in an image.
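For reference, a minimal sketch of that check as I read the description above; the function name, bin handling, and thresholds are illustrative, not QPrism's actual code:

```python
# Rough sketch of the described histogram heuristic; illustrative only.
import numpy as np

def frame_passes_histogram_check(frame_gray: np.ndarray) -> bool:
    # Histogram of an 8-bit grayscale frame
    hist, _ = np.histogram(frame_gray, bins=256, range=(0, 255))
    # Count pixels falling within 5% to 100% of the intensity span
    lower_bin = int(0.05 * 256)
    pixels_in_span = hist[lower_bin:].sum()
    # Frame is treated as artifact-free if >50% of pixels fall in that span
    return pixels_in_span > 0.5 * frame_gray.size
```

A severely motion-blurred frame can easily satisfy such a condition, which is why the histogram check alone cannot substantiate the claimed artifact detection.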
Generally sensor evaluation needs a calibrated and defined input. For example, the spatial resolution of a camera is assessed by imaging a resolution chart; the conversion factor, SNR, and linearity of an accelerometer are measured by applying known accelerations. It is not clear how sensors can be assessed without known inputs.
@editorialbot add @sidihamady as reviewer
@sidihamady added to the reviewers list!
@sidihamady Please type and comment `@editorialbot generate my checklist`; this will generate a checklist with each of the aspects for you to review regarding the software, documentation, and the paper. Please let me know if you have any questions and feel free to post them directly in this issue.
@lucask07 I'm reviewing your comment above and will reply soon. Thank you!
I would like to request some clarifications regarding the Contribution and authorship point. I noticed that the main/first author of the paper has made only a single one-line commit. It seems that all the authors are roughly from the same lab, so it might be OK, but please clarify it here.
Regarding the scholarly effort and documentation, I also noticed that the package uses pre-trained models by default to classify audio (yamnet) and detect objects in video (yolov5), while this is barely documented or linked.
Also, the package functions involved with these features mainly channel the results of these models to the user. Therefore, I concur with @lucask07 that this is a concern regarding the substantial scholarly effort and thin API point.
> I would like to request some clarifications regarding the Contribution and authorship point. I noticed that the main/first author of the paper has only made a single commit with one line. It seems that all the authors are roughly from the same lab so it might be ok, but please clarify it here.
That is correct. I am the first author of this paper and the main contributor to the design and coding of the data quality functions, with emphasis on sensor data quality metrics, as well as to drafting the manuscript. We distributed the tasks amongst ourselves for the JOSS submission, and the co-author who contributed most to pushing and publishing a clean version of the code to GitHub is Zixiong Lin. I had already written and implemented the functions on a large database for another paper, as have all other members at different levels of contribution, which may not be reflected in the number of GitHub commits. @jodemaey I hope this answers your concern, and I'll be glad to provide feedback whenever needed.
Yes, thanks for the clarification.
I agree with my colleagues @lucask07 and @jodemaey that there is a concern about the "substantial scholarly effort and thin API" point. As a reminder, the JOSS guide details this item:
https://joss.readthedocs.io/en/latest/submitting.html#substantial-scholarly-effort
We have to check if each of the points in the list is fulfilled.
@galessiorob - I read the reviewers' comments on our code and paper, and I was wondering whether we are expected to interact and provide answers/clarifications at this stage. It would be very helpful to know this before we address the raised concerns. Thanks in advance.
Hi @RamziHalabi , please feel free to comment directly here and interact with the reviewers.
I've reached out to @arfon about the concerns @sidihamady @lucask07 and @jodemaey raised on the level of effort and substantial contribution, and reviewed the comments and package myself. I'll come back with some feedback soon after I reach out to the Editors in Chief.
Continuing this review, I noticed that there is no mention of other similar packages in the paper. According to the review guidelines:
> State of the field: Do the authors describe how this software compares to other commonly-used packages?
My question for the authors is thus the following: is QPrism unique in this respect, or are there other packages providing the same kind of functionality?
About the installation: I successfully installed the software, but I do not understand why the process is so complex. Why not simply have the user run `pip install QPrism`?
I guess this is related to https://github.com/aid4mh/QPrism/issues/1 .
On another note, regarding the notebooks providing examples: their folder is labelled tests, which is confusing. Jupyter is not listed in the requirements, so pip does not install it in the virtual environment. Consequently, the user is left without a clue about how to run the tests/examples; the only option is to click the 'Open in Colab' link in the notebook on GitHub (or install Jupyter in the venv, but that assumes they will figure it out by themselves).
Also, these are not automated tests (e.g., run via an automated software test suite), and as such, in the present state, I cannot check this item of the review:
> Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
Please see the corresponding issue: https://github.com/aid4mh/QPrism/issues/2
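For clarity, here is a hypothetical pytest-style test illustrating the kind of automated check I have in mind; the import path, fixture file, and expected behaviour are placeholders, not taken from QPrism's actual API:

```python
# Hypothetical automated test; the import path and fixture file are placeholders.
def test_audio_sample_rate_is_positive():
    from QPrism import audio_sample_rate  # assumed import path
    rate = audio_sample_rate("tests/data/example.wav")  # placeholder fixture file
    assert rate > 0
```

A handful of such tests, runnable with a plain `pytest` invocation, would be enough to check this item.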
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@RamziHalabi I did a thorough assessment of the review comments and the submission, please see below:
The main topic, assessing sensor data quality, is an important and impactful one, and focusing on data correctness and completeness is a very interesting angle to take. The DQM battery that QPrism proposes is enough material to compose a robust open-source package; however, the current submission falls short on several fronts.
First, while this submission aggregates 21 metrics and their underlying functions, the code for these consists of short API wrappers, which does not comply with JOSS's requirement of "substantial scholarly effort." For example, the audio sample rate and audio RMS functions are four and seven lines of code, respectively, and several of the Sensor and Video files present the same issue. The author commented that he had already written and implemented these functions for another paper; if so, this paper may be a small extension of the original one and potentially not enough to warrant a separate publication of this work.
Second, while the supplemental material does explain the math behind nine metrics of the DQM battery, the remaining twelve should be included as well to provide a complete explanation and full transparency.
Finally, there’s an opportunity to refine the user guide to demonstrate how this package meets JOSS’s “demonstration of the tool’s usefulness” criterion. As it stands, the guide is mostly a list of code snippets with little explanation of how they can be orchestrated to deliver the incremental value of the metric aggregation that QPrism proposes.
Also of concern, as @lucask07 points out, is that the absence of any requirement for calibrated inputs is not fully explained or substantiated; this claim would require at least a proof or a citation.
I have shared these thoughts and the paper with @gkthiruvathukal (editor), to make a final decision on this submission.
Thank you, @RamziHalabi, for your submission to JOSS. Thank you, @galessiorob, for this detailed report.
I have done an independent review of the code and the previous comments in this review thread, and I concur that the submission does not meet our threshold for substantial scholarly effort. This is an overarching criterion for a JOSS submission and therefore grounds for rejection. Moreover, combined with the other major issues in @galessiorob's summary, this submission would require substantial rework to proceed regardless of the scholarly effort clause. I will proceed with the rejection of this submission.
@editorialbot reject
Paper rejected.
Submitting author: @RamziHalabi (Ramzi Halabi)
Repository: https://github.com/aid4mh/QPrism
Branch with paper.md (empty if default branch):
Version: v.0.3.3
Editor: @galessiorob
Reviewers: @lucask07, @jodemaey, @sidihamady
Archive: Pending
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@lucask07 & @jodemaey, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist: `@editorialbot generate my checklist`
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @galessiorob know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @lucask07
📝 Checklist for @jodemaey
📝 Checklist for @sidihamady