openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: QPrism: A Python Library for Quality Assessment of Sensor Data Collected in Real-world Settings #4839

Closed. editorialbot closed this issue 1 year ago.

editorialbot commented 2 years ago

Submitting author: @RamziHalabi (Ramzi Halabi)
Repository: https://github.com/aid4mh/QPrism
Branch with paper.md (empty if default branch):
Version: v.0.3.3
Editor: @galessiorob
Reviewers: @lucask07, @jodemaey, @sidihamady
Archive: Pending

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/98bd3b1f7a60613fb61f70d4a3092a42"><img src="https://joss.theoj.org/papers/98bd3b1f7a60613fb61f70d4a3092a42/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/98bd3b1f7a60613fb61f70d4a3092a42/status.svg)](https://joss.theoj.org/papers/98bd3b1f7a60613fb61f70d4a3092a42)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@lucask07 & @jodemaey, your review will be checklist based. Each of you will have a separate checklist that you should update as you carry out your review. First, run this command in a separate comment to create your checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @galessiorob know.

Please start your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @lucask07

📝 Checklist for @jodemaey

📝 Checklist for @sidihamady

editorialbot commented 2 years ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 2 years ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.27 s (320.4 files/s, 45870.5 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Jupyter Notebook                14              0           3896           5081
Python                          44            467            721           1256
reStructuredText                10             88             40            221
Markdown                         3             62              0            115
TeX                              1             12              0             95
TOML                             1              6              0             45
DOS Batch                        1              8              1             26
JSON                            10              0              0             10
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            85            647           4665           6858
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 2 years ago

Wordcount for paper.md is 1409

editorialbot commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1145/1081870.1081891 is OK
- 10.25080/Majora-7b98e3ed-003 is OK
- 10.1038/s41746-022-00643-4 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1145/3371425.3371427 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 2 years ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

galessiorob commented 2 years ago

👋 @lucask07, @jodemaey

Thank you both for volunteering to review this paper! Please comment @editorialbot generate my checklist; this will generate a checklist with each of the aspects for you to review regarding the software, documentation, and the paper. Please let me know if you have any questions, and feel free to post them directly in this issue.

jodemaey commented 2 years ago

Review checklist for @jodemaey

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

lucask07 commented 2 years ago

Review checklist for @lucask07

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

lucask07 commented 2 years ago

I have concerns regarding substantial scholarly effort and if the contributions are accurately communicated.

  1. The JOSS scope and submission requirements state:

    Minor 'utility' packages, including 'thin' API clients, and single-function packages are not acceptable.

The functions I have checked seem to constitute a 'thin' API client. For example, audio_sample_rate is two lines of code.

  2. Other functions are unfamiliar, and the mathematics behind them is not defined in the documentation; for example, the SCR calculation. I understand from the paper that the acronym stands for sensor channel ratio. The mathematics and purpose of the sensor channel ratio need to be defined.

  3. Some metrics make unsubstantiated claims. For example, the video-artifact metric claims to detect frames containing artifacts such as "motion blur, too grainy, static". However, the software only computes the frame histogram and then checks whether more than 50% of the pixels in a frame lie within 5% to 100% of the histogram span. This is not sufficient to determine whether artifacts are present in an image.

  4. In general, sensor evaluation needs a calibrated, defined input. For example, the spatial resolution of a camera is assessed by imaging a resolution chart; the conversion factor, SNR, and linearity of an accelerometer are measured by applying known accelerations. It is not clear how sensors can be assessed without known inputs.
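To make the histogram check above concrete, here is a rough sketch of the heuristic as described, in plain Python. All names and default thresholds are illustrative, not QPrism's actual API:

```python
def histogram_spread_ok(pixels, span_lo=0.05, span_hi=1.0, frac=0.5):
    """Illustrative version of the heuristic described above: accept a
    frame when more than `frac` of its pixels fall within the
    [span_lo, span_hi] fraction of the intensity span.
    Not QPrism's actual code.
    """
    lo, hi = min(pixels), max(pixels)
    span = (hi - lo) or 1  # guard against a constant frame collapsing the span
    low_cut = lo + span_lo * span
    high_cut = lo + span_hi * span
    inside = sum(1 for p in pixels if low_cut <= p <= high_cut)
    return inside / len(pixels) > frac

# A well-spread frame passes; a flat ("static") frame is flagged:
print(histogram_spread_ok(list(range(256))))  # True
print(histogram_spread_ok([10] * 100))        # False
```

As the point above argues, such a spread test alone cannot distinguish, say, a sharp frame from a motion-blurred one whose intensities may be equally well spread, which is why the claim needs more substantiation.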

galessiorob commented 2 years ago

@editorialbot add @sidihamady as reviewer

editorialbot commented 2 years ago

@sidihamady added to the reviewers list!

galessiorob commented 2 years ago

@sidihamady Please comment @editorialbot generate my checklist; this will generate a checklist with each of the aspects for you to review regarding the software, documentation, and the paper. Please let me know if you have any questions, and feel free to post them directly in this issue.

galessiorob commented 2 years ago

@lucask07 I'm reviewing your comment above and will reply soon. Thank you!

jodemaey commented 2 years ago

I would like to request some clarifications regarding the Contribution and authorship point. I noticed that the main/first author of the paper has only made a single commit with one line. It seems that all the authors are roughly from the same lab so it might be ok, but please clarify it here.

jodemaey commented 2 years ago

Regarding the scholarly effort and documentation, I also noticed that the package uses by default pre-trained models to classify audio (yamnet) and detect objects in video (yolov5) while this is barely documented and linked.

Also, the package functions involving these features mainly channel the results of these models to the user. I therefore concur with @lucask07 that this is a concern regarding the "Substantial scholarly effort" and thin-API point.

RamziHalabi commented 2 years ago

> I would like to request some clarifications regarding the Contribution and authorship point. I noticed that the main/first author of the paper has only made a single commit with one line. It seems that all the authors are roughly from the same lab so it might be ok, but please clarify it here.

That is correct. I am the first author of this paper, and I am the main contributor to the design and coding of the data quality functions, with emphasis on sensor data quality metrics, as well as to drafting the manuscript. We distributed the tasks amongst ourselves for the JOSS submission, and the co-author who contributed most to the repository, pushing and publishing a clean version of the code to GitHub, is Zixiong Lin. I had already written and applied the functions to a large database for another paper, as have all other members at different contribution levels, which may not be reflected in the number of GitHub commits. @jodemaey I hope this answers your concern, and I'll be glad to provide feedback whenever needed.

jodemaey commented 2 years ago

Yes, thanks for the clarification.

sidihamady commented 2 years ago

Review checklist for @sidihamady

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

sidihamady commented 2 years ago

I agree with my colleagues @lucask07 and @jodemaey that there is a concern about "substantial scholarly effort" and the thin-API point. As a reminder, the JOSS guide details this item:

https://joss.readthedocs.io/en/latest/submitting.html#substantial-scholarly-effort

We have to check if each of the points in the list is fulfilled.

RamziHalabi commented 2 years ago

@galessiorob, I read the reviewers' comments on our code and paper, and I was wondering whether we are expected to interact and provide answers/clarifications at this stage. It would be very helpful to know before we address the raised concerns. Thanks in advance.

galessiorob commented 2 years ago

Hi @RamziHalabi , please feel free to comment directly here and interact with the reviewers.

I've reached out to @arfon about the concerns @sidihamady @lucask07 and @jodemaey raised on the level of effort and substantial contribution, and reviewed the comments and package myself. I'll come back with some feedback soon after I reach out to the Editors in Chief.

jodemaey commented 2 years ago

Continuing this review, I noticed that there is no mention of other similar packages in the paper. According to the review guidelines:

State of the field: Do the authors describe how this software compares to other commonly-used packages?

My question for the authors is thus the following: is QPrism unique in this respect, or are there other packages providing the same kind of functionality?

jodemaey commented 2 years ago

About the installation: I installed the software successfully, but I do not understand why the procedure is so complex. Why not simply propose that the user run

$ pip install QPrism

?

I guess this is related to https://github.com/aid4mh/QPrism/issues/1.

jodemaey commented 2 years ago

On another note, about the notebooks providing examples: their folder is labelled tests, which is confusing. Jupyter Notebook is not specified in the requirements, so pip does not install it in the virtual environment. Consequently, the user is left without a clue as to how to run the tests/examples. The only way is then to click the 'Open in Colab' link in the notebook on GitHub (or to install Jupyter Notebook in the venv, but then you assume they will find that out by themselves).

Also, these are not automated tests (e.g. run by an automated software test suite), and as such, in the present state, I cannot tick the corresponding item in the review:

Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?

Please see the corresponding issue: https://github.com/aid4mh/QPrism/issues/2
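As an illustration only, a notebook example could be turned into an automated test along these lines; `sample_rate_of` is a hypothetical stand-in for a QPrism metric, not the package's actual API:

```python
# Hypothetical sketch of an automated test, discoverable by pytest or
# runnable directly with Python. `sample_rate_of` stands in for a
# QPrism metric; the real API may differ.

def sample_rate_of(timestamps):
    """Toy metric: average samples per second from a timestamp list."""
    duration = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / duration

def test_sample_rate_of():
    # 11 samples spanning exactly 1 second -> 10 Hz
    assert sample_rate_of([i / 10 for i in range(11)]) == 10.0

if __name__ == "__main__":
    test_sample_rate_of()
    print("ok")
```

Tests of this shape run headlessly in CI, so reviewers and users could verify the package's functionality without opening the notebooks at all.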

galessiorob commented 2 years ago

@editorialbot generate pdf

editorialbot commented 2 years ago

👉 📄 Download article proof · 📄 View article proof on GitHub 👈

galessiorob commented 1 year ago

@RamziHalabi I did a thorough assessment of the review comments and the submission, please see below:

Also of concern, as @lucask07 points out: the claim that no calibration or known inputs are required is not fully explained or substantiated. Such a statement would require full proof, or at least a citation.

I have shared these thoughts and the paper with @gkthiruvathukal (editor), to make a final decision on this submission.

gkthiruvathukal commented 1 year ago

Thank you, @RamziHalabi, for your submission to JOSS. Thank you, @galessiorob, for this detailed report.

I have done an independent review of the code and the previous comments from this review issue thread, and I concur that the submission does not meet our threshold for substantial scholarly effort. This is an overarching criterion for a JOSS submission and, therefore, grounds for rejection. Moreover, combined with the other major issues in @galessiorob's summary, this submission would require substantial rework to proceed regardless of the scholarly effort clause. I will proceed with the rejection of this submission.

gkthiruvathukal commented 1 year ago

@editorialbot reject

editorialbot commented 1 year ago

Paper rejected.