openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: The Neuroimaging Data Model (NIDM) Linear Regression Tool #3578

Closed whedon closed 2 years ago

whedon commented 3 years ago

Submitting author: @AKUMAR0019 (Ashmita Kumar) Repository: https://github.com/incf-nidash/PyNIDM Version: v.3.8.2 Editor: @osorensen Reviewer: @htwangtw, @robbisg Archive: Pending

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/194b71c9943d2550be0df98ce62373df"><img src="https://joss.theoj.org/papers/194b71c9943d2550be0df98ce62373df/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/194b71c9943d2550be0df98ce62373df/status.svg)](https://joss.theoj.org/papers/194b71c9943d2550be0df98ce62373df)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@htwangtw & @robbisg, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @osorensen know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Review checklist for @htwangtw

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Review checklist for @robbisg

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

whedon commented 3 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @htwangtw, @robbisg it looks like you're currently assigned to review this paper :tada:.


:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:


  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications


For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
whedon commented 3 years ago

Wordcount for paper.md is 1611

whedon commented 3 years ago
Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.23 s (474.3 files/s, 152950.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          75           2778           4037          10605
JavaScript                       8           2166           2178           8618
CSS                              4            323             42           1026
reStructuredText                 3            416            672            992
HTML                             3            340              6            543
TeX                              1             21              0            391
YAML                             4              8              2            349
Markdown                         6             41              0            115
DOS Batch                        1              8              1             27
Dockerfile                       1             12              0             26
Bourne Shell                     4              4             17             10
make                             1              4              6             10
-------------------------------------------------------------------------------
SUM:                           111           6121           6961          22712
-------------------------------------------------------------------------------

Statistical information for the repository '89755e3997fe1e0259a1a675' was
gathered on 2021/08/06.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
AKUMAR0019                      54          6006           3744           16.69
Al                               2           326             39            0.62
Al Crowely                       2           451            144            1.02
Al Crowley                      27          4134            958            8.71
Ashmita Kumar                   16          1615            329            3.33
David Keator                   295         28591           7923           62.49
Dorota Jarecka                  25           436            971            2.41
Michael Hanke                    1            19             12            0.05
Nazek Queder                     1             1              1            0.00
Sanu Ann                         6          2380            113            4.27
Satrajit Ghosh                   5            75             23            0.17
Tom Gillespie                    1             2              1            0.01
Tristan Glatard                  5            65              7            0.12
dorota                           1             4              3            0.01
maxbachmann                      1             2              2            0.01
natachaperez                     1            45              2            0.08
tiborauer                        1             2              2            0.01

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
AKUMAR0019                 3055           50.9          2.0                6.51
Al                           94           28.8         21.4                1.06
Al Crowley                 3661           88.6         16.0                7.27
David Keator              20815           72.8         32.2               20.12
Dorota Jarecka              346           79.4         45.2               12.72
Sanu Ann                   2297           96.5         38.5               10.19
Satrajit Ghosh               54           72.0         44.0               11.11
Tristan Glatard              26           40.0         45.8               23.08
dorota                        4          100.0         35.9               25.00
tiborauer                     2          100.0         23.6                0.00
whedon commented 3 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

whedon commented 3 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1038/sdata.2016.102 is OK
- 10.1016/j.neuroimage.2013.05.094 is OK

MISSING DOIs

- None

INVALID DOIs

- None
robbisg commented 3 years ago

Dear all,

Thanks for writing this tool (NIDM Linear Regression Tool) and submitting to JOSS!

I will write my review here since it seems to me that the repository link is a fork of the main repo. I could post issues on the main repo but, as far as I understand, the submission refers to the linear regression tool, which is included in the pyNIDM package.

Code review I found it difficult to review this submission since I cannot figure out what the changes are in terms of lines of code and commit history, as requested by JOSS (https://joss.readthedocs.io/en/latest/submitting.html#substantial-scholarly-effort). The fork, as far as I understand, contained only the JOSS submission paper and none of the code for the tool itself. Given that, many of the points on the review checklist are difficult to check; for example, the automated tests, installation instructions, and examples are included in the main repository and are not specific to the tool.

Paper review The paper indicates, with an example, why this linear regression tool is needed. The tool seems easy to use and an explanation is provided (Figure 2). I'd suggest introducing a much more detailed explanation of the string needed in the command, since for a non-expert in semantic web specifications this can be difficult to understand. Moreover, another use case of the software would be helpful to fully understand its strengths. I suppose that no other tool like this is available, so stressing this aspect can be a point in favor of the pyNIDM regression tool. Some edits to the paper and a careful review of the references are needed.

kthyng commented 3 years ago

@robbisg I am the rotating managing editor this week and I happened across your note here. Can you explain why you need to look at the code from a certain fork or part of the code? I want to make sure there isn't a misunderstanding.

robbisg commented 3 years ago

Hi @kthyng, sorry for being unclear. This paper is about a linear regression tool which is only a portion of the main package. The repository link (https://github.com/AKUMAR0019/PyNIDM) is a fork of the main repository. Unfortunately, this fork doesn't contain the linear regression tool but only the JOSS paper files, so it is difficult to check whether the tool has tests, enough lines of code, and so on. If I did something incorrectly, I apologize.

osorensen commented 3 years ago

Thanks for clarifying @robbisg. If I understand you correctly, the overview of contributors given in the source repository (https://github.com/AKUMAR0019/PyNIDM/graphs/contributors) shows all contributions to a larger package, and I understand that this makes it hard to assess the point on the review checklist regarding Contribution and authorship.

@AKUMAR0019, could you please clarify this issue: are the contributions to the repository https://github.com/incf-nidash/PyNIDM directly related to the development of the software described in this submission?

whedon commented 3 years ago

:wave: @robbisg, please update us on how your review is going (this is an automated reminder).

whedon commented 3 years ago

:wave: @htwangtw, please update us on how your review is going (this is an automated reminder).

osorensen commented 3 years ago

👋 @robbisg, please update us on how your review is going (this is an automated reminder).

@robbisg, sorry, I can't turn off these automated reminders.

kthyng commented 3 years ago

@AKUMAR0019, @osorensen — If this review is for a package called PyNIDM, I would also suggest having that name in the title of this submission and the paper.

AKUMAR0019 commented 3 years ago

Hi @robbisg and @osorensen. Thank you for your review! Please see my point by point responses below.

  1. I worked on this project in a fork of the main PyNIDM repository, and once the code functioned correctly it was merged into the main PyNIDM repository via a pull request. The main repository does contain the regression tool, found at this location: https://github.com/incf-nidash/PyNIDM/tree/master/nidm/experiment/tools. All PyNIDM tools are contained here and integrated with the overall project using the Click Python library. This means anyone who installs the PyNIDM toolbox gets access to all of the tools that are supported for NIDM via the command line pynidm [tool]. With respect to the linear regression tool, it is run using pynidm linear-regression. The commit history of the tool can be traced in my fork of PyNIDM (https://github.com/AKUMAR0019/PyNIDM).

  2. The PyNIDM linreg tool is related to the whole toolbox found in PyNIDM. It uses functions in the PyNIDM library, specifically the RestParser, to query the data. The RestParser is over 700 lines of code, as is nidm_linreg.py, so in total the tool is approximately 1,400 lines of code. Since the linear regression tool works on the graphs supported by this package, the contributions in the repository are directly related to the development of the software. It is one of the many tools supporting NIDM graphs, all of which are included in this integrated PyNIDM toolbox.

  3. Thank you for your suggestion to improve our description of the command line parameters and how to format the model definitions. We are working on revising this section of the manuscript.

  4. Thank you for your suggestion to better describe the novelty of the tool. We are working on revising this text in the manuscript.

  5. The PyNIDM toolbox has lots of test cases. We are currently integrating the test cases for the linear regression tool and will get back to you with a link to those specific to the linear regression tool. The test cases for the RestParser (a functional component of the linear regression tool) can be found here: https://github.com/incf-nidash/PyNIDM/blob/master/nidm/experiment/tools/tests/test_rest.py

@kthyng This paper is for the linear regression tool found in the PyNIDM package. I appreciate your feedback, and I will revise the title in the manuscript.

osorensen commented 3 years ago

Thanks for clarifying, @AKUMAR0019.

Regarding point 1, I see that the fork reports that it's 49 commits ahead of master. Does this mean that you are still developing the code in this branch, and will create pull requests onto the repository incf-nidash/PyNIDM when done?

Second, and this may be a question to @kthyng or @openjournals/joss-eics, since the final package will be available in the repository incf-nidash/PyNIDM, I wonder if we should eventually set the repository for this submission to incf-nidash/PyNIDM, rather than the fork AKUMAR0019/PyNIDM?

robbisg commented 3 years ago

Thanks for clarifying @AKUMAR0019.

I have a suggestion given your explanation of point 2. From my point of view, a more detailed description of the input structure would be very helpful to clarify the tool. For example, you mentioned that linear regression is performed on graphs, so what do these graphs represent, and what information is coded in them?

AKUMAR0019 commented 3 years ago

Thank you for your responses @osorensen and @robbisg. Please see my point by point responses below.

  1. @osorensen The 49 commits are associated with a different tool I am completing that is separate from nidm_linreg.py. In other words, those commits are not related to the paper. I will eventually create pull requests onto the repository incf-nidash/PyNIDM when done, but the complete linear regression tool is already in the main repository incf-nidash/PyNIDM.

  2. @robbisg Thank you for your suggestion to include a more detailed description of the graphs and what is coded in them. We are working on revising this part in the manuscript.

osorensen commented 3 years ago

Thanks @AKUMAR0019, that sounds absolutely fine regarding point 1. However, if repository incf-nidash/PyNIDM is the one where users can contribute to the code, we should reference that repository in the paper (if accepted) rather than AKUMAR0019/PyNIDM which is currently stated as the repository for this submission. Does that make sense?

AKUMAR0019 commented 3 years ago

Hi @osorensen. I apologize for the confusion. I agree, incf-nidash/PyNIDM should be the repository for this submission. I put AKUMAR0019/PyNIDM because that was the location of the paper files, but it is not the main repository and should be changed. Please let me know how I can do so, and thank you for the feedback.

osorensen commented 3 years ago

@AKUMAR0019, I have now changed the repository to https://github.com/incf-nidash/PyNIDM. @htwangtw and @robbisg, please note this, and place any issues in that repository.

@openjournals/dev, if necessary, could you please replace the source repository in the database?

htwangtw commented 3 years ago

Sorry, I only just noticed that GitHub's notification system has had some updates and muted all notifications from this repo for me. (The bot's instructions might need some updating.) I am back at work now and should start the review this week!

osorensen commented 3 years ago

Sorry about that @htwangtw, but good to hear that you're back at work. It should be possible to turn on notifications for this issue by clicking the "Notifications" button on the right-hand side if you scroll up to the top of the page.

arfon commented 3 years ago

@openjournals/dev, if necessary, could you please replace the source repository in the database?

Done.

htwangtw commented 3 years ago

Hi all,

Thanks for writing this tool and inviting me to review! This is my first JOSS review, so please let me know if I get anything wrong.

Since the submission refers to the linear regression tool in the pyNIDM package, I am not sure of the most appropriate way to address some of the points. If anything is out of scope, please feel free to point it out, @osorensen.

Paper

The manuscript explains the purpose of the linear regression tool in PyNIDM with good clarity and examples. The software provides a unique application of NIDM.

More description of the NIDM graph would benefit the general reader. Ideally, the paper should be a self-contained document that lets the user understand the software and its usage without consulting other source materials.

Typesetting of the references is not consistent with the JOSS guideline; for example, in Line 16 the correct way to typeset it is (NIDM; Keator et al., 2013; NIDM Working Group; Maumet et al., 2016). The markdown syntax is [@author1:2001; @author2:2001]. Please see the JOSS example paper and bibliography.

Software

The software is difficult to review, as there are some issues that affect the whole repository, not just the added function itself.

Issues related to the linear regression module:

Here are two potentially out-of-scope issues. I am happy to have the editors decide if this is too much of an ask:

osorensen commented 3 years ago

Thanks for your very thorough review, @htwangtw! I think these are very good points, which I hope @AKUMAR0019 can address.

AKUMAR0019 commented 3 years ago

Thanks for your review @htwangtw. Please see my point by point responses below:

  1. Thank you for your suggestion to improve our description of the command line parameter syntax and how to format the model definitions. We are working on revising this section of the manuscript, as well as the documentation. These are being reviewed by the team right now.
  2. Thank you for your suggestion to better describe NIDM. We are working on revising this text in the manuscript.
  3. Thank you for your feedback on the references section. We will work on fixing that.
  4. The PyNIDM toolbox has lots of test cases. We are currently integrating the test cases for the linear regression tool and will get back to you with a link to those specific to the linear regression tool. The test cases for the RestParser (a functional component of the linear regression tool) can be found here: https://github.com/incf-nidash/PyNIDM/blob/master/nidm/experiment/tools/tests/test_rest.py
  5. We will check on the issue with the license and get back to you.
  6. Currently, the code does check input validity and raises errors, which we did in order to reduce the burden on users. The code handles erroneous inputs with custom error messages. For example, if a variable is not found, instead of letting the tool return a convoluted error message, we have it specify which variables were not found in which files, so the user has better feedback on what to change in the next run of the command. Error checks also exist for improper syntax and for cases where there are fewer than 20 data points. Once these error messages are given, the code stops. However, we would appreciate more feedback on other parts of the user input to check.
  7. Thank you for your feedback on global variables. We will do our best to change the code to use parameters instead.
  8. Thank you for your feedback on docstrings. We will add them to clarify what each function does.
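
As a concrete illustration of the error-reporting behaviour described in point 6, here is a minimal, hypothetical sketch; the function and variable names are illustrative only and are not the actual PyNIDM API:

```python
# Hypothetical sketch of per-file variable checks and the minimum-sample rule;
# not the actual nidm_linreg.py implementation.

def validate_inputs(requested_vars, vars_by_file, data_points):
    """Return a list of human-readable error messages (empty if inputs are valid)."""
    errors = []
    # Report exactly which variables are missing from which files.
    for path, available in vars_by_file.items():
        missing = sorted(set(requested_vars) - set(available))
        if missing:
            errors.append(f"{path}: variable(s) not found: {', '.join(missing)}")
    # Enforce the minimum-data-points check mentioned above.
    if data_points < 20:
        errors.append(f"need at least 20 data points, got {data_points}")
    return errors

# Example: one file lacks 'age', and the sample is too small.
problems = validate_inputs(
    ["age", "fs_volume"],
    {"study1.ttl": ["age", "fs_volume"], "study2.ttl": ["fs_volume"]},
    data_points=12,
)
for msg in problems:
    print(msg)
```

The point of the sketch is that the tool stops with targeted messages rather than surfacing a stack trace to the user.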
osorensen commented 3 years ago

@AKUMAR0019, could you please update us on how it's going with updating the code as outlined in your post above?

AKUMAR0019 commented 3 years ago

Hi @osorensen. The paper has been updated with feedback from @htwangtw and @robbisg, and the test case is expected to be added by Tuesday. We will try to have the global variable and docstring changes done by the end of next weekend. However, I will inform you if they take longer.

The license issue should be resolved now, and the license button on GitHub points to the license.txt file. Our references also now follow the guide in the checklist (https://pandoc.org/MANUAL.html#citations).

Thank you for your feedback, and please let us know if there are any other changes we should make.

osorensen commented 3 years ago

Thanks for the update @AKUMAR0019

osorensen commented 3 years ago

Dear @AKUMAR0019, please let us know when the test case has been updated, and when you think you have addressed all the points raised by the reviewers.

AKUMAR0019 commented 3 years ago

Hi @osorensen. The test case has been updated (https://github.com/incf-nidash/PyNIDM/blob/master/nidm/experiment/tools/tests/test_nidm_lingreg.py), and we have addressed all the points raised by the reviewers except for the feedback on global variables, which is taking a while to implement as tests and other code are already functioning with those variables. However, the code itself is still working, the test specific to the code is running properly, the paper has been updated, and the license and references have been fixed, so I think we are ready to proceed.

osorensen commented 3 years ago

@htwangtw and @robbisg, when time permits, can you please consider @AKUMAR0019's post above and assess whether the points you raised in your reviews have been addressed? If so, please also update your review checklists.

Thanks for your time and patience!

robbisg commented 3 years ago

@whedon generate pdf

whedon commented 3 years ago

PDF failed to compile for issue #3578 with the following error:

 Can't find any papers to compile :-(
osorensen commented 3 years ago

@AKUMAR0019, it looks like there are no papers to compile in the source repository. Can you please make sure the paper is there? If you wish, it is possible to put the paper in a separate branch, and then compile it with whedon generate pdf from branch branch-name.

AKUMAR0019 commented 3 years ago

@osorensen @robbisg While https://github.com/incf-nidash/PyNIDM is where the code is stored, the paper is in https://github.com/AKUMAR0019/PyNIDM/blob/master/paper.md. My fellow contributors to PyNIDM wanted me to keep the paper in my personal fork of it as opposed to putting it in the main repository. I am not certain how to get whedon to compile the paper there. Can you advise? Thank you.

osorensen commented 3 years ago

@AKUMAR0019, the submission guidelines say:

Your paper (paper.md and BibTeX files, plus any figures) must be hosted in a Git-based repository together with your software (although they may be in a short-lived branch which is never merged with the default).

I hence suggest that you discuss this again with your fellow contributors and see if you can add the paper to the repository.

danielskatz commented 3 years ago

Note that the paper was in the repo when this submission was accepted for review, and it needs to stay there, though it can be in a branch, as @osorensen says. Otherwise, we will have to reject the submission.

osorensen commented 3 years ago

Thanks @danielskatz. I just wanted to add that the original submission repository was the fork where the paper is saved; thus the paper compilation succeeded in the pre-review issue. However, since this fork is not the home of the software, we changed the submission repository after some discussion at the top of this thread. And since the paper has not been put in this repository, compilation now fails.

AKUMAR0019 commented 3 years ago

@whedon generate pdf

whedon commented 3 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

AKUMAR0019 commented 3 years ago

@osorensen @robbisg @htwangtw Thank you for your patience. The paper is now available at https://github.com/incf-nidash/PyNIDM/tree/master/docs/manuscripts/linreg_joss, and it is compiling.

AKUMAR0019 commented 2 years ago

Hi @osorensen @robbisg @htwangtw. Thank you for your time. Just to recap: The repository listed by whedon (https://github.com/incf-nidash/PyNIDM) contains the source code at https://github.com/incf-nidash/PyNIDM/blob/master/nidm/experiment/tools/nidm_linreg.py. The plain-text license file is also in the main repo (https://github.com/incf-nidash/PyNIDM/blob/master/LICENSE). The code itself contains documentation, as does the README.

There are automated tests running for both the RestParser and for the linear regression tool, which can be found at https://github.com/incf-nidash/PyNIDM/blob/master/nidm/experiment/tools/tests/test_rest.py and https://github.com/incf-nidash/PyNIDM/blob/master/nidm/experiment/tools/tests/test_nidm_lingreg.py. The README contains a button called "docs passing" that redirects to the documentation (also found at https://pynidm.readthedocs.io/en/latest/). There is a section there (1.5-1.7) explaining how to contribute to the software, report issues, and seek support.

The paper is fully updated with references as described in the link in the checklist, and it has changed based on your feedback. It can be found at https://github.com/incf-nidash/PyNIDM/tree/master/docs/manuscripts/linreg_joss.

Is there anything else I need to do for the review?

osorensen commented 2 years ago

Thanks for the update @AKUMAR0019. I think you have done what you can do at this point. @robbisg and @htwangtw, can you please consider the updates and see if they address the points raised in your reviews?

robbisg commented 2 years ago

Hi @AKUMAR0019 ,

Thanks for improving the package and the submission.

I have a doubt regarding the Substantial Scholarly Effort requirement of JOSS. On one hand, I think that the package can be very useful to easily build linear models using heterogeneous NIDM documents and sources; in addition, it makes it easier for users to fetch and analyze this kind of dataset. On the other hand, it seems to me that this regression tool is more a "single-line" tool included in the PyNIDM package, rather than a toolbox. I think that is my main point to be addressed.

I found some issues that have not been addressed by this submission.

Paper

The tool seems very promising to me, but I think that for NIDM non-experts it is difficult to understand whether the tool is appropriate for their research.

  1. I strongly suggest including a clear description of how data is represented and what the "shape" of these graphs is.
  2. A statement describing similar tools or the uniqueness of NIDM-LR is recommended.
  3. Some references are wrong or need to be refined (e.g. Patsy, OpenNeuro). I'd recommend avoiding referencing a URL unless it is needed, and trying to convert URL references to document references (e.g. a preprint).
  4. This is related to the software, but I found it in the paper so I add it in this paragraph. In Figure 3, the results are presented using statsmodels, but the variables are not self-explanatory (e.g. I have to remember what ilx_0100400 stands for). I suggest helping the user insert a more explanatory string so the results can be quickly understood and shared.

Documentation

The documentation looks messy to me; maybe this is not strictly related to this tool, but more to PyNIDM as a whole.

  1. In the documentation, the Installation part is not completely clear, so please change PyPi to Installation.
  2. The command for the linear regression tool is not shown in the linear_regression paragraph, so please fix this.
  3. A use case is included in the documentation, but I think that including two (or more) examples in the example section, each covering a whole analysis, would be very useful for users.

Code

It is great that you added tests, and this is valuable. :+1: This may be out of scope, but as pointed out by @htwangtw, global variables, and functions defined without any input parameters that rely on those global variables, strongly reduce the usability and maintainability of the tool.
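
To make the global-variable concern concrete, a minimal, hypothetical sketch of the refactoring being suggested; the names are illustrative and not taken from nidm_linreg.py:

```python
# Illustrative only: names are hypothetical, not the actual PyNIDM code.

# Before (roughly the pattern being criticised): the function reads a
# module-level global, so it cannot be called or tested in isolation.
model_string = "age+sex"

def build_terms_global():
    return model_string.split("+")

# After: the same logic with an explicit parameter and return value,
# which makes the function reusable and trivially testable.
def build_terms(model_string: str) -> list[str]:
    """Split a model specification like 'age+sex' into its terms."""
    return [term.strip() for term in model_string.split("+")]

print(build_terms("age + sex"))  # ['age', 'sex']
```

The second form can be unit-tested without setting up module state first, which is the maintainability gain being asked for.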

Thanks, Roberto

osorensen commented 2 years ago

Thanks a lot for your thorough comments, @robbisg.

After reading through the paper once more, I share your concern about substantial scholarly effort. If I understand correctly, the main contribution of this submission is to enable fitting of linear regression models by using modules available in scikit-learn.

It is a bit problematic to assess the exact number of lines of code for this tool, since it is part of a bigger package. In the source repository, I find this file. @AKUMAR0019, can you please help me out here by pointing out other files in the source repository which are part of this submission?

osorensen commented 2 years ago

@whedon query scope

whedon commented 2 years ago

Submission flagged for editorial review.

htwangtw commented 2 years ago

Main concerns

My main concern is test quality. I would first like to thank the authors for adding executable tests for the linear regression tool; however, the tests added are not sufficient as far as I can tell. I ran the updated tests and found that two of the four tests are skipped due to numerical instability. I had a look at the code of the tests because of the skipped tests. The submitted software is a wrapper that combines NIDM with other linear regression tools that are already thoroughly tested. In that case, the tests should focus on checking that the format of the output is as expected, rather than on numerical accuracy. This point applies to all four tests added.
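
To sketch what a format-focused test might look like (the sample output and helper below are illustrative, not the real tool's output format):

```python
# Hedged sketch of the suggestion above: assert on the *shape* of the tool's
# text summary rather than on exact regression numbers, which can be
# numerically unstable across platforms.

def summary_has_terms(output: str, terms) -> bool:
    """True if every requested model term appears in the summary text."""
    return all(term in output for term in terms)

def test_output_lists_expected_terms():
    # Stand-in for output captured from running the linear-regression tool.
    output = (
        "                coef    std err\n"
        "age           0.1200      0.045\n"
        "sex          -0.3000      0.110\n"
    )
    # Structural assertions: no brittle floating-point comparisons.
    assert summary_has_terms(output, ["age", "sex"])
    assert not summary_has_terms(output, ["handedness"])

test_output_lists_expected_terms()
print("all format checks passed")
```

Tests of this shape would not need to be skipped for numerical reasons, since they never compare estimates to fixed decimal values.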

I agree with the point about scholarly effort raised by @robbisg. The paper is on the linear regression function, but throughout the review process the author raised things along the lines of the repo having a lot of tests, and mentioned the RestParser. It's rather confusing and needs to be further clarified.

Paper

Code

Out-of-scope comments

I understand this is not related to the submission author's work, but I really feel it should be raised. The organisation of the current package is rather messy. Users concerned about the reliability of the software will not trust it simply because there's an associated paper published. Several files in the top level of the repository were clearly pushed to the repository by accident (the .egg.info directory, all files starting with =), there is a file with unclear purpose (test.ttl; if it is for testing, it should be in the relevant test directory), and there are files with some functionality that are probably wrongly placed (rest-server.py, profiler.py).

osorensen commented 2 years ago

@AKUMAR0019 @robbisg @htwangtw; after giving it some thought, I've decided to add a query-scope label to this submission. I share the concern stated by @robbisg and @htwangtw in their posts just above that the contribution of this submission might be too small to constitute sufficient scholarly effort for a JOSS publication. In particular, I am not sure whether the submission satisfies the following criterion:

“Minor utility” packages, including “thin” API clients, and single-function packages are not acceptable.

@openjournals/joss-eics; please note that the submission is an extension of an existing tool, and is mainly this file, if I have understood correctly.