openjournals / joss-reviews

Reviews for the Journal of Open Source Software

[REVIEW]: SeqMetrics: a unified library for performance metrics calculation in Python #6450

Closed: editorialbot closed this issue 4 months ago

editorialbot commented 9 months ago

Submitting author: @AtrCheema (Ather Abbas)
Repository: https://github.com/AtrCheema/SeqMetrics
Branch with paper.md (empty if default branch): master
Version: v2.0.0
Editor: @mstimberg
Reviewers: @FATelarico, @y1my1, @SkafteNicki
Archive: 10.5281/zenodo.12958902

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16"><img src="https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16/status.svg)](https://joss.theoj.org/papers/60529204a3b812ef772e3aa8bceccd16)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@FATelarico & @y1my1 & @SkafteNicki, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mstimberg know.

✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨

Checklists

πŸ“ Checklist for @FATelarico

πŸ“ Checklist for @SkafteNicki

πŸ“ Checklist for @y1my1

editorialbot commented 9 months ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 9 months ago

Software report:

github.com/AlDanial/cloc v 1.90  T=0.06 s (515.4 files/s, 183199.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          12           1577           3313           3905
Markdown                         3            168              0            825
YAML                             4             14             13             86
reStructuredText                 7             52            187             55
TeX                              1              5              0             54
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            29           1828           3521           4960
-------------------------------------------------------------------------------

Commit count by author:

    60  AtrCheema
     7  Sara-Iftikhar
     6  Ather Abbas
     4  FazilaRubab
     1  The Codacy Badger
editorialbot commented 9 months ago

Paper file info:

πŸ“„ Wordcount for paper.md is 1026

βœ… The paper includes a Statement of need section

editorialbot commented 9 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1029/2007JD008972 is OK
- 10.48550/arXiv.1809.03006 is OK

MISSING DOIs

- 10.1163/2214-8647_dnp_e612900 may be a valid DOI for title: Keras
- No DOI given, and none found for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- 10.21105/joss.041012 is INVALID
editorialbot commented 9 months ago

License info:

🟑 License found: GNU General Public License v3.0 (Check here for OSI approval)

editorialbot commented 9 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

mstimberg commented 9 months ago

πŸ‘‹πŸΌ @AtrCheema, @FATelarico, @y1my1, @SkafteNicki this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

There are additional guidelines in the message at the start of this issue.

Please feel free to ping me (@mstimberg) if you have any questions/concerns.

SkafteNicki commented 9 months ago

Review checklist for @SkafteNicki

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

SkafteNicki commented 9 months ago

Going to preface my review by saying that I am the maintainer of Torchmetrics, which is referenced in this paper. The TM team welcomes more libraries within the field of evaluating machine learning models, as we consider this paramount for the field of machine learning to move forward. Also, we do not see SeqMetrics as a library in direct competition, since the difference in computational backend (pytorch for TM vs numpy for SeqMetrics) makes each package suitable for different researchers.

Overall I am satisfied with the paper as it is now. SeqMetrics is a nice software package that can be used to calculate a large range of metrics on 1D data. It is therefore narrow in scope, but that also means it can be great at what it does (it definitely seems faster than torchmetrics for calculating a lot of metrics in one go). It has a simple and consistent interface and is easy to use. The paper has a clear problem statement, relevant references and an explanation of the API.

However, my main concern is the robustness of the package, which the authors claim repeatedly throughout the paper. I have laid out my full review, with proposed changes, in this issue: https://github.com/AtrCheema/SeqMetrics/issues/3. At the moment there are a few blocking points that prevent me from recommending this paper for acceptance at JOSS.

FATelarico commented 9 months ago

Review checklist for @FATelarico

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

FATelarico commented 9 months ago

I concur with @SkafteNicki that the submission is almost good to go. However, I have to subscribe to some of the concerns he raised in his review (and the associated issue). Moreover, I have the following short comments to make.

Functionality

Installation

As I have encountered significant issues with this sort of flaw in the past, I invite the authors to check the entire software on the newest stable release of Python 3 and to update all the main packages (especially numpy, of which most people probably have a much more recent version).

Personally, I had no problems running the program on Python (downgraded to 3.7) under Ubuntu 22. But it would not run on Windows 10 (64 bit) with Python 3.12.2 without downgrading. However, compatibility should be verified after the program is fully updated, and only then should the relevant information be appended to the README.md file.

Functionality

I am not completely sure this is the right heading under which to put this comment, but it relates to the 'claims' the paper makes. In fact, I did not see satisfactory indications of the tests' robustness; @SkafteNicki wrote about this extensively and better than I could.

Documentation

Community guidelines

There are ready-made templates for adding community guidelines. Consider just copy-pasting one and adapting the content to your needs, for instance https://bttger.github.io/contributing-gen-web/, which is based on contributing-gen.

Consider adding an "Installation for contributors" heading for quick reference, if in agreement with your intended policy.

Software paper

State of the field

Content referable under this point is contained in rows 24-31 of the Statement of need section. I would like the authors to consider shortening these passages and adding a separate heading explicitly dedicated to comparing their software to Keras, scikit-learn, Torchmetrics, forecasting_metrics, hydroeval, and others. They do not necessarily need to consider all of them, but at least the most widely used.

In particular, the paper would benefit from a clear description of (some of) the use cases in which SeqMetrics is technically preferable to existing alternatives, as opposed to applications in which its main added value is the GUI. For instance, the emphasis here is clearly on tabular and time-series, one-dimensional data, but when reading the paper one may at times forget it.

Quality of writing

The language and style satisfy the standards of academic writing. However, I will hold off on checking this box until the paper is complete.

References

Five references seem too few for a paper that should help SeqMetrics stand out in a rather crowded field. Even if the above-mentioned suggestions to include additional sections are rejected, this issue ought to be settled through a more intense dialogue with existing tools.

Postface to any review

The present comments are intended as invitations to make certain edits; motivated rejections can lead to constructive discussion in some cases.

y1my1 commented 8 months ago

Review checklist for @y1my1

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

mstimberg commented 8 months ago

:wave: hi everyone.

Thanks a lot @SkafteNicki and @FATelarico for your reviews (special shoutout to @SkafteNicki for the fastest review I've ever received :blush: )!

@AtrCheema: could you let us know whether you are already working on incorporating the feedback, or are you still waiting for comments from the third reviewer?

@y1my1: could you please give us a rough timeline when you can provide your review?

Thanks again for your help with the review process.

AtrCheema commented 8 months ago

Hi @mstimberg , we are already working on the comments of @SkafteNicki and @FATelarico . Thanks to both of them for their valuable feedback.

y1my1 commented 8 months ago

Overall, this is a good package that may meet some needs of the scientific community. However, I echo most of the comments raised by @SkafteNicki and @FATelarico, especially about the documentation and the writing of the paper. @SkafteNicki and @FATelarico have already made great suggestions. These are just some minor issues that may help improve the package.

Documentation

It's great that the authors provide documentation through readthedocs. The authors put a lot of effort into providing important information about the computation of metrics there, like the formulas of some metrics. However, for some of the metrics, the authors just provide a reference; it would be great if the authors could at least provide a formula that helps users understand what happens under the hood of the computation. It would also be beneficial if the authors could write a concise introduction there.

Software paper

The writing of the paper follows academic standards and is mostly understandable. However, there is room for improvement to make it easier to read. For example, this sentence is not very well written:

Torchmetrics library, (Detlefsen et al., 2022) although contains 100+ metrics, however, it provides only 48 which are intended for 1-dimensional numerical data.

mstimberg commented 7 months ago

:wave: @AtrCheema could you please give us an update where you are with the changes to address the reviewer comments?

AtrCheema commented 7 months ago

Hi @mstimberg Thanks for the follow-up. We are modifying the code. Some changes have already been pushed, while others will be pushed soon (in a couple of days, hopefully). Could you please tell us whether there is a deadline for the revision?

mstimberg commented 7 months ago

Hi @AtrCheema, thanks for the update. There is no strict deadline for the revision, but we prefer not to drag it out for too long, since it will be more work for the reviewers to remember what everything was about. If you could provide your updates/replies to the reviewers by the end of next week, that would be great. Please let me know if you need more time than that. Thanks!

mstimberg commented 7 months ago

:wave: @AtrCheema, could you give us an update with regard to the changes addressing reviewer comments?

AtrCheema commented 7 months ago

@mstimberg Sorry for the delayed response. I was ill and bedridden for more than a week. I am back at work now, and our response will be complete by the end of this week (Friday). Again, apologies for this unexpected delay.

mstimberg commented 7 months ago

Many thanks for getting back to us, @AtrCheema, and sorry to hear that you were ill. No worries about the delay, of course; looking forward to your update.

mstimberg commented 7 months ago

:wave: @AtrCheema I hope you are doing well. Could you please let us know where you are with respect to the updates?

AtrCheema commented 6 months ago

@editorialbot generate pdf

@mstimberg We are almost done with our response to the review. I apologize that it took quite some time, which was not anticipated at the start.

I would first like to respond to the comments made by @SkafteNicki, which are the most comprehensive ones; moreover, they are endorsed by, and overlap with, those of the other two reviewers.

Moreover, I would like to thank all three reviewers for taking the time to review the repository in detail. By addressing the comments, we have not only improved the overall quality of the package but also removed some bugs.

Comments by @SkafteNicki

You mention that easy_mpl is needed for plotting the metrics. However, it is not mentioned in the documentation or README that you can actually install this by writing pip install SeqMetrics[all]. Please add these additional install instructions.

Response: We have updated the README and documentation to add additional install instructions.
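For reference, the install commands should look roughly like the following sketch (the [all] extra name comes from the reviewer's comment above; the exact extras defined in setup.py are the authors' to confirm):

pip install SeqMetrics          # core library
pip install SeqMetrics[all]     # also pulls in easy_mpl for plotting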

Documentation for the class-based API is essentially missing: https://seqmetrics.readthedocs.io/en/latest/rgr.html#SeqMetrics.RegressionMetrics. I know this is because it is just calling the functional API, but it would then be great if there was a reference per metric to its functional counterpart.

Response: Initially we thought that adding the same documentation for the methods of the class-based API would involve significant duplication. However, we have now added the documentation for the class-based API as well, for both regression and classification.

Additionally, in the README.md of the project there are multiple related projects mentioned at the bottom that are not included in the paper (forecasting_metrics, hydroeval etc.). I would like to ask the authors why these are not referenced in the paper.

Response: The updated paper now contains references to most of the packages listed in README.md.

On the other hand, not all the frameworks mentioned in the paper are listed in the related section of the README.md. Again, minor stuff.

Response: The updated README.md now contains all the frameworks which are mentioned in the paper.

The app should be better documented, especially with instructions for typing/pasting values. From the code I can see that a comma-separated list is expected, but this is not clear from the instructions. A simple numpy array does not work, for example. Including fig2 and fig3 in the documentation and README file would definitely help.

Response: The app can be used by typing/pasting data which is either comma-separated or space-separated. We have updated the instructions in the app. Furthermore, the two figures have also been added to the README file and to the documentation.
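For illustration, with hypothetical values, either of these pasted forms is accepted:

1.0, 2.5, 3.7
1.0 2.5 3.7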

Since it is a simple streamlit app that users can deploy themselves without too much hassle, I really think the authors should consider adding instructions on how the app can be deployed locally (let's say that I do not trust streamlit servers with my data but still want the nice interface). This probably requires a bit of refactoring of the repository to include the app in the src directory and the addition of a pip install SeqMetrics[app] option for installing. Additionally, the paper should be updated to reflect that the web interface can be self-hosted.

Response: Launching the streamlit app locally requires installing the requirements, including the streamlit package, and then launching the streamlit app. We have explicitly added these steps to both the README and the documentation.
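A minimal local-deployment sketch of those steps (the app filename here is a placeholder; the actual entry point is given in the README):

pip install streamlit
streamlit run app.py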

Here two different metrics are tested for a single given input. However, it is not at all clear why these tests actually check that the implementation is correct.

Response: All the unit tests are now run for multiple inputs, i.e. small values, large values (>1e7), values with NaNs, and negative values. For all of these cases, the results are compared against a standard/reference. These standards/references are elaborated further in the response to the next comment.

(Important) Implement unit testing against other frameworks whenever possible. The authors are already doing this for certain classification metrics (SeqMetrics/tests/test_cls.py, lines 11 to 12 in f1b8858:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score 
from sklearn.metrics import confusion_matrix, balanced_accuracy_score 

), however it should be done for most metrics. Metrics where there is no reference implementation to compare against should be tested for multiple values, and preferably it should be made clearer where the reference values come from.

Response: We have modified the unit tests to include references. Overall, the metrics and their references fall into six groups:

1. Metrics with an implementation in a standard library. This includes metrics from sklearn and Torchmetrics; all SeqMetrics metrics that are also available in these libraries are compared against the corresponding functions of these libraries.

2. Metrics with an implementation in other libraries such as HydroErr, NeuralHydrology, skill metrics etc. Listing these libraries in the test requirements would inflate the number of dependencies and thereby make future development of SeqMetrics difficult, especially where those libraries are no longer maintained. We have therefore installed these libraries in a Colab notebook, calculated the reference values there, and compare SeqMetrics against these reference values in the tests. The Colab notebook is linked in the tests for reference.

3. Metrics whose calculation is trivial, such as std_ratio or gmean_diff. For these metrics, no reference is provided, but their documentation has been improved.

4. Metrics with no reference implementation in a Python library/package, but whose code is available in the form of Stack Overflow answers or GitHub gists. For these metrics, we copied the code (with reference) into the Colab notebook and calculated the reference values; the tests are then run against these reference values.

5. Metrics for which no reference implementation is available in Python at all. For these metrics we provide references for the formulas or a reference for an implementation in another language.

6. Two metrics for which we could not find any reference. We have removed these metrics for the time being.
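For illustration, a minimal sketch of the first group's strategy, comparing SeqMetrics against sklearn across the input regimes mentioned above (this assumes the RegressionMetrics class exposes an mse method; exact names should be checked against the SeqMetrics docs):

import numpy as np
from sklearn.metrics import mean_squared_error
from SeqMetrics import RegressionMetrics

rng = np.random.default_rng(0)
# cover the small, large (>1e7) and negative value regimes
for scale in (1.0, 1e8, -1.0):
    true = rng.random(50) * scale
    pred = true + rng.normal(scale=0.01 * abs(scale), size=true.shape)
    ours = RegressionMetrics(true, pred).mse()
    ref = mean_squared_error(true, pred)
    assert np.isclose(ours, ref), f"scale={scale}: {ours} vs {ref}"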

(Important) Currently only Python 3.7 is tested, which is officially end-of-life. Either run CI that checks multiple versions of Python, or at the very least a newer, supported version.

Response: We are now testing against 3.7 and 3.12, which are the lowest and highest Python versions supported by this library.

Tests only run on Ubuntu right now and on no other major OS. I recommend that the authors either add tests for other OSes or explicitly state which OSes are supported in their README.md.

Response: We are now testing the library on Windows, Ubuntu and macOS, with Python 3.7 and Python 3.12.
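A GitHub Actions matrix consistent with this response might look as follows (a sketch only; the actual workflow files in the repository may differ):

strategy:
  matrix:
    os: [ubuntu-latest, windows-latest, macos-latest]
    python-version: ["3.7", "3.12"]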

(Important) Because the CI only uses Python 3.7, the actual numpy version being tested is numpy 1.21.6, which is around 2 years old at this point. I see this as an overall consequence of the authors not having included upper/lower bounds on the supported numpy/scipy versions in the requirements file.

Response: We are now testing the library with numpy 1.17 and 1.26.4, which are the lowest and highest numpy versions supported by the library. The setup and requirements files have also been updated to reflect this change.
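A requirements pin consistent with the stated bounds might read as follows (hypothetical; consult the actual requirements files):

numpy>=1.17,<=1.26.4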

(Important) Missing community guidelines: are contributions welcome? What should a contribution look like, etc.?

Response: We have added a CONTRIBUTING.rst file highlighting the protocol for potential contributors.

(nitpicking) In fig1 of the paper, it is redundant to say "class-based api" both at the bottom and at the top of the figure (the same goes for the functional API). Mentioning this once should be enough.

Response: We have modified figure 1 by removing the "class-based api" label at the bottom.

(nitpicking) The overall (pixel-wise) resolution of the figures in the paper is on the low side and could be increased to help the readability of the text in the figures.

Response: We have added the figures with a higher resolution (900 dpi).

editorialbot commented 6 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

mstimberg commented 6 months ago

@AtrCheema Many thanks for the update. If I understand you correctly, you will reply to the other reviewers with a separate message later?

@SkafteNicki Do you feel that the points you raised are sufficiently addressed by the changes/replies ?

AtrCheema commented 6 months ago

@mstimberg yes, correct. Indeed, most of the comments by the other two reviewers have also been addressed while addressing @SkafteNicki's comments. However, two comments (adding a reference equation for each metric, and a more detailed discussion of the current literature/libraries on performance metrics) remain. We are currently improving the docs and I will respond to both reviewers as soon as we are done.

FATelarico commented 6 months ago

@mstimberg yes, correct. Indeed, most of the comments by the other two reviewers have also been addressed while addressing @SkafteNicki's comments. However, two comments (adding a reference equation for each metric, and a more detailed discussion of the current literature/libraries on performance metrics) remain. We are currently improving the docs and I will respond to both reviewers as soon as we are done.

Thank you for the update! I understand that the remaining aspects, mainly those pertaining to my review specifically (above), will be dealt with soon. However, I wanted to take the time to congratulate the authors on the great job in updating the dependencies and improving the documentation. I managed to install the library on both Ubuntu 22 and Windows 11 without downgrading Python or any other module. All the main functionalities are as expected and documented.

I am looking forward to a new version of the paper that engages more with the literature (incl. having more references) and is finally polished for publication.

SkafteNicki commented 6 months ago

Hi @mstimberg and @AtrCheema, sorry for the late reply from my side, I have been sick for the last week. I am overall really happy with the changes the authors have made to the code. I quickly looked through it and it is of a much higher quality now. I also looked through the paper and would say that it is also good now. I will go ahead and update my checklist.

mstimberg commented 6 months ago

:wave: @AtrCheema Could you give us an estimate when you will be able to address the remaining open points raised by the reviewers? Thanks!

danielskatz commented 5 months ago

πŸ‘‹ @AtrCheema - I just wanted to check on if you are still able to work on this? (As track editor, I try to check on reviews in the CSISM track where no progress has been recorded in a 2-week period.)

mstimberg commented 5 months ago

:wave: @AtrCheema – could you please get back to us when/whether you will be able to work on the outstanding points? I'll try to contact you over mail as well, in case you are not seeing GitHub notifications.

AtrCheema commented 5 months ago

@editorialbot generate pdf

Thank you everyone, especially the editor and handling editor, for staying with us and for your patience.

Comments by @FATelarico

As I have encountered significant issues with this sort of flaw in the past, I invite the authors to check the entire software on the newest stable release of Python 3 and to update all the main packages (especially numpy, of which most people probably have a much more recent version).

Response: We have updated the GitHub workflows to test the package against Python 3.7 and 3.12. We have also made sure that all the unit tests pass for numpy 1.17 to 1.26.4. The requirements files and the setup.py file also reflect these changes.

Personally, I had no problems running the program on Python (downgraded to 3.7) under Ubuntu 22. But it would not run on Windows 10 (64 bit) with Python 3.12.2 without downgrading. However, compatibility should be verified after the program is fully updated, and only then should the relevant information be appended to the README.md file.

Response: We are now testing the code on Ubuntu, Windows and macOS using GitHub workflows.

I am not completely sure this is the right heading under which to put this comment, but it relates to the 'claims' the paper makes. In fact, I did not see satisfactory indications of the tests' robustness; @SkafteNicki wrote about this extensively and better than I could.

Response: As mentioned in the response to @SkafteNicki's comments, we have completely revised the tests.

There are ready-made templates for adding community guidelines. Consider just copy-pasting one and adapting the content to your needs, for instance https://bttger.github.io/contributing-gen-web/, which is based on contributing-gen.

Response: We are open to contributions and have added a CONTRIBUTING.rst file.

Content referable under this point is contained in rows 24-31 of the Statement of need section. I would like the authors to consider shortening these passages and adding a separate heading explicitly dedicated to comparing their software to Keras, scikit-learn, Torchmetrics, forecasting_metrics, hydroeval, and others. They do not necessarily need to consider all of them, but at least the most widely used. In particular, the paper would benefit from a clear description of (some of) the use cases in which SeqMetrics is technically preferable to existing alternatives, as opposed to applications in which its main added value is the GUI. For instance, the emphasis here is clearly on tabular and time-series, one-dimensional data, but when reading the paper one may at times forget it.

Response: Thanks for the comment. We have improved the 'Statement of need' section with a more detailed comparison with other libraries. This includes limitations due to their dependencies and complex usage. However, we have avoided mentioning some limitations of these libraries which we think may be removed in the future. For example, the following code will fail because the torchmetrics library does not accept a Python list, numpy array or pandas DataFrame as input.

from torchmetrics.functional.regression import r2_score
import numpy as np
import pandas as pd

# All three calls below fail: torchmetrics expects torch.Tensor inputs,
# so numpy arrays, plain Python lists and pandas DataFrames are rejected.
r2_score(np.array([1, 2, 3]), np.array([1.1, 2.2, 3.3]))
r2_score([1, 2, 3], [1.1, 2.2, 3.3])
r2_score(pd.DataFrame(np.array([1, 2, 3])), pd.DataFrame(np.array([1.1, 2.2, 3.3])))
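For contrast, a minimal sketch of the corresponding SeqMetrics call, which accepts such inputs directly (this assumes the class-based API with an r2_score method; exact method names should be checked against the SeqMetrics docs):

from SeqMetrics import RegressionMetrics
import numpy as np

# numpy arrays (and, per the paper, lists and DataFrames) are accepted directly
metrics = RegressionMetrics(np.array([1, 2, 3]), np.array([1.1, 2.2, 3.3]))
print(metrics.r2_score())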

Five references seem too few for a paper that should help SeqMetrics stand out in a rather crowded field. Even if the above-mentioned suggestions to include additional sections are rejected, this issue ought to be settled through a more intense dialogue with existing tools.

Response: We have added more references; the total number of references is now 10.

Comments by @y1my1

It's great that the authors provide documentation through readthedocs. The authors put a lot of effort into providing important information about the computation of metrics there, like the formulas of some metrics. However, for some of the metrics, the authors just provide a reference; it would be great if the authors could at least provide a formula that helps users understand what happens under the hood of the computation. It would also be beneficial if the authors could write a concise introduction there.

Response: We have improved the documentation and have provided a brief explanation, the equation, and a reference for each performance metric in the documentation.

editorialbot commented 5 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

FATelarico commented 5 months ago

@AtrCheema Thank you very much for the revision. I am almost completely satisfied; this is really a great job!

I would just suggest the following (feel free to argue against anything if I am mistaken):

AtrCheema commented 5 months ago

@editorialbot generate pdf

editorialbot commented 5 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

AtrCheema commented 5 months ago

Dear @FATelarico Thanks for highlighting the problems. I have incorporated the changes you mentioned.

FATelarico commented 4 months ago

Dear @FATelarico Thanks for highlighting the problems. I have incorporated the changes you mentioned.

Excellent job @AtrCheema! I updated my checklist. You're good to go as far as I'm concerned! 🦾

mstimberg commented 4 months ago

:wave: @y1my1 I know that it has been a while since you worked on this review, but could you confirm that the changes the authors made since then address all your concerns (and tick off the remaining open boxes in your checklist if it is the case)? Thanks!

mstimberg commented 4 months ago

:wave: @y1my1, please get back to us with respect to the question above :point_up:

y1my1 commented 4 months ago

Hi @mstimberg, I looked through the paper and the package, and they look good to me now (I have updated my checklist). Great job @AtrCheema.

mstimberg commented 4 months ago

Many thanks @y1my1! All reviewers recommend acceptance, so I will now start with the final editorial checks before handing things over to the track editor. @AtrCheema, please take care of the "author tasks" in the checklist below :point_down:

mstimberg commented 4 months ago

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

Editor Tasks Prior to Acceptance

mstimberg commented 4 months ago

@editorialbot check references

editorialbot commented 4 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1029/2007JD008972 is OK
- 10.28945/4184 is OK
- 10.21105/joss.04050 is OK
- 10.5281/zenodo.2591217 is OK
- 10.3390/hydrology5040066 is OK
- 10.1145/3377811.3380426 is OK
- 10.1145/3460319.3464797 is OK
- 10.48550/arXiv.1912.01703 is OK

MISSING DOIs

- 10.1163/2214-8647_dnp_e612900 may be a valid DOI for title: Keras
- No DOI given, and none found for title: Scikit-learn: Machine learning in Python

INVALID DOIs

- 10.21105/joss.041012 is INVALID
mstimberg commented 4 months ago

Hi @AtrCheema I have created a PR with some minor editing changes. Please have a look and merge if you agree: AtrCheema/SeqMetrics#4. Please also take care of the tasks (e.g., archive, version) mentioned in the checklist above.

AtrCheema commented 4 months ago

Hi @mstimberg Thanks for the PR and suggestions. As per your suggestions, we have completed the following tasks:

- made the suggested edits in the paper
- added ORCID info of the first author
- released the package on PyPI
- archived the release on Zenodo

Please let me know if we have to do anything else.

mstimberg commented 4 months ago

Many thanks for the update, @AtrCheema.

Could you please fix the metadata on your Zenodo archive manually (no need for a new release/version), in order to:

Thanks!

mstimberg commented 4 months ago

@editorialbot set v2.0.0 as version

editorialbot commented 4 months ago

Done! version is now v2.0.0