Closed by editorialbot 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=2.49 s (648.4 files/s, 148650.4 lines/s)
-------------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------------
C/C++ Header                  1194          29525          70231         132436
C++                            266          17996          24091          65742
Markdown                        48           3033              1          10745
CMake                           60            734           1257           7159
Python                           7            350            457           1459
Go                               5            211            141           1114
YAML                             8             82            103            531
Julia                            1             88             55            335
Bourne Shell                     5             58             67            293
Cython                          7             96            200            284
R                                4             80             66            228
XML                              3              0              0            211
TeX                              1             23              0            189
CSS                              1             19             11            121
JavaScript                       1              8             20             82
MSBuild script                   1              0              0             52
WiX source                       1              4              2             33
WiX string localization          1              1              0              6
-------------------------------------------------------------------------------------
SUM:                          1614          52308          96702         221020
-------------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1283
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Hi, @osorensen,
Two authors listed in the paper (#5 and #9) are affiliated with the same institution I'm affiliated with. We are from different units and haven't collaborated in the past, and we do not know each other. I request that this conflict be waived. Please let me know if there is a different process for this. Thanks.
Thanks for letting me know @sandeep-ps. I think this should be waived, but I'll ping the editor-in-chief @openjournals/dsais-eics to hear what they think.
@sandeep-ps Thanks for letting us know. I'm a bit reticent about waiving the COI because there are implications of reviewing work by others at your institution if you are both still employed there. I'd prefer that we seek a different reviewer and hope we can assign you to another submission where we need reviewers. Ok? Thank you for understanding.
@osorensen Please assign to a different reviewer and keep @sandeep-ps in mind for other assignments!
Okay. Sounds good. Thanks.
FYI, my request was based on the COI guidelines, specifically, this statement.
Hi @sandeep-ps - thanks for bringing this up. You are correct that this is a conflict, and that it could be waived if needed, but it's up to the track editor to decide.
Ok, @sandeep-ps, since you can attest that you are in a large organization and unlikely to be in contact with these individuals professionally, I am willing to grant the waiver. Given that we also have some independent feedback from the other reviewer, I think this should be ok. Let's proceed with you as a reviewer!
Thanks @gkthiruvathukal. Then please continue with the review @sandeep-ps, and feel free to reach out to me if you have any questions about the review process.
Thanks, @gkthiruvathukal, @danielskatz, and @osorensen. I will continue with the review then.
I am not sure of the exact process, but when should I start responding to the review comments? Should I wait for @sandeep-ps's review to be complete? It would be straightforward to add some comparisons with other machine learning libraries, so I'm happy to do that.
Thanks for asking @rcurtin. You're welcome and encouraged to start responding immediately.
@rcurtin, to give a progress update, since I had not used this library before, I was going through the installation process and checking out some of the functionality. I ran into some issues during that step last week, and I will be continuing my review this week.
I have finished my initial review. Overall, I think mlpack 4 is a well-written package that shows great potential in the machine learning field, given its good documentation, user-friendliness, and the breadth of algorithms it implements. I would therefore recommend it for publication after minor revision:
1. Even though I understand that mlpack's website includes some comparisons between mlpack and other similar packages, these are not mentioned in the manuscript itself. I would suggest that the authors add such a description, based on the "State of the field" and "Performance" requirements in the checklist.
@zhangjy-ge We do have some benchmarks on the homepage of the website. We can reference them more prominently in the paper, or do you mean that some benchmarks should be included in the paper itself? (Personally I think the paper might get too long with a benchmarking table and sufficient description.)
@sandeep-ps happy to help out, just send a ping if you are having any issues. (You could even open an issue on mlpack's GitHub if you want, but either way is fine.) Often it is easier to install mlpack via the system package manager, and our recent refactorings to make mlpack header-only have made the process much easier than it used to be.
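For anyone following along, here is a minimal sketch of one install path. It assumes a Debian/Ubuntu system; the package names and compiler flags are illustrative and may differ from mlpack's official installation instructions, which should be treated as authoritative.

```shell
# Install mlpack's headers and its main dependency (Armadillo)
# via the system package manager:
sudo apt-get install libmlpack-dev libarmadillo-dev

# Since mlpack 4 is header-only, a program that includes mlpack
# needs only to be compiled against Armadillo (OpenMP is optional
# but enables parallelism):
g++ -O2 -std=c++14 my_program.cpp -o my_program -larmadillo -fopenmp
```

The key point of the header-only refactoring is that no `libmlpack` shared library needs to be linked; only the dependencies of the headers actually used are required at link time.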
@rcurtin After double-checking, it seems a performance comparison is optional, but we should still explicitly state the differences between mlpack and other packages (per the checklist item "Do the authors describe how this software compares to other commonly-used packages?"). So a short description of mlpack's differences compared to other packages would be helpful.
I have also completed my initial review. In general, mlpack 4.0.0 is a well-written machine-learning library with good documentation, examples, community guidelines, etc.
Apart from @zhangjy-ge's comment above, which I agree with, I have opened a few GitHub issues in the mlpack repository requesting further clarification and some missing content. I'm looking forward to more discussion on the individual issues. Thanks.
Thanks @sandeep-ps. @rcurtin, feel free to ping me if you have any questions, and please report back in this thread once the issues opened by @sandeep-ps in the source repo have been addressed.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks for all of the comments @sandeep-ps and @zhangjy-ge! We have addressed each of them by resolving the various issues in the mlpack repository, and we added a paragraph to the Statement of Need section comparing mlpack with other C++ machine learning libraries. :+1:
Thanks for making the changes, @rcurtin.
The changes look good to me. I have completed my review and updated the checklist. Thanks for the opportunity to review this software paper.
Thanks for the update. I also agree it is now good for publication.
@editorialbot check references
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- 10.21105/joss.00726 may be a valid DOI for title: mlpack 3: a fast, flexible machine learning library
- 10.1126/science.aaa8415 may be a valid DOI for title: Machine learning: Trends, perspectives, and prospects
- 10.1145/3219819.3220124 may be a valid DOI for title: Training big random forests with little resources
- 10.1145/3533378 may be a valid DOI for title: Challenges in deploying machine learning: a survey of case studies
- 10.21203/rs.3.rs-133138/v1 may be a valid DOI for title: Technology readiness levels for machine learning systems
- 10.3390/make3010004 may be a valid DOI for title: AI System Engineering—Key Challenges and Lessons Learned
- 10.1007/978-3-319-46759-7_17 may be a valid DOI for title: Fast approximate furthest neighbors with data-dependent candidate selection
- 10.1002/sam.11218 may be a valid DOI for title: Dual-tree fast exact max-kernel search
- 10.21105/joss.00026 may be a valid DOI for title: Armadillo: a template-based C++ library for linear algebra
- 10.1137/141000671 may be a valid DOI for title: Julia: A fresh approach to numerical computing
- 10.1109/tbdata.2019.2921572 may be a valid DOI for title: Billion-scale similarity search with GPUs
- 10.5220/0001787803310340 may be a valid DOI for title: Fast approximate nearest neighbors with automatic algorithm configuration
INVALID DOIs
- None
@rcurtin, I have now read through the manuscript, and have only one editorial comment.
When citing multiple references, please use the notation [@author1:2001; @author2:2001], as in the example paper.
For example, on line 26 we see the following
This happens because you write [@jordan2015machine] [@carleo2019machine] in the paper. That is, please change it to [@jordan2015machine; @carleo2019machine] in this case, and similarly in all other places in the paper where multiple references appear next to each other.
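For concreteness, the requested change can be sketched as a minimal before/after fragment, using the same citation keys mentioned above (Pandoc-style citation syntax, as used by JOSS papers):

```markdown
<!-- Before: renders as two separate parenthetical citations -->
... has seen wide adoption [@jordan2015machine] [@carleo2019machine].

<!-- After: the semicolon groups both keys into one citation -->
... has seen wide adoption [@jordan2015machine; @carleo2019machine].
```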
When done, I'm ready to recommend acceptance of the paper.
Ah! The semicolon is the trick. I was trying to figure that out earlier and couldn't. Evidently I did not read the example paper closely enough, since it's documented quite clearly.
Anyway, I pushed fixes for all instances where this happened. I'll ask editorialbot to build again. :+1:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks @rcurtin.
At this point could you:
I can then move forward with recommending acceptance of the submission.
4.0.1
Archive DOI: 10.5281/zenodo.7587252
I realized I forgot to pull one commit updating the paper. That's done now, so the version generated yesterday was incorrect. I'll have editorialbot generate it again.
Thanks so much! Let me know if anything else is needed.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot set 10.5281/zenodo.7587252 as archive
Done! Archive is now 10.5281/zenodo.7587252
@editorialbot set 4.0.1 as release
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot set 4.0.1 as version
Done! version is now 4.0.1
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Submitting author: @rcurtin (Ryan Curtin). Repository: https://github.com/mlpack/mlpack. Branch with paper.md (empty if default branch): mlpack4_joss. Version: 4.0.1. Editor: @osorensen. Reviewers: @sandeep-ps, @zhangjy-ge. Archive: 10.5281/zenodo.7587252
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@sandeep-ps & @zhangjy-ge, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @osorensen know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @sandeep-ps
📝 Checklist for @zhangjy-ge