Closed whedon closed 5 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @stefanv it looks like you're currently assigned as the reviewer for this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this, do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
@stefanv — Can I ask you to copy over your review comments here? https://github.com/openjournals/joss-reviews/issues/1155#issuecomment-463760234
Why, you may ask? We're looking into ways of exporting the review issues into an archival form, to preserve them and link them to the published articles. (But not the pre-review issue, whose purpose is to find editors and reviewers, or occasionally to make a desk rejection if the submission is out of scope.)
Also, @stefanv, you have a reviewer checklist here, and you have automatically been added as a collaborator on the repo, so you can check items off there.
On this question, @stefanv
What is the JOSS policy on content recommendations? E.g., I have some recommended language edits, as well as an experiment suggestion (since there is a benchmark included in the paper).
You should feel free to make content recommendations, and also language edits (I do these all the time!).
@labarba Catching up. I can try to do the review by the end of next week if it is still relevant.
Thank you, @rougier! I can add you as reviewer here.
Perfect. I'll try to do it this Sunday (but no promise). By end of next week for sure.
Your checklist is now added. If you have reviewed before for JOSS, you should already have repo permissions to edit the checklist. Let me know if you run into any trouble.
Some language recommendations. I tried not to touch your sentences other than to correct grammar.
Benchmark recommendation: include the standard deviation of the cross-validation comparisons with scikit-learn — these are generally much more convincing than single runs.
Yes, but I wanted to verify whether this license also applies to the included optimizer.
Potential license breach through publication of datasets without attribution:
https://archive.ics.uci.edu/ml/citation_policy.html
No. The installation process is non-standard for a Python library. On my (Debian) system, the shell script (which I was hesitant to run, as I imagine many users will be) did not correctly detect Python 3 and then asked for sudo access (at which point I aborted).
Please provide clear steps for installing manually from source.
I could not locate the API documentation.
None that I could find.
Editor / @labarba: is this a requirement of the .bib file? Not entirely sure what is meant by "archival reference". No DOIs are provided, as far as I can tell.
I've started the review (and posted an issue on the repo because I cannot build it). In the meantime, and to parallelize things, here are a few comments and questions:
@labarba It would be helpful to have a link to the latest build of the paper at the top (e.g., after the repo URL).
@stefanv @rougier, thank you both for agreeing to review this submission! Where relevant, please direct your JOSS-related questions to me, as the editor for this submission. @labarba helped out here in getting things started, but she's one of the Associate Editors-in-Chief and edits other submissions, so we don't want her to have to answer all the questions; that's part of my role here.
@labarba, thanks so much for getting things started and sorting out the review-related comments that were in the pre-review. I'll try and take it from here :).
@stefanv, DOIs are required in the bib (when possible), so, yes, those need to be added.
@rougier, that's a good suggestion, thanks! We can see about updating the initial review comment to include that.
Quick update on the review: we're trying to solve an installation problem.
Hi
To help the reviewers (and users) install and build the program manually, a step-by-step guide is provided in the project's wiki. Moreover, DOIs have been added to all the references in the bib file that have one.
We will also address other comments of the reviewers as soon as possible.
Regards, Mir
@mir-am Thank you for these instructions; that's helpful already. Do you have plans to build binary packages? If so, you may want to look at https://github.com/matthew-brett/multibuild. This would make your package pip-installable. That, and setting up a recipe on conda-forge, will give you access to the majority of Python users out there.
And I'm still fighting with the installation procedure (with no luck yet)
@rougier, thank you for the update (though unfortunate)
First, we would like to thank @stefanv for his suggestions and comments, and for reviewing this work.
Our responses to the comments are as follows:
Some language recommendations. I tried not to touch your sentences other than to correct grammar.
We revised the program's paper to correct the grammar.
Benchmark recommendation: include the standard deviation of the cross-validation comparisons with scikit-learn — these are generally much more convincing than single runs.
We will run the benchmark again and report the standard deviation of the accuracy in the paper.
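As a sketch of what reporting the requested statistic could look like, here is a minimal Python example; note that the fold accuracies below are made-up illustrative values, not results from the LightTwinSVM benchmark:

```python
# Sketch: reporting mean +/- standard deviation of cross-validation
# accuracy. The fold accuracies are illustrative placeholders only.
from statistics import mean, stdev

fold_accuracies = [0.91, 0.93, 0.90, 0.92, 0.94]  # e.g. 5-fold CV

avg = mean(fold_accuracies)
sd = stdev(fold_accuracies)  # sample standard deviation
print(f"Accuracy: {avg:.3f} +/- {sd:.3f}")
```

Reporting the pair (mean, SD) rather than a single accuracy number is what makes cross-validated comparisons with scikit-learn convincing.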
- License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
Yes, but I wanted to verify whether this license also applies to the included optimizer.
Potential license breach through publication of datasets without attribution:
The project's license is GNU GPL v3, which also applies to the included optimizer. We also acknowledged the use of UCI datasets by adding a statement to the acknowledgments section of the README file.
- Installation: Does installation proceed as outlined in the documentation?
No. The installation process is non-standard for a Python library. On my (Debian) system, the shell script (which I was hesitant to run, as I imagine many users will be) did not correctly detect Python 3 and then asked for sudo access (at which point I aborted).
Please provide clear steps for installing manually from source.
A step-by-step guide on how to install the LightTwinSVM manually is provided here.
- Functionality documentation: Is the functionality of the software documented to a satisfactory level (e.g. API method documentation)?
I could not locate the API documentation.
We used the Read the Docs service, which hosts the API documentation for LightTwinSVM; it can be read here. All of LightTwinSVM's estimators and tools are documented so that users can employ them in their own projects.
- Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
None that I could find.
We have already included a support section in the README file, which explains how to seek support or report issues with the software. A CONTRIBUTING.md file was also added to the project, which explains how to contribute to the LightTwinSVM project through code or documentation.
- References: Do all archival references that should have a DOI list one (e.g. papers, datasets, software)?
Editor / @labarba: is this a requirement of the .bib file? Not entirely sure what is meant by "archival reference". No DOIs are provided, as far as I can tell.
DOIs were added to references in the paper when possible. https://github.com/mir-am/LightTwinSVM/commit/c07554bf0f98a75f76025d6d3d5e8f3946615623
Do you have plans to build binary packages? If so, you may want to look at https://github.com/matthew-brett/multibuild. This would make your package pip-installable. That, and setting up a recipe on conda-forge, will give you access to the majority of Python users out there.
Yes. We plan to publish a pip-installable package on PyPI. Thanks for pointing us to the build tool.
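For illustration only, a minimal `setup.py` along those lines might look like the following sketch; the package name, version string, and dependency list here are assumptions, not taken from the LightTwinSVM repository:

```python
# Hypothetical minimal setup.py sketch for a pip-installable package.
# Name, version, and dependencies are placeholders, not the project's
# actual metadata.
from setuptools import setup, find_packages

setup(
    name="lighttwinsvm",                          # placeholder name
    version="0.6.0",                              # placeholder version
    packages=find_packages(),
    install_requires=["numpy", "scikit-learn"],   # assumed dependencies
    python_requires=">=3.5",
)
```

With such a file in place, `pip install .` from the source tree (and eventually `pip install lighttwinsvm` from PyPI) would replace the shell-script installer for most users; compiled extensions would additionally need wheels, which is where a tool like multibuild helps.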
Also, we would like to thank @rougier for his comments and for reviewing this work.
Our responses to the comments are as follows:
In the paper, the legends of Figures 1 & 2 could benefit from better captions. As they stand, they do not really help in finding what is to be seen.
In the SVM research field, expressions like "geometric illustration" or "geometric interpretation" are common. Such captions are used to show the main idea of a classifier in a 2-dimensional feature space. Nonetheless, we welcome suggestions for the captions.
who is the second author (since you have made all the commits, what is his role in this software?)
He is my supervisor and taught me the theory behind SVM and its extensions, such as TwinSVM. Moreover, the LightTwinSVM project was developed at his lab. Therefore, I must include his name in the paper.
for all your references, you need to add the DOI
DOIs were added to references in the paper when possible. https://github.com/mir-am/LightTwinSVM/commit/c07554bf0f98a75f76025d6d3d5e8f3946615623
The README is great, but where is the API documentation, did you upload a documentation to readthedocs ? (your doc Makefile seems to indicate that you can build the doc, it would be helpful to be able to read it online)
Yes. The LightTwinSVM's documentation was uploaded to Read the Docs. The API documentation can be read here. All the LightTwinSVM's estimators and tools are documented so that users can employ them for their projects.
Thanks. I'm still fighting with the installation on OSX, but we've made some progress.
Some update on the installation problem: it has been fixed. @cMadan can you rebuild the PDF?
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@mir-am @rougier, thanks for the updates!
Thanks. I'm almost done with the review. @mir-am There are still some missing DOIs in the references; can you check? For the captions of Figures 1 & 2, maybe you could place the figures side by side in one figure and explain the fundamental difference between SVM and TwinSVM in the caption (but if you prefer to leave them as they are, I'm fine).
Thank you for reviewing this work. I tried again to find the missing DOIs for some references. It looks like the Journal of Machine Learning Research does not assign DOIs to its papers. Your suggestion about the figures seems fine to me. I think the paper's width is not large enough to place the figures side by side, and doing so may make the comparison fruitless. However, I couldn't use the subfigures feature to place the figures below each other.
@cMadan How can I place figures below each other, like subfigures in LaTeX? I'd appreciate your help.
Ok. Then it's fine on my side. @cMadan I'm satisfied with all the changes and I recommend publication.
We will run the benchmark again and report the standard deviation of the accuracy in the paper.
Apart from this one outstanding item, I think this is good to go.
(I should note that this was a recommendation to the authors: it is not required for publication, but it would be great if they would add it.)
@mir-am, unfortunately, you can't do subfigures in the paper markdown. Instead, you should combine the figures using a graphics program and add the "(a)" panel notation, such that the figures are a single image.
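Combining the two panels can also be done programmatically rather than in a graphics program; here is a hedged sketch using Pillow, where the file names and layout are illustrative assumptions, not files from the LightTwinSVM repository:

```python
# Sketch: paste two figure images into a single side-by-side panel
# with Pillow. Paths are placeholders; panel labels ("(a)", "(b)")
# would still need to be added to the source images or drawn on top.
from PIL import Image

def combine_side_by_side(left_path, right_path, out_path):
    left = Image.open(left_path)
    right = Image.open(right_path)
    height = max(left.height, right.height)
    combined = Image.new("RGB", (left.width + right.width, height), "white")
    combined.paste(left, (0, 0))
    combined.paste(right, (left.width, 0))
    combined.save(out_path)
```

The resulting single image can then be referenced as one figure in `paper.md`, with both classifiers described in a single caption.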
Thank you for the help and the suggestion. The two figures are on the same page, so readers can see the difference between the two classifiers. We wanted to use subfigures in order to combine the two figures into one; anyhow, in my opinion, the current placement of the two figures on the page is fine.
This issue was also fixed without breaking the installation scripts for both CI services and users. (Thanks to @stefanv for reminding us of the issue.) https://github.com/mir-am/LightTwinSVM/issues/4
@mir-am, do you plan to report the SD of accuracy in the paper, as @stefanv recommended?
@stefanv, do you have any other remaining concerns about the performance of the package? (It looks like this is the only thing in the reviewer checklist you haven't checked off yet.)
@mir-am, do you plan to report the SD of accuracy in the paper, as @stefanv recommended?
Yes. As of writing this, I am conducting experiments to report the SD of accuracies. I will report the results in the paper as soon as all the experiments are finished. I will have it done within a few days.
@cMadan @stefanv We carefully revised the paper to include the new experiments (as requested by the reviewer).
Modifications to the paper are as follows:
We welcome new suggestions and ideas to improve the LightTwinSVM and its paper.
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
I'm happy with the paper and the new updates; thanks!
Thank you, @stefanv!
@mir-am, I will now do a final check of everything.
@whedon check references
Attempting to check references...
OK DOIs
- 10.1017/CBO9781107298019 is OK
- 10.1109/TPAMI.2007.1068 is OK
- 10.1145/1961189.1961199 is OK
- 10.1016/j.knosys.2014.08.005 is OK
- 10.1007/s10489-018-1225-z is OK
- 10.1109/MCSE.2011.37 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@mir-am, things look good and I'm almost ready to accept. Can you mint a new release that includes the changes from the review process? I also need a DOI for an archived version of the code (i.e., from Zenodo or figshare).
@cMadan Thanks for considering our research work in the Journal of Open Source Software. I have released a new version of the program that contains changes from the review process. I have also uploaded the new version on Zenodo to get a DOI. 10.5281/zenodo.2619436
@whedon set v0.6.0 as version
OK. v0.6.0 is the version.
Submitting author: @mir-am (A. Mir) Repository: https://github.com/mir-am/LightTwinSVM Version: v0.6.0 Editor: @cMadan Reviewer: @stefanv @rougier Archive: 10.5281/zenodo.2619436
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@stefanv, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @cMadan know.
✨ Please try and complete your review in the next two weeks ✨
Review checklist for @stefanv
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Authors: Does the paper.md file include a list of authors with their affiliations?
Review checklist for @rougier
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Authors: Does the paper.md file include a list of authors with their affiliations?