openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal
710 stars 38 forks

[REVIEW]: LightTwinSVM: A Simple and Fast Implementation of Standard Twin Support Vector Machine Classifier #1252

Closed whedon closed 5 years ago

whedon commented 5 years ago

Submitting author: @mir-am (A. Mir)
Repository: https://github.com/mir-am/LightTwinSVM
Version: v0.6.0
Editor: @cMadan
Reviewers: @stefanv, @rougier
Archive: 10.5281/zenodo.2619436

Status

status

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/a56708efcbf27c12352670b4647d92ca"><img src="http://joss.theoj.org/papers/a56708efcbf27c12352670b4647d92ca/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/a56708efcbf27c12352670b4647d92ca/status.svg)](http://joss.theoj.org/papers/a56708efcbf27c12352670b4647d92ca)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@stefanv, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @cMadan know.

Please try to complete your review in the next two weeks.

Review checklist for @stefanv

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Review checklist for @rougier

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

whedon commented 5 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @stefanv it looks like you're currently assigned as the reviewer for this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands
whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

labarba commented 5 years ago

@stefanv — Can I ask you to copy over your review comments here? https://github.com/openjournals/joss-reviews/issues/1155#issuecomment-463760234

labarba commented 5 years ago

This one too https://github.com/openjournals/joss-reviews/issues/1155#issuecomment-463750224

labarba commented 5 years ago

Why, you may ask? We're looking into ways of exporting the Review issues into an archival form, to preserve and link to the published articles. (But not the Pre-Review issue, whose purpose is to find editors and reviewers, or occasionally make a desk rejection if the submission is out of scope).

Also, @stefanv, you have a Reviewer checklist here, and you have automatically been added as a collaborator to the repo, so you can check items off there.

labarba commented 5 years ago

On this question, @stefanv

What is JOSS policy on content recommendations? E.g., I have some recommended language edits, as well as experiment suggestion (since there is a benchmark included in the paper).

You should feel free to make content recommendations, and also language edits (I do these all the time!).

rougier commented 5 years ago

@labarba Catching up. I can try to do the review by the end of next week if it is still relevant.

labarba commented 5 years ago

Thank you, @rougier! I can add you as reviewer here.

rougier commented 5 years ago

Perfect. I'll try to do it this Sunday (but no promise). By end of next week for sure.

labarba commented 5 years ago

Your checklist is now added. If you have reviewed before for JOSS, you should already have repo permissions to edit the checklist. Let me know if you run into any trouble.

stefanv commented 5 years ago

Some language recommendations. I tried not to touch your sentences other than to correct grammar.

```diff
--- lighttwinsvm.md	2019-02-14 10:40:40.417605199 -0800
+++ lighttwinsvm-sjw.md	2019-02-14 10:58:44.144803880 -0800
@@ -21,25 +21,25 @@
 ---
 
 # Abstract
 
-This paper presents the `LightTwinSVM` program and its features. It is a simple and fast implementation of the twin support vector machine algorithm (TwinSVM). Numerical experiments on benchmark datasets shows the effectiveness of the `LightTwinSVM` program in terms of accuracy. This program can be used by researchers, students, and practitioners to solve classifications tasks.
+This paper presents the `LightTwinSVM` program and its features. It is a simple and fast implementation of twin support vector machine algorithm (TwinSVM). Numerical experiments on benchmark datasets show the effectiveness of `LightTwinSVM` in terms of accuracy. This program can be used by researchers, students, and practitioners to solve classifications tasks.
 
 # Introduction
 
-Classification is a widely-used learning method in machine learning. The task of classification consists of training samples for which the class labels are available. On the basis of such training samples, a classifier learns a rule for predicting an unseen sample [@shalev2014]. To do a classification task, many algorithms have been proposed in machine learning literature such as Artificial Neural Network (ANN), Support Vector Machine (SVM), K-nearest neighbors (KNN), and Decision Trees. Among these classification algorithms, SVM classifier has relatively better prediction accuracy and generalization [@kotsiantis2007]. The main idea of SVM is to find the optimal separating hyperplane with the largest margin between the two classes. Figure \ref{fig:1} shows the geometric illustration of SVM classifier.
+Classification is a widely-used learning method in machine learning. The task of classification consists of training samples for which the class labels are available. On the basis of such training samples, a classifier learns a rule for predicting the label of an unseen sample [@shalev2014]. To do a classification task, many algorithms have been proposed in machine learning literature such as Artificial Neural Network (ANN), Support Vector Machine (SVM), K-nearest neighbors (KNN), and Decision Trees. Among these classification algorithms, SVM classifiers have relatively better prediction accuracy and generalization [@kotsiantis2007]. The main idea of SVM is to find the optimal separating hyperplane with the largest margin between the two classes. Figure \ref{fig:1} shows the geometric illustration of SVM classifier.
 
 ![Geometric illustration of SVM classifier.\label{fig:1}](figs/SVM.png)
 
-Over the past decade, researchers have proposed new classifiers based on the SVM [@nayak2015]. Of these extensions of SVM, twin support vector machine (TwinSVM) [@jayadeva2007] has received more attention from scholars in the field of SVM research. This may be due to the novel idea of TwinSVM which is doing classification using two non-parallel hyperplanes. Each of which is as close as possible to samples of its own class and far from samples of the other class. To show the central idea of TwinSVM graphically, Figure \ref{fig:2} shows the geometric illustration of TwinSVM classifier.
+Over the past decade, researchers have proposed new classifiers based on the SVM [@nayak2015]. Of these extensions of SVM, the twin support vector machine (TwinSVM) [@jayadeva2007] has received more attention from scholars in the field of SVM research. This may be due to the novel idea of TwinSVM which is doing classification using two non-parallel hyperplanes. Each of which is as close as possible to samples of its own class and far from samples of the other class. To show the central idea of TwinSVM graphically, Figure \ref{fig:2} shows the geometric illustration of TwinSVM classifier.
 
 ![Geometric illustration of TwinSVM classifier.\label{fig:2}](figs/TwinSVM.png)
 
-SVM has several stable software packages and implementations such as `LIBSVM` [@chang2011libsvm] and `LIBLINEAR` [@fan2008liblinear]. These packages were used to implement SVM in `Scikit-learn` [@pedregosa2011scikit] which is a widely-used machine learning package for Python programming language. To solve a classification problem with the SVM algorithm, one can use `Scikit-learn` with several lines of code in Python.
+For SVMs, there exist several stable software packages and implementations such as `LIBSVM` [@chang2011libsvm] and `LIBLINEAR` [@fan2008liblinear]. These packages were used to implement SVMs in `scikit-learn` [@pedregosa2011scikit] which is a widely-used machine learning package for Python programming language. To solve a classification problem with the SVM algorithm, one can use `scikit-learn` with only a few lines of code in Python.
 
-Even though TwinSVM is a popular classification algorithm in the field of SVM research, to best of our knowledge, it has not a free and reliable implementation with user guide on the internet. This motivated us to develop `LightTwinSVM` program to help researchers, practitioners, and students build their own classifier on the basis of `LightTwinSVM`. Moreover, this program can be used to solve classification problems. In the next section, we present `LightTwinSVM`, its features, and compare it with `Scikit-learn`'s SVM.
+Even though TwinSVM is a popular classification algorithm in the field of SVM research, to the best of our knowledge, there exists no free and reliable implementation with a user guide on the internet. This motivated us to develop `LightTwinSVM` to help researchers, practitioners, and students build their own TwinSVM classifiers. Moreover, this program can be used to solve classification problems. In the next section, we present `LightTwinSVM`, its features, and compare it with `scikit-learn`'s SVM.
 
 # LightTwinSVM
 
-`LightTwinSVM` program is a simple and fast implementation of TwinSVM classifier. It is mostly written in Python and its main design goals are simplicity and speed. Also, this program is free, open source, and licensed under the terms of GNU GPL v3^[https://opensource.org/licenses/GPL-3.0]. `LightTwinSVM` is built on top of `NumPy` [@walt2011], `Scikit-learn` [@pedregosa2011scikit], and `pandas` [@mckinney2011pandas].
+The `LightTwinSVM` program is a simple and fast implementation of the TwinSVM classifier. It is mostly written in Python and its main design goals are simplicity and speed. Also, this program is free, open source, and licensed under the terms of GNU GPL v3^[https://opensource.org/licenses/GPL-3.0]. `LightTwinSVM` is built on top of `NumPy` [@walt2011], `scikit-learn` [@pedregosa2011scikit], and `pandas` [@mckinney2011pandas].
 
 `LightTwinSVM` program can be used by both researchers in the field of SVM research and by students in courses on pattern recognition and machine learning. Moreover, this software can be applied to a wide variety of research applications such as text classification, image or video recognition, medical diagnosis, and bioinformatics. For example, `LightTwinSVM` was used for the numerical experiments in our previous research paper [@mir2018].
@@ -49,18 +49,18 @@
 - Since speed is one of the design goals, the **clipDCD optimization algorithm** [@peng2014] is employed which is a simple and fast external optimizer. It was improved and implemented in C++.
 - In order to solve linear or non-linear classification problems, both **linear** and **RBF kernels** are supported.
 - Multi-class classification problems can be solved using either **One-vs-One** or **One-vs-All** scheme.
-- The One-vs-One classifier is **Scikit-learn compatible**. Therefore, `Scikit-learn` tools such as GridSearchCV and cross_val_score can be employed.
+- The One-vs-One classifier is **scikit-learn compatible**. Therefore, `scikit-learn` tools such as `GridSearchCV` and `cross_val_score` can be employed.
 - To evaluate the performance of the classifier, **K-fold cross-validation** and **train/test split** are supported.
 - The optimal values of hyper-parameters can be found with **grid search**.
 - **CSV** and **LIBSVM** formats are supported for importing datasets.
 - Detailed classification results are saved in a spreadsheet file so that results can be analyzed and interpreted.
 
-In order to show the efficiency of `LightTwinSVM` program, we conducted experiments and compared it with the implementation of SVM in `Scikit-learn` on UCI^[http://archive.ics.uci.edu/ml/datasets.html] benchmark datasets. Table \ref{tab:1} shows the accuracy comparison between `LightTwinSVM` and `Scikit-learn`'s SVM.
+In order to show the efficiency of `LightTwinSVM`, we conducted experiments and compared it with the implementation of SVM in `scikit-learn` on the UCI^[http://archive.ics.uci.edu/ml/datasets.html] benchmark datasets. Table \ref{tab:1} shows the accuracy comparison between `LightTwinSVM` and `scikit-learn`'s SVM.
 
-Table: The accuracy comparison between `LightTwinSVM` and `Scikit-learn`'s SVM\label{tab:1}
+Table: The accuracy comparison between `LightTwinSVM` and `scikit-learn`'s SVM\label{tab:1}
 
-  Datasets                 LightTwinSVM               Scikit-learn's SVM         Difference in Accuracy
+  Datasets                 LightTwinSVM               scikit-learn's SVM         Difference in Accuracy
 -------------------------- -------------------------- -------------------------- --------------------------
   Pima-Indian              **78.91**                  78.26                      0.65
   Australian               **87.25**                  86.81                      0.44
@@ -74,7 +74,7 @@
   Titanic                  **82.04**                  81.71                      0.33
   Mean                     **83.44**                  81.90                      1.53
 
-From the Table \ref{tab:1}, it can be seen that `LightTwinSVM` outperforms `Scikit-learn`'s SVM on most datasets. For instance, the accuracy difference in Sonar dataset is as high as 10.74% which is a significant result. However, one may notice that the difference in accuracy between the two classifiers is not very large. To show whether a significant difference exists, statistical tests are often used in research papers on classification [@demsar2006]. Due to the limited space, comprehensive experiments with statistical tests are skipped in this paper. In summary, the experiment indicates that the `LightTwinSVM` program can be used for classification tasks and it may produce a better prediction accuracy.
+From the Table \ref{tab:1}, it can be seen that `LightTwinSVM` outperforms `scikit-learn`'s SVM on most datasets. For instance, the accuracy difference in Sonar dataset is as high as 10.74% which is a significant result. However, one may notice that the difference in accuracy between the two classifiers is not very large. To show whether a significant difference exists, statistical tests are often used in research papers on classification [@demsar2006]. Due to the limited space, comprehensive experiments with statistical tests are skipped in this paper. In summary, the experiment indicates that the `LightTwinSVM` program can be used for classification tasks and it may produce a better prediction accuracy.
 
 The source code of `LightTwinSVM`, its installation guide and usage example can be found at [https://github.com/mir-am/LightTwinSVM](https://github.com/mir-am/LightTwinSVM).
```


Benchmark recommendation: include the standard deviation of the cross-validation comparisons with scikit-learn — these are generally much more convincing than single runs.
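Reporting mean and standard deviation over cross-validation folds is a one-liner with `cross_val_score`. A sketch, again with scikit-learn's `SVC` standing in for the package under review:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 10-fold cross-validation; report accuracy as mean +/- standard deviation
scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=10)
print(f"accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```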

stefanv commented 5 years ago

Yes, but I wanted to verify whether this license also applies to the included optimizer.

Potential license breach through publication of datasets without attribution:

https://archive.ics.uci.edu/ml/citation_policy.html

No. The installation process is non-standard for a Python library. On my (Debian) system, the shell script (which I was hesitant to run, as I imagine many users will be) did not correctly detect Python 3 and then asked for sudo access (at which point I aborted).

Please provide clear steps for installing manually from source.

I could not locate the API documentation.

None that I could find.

Editor / @labarba: is this a requirement of the .bib file? Not entirely sure what is meant by "archival reference". No DOIs are provided, as far as I can tell.

rougier commented 5 years ago

I've started the review (and posted an issue on the repo because I cannot build it). In the meantime, and to parallelize things, here are a few comments and questions:

@labarba It would be helpful to have a link to the last build of the paper at the top (e.g. after the repo URL)

cMadan commented 5 years ago

@stefanv @rougier, thank you both for agreeing to review this submission! Where relevant, please direct your JOSS-related questions to me, as the editor for this submission. @labarba helped out here in getting things started, but she's one of the Associate Editors-in-Chief and edits other submissions, so we don't want to have her answer all the questions; that's part of my role here.

@labarba, thanks so much for getting things started and sorting out the review-related comments that were in the pre-review. I'll try and take it from here :).

@stefanv, DOIs are required in the bib (when possible), so, yes, those need to be added.

@rougier, that's a good suggestion, thanks! We can see about updating the initial review comment to include that.

rougier commented 5 years ago

Quick update on the review, we're trying to solve an installation problem

mir-am commented 5 years ago

Hi

To help the reviewers and also users install and build the program manually, a step-by-step guide is provided here in the project's wiki. Moreover, DOIs are added to all the references in the bib file that have one.

We will also address other comments of the reviewers as soon as possible.

Regards, Mir

stefanv commented 5 years ago

@mir-am Thank you for these instructions; that's helpful already. Do you have plans to build binary packages? If so, you may want to look at https://github.com/matthew-brett/multibuild. This would make your package pip-installable. That, and setting up a recipe on conda-forge, will give you access to the majority of Python users out there.

rougier commented 5 years ago

And I'm still fighting with the installation procedure (with no luck yet)

cMadan commented 5 years ago

@rougier, thank you for the update (though unfortunate)

mir-am commented 5 years ago

First, we would like to thank @stefanv for his suggestions, comments and reviewing this work.

Our responses to the comments are as follows:

Some language recommendations. I tried not to touch your sentences other than to correct grammar.

We revised the program's paper to correct the grammar.

Benchmark recommendation: include the standard deviation of the cross-validation comparisons with scikit-learn — these are generally much more convincing than single runs.

We will run the benchmark again and report the standard deviation of the accuracy in the paper.

  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?

Yes, but I wanted to verify whether this license also applies to the included optimizer.

Potential license breach through publication of datasets without attribution:

https://archive.ics.uci.edu/ml/citation_policy.html

The project's license is GNU GPL v3, which also applies to the included optimizer. We also acknowledged the use of UCI datasets by adding a statement to the acknowledgments section of the README file.

  • Installation: Does installation proceed as outlined in the documentation?

No. The installation process is non-standard for a Python library. On my (Debian) system, the shell script (which I was hesitant to run, as I imagine many users will be) did not correctly detect Python 3 and then asked for sudo access (at which point I aborted).

Please provide clear steps for installing manually from source.

A step-by-step guide on how to install LightTwinSVM manually is provided here.

  • Functionality documentation: Is the functionality of the software documented to a satisfactory level (e.g. API method documentation)?

I could not locate the API documentation.

We used the Read the Docs service, which hosts the API documentation of LightTwinSVM. It can be read here. All of LightTwinSVM's estimators and tools are documented so that users can employ them in their projects.

  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

None that I could find.

We have already included a support section in the README file, which explains how users can seek support or report issues with the software. A CONTRIBUTING.md file was also added to the project, which explains how to contribute to the LightTwinSVM project through code or documentation.

  • References: Do all archival references that should have a DOI list one (e.g. papers, datasets, software)?

Editor / @labarba: is this a requirement of the .bib file? Not entirely sure what is meant by "archival reference". No DOIs are provided, as far as I can tell.

DOIs were added to references in the paper when possible. https://github.com/mir-am/LightTwinSVM/commit/c07554bf0f98a75f76025d6d3d5e8f3946615623

Do you have plans to build binary packages? If so, you may want to look at https://github.com/matthew-brett/multibuild. This would make your package pip-installable. That, and setting up a recipe on conda-forge, will give you access to the majority of Python users out there.

Yes, we plan to publish a pip-installable package on PyPI. Thanks for suggesting the build tool.

mir-am commented 5 years ago

Also, we would like to thank @rougier for his comments and reviewing this work.

Our responses to the comments are as follows:

In the paper, the legends of figures 1 & 2 could benefit from better captions. As they stand, they do not really help in finding what is to be seen.

In the research field of SVM, expressions like "geometric illustration" or "geometric interpretation" are common. These captions are used to show the main idea of a classifier in a 2-dimensional feature space. Nonetheless, we welcome suggestions for better captions.

Who is the second author? (Since you have made all the commits, what is his role in this software?)

He is my supervisor and taught me the theories behind SVM and its extensions like TwinSVM. Moreover, the LightTwinSVM project is developed at his lab. Therefore, I must include his name in the paper.

for all your references, you need to add the DOI

DOIs were added to references in the paper when possible. https://github.com/mir-am/LightTwinSVM/commit/c07554bf0f98a75f76025d6d3d5e8f3946615623

The README is great, but where is the API documentation? Did you upload the documentation to Read the Docs? (Your doc Makefile seems to indicate that you can build the docs; it would be helpful to be able to read them online.)

Yes. The LightTwinSVM documentation was uploaded to Read the Docs. The API documentation can be read here. All of LightTwinSVM's estimators and tools are documented so that users can employ them in their projects.

rougier commented 5 years ago

Thanks. I'm still fighting with the installation on OSX, but we've made some progress.

rougier commented 5 years ago

Some update on the installation problem: it has been fixed. @cMadan can you rebuild the PDF?

cMadan commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

cMadan commented 5 years ago

@mir-am @rougier, thanks for the updates!

rougier commented 5 years ago

Thanks. I'm almost done with the review. @mir-am There are still some missing DOI in the references, can you check? For the captions of figure 1 & 2, maybe you can have figures side by side in one figure and explain in the caption the fundamental difference between SVM and TwinSVM (but if your prefer to leave them as they are, I'm fine).

mir-am commented 5 years ago

Thanks. I'm almost done with the review. @mir-am There are still some missing DOI in the references, can you check? For the captions of figure 1 & 2, maybe you can have figures side by side in one figure and explain in the caption the fundamental difference between SVM and TwinSVM (but if your prefer to leave them as they are, I'm fine).

Thank you for reviewing this work. I tried again to find the missing DOIs for some references. It looks like the Journal of Machine Learning Research does not assign DOIs to its papers. Your suggestion about the figures seems fine to me, but I think the paper's width is not large enough to place the figures side by side, which may make the comparison fruitless. However, I couldn't use the subfigures feature to place the figures below each other.

@cMadan How can I place figures below each other like subfigures in LaTeX? I appreciate your help.

rougier commented 5 years ago

Ok. Then it's fine on my side. @cMadan I'm satisfied with all the changes and I recommend publication.

stefanv commented 5 years ago

We will run the benchmark again and report the standard deviation of the accuracy in the paper.

Apart from this one outstanding item, I think this is good to go.

stefanv commented 5 years ago

(I should note, that was a recommendation to the authors: it is not required for publication; but it would be great if they would add it.)

cMadan commented 5 years ago

@mir-am, unfortunately, you can't do subfigures in the paper markdown. Instead, you should combine the figures using a graphics program and add the "(a)" panel notation, such that the figures are a single image.
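One way to do that combination programmatically rather than in a graphics program is to stack the two bitmaps with NumPy. A minimal sketch under the assumption that both panels have the same width (the arrays below are synthetic stand-ins for the PNGs, which could be loaded with `matplotlib.image.imread`; the "(a)"/"(b)" labels would still need a graphics tool):

```python
import numpy as np

# Stand-ins for the two figure bitmaps, e.g. figs/SVM.png and
# figs/TwinSVM.png; vertical stacking requires equal widths.
fig_a = np.ones((200, 400, 3))   # hypothetical SVM panel
fig_b = np.zeros((250, 400, 3))  # hypothetical TwinSVM panel

# Stack panel (a) above panel (b) to get a single image the paper can include.
combined = np.vstack([fig_a, fig_b])
print(combined.shape)  # (450, 400, 3)
```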

mir-am commented 5 years ago

@mir-am, unfortunately, you can't do subfigures in the paper markdown. Instead, you should combine the figures using a graphics program and add the "(a)" panel notation, such that the figures are a single image.

Thank you for the help and suggestion. The two figures are on the same page, so readers can see the difference between the two classifiers. We had wanted to use subfigures to combine the two figures into one, but in my opinion the current placement of the two figures on the page is fine.

This issue was also fixed without breaking the installation scripts for both CI services and users. (Thanks to @stefanv for reminding us of the issue.) https://github.com/mir-am/LightTwinSVM/issues/4

cMadan commented 5 years ago

@mir-am, do you plan to report the SD of accuracy in the paper, as @stefanv recommended?

@stefanv, do you have any other remaining concerns about the performance of the package? (It looks like this is the only thing in the reviewer checklist you haven't checked off yet.)

mir-am commented 5 years ago

@mir-am, do you plan to report the SD of accuracy in the paper, as @stefanv recommended?

Yes. As of writing this, I am conducting experiments to report the SD of accuracies. I will report the results in the paper as soon as all the experiments are finished. I will have it done within a few days.

mir-am commented 5 years ago

@cMadan @stefanv We carefully revised the paper to include the new experiments (as requested by the reviewer).

Modifications to the paper are as follows:

We welcome new suggestions and ideas to improve the LightTwinSVM and its paper.

mir-am commented 5 years ago

@whedon generate pdf

whedon commented 5 years ago
Attempting PDF compilation. Reticulating splines etc...
whedon commented 5 years ago

:point_right: Check article proof :page_facing_up: :point_left:

stefanv commented 5 years ago

I'm happy with the paper and the new updates; thanks!

cMadan commented 5 years ago

Thank you, @stefanv!

@mir-am, I will now do a final check of everything.

cMadan commented 5 years ago

@whedon check references

whedon commented 5 years ago
Attempting to check references...
whedon commented 5 years ago

OK DOIs

- 10.1017/CBO9781107298019 is OK
- 10.1109/TPAMI.2007.1068 is OK
- 10.1145/1961189.1961199 is OK
- 10.1016/j.knosys.2014.08.005 is OK
- 10.1007/s10489-018-1225-z is OK
- 10.1109/MCSE.2011.37 is OK

MISSING DOIs

- None

INVALID DOIs

- None
cMadan commented 5 years ago

@mir-am, things look good and I'm almost ready to accept. Can you mint a new release that includes the changes from the review process? I also need a DOI for an archived version of the code (i.e., from Zenodo or figshare).

mir-am commented 5 years ago

@mir-am, things look good and I'm almost ready to accept. Can you mint a new release that includes the changes from the review process? I also need a DOI for an archived version of the code (i.e., from Zenodo or figshare).

@cMadan Thanks for considering our research work in the Journal of Open Source Software. I have released a new version of the program that contains changes from the review process. I have also uploaded the new version on Zenodo to get a DOI. 10.5281/zenodo.2619436

cMadan commented 5 years ago

@whedon set v0.6.0 as version

whedon commented 5 years ago

OK. v0.6.0 is the version.