Closed: whedon closed this issue 3 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @skadio it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
@whedon add @bstabler as reviewer
OK, @bstabler is now a reviewer
@whedon assign @skadio as editor
@whedon assign @bstabler as reviewer
OK, @bstabler is now a reviewer
Hi @Kuifje02, thanks for your submission. This is my first review, so please be patient as I learn the process. In addition to the checklist above, I have a few comments at this point:
@whedon assign @kakiac as editor
@whedon add @skadio as reviewer
OK, @skadio is now a reviewer
I mixed up @skadio and @kakiac, sorry about that. I think I fixed it.
@bstabler - I'm getting confused. Our normal editing workflow is to assign 2 or more reviewers in the pre-review issue, then start the review - this generates the checklists in the review issue. We should only be assigning reviewers in the review issue in unusual circumstances, and if you do this, you have to manually edit the checklists to make them right. Look at the first comment in this issue. It says
Editor: @kakiac Reviewers: @bstabler, @skadio
Is this correct? If you are a reviewer, why are you doing the editorial work?
Hi @danielskatz, yes that's correct, and thanks for checking in. @kakiac and I discussed roles with @arfon yesterday and @kakiac will be editor and I'll be a reviewer. I won't do the editorial work moving forward. Thanks.
ok, thanks
Hi @bstabler ! Thanks for accepting to review. And thanks for the constructive comments, we'll make the adjustments one by one.
Dear Authors,
Thank you for introducing an important library. VRP is such a classical optimization problem and having a dedicated package in the Python ecosystem would definitely contribute to further research and experiments.
Overall, the API looks well-designed, the usage example is intuitive, the paper is easy to follow, and the documentation and examples are thorough.
The library passes my initial review, I enjoyed testing and using the software.
My comments below are offered in the spirit of strengthening the library or broadening its appeal to users.
Best, Serdar
===
The comments are somewhat in random order. Most of them are minor, and some are "subjective", so feel free to discard those.
At a high level, my main comments are:
[Algorithm] Is there a paper/tech report that shows the formulations of the different VRPs using column generation (ColGen)? The unification approach is very neat, and some interested users might want to read more. I say this because VRP, while very applicable, will attract a certain sub-group of OR/optimization specialists, for whom it would be valuable to collect the formulations in a document/paper/tech report, or to give individual references for each problem type.
[Performance] Are there any performance reports you can refer to? If full-blown comparison results are not available, even a quick comment would help the user know what to expect from this library. Is it designed to be competitive in terms of runtime and solution quality? Or is there no such performance claim, and instead it offers an easy-to-use, unified API for many variants of VRP?
[Implementation] There is some hacking with path variables, and some tests are failing because of it. It would be best to switch to relative imports and not alter the system path, which might have other unintended consequences; see the sketch below.
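To make it concrete, a minimal sketch of the pattern and the alternative (the exact paths are illustrative, not the repository's actual ones):

```python
# Pattern being flagged: mutating sys.path so a sibling folder becomes
# importable. It silently depends on the current working directory and
# can shadow installed packages.
#
#   import sys
#   sys.path.append("../vrpy")
#   from main import VehicleRoutingProblem
#
# Preferred: import through the package itself, so tests and examples
# work from any directory once the package is installed (e.g. pip install -e .).
from vrpy.main import VehicleRoutingProblem
```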
Please mention in the paper that the library does NOT solve the instances to optimality. See my comment on the performance above.
You have a nice quick start example in the docs. Maybe add that to the paper?
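For reference, the quick start in the docs is short enough to inline in the paper. From memory it looks roughly like this (exact attribute names and values may differ slightly):

```python
from networkx import DiGraph
from vrpy.main import VehicleRoutingProblem

# Every route starts at "Source" and ends at "Sink";
# each edge carries a "cost" attribute.
G = DiGraph()
G.add_edge("Source", 1, cost=1)
G.add_edge("Source", 2, cost=2)
G.add_edge(1, 2, cost=1)
G.add_edge(1, "Sink", cost=0)
G.add_edge(2, "Sink", cost=2)

prob = VehicleRoutingProblem(G)
prob.solve()
print(prob.best_value)   # cost of the best set of routes found
print(prob.best_routes)  # the routes themselves
```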
"The flexibility and genericity of vrpy is strongly due to ... the relevance of cspy.". I am not sure what you mean here, please consider a rephrase
The high-level intro to the ColGen approach is nice. For the interested reader who might want to see how the different variants of VRP can be formulated in this ColGen framework, is there a paper or technical report that you can cite? It is not obvious how all of the variants are modeled; the trick happens in the pricing problem, and a reference detailing each formulation would be nice to have.
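For readers who have not seen it, the set-partitioning master problem behind this framework (standard notation, not taken from the paper) is:

```latex
\min \sum_{r \in \Omega} c_r x_r
\quad \text{s.t.} \quad
\sum_{r \in \Omega} a_{ir} x_r = 1 \;\; \forall i \in V,
\qquad x_r \in \{0, 1\} \;\; \forall r \in \Omega
```

where $\Omega$ is the set of feasible routes, $c_r$ the cost of route $r$, and $a_{ir} = 1$ if route $r$ visits customer $i$. The pricing problem then searches for a column (route) with negative reduced cost $c_r - \sum_i a_{ir} \pi_i$, where the $\pi_i$ are the duals of the covering constraints; each variant only changes the resource constraints of this subproblem, which is what makes the unification work.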
Could you please add a word about other libraries and compare/contrast with this one? Does it solve more variants? Why should the reader/user consider this library? For example, you include examples from OR-Tools; a comment comparing with OR-Tools and other libraries would help the reader evaluate.
Any performance numbers to report and/or refer to? There are many standard VRP benchmarks for this purpose, and I see that you already use the Solomon instances, etc.
Could you add some comments on your strategy for testing the library? In particular: testing invalid conditions and testing behaviour (optimal value and route). A sketch of what I mean follows.
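To make the suggestion concrete, a minimal sketch of both kinds of test (the exception type and the toy optimum are assumptions for illustration, not taken from the library):

```python
import pytest
from networkx import DiGraph
from vrpy.main import VehicleRoutingProblem


def test_invalid_graph_raises():
    # Invalid condition: a graph without the required "Source"/"Sink"
    # nodes should be rejected (the exact exception type is an assumption).
    with pytest.raises(KeyError):
        VehicleRoutingProblem(DiGraph()).solve()


def test_toy_instance_behaviour():
    # Behaviour: solve a tiny instance with a known optimum and assert
    # the best value rather than just printing it.
    G = DiGraph()
    G.add_edge("Source", 1, cost=1)
    G.add_edge(1, "Sink", cost=1)
    prob = VehicleRoutingProblem(G)
    prob.solve()
    assert prob.best_value == 2
    # the route itself can be asserted similarly via prob.best_routes
```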
Git clone failed for me. It may well be a problem with my network/firewall setup; @bstabler, can you please check that? I downloaded the zip instead.
Out of the box, running the tests in the unittests/ folder from PyCharm (on Windows) worked for me. test_toy took some time to run, but maybe that's expected.
Out of the box, all tests in the benchmarks/ folder fail. All errors, except the one in test_ortools, are the same:

```
    with open(path + instance_name) as fp:
E   FileNotFoundError: [Errno 2] No such file or directory: '../examples/benchmarks/data/P-n16-k8.vrp'

....\examples\benchmarks\cvrp_augerat.py:53: FileNotFoundError
```
You seem to have an issue with setting path variables. Please check.
The test_ortools error is:

```
    from ortools.data import (
E   ModuleNotFoundError: No module named 'ortools'
```
Again, a similar issue: it can't find the ortools folder, which is under examples/.
Related to this, I suggest renaming the "ortools" folder. The current import statements read as if you have a dependency on the ortools library; it's not obvious that it's just a folder one level up.
Some examples use numpy.matrix(), whose use is discouraged by NumPy itself. Please consider regular NumPy ndarrays, as sketched below.
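Something along these lines:

```python
import numpy as np

# NumPy's own documentation recommends against np.matrix (it is slated
# for possible deprecation, `*` means matrix multiplication on it, and
# indexing a row returns a 2-D matrix instead of a 1-D array).
DISTANCE_MATRIX = np.array([
    [0, 2, 4],
    [2, 0, 3],
    [4, 3, 0],
])

row = DISTANCE_MATRIX[0]      # plain 1-D ndarray of shape (3,)
cost = DISTANCE_MATRIX[1, 2]  # scalar element access
```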
In test_toy.py, remove "from pytest import raises" (unused). Same for test_greedy.
In cvrp_augerat.py, remove "from vrpy.main import VehicleRoutingProblem" (unused).
In the benchmarks folder, some files have a main() function and some don't. Why?
Running mdvrp_cordeau.py fails with:

```
    raise KeyError("Node %s missing from initial solution." % v)
KeyError: 'Node 52 missing from initial solution.'
```
In the examples/ folder, you typically print the best value and best route. Please also ASSERT them (see the sketch below); I can run the examples, but I don't know whether the solution is correct or not.
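For instance (the expected value below is a hypothetical placeholder, not the real optimum of any instance):

```python
# at the end of an example script, after prob.solve()
print(prob.best_value)
print(prob.best_routes)

# pin the known optimum so that running the example also verifies it
assert round(prob.best_value, 1) == 6208.0  # hypothetical placeholder
```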
BTW, why the separation between tests and examples? Why not add these examples to the tests and assert them?
BTW, I am not convinced by the structure of the "examples" folder. My expectation as a user would be to see examples of how to solve different variants of VRP. The benchmarks folder is somewhat okay, but the "ortools" folder does not serve that purpose: the user needs to know what ortools is in the first place, and it is not clear that the folder contains examples taken from OR-Tools rather than the library itself.
In my mind, the examples folder would look something like this:

```
examples/
    vrp/
    vrptw/
    capacitated_vrp/
    capacitated_vrp_xxx/    (xxx is some benchmark name)
    multi_depot_cvrp/
    pickup_delivery/
```
examples.ipynb is nice, but please add ASSERTs along with the print()s so that we can verify correctness.
You might want to add LICENSE information as a header on top of every source file. Maybe also add a License section to the README?
Thinking-out-loud comment: if you rename main.py to vrp.py or vehicle_routing.py, then the imports would read:
```python
from vrpy.vrp import VehicleRoutingProblem
# or
from vrpy.vehicle_routing import VehicleRoutingProblem
```
which reads as a bit more relevant than importing from "main".
Math Background sections have a lot of relevant references, but most are not cited in the text.
The section "Why use the cspy library?" is somewhat misleading in the doc section of the "VRPy" library. We are not directly using the cspy here. Maybe call if "Solving the Pricing using cspy" or sth similar?
In index.html it reads "solving Vehicle Routing Problems (VRP)". What you probably mean is solving "instances" of different types of Vehicle Routing Problems. Here you should definitely mention that the library is NOT guaranteed to find optimal solutions, and maybe add a word on the solution quality it typically achieves.
The submission meets most of the JOSS checklist criteria. The current open points are:
@Kuifje02 @bstabler @kakiac
I am done with my first pass of the review. Overall, I am pleased with the library; detailed comments are above.
👋 @bstabler @skadio Thanks all for your hard work and your extensive feedback! 🥇
@Kuifje02 - would you please address their comments and upload a revised version of your submission to https://github.com/Kuifje02/vrpy/tree/master/paper?
Also, if you keep referencing this issue in the repository issues related to the comments (like you are already doing), that will ensure they are listed on the review timeline.
@skadio - git clone works for me on Windows.
@kakiac @bstabler Quick question, is it possible to add a new co-author at this stage?
We'd like to include @Halvaros, as he's done a lot of work over the summer, including some new hyper-heuristic selection functionality that picks the pricing heuristics on the fly.
@kakiac @bstabler While we are at it, another question. We have taken into account all of the comments on the paper, but the resulting paper is longer than the 1000-word limit (~1300 words). Is the 1000-word limit strict?
I do not have an issue with the slightly longer resulting paper, as it adds value. Same for the additional author.
That said, I cannot comment on these "editorially" from a JOSS perspective, only as a reviewer.
It's OK to go a little over, so as long as @kakiac is OK with this, you should be fine.
Let me follow up on the status of this submission: what are the next steps to get this work across the finish line? The work and review seem almost complete, but I am not sure how to finalize this.
@Kuifje02 my last question about the submission is about the "Community Guidelines". Please see the open review item and let us know your thoughts.
@bstabler I see that you completed most of the review checklist. Any further comments/questions from your end?
Thanks @skadio for the follow up.
The Community Guidelines have been addressed with a CONTRIBUTING.md file, which is referenced at the bottom of the README.md.
We still have a few minor items to do:
[x] rename main.py to vrp.py
[x] add a license file
[x] update the docs:
Hopefully we can do this quickly (the first two items for sure), merge with the master branch, and then the finish line should not be so far :)
@whedon generate pdf
PDF failed to compile for issue #2408 with the following error:
```
Error reading bibliography ./paper.bib (line 239, column 1):
unexpected "@"
expecting space, ",", white space or "}"
Error running filter pandoc-citeproc: Filter returned error status 1
```

Looks like we failed to compile the PDF.
@Kuifje02 @skadio @bstabler @arfon many thanks for all your contributions and great work on the paper! Just going through the final checks for the paper to finalise and publish 🎆
Here are a few last bits:
@Kuifje02
I can then run the scripts for checking references and DOIs :)
@skadio
can you confirm (by ticking this box or the box in your review checklist) that you are happy that the following have been adequately addressed in the revised version of the paper:
[x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
would you recommend acceptance?
@bstabler
can you confirm (by ticking this box or the box in your review checklist) that you are happy that the following have been adequately addressed in the revised version of the paper:
[ ] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
[ ] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
[ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
[ ] State of the field: Do the authors describe how this software compares to other commonly-used packages?
would you recommend acceptance?
@whedon check references
Checking the BibTeX entries failed with the following error:
```
Failed to parse BibTeX on value "@" (AT) [#
```
@kakiac @Kuifje02 Yes, I confirm the changes, which improve the library/paper. I assume the authors can easily fix the remaining bib issues, etc. With that, my recommendation is to accept.
Editor's "After reviewers recommend acceptance" checks: (for @kakiac):
@skadio Thanks :)
- In reply to #2408 (comment): yes, it is fine to have an additional author included; I can see his contribution evidenced on the repository as well. @Halvaros, is that ok with you?
This is ok with @Halvaros 👍
Here are a few last bits:
@Kuifje02
- [x] could you merge my suggested minor change/typo on your paper.bib: Kuifje02/vrpy#64
- [x] check / accept changes on DOIs in paper.bib for the references in this issue: Kuifje02/vrpy#65
- [x] minor typo :) Kuifje02/vrpy#66
- [x] references ok
- [x] typos, wording etc ok
@Kuifje02 Thanks!
@whedon generate pdf
PDF failed to compile for issue #2408 with the following error:
```
Error reading bibliography ./paper.bib (line 180, column 3):
unexpected "u"
expecting space, ",", white space or "}"
Error running filter pandoc-citeproc: Filter returned error status 1
```

Looks like we failed to compile the PDF.
@whedon check references
Checking the BibTeX entries failed with the following error:
```
Failed to parse BibTeX on value "url" (NAME) [#
```
I think there was a missing comma in the reference. I have just fixed it. Should we give it another try?
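For the record, the parser error (unexpected "url") is the classic symptom of a missing comma at the end of the preceding field. A hypothetical entry showing the pattern, with the comma missing after the year field:

```bibtex
@article{example2020,
  title = {An Example Entry},
  year  = {2020}
  url   = {https://example.org}
}
```

Adding the comma after `{2020}` fixes it.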
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon check references
@whedon check references
Hrm, Crossref's API is struggling with some of these references. Not sure why...
@whedon check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1016/j.ejor.2017.06.063 is OK
- 10.1287/trsc.2018.0878 is OK
- 10.21105/joss.01655 is OK
- 10.1287/trsc.1050.0118 is OK
- 10.1287/opre.12.4.568 is OK
- 10.5281/zenodo.3700700 is OK
- 10.5281/zenodo.3748677 is OK
- 10.1007/0-387-22619-2_16 is OK
- 10.1287/mnsc.6.1.80 is OK
- 10.1007/978-3-642-46629-8_10 is OK
- 10.1016/j.cor.2005.08.002 is OK
- 10.1287/opre.33.5.1050 is OK
- 10.1007/s10479-009-0650-0 is OK
- 10.1002/nav.20261 is OK
- 10.1287/ijoc.2018.0822 is OK
- 10.1287/opre.35.2.254 is OK
- 10.1287/ijoc.2017.0784 is OK
- 10.1109/CEC.2015.7256977 is OK
- 10.1109/CEC.2017.7969356 is OK
- 10.1007/978-3-642-32964-7_31 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Submitting author: @Kuifje02 (Romain Montagne)
Repository: https://github.com/Kuifje02/vrpy
Version: 0.3.0
Editor: @kakiac
Reviewers: @bstabler, @skadio
Archive: 10.5281/zenodo.4248877
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@bstabler & @skadio, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns, please let @kakiac know.
✨ Please try and complete your review in the next six weeks ✨
Review checklist for @bstabler
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @skadio
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper