openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: A parallel global multiobjective framework for optimization: pagmo #2338

Closed whedon closed 4 years ago

whedon commented 4 years ago

Submitting author: @bluescarni (Francesco Biscani) Repository: https://github.com/esa/pagmo2-paper Version: v2.15.0 Editor: @eloisabentivegna Reviewer: @dgoldri25, @jangmys Archive: 10.5281/zenodo.4013250

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3"><img src="https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3/status.svg)](https://joss.theoj.org/papers/133bd81e4126c041c2998f744b0dc8c3)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@dgoldri25 & @jangmys, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @eloisabentivegna know.

Please try to complete your review within the next six weeks.

Review checklist for @dgoldri25

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Review checklist for @jangmys

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

whedon commented 4 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @dgoldri25, @jangmys, it looks like you're currently assigned to review this paper :tada:.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:


  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications


For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
whedon commented 4 years ago
Reference check summary:

OK DOIs

- 10.5281/zenodo.3702783 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK

MISSING DOIs

- None

INVALID DOIs

- None
whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

eloisabentivegna commented 4 years ago

Dear @dgoldri25 and @jangmys, thanks for agreeing to review this submission! As you can see, I have started the review process, whereby the original issue has been closed and a new (this) one has been opened, with further directions for you. Please take a moment to go over the instructions and checklists above, and let me know if anything is unclear.

Please note that this is a slightly unusual submission with respect to standard JOSS practice: you will not find the source code in the paper repository above, as the submission consists of two separate packages:

https://github.com/esa/pagmo2
https://github.com/esa/pygmo2

This should not impact the review process, but let me know if you need further clarifications.

I look forward to your comments!

jangmys commented 4 years ago

Concerning the point "State of the field":

The paper (which is globally very well written) starts directly with the general formulation of the optimization problem. While the "Summary" provides some general motivation for designing (parallel) frameworks for multi-objective optimization, I think it would be useful and important to give an overview of existing frameworks and to explain what differentiates pagmo/pygmo from them: why should I choose pagmo/pygmo instead of, say, jMetal, jMetalPy, DEAP, HeuristicLab, ParadisEO, Mallba or Opt4j (plus NLopt)?

I am aware that a complete comparison of functionalities, documentation, etc. is a huge task which is certainly beyond the scope of this paper; however, at least a summary comparison with existing frameworks would clearly enhance the quality of the paper and give interested readers additional motivation to use pagmo. Maybe this comparison could be restricted to frameworks that offer some parallel processing support. In the (now 8-year-old) paper by Parejo, José Antonio, et al., "Metaheuristic optimization frameworks: a survey and benchmarking", Soft Computing 16.3 (2012): 527-561 (https://core.ac.uk/reader/51388224), the authors perform a complete comparative study of metaheuristic optimization frameworks. The results of this study are (at least partially) outdated, but it could still be a good starting point (and/or a reference to give to the reader!).

jangmys commented 4 years ago

The paper says "The parallel evaluation can be performed by multiple threads, processes, nodes in an HPC cluster or even by GPU devices (via, e.g., OpenCL or CUDA)" (p.3) - I could not find a "GPU batch evaluator" in the code (please help me out if I missed it).

Of course, conceptually, a batch of solutions could be evaluated on a GPU. Technically, however, there are some challenges, for example: (1) no std library in device code -> what data structures for individuals on the device?; (2) the need to explicitly copy problem data to the device and a separate implementation of the __device__ fitness function (alternatively, "Unified Memory" + __host__ __device__ lambdas... which are still experimental: performance?); (3) thread-data mapping on the device (only one thread per fitness evaluation, or the possibility of parallelizing the fitness evaluation itself?).

If a GPU batch evaluator has already been implemented (and I just didn't see it), the paper should provide some explanations, and instructions for using GPU acceleration should appear in the documentation. Otherwise, if GPU support for batch evaluation is future work, the paper should clarify this. Concerning a "unified" heterogeneous programming approach that would fit well into pagmo's design, I think Intel oneAPI/SYCL/Data Parallel C++ is an interesting alternative to CUDA/OpenCL.

jangmys commented 4 years ago

Minor remarks (typos):

- Summary: "cuncurrently" -> "concurrently"
- p. 2: "non linear", "non linearly" -> "non-linear", "non-linearly"
- p. 3: "Cuncurrent fitness" -> "Concurrent fitness"

jangmys commented 4 years ago

I was able to install pygmo/pagmo easily, following the instructions, and the provided tutorials helped me get started quickly. While I hadn't used the software myself before doing this review, some colleagues in my department have been using pagmo/pygmo in their research for some time now; as far as I know, the fact that pagmo/pygmo is well documented and very accessible has been an important factor in their choice. An active community of users and the possibility to find help in case of difficulties and/or exchange with the developers (e.g. on Gitter) is also a very positive point. Overall, the pagmo/pygmo framework is a valuable and (I've got the impression) already quite well-established optimization tool.

Therefore, in order to increase the software's visibility and allow researchers to properly cite pagmo/pygmo (the last (only?) pagmo-related paper I found is from 2010, "A global optimization toolbox for massively parallel engineering optimisation"), publishing this paper in JOSS definitely makes sense. However, before I can recommend accepting the article, I think it is important that the issues I mentioned above are addressed.

eloisabentivegna commented 4 years ago

Thanks for your comments, @jangmys!

Could you raise any future issues in the respective repositories? This will ensure your suggestions will enter the package histories and be properly credited. You can create a mention here by using this issue's URL in a repository's issue, if you wish (see https://joss.readthedocs.io/en/latest/reviewer_guidelines.html#guiding-principles).

eloisabentivegna commented 4 years ago

@dgoldri25, do you concur with @jangmys' suggestions regarding a comparison with the state of the art? Is this why you have left the corresponding box unticked?

davidfgold commented 4 years ago

@eloisabentivegna I second @jangmys' suggestion. While I found the paper to be high quality overall, the review of the state of the art should be expanded.

eloisabentivegna commented 4 years ago

@bluescarni, does the request by @jangmys and @dgoldri25 make sense to you? Can you expand the literature review?

bluescarni commented 4 years ago

@eloisabentivegna @jangmys @dgoldri25 thanks for the review!

We can certainly expand the literature review. Currently all the authors are still on vacation, so apologies for the late reply. We should be able to revise the paper next week.

bluescarni commented 4 years ago

@whedon generate pdf

whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

bluescarni commented 4 years ago

@eloisabentivegna @jangmys @dgoldri25 @darioizzo we have added a section about related projects/frameworks and fixed the typos. Please let us know if there is anything else.

@jangmys regarding your question about the GPU: what pagmo provides is an API to compute the fitness of a group of independent decision vectors in a (possibly) parallel fashion. Such an API can then be used by optimisation algorithms capable of taking advantage of parallel fitness evaluation (in pagmo we have a handful of such algorithms, mostly genetic/evolutionary ones). Thus you are absolutely right that we don't directly address the specifics of GPU programming from within pagmo (such as data transfer, the compilation model, etc.). This is left to the author of an optimisation problem, who is free to choose between CUDA/OpenCL/SYCL for the implementation of the batch evaluation API in his/her specific optimisation problem. We have expanded a bit on this point in the latest revision of the paper; please let us know if this clarifies the matter.
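To make the design concrete, here is a minimal sketch of the idea being described: an algorithm hands a batch of independent decision vectors to a user-supplied evaluator, which is free to parallelize however it likes. This is not pagmo's actual API; the names `fitness` and `batch_fitness` are illustrative, and the sketch uses only the Python standard library (a user could back the same interface with CUDA, OpenCL or SYCL instead of threads):

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(dv):
    # Sphere function: a classic single-objective test problem.
    return sum(x * x for x in dv)

def batch_fitness(dvs, max_workers=4):
    # Evaluate a batch of independent decision vectors concurrently.
    # The evaluations share no state, so any parallel backend would do.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fitness, dvs))

dvs = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]
print(batch_fitness(dvs))  # [0.0, 5.0, 25.0]
```

The key point is that the algorithm only sees the batch interface; where and how the evaluations run (threads, processes, or a GPU kernel) is entirely the problem author's choice.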

jangmys commented 4 years ago

@bluescarni @eloisabentivegna @dgoldri25 @darioizzo I think the additional paragraph addresses the existing state of the art appropriately (and I have therefore checked the corresponding box). I also agree with the "Concurrent fitness evaluation" section, which now clarifies that there is no "out-of-the-box" GPU support, but rather a clean interface that allows users to build one on their own. From my side there are no further remarks, and I recommend accepting the paper. I hope you'll be able to keep up the good work on this framework in the future ;-)

davidfgold commented 4 years ago

@bluescarni @eloisabentivegna @jangmys @darioizzo I concur with jangmys' assessment, I've also checked the box for "State of the Field".

I also recommend accepting the paper.

eloisabentivegna commented 4 years ago

Thanks, @jangmys and @dgoldri25! It sounds like we are ready to accept the paper. @arfon, is there something special we need to do because of the double code repository?

arfon commented 4 years ago

@arfon, is there something special we need to do because of the double code repository?

We need the authors to make a single archive with e.g. Zenodo of all of the software associated with the submission. This will likely require a little extra work on the part of the authors compared with some of the automated methods for doing this with GitHub.

bluescarni commented 4 years ago

@arfon would it be enough to make an archive of the latest releases of pagmo and pygmo (i.e., from the released tarballs on github)?

arfon commented 4 years ago

@arfon would it be enough to make an archive of the latest releases of pagmo and pygmo (i.e., from the released tarballs on github)?

We would like the archive to include any changes that have been made as a result of this review. Would that be the case if you used the latest releases?

bluescarni commented 4 years ago

@arfon No, there were no code changes as a result of the review.

bluescarni commented 4 years ago

@arfon where should the archive be uploaded?

arfon commented 4 years ago

@arfon where should the archive be uploaded?

We need an archive DOI from e.g. Zenodo or figshare.

bluescarni commented 4 years ago

@arfon Is this ok?

https://zenodo.org/record/4013250

eloisabentivegna commented 4 years ago

@bluescarni, thanks for posting the archive DOI! The title should match the paper's, though. Could you change it to "A parallel global multiobjective framework for optimization: pagmo"?

eloisabentivegna commented 4 years ago

@whedon generate pdf

whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

eloisabentivegna commented 4 years ago

@whedon check references

whedon commented 4 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.3702783 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK

MISSING DOIs

- 10.1023/b:heur.0000026900.92269.ec may be a valid DOI for title: Paradiseo: A framework for the reusable design of parallel and distributed metaheuristics

INVALID DOIs

- None
bluescarni commented 4 years ago

@whedon generate pdf

whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

bluescarni commented 4 years ago

@whedon check references

bluescarni commented 4 years ago

@eloisabentivegna thanks for the PR! I have changed the title of the archive DOI.

whedon commented 4 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5281/zenodo.3702783 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- None
bluescarni commented 4 years ago

@whedon generate pdf

whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

bluescarni commented 4 years ago

@whedon check references

whedon commented 4 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1007/978-3-642-28789-3_7 is OK
- 10.5281/zenodo.3702783 is OK
- 10.1146/annurev.es.09.110178.000335 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- https://doi.org/10.1023/A:1008202821328 is INVALID because of 'https://doi.org/' prefix
bluescarni commented 4 years ago

@whedon generate pdf

whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

bluescarni commented 4 years ago

@whedon check references

whedon commented 4 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1007/978-3-642-28789-3_7 is OK
- 10.5281/zenodo.3702783 is OK
- 10.1146/annurev.es.09.110178.000335 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1023/A:1008202821328 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- None
eloisabentivegna commented 4 years ago

@bluescarni, thanks for checking everything!

By manual inspection, I noticed three further missing DOIs and created a new pull request. @arfon, do you know why these three references might not have been picked up by the automated check?

Also @arfon, is it OK that the paper "Repository" link points to the paper repository, not the pagmo/pygmo ones? If not, is there a way to fix this?

Other than these points, the proofs look fine to me. @bluescarni, as soon as you confirm you are fine with the proofs too, I will complete the pre-publication stage.

arfon commented 4 years ago

By a manual inspection, I noticed three further missing DOIs, and created a new pull request. @arfon, do you know why these three references might have not been picked up by the automated check?

Whedon simply uses the Crossref API to check for missing DOIs, they presumably aren't being found there...

Also @arfon, is it OK that the paper "Repository" link points to the paper repository, not the pagmo/pygmo ones? If not, is there a way to fix this?

It's not ideal, but this is a consequence of having the paper in a separate repository. I would suggest directly linking to both variants of the software, pagmo and pygmo, in the body of the paper to make it easier for the reader.

bluescarni commented 4 years ago

@whedon generate pdf

whedon commented 4 years ago

:point_right: Check article proof :page_facing_up: :point_left:

bluescarni commented 4 years ago

@whedon check references

whedon commented 4 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1007/978-3-642-28789-3_7 is OK
- 10.5281/zenodo.3702783 is OK
- 10.1146/annurev.es.09.110178.000335 is OK
- 10.1016/j.parco.2010.04.002 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1023/A:1008202821328 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.1016/j.advengsoft.2011.05.014 is OK
- 10.1023/B:HEUR.0000026900.92269.ec is OK

MISSING DOIs

- None

INVALID DOIs

- None