
[REVIEW]: COOLEST: the COde-independent Organized LEns STandard #5567

Closed (editorialbot closed this issue 1 year ago)

editorialbot commented 1 year ago

Submitting author: @aymgal (Aymeric Galan)
Repository: https://github.com/aymgal/COOLEST
Branch with paper.md (empty if default branch): main
Version: v0.1.0
Editor: @plaplant
Reviewers: @smsharma, @AlexandreAdam
Archive: 10.5281/zenodo.8207512

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/44c6c626404f241deecbb62cd4128e85"><img src="https://joss.theoj.org/papers/44c6c626404f241deecbb62cd4128e85/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/44c6c626404f241deecbb62cd4128e85/status.svg)](https://joss.theoj.org/papers/44c6c626404f241deecbb62cd4128e85)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@smsharma & @AlexandreAdam, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create your checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @plaplant know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @smsharma

📝 Checklist for @AlexandreAdam

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.16 s (602.2 files/s, 106631.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
JSON                            12              0              0           7048
Python                          52           1067           1483           3470
Jupyter Notebook                 6              0           1858            528
Markdown                        14            193              0            455
TeX                              2             18              0            316
YAML                             3             19             23             63
reStructuredText                 2             17              1             49
CSS                              1              5              3             18
make                             1              3              1              5
Ruby                             1              0              0              2
-------------------------------------------------------------------------------
SUM:                            94           1322           3369          11954
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.21105/joss.03283 is OK
- 10.1051/0004-6361/202244464 is OK
- 10.1093/mnras/stac1924 is OK
- 10.21105/joss.02825 is OK
- 10.3847/1538-4357/ac6de4 is OK
- 10.1051/0004-6361/201015481 is OK
- 10.1093/mnras/staa108 is OK
- 10.1088/1367-2630/9/12/447 is OK
- 10.1093/mnras/stx2064 is OK
- 10.1093/mnras/stab484 is OK
- 10.1051/0004-6361/202243401 is OK
- 10.3847/0004-637X/817/1/60 is OK
- 10.1051/0004-6361/202037929 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 1 year ago

Wordcount for paper.md is 1703

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

plaplant commented 1 year ago

@smsharma @AlexandreAdam thanks very much for agreeing to be reviewers! When you get a chance, please reply to this issue with:

@editorialbot generate my checklist

This will generate your reviewer checklist, which you can begin going through. If you would like to make suggestions for improvements, you can open issues on the upstream repo, which you can then refer to in comments in this thread. Please let me know if you have any questions!

smsharma commented 1 year ago

Review checklist for @smsharma

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

AlexandreAdam commented 1 year ago

Review checklist for @AlexandreAdam

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

AlexandreAdam commented 1 year ago

Installation

Please include a small statement in the README.md file describing the installation method, either with pip or by directly calling the setup.py script. There should be a method to install the package for general usage and one for development / testing. I have opened this issue, which should be quick to resolve.
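For instance, a couple of lines like the following in the README would be enough (the extras name below is a placeholder, not necessarily what the repository defines):

pip install .                # general usage
pip install -e ".[test]"     # development / testing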

Functionality

The full list of adopted conventions is not yet in the README.md (joss branch), nor is it in the paper. I have opened this issue for this. Since this is the main output of this project, I strongly suggest that the complete list be provided and described in detail somewhere in the project and in the paper. In itself, this will be very useful for other developers who may want to adjust their conventions manually in order to align them with COOLEST.

See this issue for bugs in the /examples folder.

Notebooks in /docs/notebooks/ are working fine.

I dug around in the code and found this notebook, which describes a different method (compared to lenstronomy) for computing the effective Einstein radii. Speed comparisons with lenstronomy are provided, which I reproduced. This notebook should be moved into the test folder as an explicit test. Now, as to the merit of this code, I cannot tell right away whether this algorithm is correct. It seems to me that it misses the mark set by an analytical value by a substantial margin. To be fair, so does lenstronomy. Yet if this code is used for reproducibility purposes, this seems like a rather important issue. This last point is subjective, so it is not an acceptance-blocking issue.

Documentation

While individual classes have neat documentation, I would find it useful to have a dedicated website hosting the documentation. The docs and HTML for this are already in the joss branch, but I could not find the documentation online. Please provide a link to it in the README.md file once it is properly hosted. See this issue.

For the purpose of this review, I hosted the documentation locally and have the following comments:

  1. I am confused by the authors' point on the page What is a "lensing entity"?, located at <host>/template_file/lensing_entities.html. The concept of a lensing plane is particularly useful, not only to distinguish it from the source plane, but also in the context of multiplane lensing. I do agree that the properties of a galaxy, like its mass profile and light profile, are semantically separate from the lensing plane, but this distinction does not seem relevant for what the authors want. If I understood the context correctly, the authors want a naming convention that attaches physical properties to physical objects while keeping the lensing formalism in the background. I do not disagree with that choice, but the text is rather vague on this and seems to miss the point.

  2. It would be great if the examples (notebooks) were more elaborate. For example, a notebook showing how to compare models, as showcased in the paper, is currently missing. I have opened this issue.

  3. I am completely missing the point described in the page Modes: mock vs best-fit. The usage of the MOCK mode is not illustrated in the examples, and the DOC mode is rather mysterious. In either case, I would want to see how the JSON template structure is modified by these choices (if it is), or what segment of the code is affected by them.

Tests

There is no documentation or mention in the README.md file on how to run the tests. I figured the authors were using pytest, but a small statement should be included in the README.md to tell users how to run it (a simple one-liner would be appreciated).

All tests pass on my system, although I get a significant amount of warnings related to the usage of deprecated functions. The specific warning messages I get from numpy look safe, so this is not an acceptance-blocking issue.

Task list for the authors

aymgal commented 1 year ago

Thank you very much @AlexandreAdam for your thorough review of our work! We will go through the entire list above and the GitHub issues, and address your points in the following days/weeks. Once this is done, I will summarize the changes here.

Just a quick remark: the documentation is hosted here, as mentioned on the GitHub repo. I will make this even clearer when updating the README based on your suggestions.

aymgal commented 1 year ago

Hi @AlexandreAdam, here are our answers to your comments.

If possible, I would prefer to have the list of conventions in the documentation / GitHub only, such that it can be completed and improved as it gets adopted by researchers and receives feedback from the community. Although the currently stated conventions should not change, it remains a possibility, and it is probable that other conventions will be added to the list in the future. Moreover, as the paper is already over the "standard" size, this would make it significantly longer.

I dug around in the code and found this notebook, which describes a different method (compared to lenstronomy) for computing the effective Einstein radii. Speed comparisons with lenstronomy are provided, which I reproduced. This notebook should be moved into the test folder as an explicit test. Now, as to the merit of this code, I cannot tell right away whether this algorithm is correct. It seems to me that it misses the mark set by an analytical value by a substantial margin. To be fair, so does lenstronomy. Yet if this code is used for reproducibility purposes, this seems like a rather important issue. This last point is subjective, so it is not an acceptance-blocking issue.

We added unit tests for various input parameters and grid resolutions in test/test. For these tests, we quote a maximum relative error of 5% on the Einstein radius, effective kappa slope, and effective half-light radius computations (the error being below 5% most of the time).

We note that COOLEST is not meant to reproduce the original modeling results (this would be unrealistic), but to provide tools to analyse lens models uniformly, as independently from the original modeling code as possible. Therefore, differences in the values of effective quantities compared to original lens modeling codes and theoretical values are expected, but they do not impact the goal of the standard.
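As an illustration, these tests essentially compare a grid-based estimate against a profile whose effective quantity is known analytically. Below is a minimal self-contained sketch of such a check (not the actual COOLEST test code), using a singular isothermal sphere whose effective Einstein radius equals theta_E exactly:

import numpy as np

def effective_einstein_radius(kappa, x, y, r_min=0.1, r_max=2.0, n_radii=400):
    # grid-based estimate: the radius at which the mean convergence
    # enclosed within that radius drops to 1
    r = np.hypot(x, y)
    for R in np.linspace(r_min, r_max, n_radii):
        if kappa[r <= R].mean() <= 1.0:
            return R
    return np.nan

def test_sis_einstein_radius():
    theta_E_true = 1.0
    # pixelated convergence map of a singular isothermal sphere (SIS),
    # for which the effective Einstein radius is exactly theta_E
    x, y = np.meshgrid(np.linspace(-2.5, 2.5, 500), np.linspace(-2.5, 2.5, 500))
    r = np.clip(np.hypot(x, y), 1e-6, None)
    kappa = theta_E_true / (2.0 * r)
    theta_E_num = effective_einstein_radius(kappa, x, y)
    # maximum 5% relative error, as quoted above
    assert abs(theta_E_num - theta_E_true) / theta_E_true < 0.05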

I am confused by the authors' point on the page What is a "lensing entity"?, located at /template_file/lensing_entities.html. The concept of a lensing plane is particularly useful, not only to distinguish it from the source plane, but also in the context of multiplane lensing. I do agree that the properties of a galaxy, like its mass profile and light profile, are semantically separate from the lensing plane, but this distinction does not seem relevant for what the authors want.

We reformulated the documentation, and you can see the commit diff here. Please let me know if this is still unclear. Note that the multiplane aspect is fully retained, as each lensing entity has a redshift associated to it.

If I understood the context correctly, the authors want a naming convention that attaches physical properties to physical objects while keeping the lensing formalism in the background. I do not disagree with that choice, but the text is rather vague on this and seems to miss the point.

This is exactly the goal. It has two main advantages: 1) it is more intuitive and makes it easier to access the information within the template file / through the Python API; 2) in the long run, it will come in handy when combining constraints on the properties of physical objects from different analyses (this is relevant to the goal of having a database of lens models that researchers can query). Hopefully the text is clearer now.
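For illustration, this is conceptually how lensing entities appear in the JSON template (the fields below are simplified for brevity and do not reproduce the exact schema):

{
  "lensing_entities": [
    {
      "type": "galaxy",
      "name": "lens galaxy",
      "redshift": 0.5,
      "mass_model": [{"type": "SIE", "parameters": {"theta_E": 1.2}}],
      "light_model": [{"type": "Sersic", "parameters": {"theta_eff": 0.8}}]
    },
    {
      "type": "galaxy",
      "name": "source galaxy",
      "redshift": 2.0,
      "mass_model": [],
      "light_model": [{"type": "Sersic", "parameters": {"theta_eff": 0.3}}]
    }
  ]
}

The mass and light profiles are attached to the galaxies themselves, and the redshift carried by each entity is what preserves the multiplane information.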

I am completely missing the point described in the page Modes: mock vs best-fit. The usage of the MOCK mode is not illustrated in the examples, and the DOC mode is rather mysterious. In either case, I would want to see how the JSON template structure is modified by these choices (if it is), or what segment of the code is affected by them.

We clarified the documentation, and you can see the commit diff here. We also completed this example notebook such that it also dumps templates with the "MOCK" and "DOC" modes. In addition to the default "MAP" mode, only the "MOCK" mode has an obvious application, as it can serve as an input file for a simulator code to generate mock data.

The "DOC" mode does not have explicit application, and was originally meant for generating automatic documentation by querying the file and rendering it on a webpage.

There is no documentation or mention in the README.md file on how to run the tests. I figured the authors were using pytest, but a small statement should be included in the README.md to tell users how to run it (a simple one-liner would be appreciated).

We added the pytest command in the README.
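For reference, it is a one-liner of the form below, run from the root of the repository (see the README for the exact command):

pytest test/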

All tests pass on my system, although I get a significant amount of warnings related to the usage of deprecated functions. The specific warning messages I get from numpy look safe, so this is not an acceptance-blocking issue.

Which warnings from numpy are you referring to? On my machine, I obtained a deprecation warning due to pytest (now fixed with this commit), and another one from numba (this one is related to lenstronomy, if I am correct). On the GitHub server, only the pytest one showed up before.

We addressed all of the opened issues and commented on them directly in our repo.

aymgal commented 1 year ago

Hi @plaplant, any news from @smsharma?

smsharma commented 1 year ago

@aymgal @plaplant Apologies for the delay, will have the review in by early next week.

smsharma commented 1 year ago

Review

plaplant commented 1 year ago

@smsharma @AlexandreAdam thanks very much for your thorough reviews! I appreciate all of the work you have put in thus far.

@aymgal please continue responding to the reviewers' comments. When you feel that you have sufficiently addressed all of the concerns, you can reply to this thread. At that point, the reviewers can go through and update their checklists if the issues have been addressed, or comment on any further issues. Let me know if you have any questions!

aymgal commented 1 year ago

Hi @smsharma, thank you for your comments! Here are our answers, with details on where to find the changes in the code, the docs, and the paper.

The introduction of the paper has been updated to address your comment. The changes can be seen in this PR diff.

The requirements_tests.txt file is indeed intended for running the unit tests with pytest. We made that clearer in the README. We also made some improvements to setup.py, such that running the command pip install ".[opt]" will automatically install the optional dependencies as well (see this PR).
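For context, such an optional dependency group is declared in setup.py roughly as follows (the dependency names below are placeholders, not the actual list):

from setuptools import setup, find_packages

setup(
    name="coolest",
    packages=find_packages(),
    install_requires=["numpy", "jsonpickle"],  # core dependencies (illustrative)
    extras_require={
        # enables: pip install ".[opt]"
        "opt": ["matplotlib", "getdist"],  # placeholder optional dependencies
    },
)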

Thank you for suggesting this; it is indeed a useful thing to have! We added more information with this commit to encourage contributions by the lensing community in a CONTRIBUTION.md document.

This has been fixed with this commit, and the error message should now be clearer. The issue was that one of the example COOLEST template files (from 'code_1') contained fields that no longer correspond to actual Python attributes in the coolest.template hierarchy, which caused jsonpickle to fail when constructing the Python objects.

Thank you for the suggestion: this is now fixed with this PR.

Please let us know if something more should be addressed.

aymgal commented 1 year ago

Thank you @plaplant @smsharma @AlexandreAdam we are done with our response to the reviewers.

AlexandreAdam commented 1 year ago

Review #2

List of conventions

If possible, I would prefer to have the list of conventions in the documentation / GitHub only, such that it can be completed and improved as it gets adopted by researchers and receives feedback from the community. Although the currently stated conventions should not change, it remains a possibility, and it is probable that other conventions will be added to the list in the future. Moreover, as the paper is already over the "standard" size, this would make it significantly longer.

That works for me. The conventions are clearly stated and easily accessible from the documentation and the README.md.

Tests of the effective quantities computation

We added unit tests for various input parameters and grid resolutions in test/test. For these tests, we quote a maximum relative error of 5% on the Einstein radius, effective kappa slope, and effective half-light radius computations (the error being below 5% most of the time). We note that COOLEST is not meant to reproduce the original modeling results (this would be unrealistic), but to provide tools to analyse lens models uniformly, as independently from the original modeling code as possible. Therefore, differences in the values of effective quantities compared to original lens modeling codes and theoretical values are expected, but they do not impact the goal of the standard.

Thank you for the clarification.

Clarification of the concept of lensing entities

The new formulation in the documentation is much clearer and to the point.

Clarification on the template "modes"

The documentation is now very clear on the modes keyword.

Deprecation warning when running tests

The specific DeprecationWarning I get is the following:

DeprecationWarning: `np.long` is a deprecated alias for `np.compat.long`. To silence this warning, use `np.compat.long` by itself. In the likely event your code does not need to work on Python 2 you can use the builtin `int` for which `np.compat.long` is itself an alias. Doing this will not modify any behaviour and is safe. When replacing `np.long`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
  Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
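For reference, the fix suggested by the warning itself is a one-line substitution (sketch):

import numpy as np

# before (deprecated since NumPy 1.20):
# value = np.long(42)

# after: the builtin int is a safe drop-in replacement,
# or an explicit precision where one is required:
value = int(42)  # or np.int64(42) / np.int32(42)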

Issues

All issues related to my previous review were resolved satisfactorily.

Conclusion

In conclusion, I am pleased to report that the code has met all the criteria in the checklist to my satisfaction.

Throughout the assessment process, it was clear to me that the authors have dedicated significant effort to ensuring the quality and robustness of the code. The code is modular, well tested, and well documented. Its purpose is clear and timely.

In light of these observations, I commend the authors for their meticulous attention to detail. It is my conclusion that the submitted code meets the JOSS standards for scientific open source software.

I therefore recommend this code for JOSS publication without reservation. Its adoption would be a valuable contribution to the gravitational lensing community.

plaplant commented 1 year ago

@AlexandreAdam thanks very much for your response! I am glad to hear that you recommend the submission for publication.

@smsharma do you have additional comments or suggestions for the submission? If so, feel free to respond to this thread. If not, you can update your review checklist accordingly. Thanks!

smsharma commented 1 year ago

Thanks very much @aymgal for addressing all of the comments, in particular the comprehensive contribution guidelines. At this point I can enthusiastically recommend acceptance in JOSS, and believe this will be a great tool to aid in gravitational lens modeling in a standardized way.

aymgal commented 1 year ago

Thanks a lot @AlexandreAdam and @smsharma for your detailed and serious reviews, as well as your suggestions to improve the standard we propose. We really appreciate it. Happy to see this going through the JOSS process!

plaplant commented 1 year ago

@AlexandreAdam @smsharma thank you both for your very thorough reviews! Happy to hear that this paper is ready to go.

@aymgal both reviewers have recommended acceptance of the submission, so we can move forward with the final acceptance checks. If the software has changed in the course of the review (excluding changes to the paper/manuscript itself), please make a new tagged release of the repository. After that, please archive it (on Zenodo, figshare, or some other long-term hosting platform). Once those are done, please post the version number and DOI of the archive in this thread. Let me know if you have any questions!

plaplant commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

aymgal commented 1 year ago

@plaplant Thanks! Here is the Zenodo upload, and the corresponding DOI is 10.5281/zenodo.7900798. I created a tag v0.1.0 on the repo, and merged the joss branch into main.

aymgal commented 1 year ago

We just added some minor changes to paper.md; the latest version is on the main branch.

plaplant commented 1 year ago

@aymgal thanks for the info!

plaplant commented 1 year ago

@editorialbot check references

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.21105/joss.03283 is OK
- 10.1051/0004-6361/202244464 is OK
- 10.1093/mnras/stac1924 is OK
- 10.1093/mnras/stac268 is OK
- 10.21105/joss.02825 is OK
- 10.3847/1538-4357/ac6de4 is OK
- 10.1051/0004-6361/201015481 is OK
- 10.1093/mnras/staa108 is OK
- 10.1088/1367-2630/9/12/447 is OK
- 10.1093/mnras/stx2064 is OK
- 10.1093/pasj/62.4.1017 is OK
- 10.1093/mnras/stab484 is OK
- 10.1051/0004-6361/202243401 is OK
- 10.3847/0004-637X/817/1/60 is OK
- 10.1051/0004-6361/202037929 is OK

MISSING DOIs

- None

INVALID DOIs

- None
plaplant commented 1 year ago

@editorialbot set 10.5281/zenodo.8207512 as archive

editorialbot commented 1 year ago

Done! archive is now 10.5281/zenodo.8207512

plaplant commented 1 year ago

@editorialbot set v0.1.0 as version

editorialbot commented 1 year ago

Done! version is now v0.1.0

plaplant commented 1 year ago

@editorialbot recommend-accept

editorialbot commented 1 year ago
Attempting dry run of processing paper acceptance...
editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.21105/joss.03283 is OK
- 10.1051/0004-6361/202244464 is OK
- 10.1093/mnras/stac1924 is OK
- 10.1093/mnras/stac268 is OK
- 10.21105/joss.02825 is OK
- 10.3847/1538-4357/ac6de4 is OK
- 10.1051/0004-6361/201015481 is OK
- 10.1093/mnras/staa108 is OK
- 10.1088/1367-2630/9/12/447 is OK
- 10.1093/mnras/stx2064 is OK
- 10.1093/pasj/62.4.1017 is OK
- 10.1093/mnras/stab484 is OK
- 10.1051/0004-6361/202243401 is OK
- 10.3847/0004-637X/817/1/60 is OK
- 10.1051/0004-6361/202037929 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 1 year ago

:wave: @openjournals/aass-eics, this paper is ready to be accepted and published.

Check final proof :point_right::page_facing_up: Download article

If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/4450, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

aymgal commented 1 year ago

Hi @plaplant, it looks like the latest compiled paper is not the one on the main branch (which is now the most up to date). Should I update the joss branch as well? Or is it possible to ensure the bot compiles the paper from the main branch?

dfm commented 1 year ago

@editorialbot set main as branch

This should do the trick!

editorialbot commented 1 year ago

Done! branch is now main

dfm commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

dfm commented 1 year ago

I opened a minor PR with some edits to the bibliography, but otherwise things are looking good to me!

aymgal commented 1 year ago

Thanks @dfm , it is now merged :)

dfm commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

dfm commented 1 year ago

@editorialbot recommend-accept

editorialbot commented 1 year ago
Attempting dry run of processing paper acceptance...
editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.21105/joss.03283 is OK
- 10.1051/0004-6361/202244464 is OK
- 10.1093/mnras/stac1924 is OK
- 10.1093/mnras/stac268 is OK
- 10.21105/joss.02825 is OK
- 10.3847/1538-4357/ac6de4 is OK
- 10.1051/0004-6361/201015481 is OK
- 10.1093/mnras/staa108 is OK
- 10.1088/1367-2630/9/12/447 is OK
- 10.1093/mnras/stx2064 is OK
- 10.1093/pasj/62.4.1017 is OK
- 10.1093/mnras/stab484 is OK
- 10.1051/0004-6361/202243401 is OK
- 10.3847/0004-637X/817/1/60 is OK
- 10.1051/0004-6361/202037929 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 1 year ago

:wave: @openjournals/aass-eics, this paper is ready to be accepted and published.

Check final proof :point_right::page_facing_up: Download article

If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/4465, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept