openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal
701 stars · 36 forks

[REVIEW]: hessQuik: Fast Hessian computation of composite functions #4171

Closed whedon closed 2 years ago

whedon commented 2 years ago

Submitting author: @elizabethnewman (Elizabeth Newman)
Repository: https://github.com/elizabethnewman/hessQuik
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @jbytecode
Reviewers: @GregaVrbancic, @yhtang
Archive: 10.5281/zenodo.6478757

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/126dc76c56d70524ac58b47a9bee7460"><img src="https://joss.theoj.org/papers/126dc76c56d70524ac58b47a9bee7460/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/126dc76c56d70524ac58b47a9bee7460/status.svg)](https://joss.theoj.org/papers/126dc76c56d70524ac58b47a9bee7460)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@GregaVrbancic & @yhtang, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @jbytecode know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Review checklist for @GregaVrbancic

✨ Important: Please do not use the "Convert to issue" functionality when working through this checklist. Instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Review checklist for @yhtang

✨ Important: Please do not use the "Convert to issue" functionality when working through this checklist. Instead, please open any new issues associated with your review in the software repository associated with the submission. ✨

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Related issues: https://github.com/elizabethnewman/hessQuik/issues/2

Software paper

whedon commented 2 years ago

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @GregaVrbancic, @yhtang it looks like you're currently assigned to review this paper :tada:.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default notification settings for watched repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

whedon commented 2 years ago

Wordcount for paper.md is 2045

whedon commented 2 years ago
Software report (experimental):

github.com/AlDanial/cloc v 1.88  T=0.10 s (504.7 files/s, 46324.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          37            717            243           1811
Jupyter Notebook                 3              0            388            775
Markdown                         9            106              3            316
TeX                              1             29              0            201
-------------------------------------------------------------------------------
SUM:                            50            852            634           3103
-------------------------------------------------------------------------------

Statistical information for the repository '674facf220f018c87f0f2210' was
gathered on 2022/02/16.
The following historical commit information, by author, was found:

Author                     Commits    Insertions      Deletions    % of changes
elizabethnewman                128          6362           3591          100.00

Below are the number of rows from each author that have survived and are still
intact in the current revision:

Author                     Rows      Stability          Age       % in comments
elizabethnewman            2771           43.6          1.4                6.35
whedon commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK

MISSING DOIs

- 10.1002/gamm.202100008 may be a valid DOI for title: An Introduction to Deep Generative Modeling
- 10.1109/cvpr.2016.90 may be a valid DOI for title: Deep residual learning for image recognition
- 10.1016/j.jcp.2018.10.045 may be a valid DOI for title: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations

INVALID DOIs

- None
whedon commented 2 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jbytecode commented 2 years ago

Dear reviewers @GregaVrbancic and @yhtang

Here is the review thread. Each reviewer has 20 checkboxes, each with a corresponding reviewing task. Whenever you complete a task, you can manually check it. The reviewing process is continuous in JOSS, so you can interact with the author and the editor during the review.

Please do not hesitate to ask anything. I will always be following the ongoing process.

Thank you in advance.


Dear @elizabethnewman

The editorial bot suggests some DOIs for the missing bibtex items. You can start by correcting them as suggested. You can use the test environment at https://whedon.theoj.org/ to compile and examine the pdf.

Thank you in advance.

elizabethnewman commented 2 years ago

Dear @jbytecode

Thank you for suggesting that I correct the DOIs now. I have added the DOIs the editorial bot suggested.

Thank you!

jbytecode commented 2 years ago

@whedon check references

whedon commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1002/gamm.202100008 is OK
- 10.1109/cvpr.2016.90 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK

MISSING DOIs

- None

INVALID DOIs

- None
jbytecode commented 2 years ago

@whedon generate pdf

@elizabethnewman - thanks, it is better now!

whedon commented 2 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

yhtang commented 2 years ago

I am not quite sure whether it is meaningful to have two co-first authors for this two-author paper, and whether this accurately reflects the contributions of the authors:

elizabethnewman:
225 commits
11,384 ++    6,644 --

lruthotto:
2 commits
7 ++    1 --

@elizabethnewman: could you please share some thoughts on it?

elizabethnewman commented 2 years ago

Dear @yhtang

Thank you for your question about the contributions to the paper. The commit history does not reflect the contributions accurately. The repository was a true collaboration between me and my co-author. We had many discussions regarding the repository design, the examples to provide, and the paper writeup. It is an accurate representation of the collaboration to list us as co-first authors.

jbytecode commented 2 years ago

Thank you for opening the issue and discussing the improvements. Although this is a common practice, it is more appropriate to have the actual (and the vast majority of) dialogue here and keep a small amount of reviewing process out of the main thread.

Thank you all!

yhtang commented 2 years ago

> Thank you for opening the issue and discussing the improvements. Although this is a common practice, it is more appropriate to have the actual (and the vast majority of) dialogue here and keep a small amount of reviewing process out of the main thread.
>
> Thank you all!

Hi @jbytecode, I might be a bit confused here :sweat_smile:.

> Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

I thought the text above, which I quoted from the top of the review checklists, asks for the opposite. Perhaps I misunderstood something here. Could you please help to clarify?

yhtang commented 2 years ago

> Dear @yhtang
>
> Thank you for your question about the contributions to the paper. The commit history does not reflect the contributions accurately. The repository was a true collaboration between me and my co-author. We had many discussions regarding the repository design, the examples to provide, and the paper writeup. It is an accurate representation of the collaboration to list us as co-first authors.

Got it. Thanks for the info!

jbytecode commented 2 years ago

Dear @yhtang - yes, the best place for hosting lengthy details is a separate thread. No confusion; it was a suggestion for following up the main process. No problem. Thank you!

yhtang commented 2 years ago

I think we need some clarification regarding what suffices as 'automated tests' following up on the discussion here. Specifically, are continuous integration setups required in addition to having unit tests written?

jbytecode commented 2 years ago

Formally, at JOSS we always check whether the software has unit tests with an acceptable level of coverage.

Although it is not a rigid requirement, I suggest adding GitHub Actions or another continuous integration tool. In our case, yes, it would be better to have CI on GitHub so we can verify that the latest changes do not break the code.
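For reference, a minimal GitHub Actions workflow along these lines might look as follows. This is only a sketch: the workflow name, Python version, and test command are assumptions for illustration, not taken from the hessQuik repository.

```yaml
# .github/workflows/test.yml -- hypothetical CI sketch
name: tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      # Install the package itself, then run whatever test suite exists.
      - run: pip install -e .
      - run: python -m unittest discover  # assumes unittest-style tests
```

The exact install and test commands would need to match the repository's actual layout.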

danielskatz commented 2 years ago

See https://joss.readthedocs.io/en/latest/review_criteria.html#tests for more guidance

yhtang commented 2 years ago

@jbytecode @danielskatz Thank you both for the information!

yhtang commented 2 years ago

@elizabethnewman: I'm having some difficulties running the timing result notebook on my local machine. Specifically, the problem seems to be related to invoking system commands (e.g. !python) in a notebook cell. In that case, the system's default Python interpreter gets called, instead of the one in the virtual environment.

(Screenshot attached: 2022-03-03)
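(A common workaround for this class of problem, sketched generically rather than for this specific notebook, is to shell out via `sys.executable`, which always points at the interpreter running the notebook kernel, i.e. the one inside the active virtual environment.)

```python
import subprocess
import sys

# sys.executable is the interpreter running this kernel -- unlike a bare
# `!python`, which may resolve to the system's default interpreter.
result = subprocess.run(
    [sys.executable, "--version"],
    capture_output=True,
    text=True,
    check=True,
)
# Older interpreters printed the version to stderr, newer ones to stdout.
print(result.stdout or result.stderr)
```

In a notebook cell, the same idea reads `!{sys.executable} run_timing_test.py ...` (script name illustrative).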
jbytecode commented 2 years ago

Dear @elizabethnewman,

Could you please update your status?

Thank you in advance.

elizabethnewman commented 2 years ago

Hi @yhtang

Thank you for running the code. To ensure reproducibility, the timing result notebook is designed to be run on Google Colab only. However, the script run_timing_test.py, which is called in the notebook, should be compatible with your local machine. I have clarified the instructions in the README file of the examples directory. I hope this helps!

elizabethnewman commented 2 years ago

> Dear @elizabethnewman,
>
> Could you please update your status?
>
> Thank you in advance.

Hi @jbytecode

Thank you for checking in. I am currently addressing the reviewers' helpful questions and comments. Is there anything else I need to do to update my status?

jbytecode commented 2 years ago

Dear @elizabethnewman,

I just wanted to ping you and make sure you were receiving the notifications.

Thank you!

yhtang commented 2 years ago

> However, the script run_timing_test.py, which is called in the notebook, should be compatible with your local machine. I have clarified the instructions in the examples directory README file. I hope this helps!

I'll give it a try. Thanks!

> Thank you for running the code. To ensure reproducibility, timing result notebook is designed to be run on Google Colab only.

This actually brings up another issue that I wanted to ask while reading through the paper. Since Colab assigns GPUs based on resource availability (and in my own experience we can potentially get anything from a decade-old K80 to a newer model like V100), I'm not quite sure that it is the best platform to showcase reproducibility --- at least not so quantitatively. Someone would have to be a bit lucky in order to obtain the same hardware resource. This was also part of my motivation to run the notebook locally.

Perhaps it'll be a good idea to clarify and mention these subtleties in the paper?
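(One generic way to make Colab timing numbers interpretable despite the GPU lottery is to record which accelerator was assigned alongside the results. The sketch below is illustrative, not part of hessQuik, and assumes the standard `nvidia-smi` CLI is available when a GPU is present; it falls back gracefully otherwise.)

```python
import shutil
import subprocess

# Record which accelerator this runtime was assigned, so that any
# timing numbers can be interpreted relative to the hardware used.
if shutil.which("nvidia-smi"):
    gpu = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True,
        text=True,
    ).stdout.strip()
else:
    gpu = "no NVIDIA GPU detected"
print(f"Assigned accelerator: {gpu}")
```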

elizabethnewman commented 2 years ago

> However, the script run_timing_test.py, which is called in the notebook, should be compatible with your local machine. I have clarified the instructions in the examples directory README file. I hope this helps!
>
> I'll give it a try. Thanks!
>
> Thank you for running the code. To ensure reproducibility, timing result notebook is designed to be run on Google Colab only.
>
> This actually brings up another issue that I wanted to ask while reading through the paper. Since Colab assigns GPUs based on resource availability (and in my own experience we can potentially get anything from a decade-old K80 to a newer model like V100), I'm not quite sure that it is the best platform to showcase reproducibility --- at least not so quantitatively. Someone would have to be a bit lucky in order to obtain the same hardware resource. This was also part of my motivation to run the notebook locally.
>
> Perhaps it'll be a good idea to clarify and mention these subtleties in the paper?

Thank you for your comments. We expect users to get similar qualitative (not quantitative) results to the ones presented in the paper when they run the code on their own Colab instance or locally. Also, in some ways, using Google Colab can be more reproducible than using local runs where users may not have access to the same hardware. We also chose to use Colab to make it easier for all users to run our tests, especially on GPUs.

To address your comments directly, I have added clarifying language in the paper about what we mean by reproducible and I have added a new Jupyter notebook to the examples directory to allow users to run the timing tests locally more easily. I hope this is helpful!
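(To illustrate the "qualitative, not quantitative" distinction with a generic example: a timing test is usually judged by the trend it shows across problem sizes, not by absolute wall-clock numbers, which differ across hardware. The function, depths, and iteration count below are made up for illustration and are not hessQuik's actual benchmark.)

```python
import timeit

def composite(x, depth):
    # Toy composite function f = g o g o ... o g (depth times);
    # a stand-in for evaluating a chain of network layers.
    for _ in range(depth):
        x = 3.0 * x * (1.0 - x)  # logistic-map step as the "layer" g
    return x

# Absolute times vary by machine; the qualitative trend (time growing
# roughly linearly with depth) is what one expects to reproduce.
times = {}
for depth in (10, 100, 1000):
    times[depth] = timeit.timeit(lambda: composite(0.25, depth), number=200)
    print(f"depth={depth:5d}  time={times[depth]:.4f}s")
```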

elizabethnewman commented 2 years ago

Dear @yhtang and @GregaVrbancic,

I hope you are doing well! I tried to address your helpful comments about reproducibility a couple of weeks ago. Have I addressed your concerns adequately? Do you have any other comments or suggestions about the repository and/or paper? Thank you again for reviewing this work!

jbytecode commented 2 years ago

Dear reviewers @yhtang and @GregaVrbancic - could you please update your status? Thank you in advance.

yhtang commented 2 years ago

Dear @elizabethnewman and @jbytecode, I am still working to set up some computation to confirm the performance claims. Other than that, I don't have any serious concerns regarding the remaining checklist items. I should be able to come back within a week or so with any findings regarding the benchmarks.

GregaVrbancic commented 2 years ago

Dear @jbytecode and @elizabethnewman, I will finish my review by the end of the week. Sorry for the late response; I have been a bit busy.

GregaVrbancic commented 2 years ago

@whedon generate pdf

editorialbot commented 2 years ago

My name is now @editorialbot

GregaVrbancic commented 2 years ago

@editorialbot generate pdf

editorialbot commented 2 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

GregaVrbancic commented 2 years ago

Dear @jbytecode and @elizabethnewman, I have finished my review. In addition to the already adequately addressed issues https://github.com/elizabethnewman/hessQuik/issues/4 and https://github.com/elizabethnewman/hessQuik/issues/5, I have no further concerns regarding the package or paper. Therefore, I recommend this submission for publication.

jbytecode commented 2 years ago

@yhtang - could you please update your status? We look forward to hearing from you. Thank you in advance.

jbytecode commented 2 years ago

Dear @yhtang

Since we are a little beyond the normal reviewing timeline, I just wanted to ask whether we could finalize the process soon.

Sorry for pinging and bothering you.

Thank you in advance.

yhtang commented 2 years ago

Hi @jbytecode

Sorry for the delay! I have just successfully reproduced the benchmark results on my machine and checked the corresponding review item(s). That concludes my review. Please let me know if you need any further input or discussion.

jbytecode commented 2 years ago

Dear @yhtang

Thank you for your response and your review; however, a single checkbox seems to remain unchecked. Could you please complete your review if it is no longer an issue?

Thank you.

yhtang commented 2 years ago

Done! 😄

jbytecode commented 2 years ago

@editorialbot check references

editorialbot commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1002/gamm.202100008 is OK
- 10.1109/cvpr.2016.90 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK

MISSING DOIs

- None

INVALID DOIs

- None
jbytecode commented 2 years ago

@editorialbot generate pdf

editorialbot commented 2 years ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

jbytecode commented 2 years ago

Dear @elizabethnewman

I have just created a pull request (https://github.com/elizabethnewman/hessQuik/pull/6) in the software repository.

Please review the changes carefully and apply them if you agree with the corrections.

Thank you in advance.

elizabethnewman commented 2 years ago

Dear @jbytecode,

Thank you for creating the pull request. I have looked through your corrections and agree with them.

Please let me know if you need any additional information at this time, and thank you again!

jbytecode commented 2 years ago

@editorialbot check references

editorialbot commented 2 years ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1002/gamm.202100008 is OK
- 10.1109/cvpr.2016.90 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK

MISSING DOIs

- None

INVALID DOIs

- None