Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @GregaVrbancic, @yhtang it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this, do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Wordcount for paper.md is 2045
Software report (experimental):
github.com/AlDanial/cloc v 1.88 T=0.10 s (504.7 files/s, 46324.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          37            717            243           1811
Jupyter Notebook                 3              0            388            775
Markdown                         9            106              3            316
TeX                              1             29              0            201
-------------------------------------------------------------------------------
SUM:                            50            852            634           3103
-------------------------------------------------------------------------------
Statistical information for the repository '674facf220f018c87f0f2210' was gathered on 2022/02/16.
The following historical commit information, by author, was found:

Author             Commits    Insertions    Deletions    % of changes
elizabethnewman        128          6362         3591          100.00

Below are the number of rows from each author that have survived and are still intact in the current revision:

Author             Rows    Stability    Age    % in comments
elizabethnewman    2771         43.6    1.4             6.35
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK
MISSING DOIs
- 10.1002/gamm.202100008 may be a valid DOI for title: An Introduction to Deep Generative Modeling
- 10.1109/cvpr.2016.90 may be a valid DOI for title: Deep residual learning for image recognition
- 10.1016/j.jcp.2018.10.045 may be a valid DOI for title: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear reviewers @GregaVrbancic and @yhtang
Here is the review thread. Each reviewer has 20 checkboxes, each with a corresponding reviewing task. Whenever you complete a task, you can manually check it off. The reviewing process in JOSS is continuous, so you can interact with the author and the editor during the review.
Please do not hesitate to ask anything. I will always be following the ongoing process.
Thank you in advance.
Dear @elizabethnewman
The editorial bot suggests some DOIs for the missing bibtex items. You can start by correcting them as suggested. You can use the test environment at https://whedon.theoj.org/ for compiling and examining the PDF.
Thank you in advance.
Dear @jbytecode
Thank you for suggesting that I correct the DOIs now. I have now added the DOIs the editorial bot suggested.
Thank you!
@whedon check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/gamm.202100008 is OK
- 10.1109/cvpr.2016.90 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@whedon generate pdf
@elizabethnewman - thanks, it is better now!
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
I am not quite sure whether it is meaningful to have two co-first authors for this two-author paper, and whether this accurately reflects the contributions of the authors:
elizabethnewman:
225 commits
11,384 ++ 6,644 --
lruthotto:
2 commits
7 ++ 1 --
@elizabethnewman: could you please share some thoughts on it?
Dear @yhtang
Thank you for your question about the contributions to the paper. The commit history does not reflect the contributions accurately. The repository was a true collaboration between me and my co-author. We had many discussions regarding the repository design, the examples to provide, and the paper writeup. It is an accurate representation of the collaboration to list us as co-first authors.
Thank you for opening the issue and discussing the improvements. Although opening separate issues is common practice, it is more appropriate to have the actual (and the vast majority of the) dialogue here and to keep only a small part of the reviewing process out of the main thread.
Thank you all!
Hi @jbytecode, I might be a bit confused here :sweat_smile:.
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
I thought the text above, which I quoted from the top of the review checklists, asks for the opposite. Perhaps I misunderstood something here. Could you please help to clarify?
Dear @yhtang
Thank you for your question about the contributions to the paper. The commit history does not reflect the contributions accurately. The repository was a true collaboration between me and my co-author. We had many discussions regarding the repository design, the examples to provide, and the paper writeup. It is an accurate representation of the collaboration to list us as co-first authors.
Got it. Thanks for the info!
Dear @yhtang - yes, the best place for hosting the lengthy details is a separate thread, so no confusion there. It was just a suggestion for following up on the main process. No problem. Thank you!
I think we need some clarification regarding what suffices as 'automated tests' following up on the discussion here. Specifically, are continuous integration setups required in addition to having unit tests written?
Formally, in JOSS we always check whether the software has unit tests with an acceptable level of coverage.
Although it is not a rigid requirement, I suggest adding GitHub Actions or another continuous integration tool. In our case, yes, it would be better to have CI on GitHub so we can verify that the latest changes don't break the code.
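As a concrete illustration of the kind of unit test meant here (a generic pytest-style sketch with a hypothetical test function, not the package's actual test suite), a derivative check might look like this:

import torch

def f(x):
    # hypothetical smooth test function; any differentiable map works here
    return (x ** 3).sum()

def test_gradient_matches_finite_differences():
    # analytical gradient via autograd
    x = torch.randn(5, dtype=torch.float64, requires_grad=True)
    f(x).backward()
    analytic = x.grad.clone()
    # central finite-difference approximation, entry by entry
    h = 1e-6
    fd = torch.zeros_like(x)
    with torch.no_grad():
        for i in range(x.numel()):
            e = torch.zeros_like(x)
            e[i] = h
            fd[i] = (f(x + e) - f(x - e)) / (2 * h)
    assert torch.allclose(analytic, fd, atol=1e-6)

A CI workflow would then simply run pytest on every push so that regressions surface immediately.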
See https://joss.readthedocs.io/en/latest/review_criteria.html#tests for more guidance
@jbytecode @danielskatz Thank you both for the information!
@elizabethnewman: I'm having some difficulties running the timing result notebook on my local machine. Specifically, the problem seems to be related to invoking system commands (e.g. !python) in a notebook cell. In that case, the system's default Python interpreter gets called instead of the one in the virtual environment.
Dear @elizabethnewman,
Could you please update your status?
Thank you in advance.
Hi @yhtang
Thank you for running the code. To ensure reproducibility, the timing result notebook is designed to be run on Google Colab only. However, the script run_timing_test.py, which is called in the notebook, should be compatible with your local machine. I have clarified the instructions in the examples directory README file. I hope this helps!
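If you would rather stay inside the notebook, one common workaround (assuming an IPython-based kernel) is to invoke the script through the kernel's own interpreter rather than whatever python resolves to on the PATH:

import sys
# {sys.executable} expands to the interpreter the kernel itself runs under,
# so the virtual environment's packages are picked up
!{sys.executable} run_timing_test.py

Alternatively, %run run_timing_test.py executes the script inside the kernel process itself.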
Dear @elizabethnewman,
Could you please update your status?
Thank you in advance.
Hi @jbytecode
Thank you for checking in. I am addressing the helpful questions and comments from the reviewers now. Is there anything else I have to do to update my status?
Dear @elizabethnewman,
I just wanted to ping you and be sure if you were receiving the notifications.
Thank you!
However, the script run_timing_test.py, which is called in the notebook, should be compatible with your local machine. I have clarified the instructions in the examples directory README file. I hope this helps!
I'll give it a try. Thanks!
Thank you for running the code. To ensure reproducibility, the timing result notebook is designed to be run on Google Colab only.
This actually brings up another issue that I wanted to ask about while reading through the paper. Since Colab assigns GPUs based on resource availability (and in my own experience we can get anything from a decade-old K80 to a newer model like a V100), I'm not quite sure that it is the best platform to showcase reproducibility, at least not so quantitatively. Someone would have to be a bit lucky to obtain the same hardware resource. This was also part of my motivation to run the notebook locally.
Perhaps it'll be a good idea to clarify and mention these subtleties in the paper?
Thank you for your comments. We expect users to get similar qualitative (not quantitative) results to the ones presented in the paper when they run the code on their own Colab instance or locally. Also, in some ways, using Google Colab can be more reproducible than using local runs where users may not have access to the same hardware. We also chose to use Colab to make it easier for all users to run our tests, especially on GPUs.
To address your comments directly, I have added clarifying language in the paper about what we mean by reproducible and I have added a new Jupyter notebook to the examples directory to allow users to run the timing tests locally more easily. I hope this is helpful!
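For context when comparing timings across sessions, a small snippet along these lines (illustrative only, not part of the package) records which accelerator a given Colab session was actually assigned:

import torch
# log the hardware a benchmark run actually landed on
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
else:
    print('running on CPU')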
Dear @yhtang and @GregaVrbancic,
I hope you are doing well! I tried to address your helpful comments about reproducibility a couple of weeks ago. Have I addressed your concerns adequately? Do you have any other comments or suggestions about the repository and/or paper? Thank you again for reviewing this work!
Dear reviewers @yhtang and @GregaVrbancic - could you please update your status? Thank you in advance.
Dear @elizabethnewman and @jbytecode, I am still working to set up some computation to confirm the performance claims. Other than that, I don't have any serious concern regarding the remaining checklist items. I should be able to come back within a week or so with any findings regarding the benchmarks.
Dear @jbytecode and @elizabethnewman, I will finish my review by the end of the week. Sorry for the late response; I have been a bit busy.
@whedon generate pdf
My name is now @editorialbot
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @jbytecode and @elizabethnewman, I have finished my review. In addition to the already adequately addressed issues https://github.com/elizabethnewman/hessQuik/issues/4 and https://github.com/elizabethnewman/hessQuik/issues/5, I have no further concerns regarding the package or paper. Therefore, I recommend this submission for publication.
@yhtang - could you please update your status? We are looking forward to hearing from you. Thank you in advance.
Dear @yhtang
Since we are running a little beyond the normal reviewing timeline, I just wanted to ask whether we could finalize the process soon.
Sorry for pinging and bothering you.
Thank you in advance.
Hi @jbytecode
Sorry for the delay! I have just successfully reproduced the benchmark results on my machine and checked the corresponding review item(s). That concludes my review. Please let me know if you need any further input or discussion.
Dear @yhtang
Thank you for the response and your review; however, a single checkbox seems to be unchecked. Could you please complete your review if it is no longer an issue?
Thank you.
Done! 😄
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/gamm.202100008 is OK
- 10.1109/cvpr.2016.90 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Dear @elizabethnewman
I have just created a pull request (https://github.com/elizabethnewman/hessQuik/pull/6) in the software repository.
Please review them carefully and apply them if you agree with the corrections.
Thank you in advance.
Dear @jbytecode,
Thank you for creating the pull request. I looked through your corrections and agree with them.
Please let me know if you need any additional information at this time, and thank you again!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1002/gamm.202100008 is OK
- 10.1109/cvpr.2016.90 is OK
- 10.1016/j.jcp.2018.10.045 is OK
- 10.1109/CVPR.2016.90 is OK
- 10.1109/EuroSP.2016.36 is OK
- 10.1007/978-1-4842-4470-8_7 is OK
- 10.1137/1.9781611973808 is OK
- 10.1073/pnas.1922204117 is OK
- 10.1137/07070111X is OK
- 10.1007/s40304-017-0103-z is OK
- 10.1088/1361-6420/aa9a90 is OK
- 10.1073/pnas.1916634117 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Submitting author: @elizabethnewman (Elizabeth Newman)
Repository: https://github.com/elizabethnewman/hessQuik
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @jbytecode
Reviewers: @GregaVrbancic, @yhtang
Archive: 10.5281/zenodo.6478757
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@GregaVrbancic & @yhtang, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @jbytecode know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @GregaVrbancic
✨ Important: Please do not use the Convert to issue functionality when working through this checklist, instead, please open any new issues associated with your review in the software repository associated with the submission. ✨
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @yhtang
✨ Important: Please do not use the Convert to issue functionality when working through this checklist, instead, please open any new issues associated with your review in the software repository associated with the submission. ✨
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Related issues: https://github.com/elizabethnewman/hessQuik/issues/2
Software paper