Closed: whedon closed this issue 3 years ago.
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @martinju, @gjo11 it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this, do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Failed to discover a Statement of need section in paper
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.21105/joss.01541 is OK
- 10.21105/joss.01412 is OK
- 10.5281/zenodo.3952153 is OK
- 10.5281/zenodo.3952214 is OK
- 10.1111/2041-210X.12225 is OK
- 10.21105/joss.02445 is OK
- 10.21105/joss.02306 is OK
- 10.1098/rsif.2017.0213 is OK
- 10.2307/1912557 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Software report (experimental):
```
github.com/AlDanial/cloc v 1.88  T=0.72 s (186.0 files/s, 23690.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                              119           2323           3649           6689
XML                              1              0             42           1484
Markdown                         6            507              0           1135
YAML                             3             37              2            229
TeX                              1             29              0            194
Rmd                              3            158            309            156
-------------------------------------------------------------------------------
SUM:                           133           3054           4002           9887
-------------------------------------------------------------------------------
```
Statistical information for the repository '560a357b9ea205c92f619769' was gathered on 2021/03/29.
The following historical commit information, by author, was found:

```
Author               Commits   Insertions   Deletions   % of changes
Daniel                     4          176         376          73.40
Dominique Makowski         1          200           0          26.60
```

Below are the number of rows from each author that have survived and are still intact in the current revision:

```
Author   Rows   Stability   Age   % in comments
```
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@martinju, @gjo11 (aided by @sachsmc): Thanks for agreeing to review. Please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If possible, create issues (and cross-reference them) in the submission's repository to avoid overly specific discussions in this review thread.
If you have any questions or concerns please let me know.
@strengejacke Please note the error "Failed to discover a Statement of need section in paper". Please fix, recompile and notify the reviewers.
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thanks for opening the review! I fixed the error when compiling the paper and explicitly named the section "statement of need".
@strengejacke Please change "an" in the paper title to "An".
Done.
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thank you. This seems like a really nice and useful package! Just a few things: the outputs shown in the paper do not reflect what is produced by the current version of the package (the version under review is 0.7.1). I suppose they may have been generated using an older version.
1. The following example is given in the paper:

```r
library(see)
model <- lm(mpg ~ wt + am + gear + vs * cyl, data = mtcars)
check_model(model)
```
Running the last command, the package wants to use "qqplotr", which should likely be listed as "suggests".
Also, the visuals are slightly different from those in the paper, which may have been generated using an older version.
2. Running the example

```r
library(brms)
set.seed(123)
model <- brm(mpg ~ wt + (1 | cyl) + (1 + wt | gear), data = mtcars)
icc(model)
```

outputs

```
Adjusted ICC: 0.929
Conditional ICC: 0.797
```
which is similar although not exactly the same as in the paper.
3. The example

```r
test_performance(lm1, lm2, lm3, lm4)
```

outputs

```
lm1 | lm |
lm2 | lm | > 1000
lm3 | lm | > 1000
lm4 | lm | > 1000
Each model is compared to lm1.
```
rather than the output in the paper.
4. The indices of the output of the example `test_bf(lm1, lm2, lm3, lm4)` are not prefixed by "lm" as in the output in the paper.
@gjo11 Thanks for your comments. I have addressed your issues; however, some are related to dependencies on the GitHub versions of packages.
The second point you mentioned, the slightly different figure in the paper, might be because we have largely revised the plot style of `check_model()` based on user feedback (including improvements for color-blind users, see https://github.com/easystats/see/issues/97). So installing performance and see from GitHub should give you the same results.
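For reference, the development versions can be installed from GitHub roughly like this (a minimal sketch, assuming the remotes package is available):

```r
# Install the GitHub (development) versions of performance and see;
# requires the remotes package.
install.packages("remotes")
remotes::install_github("easystats/performance")
remotes::install_github("easystats/see")
```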
Regarding the slightly different ICC values: I am not sure whether this was due to changes in RStan, brms or insight. I have checked the example on two different computers (both Windows, R 4.0.4), and both yielded the same result; I have updated the numbers in the paper.
Points 3 and 4: I updated the paper; these were indeed outputs from "older" versions of the `print()` function.
@mikldk I am trying to start the review, but I am not able to tick off the items in the checklist. Also, when I try to click the "accept invitation" link, it says it has expired. I am quite sure I clicked that link when I was first assigned as a reviewer, so I am not sure if that is related. Please let me know how to proceed.
@whedon re-invite @martinju as reviewer
OK, the reviewer has been re-invited.
@martinju please accept the invite by clicking this link: https://github.com/openjournals/joss-reviews/invitations
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@gjo11 All relevant updates to packages that were previously only available on GitHub have now been accepted on CRAN, so you should be able to reproduce all outputs from the paper with both the CRAN and GitHub versions. I re-generated the PDF proof with the changes you suggested, so the most recent draft was just generated by whedon.
:wave: @gjo11, please update us on how your review is going (this is an automated reminder).
:wave: @martinju, please update us on how your review is going (this is an automated reminder).
👋 @martinju, please update us on how your review is going (this is an automated reminder).
I have done the majority of the review work. Will finish up and give my feedback in a few days. Spoiler: Very nice package!
Done with the review!
I agree with @gjo11 that performance seems like a very nice package. There is clearly a comprehensive amount of work behind it. The package serves an important purpose, seems to be well programmed, is easy to use, is well documented and tested, and provides visually appealing plots and console output. I only have a few comments. I am putting them in a task list below, as I don't think it is necessary to make separate GitHub issues for these comments.
Author list
Feature requests (Not required for acceptance):
Paper:
@strengejacke please respond to each of these comments.
@martinju Thanks for your comments and suggestions. Let me summarize what I have done so far to address your issues.
Author list: Philip is an early member of the team who hasn't contributed much code lately; however, he made substantial contributions to this paper. Thus, since he contributed to the package code and ideas as well as to the documentation/paper, from our point of view he definitely deserves authorship of this paper.
Feature requests: Thanks for those suggestions! I have now implemented `performance_mae()` (or its alias `mae()`) as a first step. RMSE can already be calculated with `performance_rmse()` (or `rmse()`). AUC is available in `bayestestR::area_under_curve()`, while the performance package provides a function to compute the ROC curve (`performance_roc()`). We have also opened a new issue (https://github.com/easystats/performance/issues/266) and added your suggestions to an existing, related issue (https://github.com/easystats/performance/issues/36) in order to work on these ideas.
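For illustration, a minimal usage sketch of these error measures (assuming a simple linear model on the built-in mtcars data; the ROC/AUC helpers are only referenced in comments):

```r
library(performance)

# simple example model on built-in data
model <- lm(mpg ~ wt + cyl, data = mtcars)

performance_mae(model)   # mean absolute error (alias: mae())
performance_rmse(model)  # root mean squared error (alias: rmse())

# For classification models, performance_roc() computes the ROC curve,
# and bayestestR::area_under_curve() can be used to obtain an AUC.
```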
Paper:
I hope I have addressed all issues to your satisfaction!
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@martinju, @gjo11 Can you please ensure your checklists are updated and get back when you have done so? If there are any unresolved issues, please let me know, too.
I have no remaining issues and I am happy to recommend this for publication. Thank you to everyone who put work into this!
Just a minor thing about this:

> I have now implemented `performance_mae()` (or its alias `mae()`) as a first step. RMSE can already be calculated with `performance_rmse()` (or `rmse()`).

My point here was to compute these measures for a model, but on out-of-sample data. From what I can see, the `performance_*` functions only do this for the data used to fit the model. Maybe you could just add two new optional arguments to the functions, `ytest` and `xtest`, which are used to compute the measures if provided?
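For illustration, something along these lines (a hypothetical helper and argument names following the suggestion above, not part of the current performance API; assumes a model with a standard predict() method):

```r
# Hypothetical out-of-sample RMSE, following the suggested xtest/ytest idea;
# not part of the performance package.
rmse_oos <- function(model, xtest, ytest) {
  pred <- predict(model, newdata = xtest)
  sqrt(mean((ytest - pred)^2))
}

# Usage sketch: fit on a subset of mtcars and evaluate on the held-out rows.
train <- mtcars[1:22, ]
test  <- mtcars[23:32, ]
fit <- lm(mpg ~ wt + cyl, data = train)
rmse_oos(fit, xtest = test, ytest = test$mpg)
```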
If necessary we can continue the discussion in the relevant issue. Dealing with this is not necessary to warrant JOSS publication.
I am happy with the response and adjustments from @strengejacke to my comments. (Note: I consider the above feature request to be something for a potential future version, not something required at this stage.)
@mikldk All points in the checklist are now checked off, and I recommend the current version of this package and paper for publication! Nice work @strengejacke et al.
Thanks to you all for your suggestions and comments that helped improve the paper and repo!
@gjo11: Can you also confirm that you have finished the review and recommend that this paper is now published?
@strengejacke :
@whedon generate pdf
I confirm I have finished the review and recommend publication.
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@mikldk
I confirm that the GitHub release as well as the Zenodo archive are the correct package version and in the state that should be published.
The DOI is 10.5281/zenodo.4700887.
@whedon set 10.5281/zenodo.4700887 as archive
OK. 10.5281/zenodo.4700887 is the archive.
@martinju, @gjo11 Thank you very much for your effort in reviewing this paper!
@whedon accept
Attempting dry run of processing paper acceptance...
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/2246
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/2246, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
e.g.
@whedon accept deposit=true
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.21105/joss.01541 is OK
- 10.21105/joss.01412 is OK
- 10.5281/zenodo.3952153 is OK
- 10.5281/zenodo.3952214 is OK
- 10.1111/2041-210X.12225 is OK
- 10.21105/joss.02445 is OK
- 10.21105/joss.02306 is OK
- 10.1098/rsif.2017.0213 is OK
- 10.2307/1912557 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Thanks to all for the smooth review and editing process! 🎉
@whedon accept deposit=true
Doing it live! Attempting automated processing of paper acceptance...
🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨
Here's what you must now do:
Party like you just published a paper! 🎉🌈🦄💃👻🤘
Any issues? Notify your editorial technical team...
Submitting author: @strengejacke (Daniel Lüdecke)
Repository: https://github.com/easystats/performance
Version: 0.7.1
Editor: @mikldk
Reviewers: @martinju, @gjo11
Archive: 10.5281/zenodo.4700887
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@martinju & @gjo11, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @mikldk know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @martinju
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @gjo11
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper