Closed · editorialbot closed this 1 year ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.03 s (1035.7 files/s, 110678.1 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 19 188 740 1460
Markdown 4 88 0 312
TeX 1 14 0 154
YAML 4 8 6 51
Rmd 1 28 38 12
-------------------------------------------------------------------------------
SUM: 29 326 784 1989
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 1940
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1093/jnci/22.1.173 is OK
- 10.1093/oxfordjournals.aje.a112581 is OK
- 10.1016/0021-9681(66)90062-2 is OK
- 10.1097/EDE.0000000000000457 is OK
- 10.1111/j.2517-6161.1983.tb01242.x is OK
- 10.2307/2533848 is OK
- 10.7326/M16-2607 is OK
- 10.1111/rssb.12348 is OK
- 10.1007/978-0-387-87959-8 is OK
- 10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B is OK
- 10.1037/h0037350 is OK
- 10.1214/12-aos1058 is OK
- 10.1159/000315883 is OK
- 10.3102/10769986011003207 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
👋🏼 @LucyMcGowan @gcskoenig @MichaelSchomaker this is the review thread for the paper. All of our communications will happen here from now on.
As a reviewer, the first step is to create a checklist for your review by entering
@editorialbot generate my checklist
at the top of a new comment in this thread.
These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.
The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#REVIEW_NUMBER
so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.
We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.
Please feel free to ping me (@fabian-s) if you have any questions/concerns. Sorry this spent so much time in pre-review.
Comments about content: while the space in JOSS is limited, I have a few smaller suggestions and questions that could be addressed in the paper in 1-2 sentences each:
👋🏼 @LucyMcGowan could you update us on your timeline to address the reviewers' comments?
Thank you so much! I hope to have these addressed by next week.
Thank you so much for the reviews, they certainly helped make the paper better! Please find my point-by-point responses below.
I have added a license.md file to the repository.
I have updated the README to include this.
I have added the tests via Github actions
I have removed this section and amended the previous to list the various methods (rather than all function names)
I have removed this section
I have added the following text to describe the current (single package) landscape rather than an overview package
There are several related methods for conducting sensitivity analyses for unmeasured confounders [@Cornfield; @Bross; @Schlesselman; @Rosenbaum:1983; @Lin; @lash2009applying; @rosenbaum1986dropping; @cinelli2020making; @VanderWeele:2017ki; @Ding], some of which have their own R packages; for example, methods in @cinelli2020making can be implemented using the sensemakr R package, obsSens implements methods from @Lin, and EValue implements methods in @VanderWeele:2017ki. However, there is not currently a single R package that has a unified grammar allowing the user to conduct appropriate sensitivity analysis for their study.
I have corrected this spacing
I have added a DAG as well as an additional figure to the final example
The dependencies are in the DESCRIPTION file of the package itself (and handled by CRAN on installation). I would be happy to include this information in the paper; however, I couldn't find any other R papers that do -- is there a standard way to include this information?
I have added a license.md file to the repository
I have added a bugreports link to the DESCRIPTION file as well as a code of conduct to the repository (and linked in the README)
Comments about content: while the space in JOSS is limited, I have a few smaller suggestions and questions that could be addressed in the paper in 1-2 sentences each:
Mathematically, this is a bias adjustment, so it would account for any direct effects (if there are several correlated confounders that are missing, you can think of this as measuring the summary score of the independent effects). I have changed the word "confounder" to "confounding" in several places to indicate that this is accounting for all "confounding". For example, the Normally distributed U could be the linear combination of several confounders.
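The point that a single Normally distributed U can stand in for a linear combination of several confounders can be sanity-checked numerically. The following Python sketch is purely illustrative (the weights, seed, and sample size are made up, and this is not code from the tipr package): it simulates three independent standard Normal confounders and confirms that their weighted sum is centered at zero with variance equal to the sum of the squared weights, as expected for a linear combination of independent Normals.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Three hypothetical independent unmeasured confounders (illustrative only).
u1 = rng.normal(0.0, 1.0, n)
u2 = rng.normal(0.0, 1.0, n)
u3 = rng.normal(0.0, 1.0, n)
w = np.array([0.5, 0.3, 0.2])  # arbitrary weights

# A linear combination of independent Normals is itself Normal, with
# variance equal to the sum of squared weights (here 0.25 + 0.09 + 0.04).
u = w[0] * u1 + w[1] * u2 + w[2] * u3

print(round(u.mean(), 2))   # ~0.0
print(round(u.var(), 2))    # ~0.38, matching (w**2).sum()
```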
Thank you for this important comment. I completely agree with the concern about dichotomizing. A main motivation for performing these tipping point analyses is that they suggest not only a different effect, but potentially a completely different direction of the effect itself, which could completely alter conclusions. I have added the following line to the summary:
The adjust functions allow an investigator to examine how a specific confounder (or set of specific confounders) would change a result, while the tip functions provide sensitivity analyses that allow an investigator to examine how extreme an unmeasured confounder would need to be in order to change the direction of the effect, and thus often the conclusions of the study.
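To make the tipping-point idea concrete, here is a minimal Python sketch under the simplest linear-model, omitted-variable-bias setting (the function name `tip_mean_difference` and all the numbers are hypothetical illustrations, not the tipr API): the observed effect bound shifts by the product of the confounder–outcome effect and the exposure-group mean difference in a standardized Normal confounder, so the tipping point is the mean difference at which that shift carries the bound to the null.

```python
def tip_mean_difference(effect_bound, confounder_outcome_effect):
    """Mean difference in a standardized Normal confounder (exposed minus
    unexposed) needed to shift `effect_bound` to exactly zero, under the
    linear omitted-variable-bias form:
        adjusted = observed - effect * mean_difference
    Hypothetical helper for illustration, not the tipr API."""
    return effect_bound / confounder_outcome_effect

# Made-up numbers: observed lower 95% CI bound of 0.2, and an assumed
# confounder-outcome effect of 0.5.
lb, gamma = 0.2, 0.5
delta = tip_mean_difference(lb, gamma)
print(delta)            # 0.4: a 0.4 SD mean shift in U tips the bound

# Check: applying the bias form with this delta returns the bound to zero.
print(lb - gamma * delta)  # 0.0
```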
I have added the following text to specify what I mean by "standardized" as well as to explain the difference in means:
If quantifying the impact of a standardized Normally distributed confounder, the impact of the unmeasured confounder on the exposure is parameterized as a difference in means between the unmeasured confounder in the exposed population and the unexposed population. By "standardized Normally distributed" we mean that the unmeasured confounder is Normally distributed with mean $\mu_1$ for the exposed and $\mu_0$ for the unexposed and unit variance. (Note that a standardized Normally distributed confounder can be created from a Normally distributed confounder by dividing by the standard deviation.)
I have also changed the name of the parameter, since smd can have different meanings.
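The standardization described above can be checked with a short Python sketch (the group means, shared standard deviation, and sample sizes are made up for illustration): dividing a Normal confounder by its shared standard deviation yields unit variance, and the raw mean difference between the exposed and unexposed groups becomes a difference expressed in standard-deviation units.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical unmeasured confounder: Normal with different means in the
# exposed and unexposed groups but a shared standard deviation of 2.
sd = 2.0
u_exposed = rng.normal(1.0, sd, n)
u_unexposed = rng.normal(0.0, sd, n)

# Standardize by dividing by the shared standard deviation: the result has
# unit variance, and the mean difference becomes (mu1 - mu0) / sd = 0.5.
z_exposed = u_exposed / sd
z_unexposed = u_unexposed / sd

print(round(z_exposed.var(), 1))                        # ~1.0
print(round(z_exposed.mean() - z_unexposed.mean(), 1))  # ~0.5
```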
Thank you! Thank you again for reading through this paper, I really appreciate the comments.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1093/jnci/22.1.173 is OK
- 10.1093/oxfordjournals.aje.a112581 is OK
- 10.1016/0021-9681(66)90062-2 is OK
- 10.1097/EDE.0000000000000457 is OK
- 10.1111/j.2517-6161.1983.tb01242.x is OK
- 10.2307/2533848 is OK
- 10.7326/M16-2607 is OK
- 10.1111/rssb.12348 is OK
- 10.1007/978-0-387-87959-8 is OK
- 10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B is OK
- 10.1037/h0037350 is OK
- 10.1214/12-aos1058 is OK
- 10.1159/000315883 is OK
- 10.3102/10769986011003207 is OK
MISSING DOIs
- None
INVALID DOIs
- None
thank you @LucyMcGowan, excellent. I added some very minor comments on formatting / language directly in the diffs of your commits.
@MichaelSchomaker @gcskoenig are you satisfied with these changes as well?
re:
The dependencies are not explicitly stated in the paper.
that's fine for a JOSS paper on an R package since the dependencies are explicit in the package's DESCRIPTION
Thank you so much! I tried to add a line space again and I think it fixed the caption?
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
Thank you so much! I tried to add a line space again and I think it fixed the caption?
looks like it :)
I think I'll just go ahead and make an editorial decision to accept this now, seems fine to me. (Sorry Gunnar & Michael, I want this done before my vacation)
@LucyMcGowan At this point could you please:
I can then move forward with accepting the submission.
@editorialbot set 10.5281/zenodo.6958926 as archive
Done! Archive is now 10.5281/zenodo.6958926
@editorialbot set <v1.0.0> as version
Done! version is now <v1.0.0>
@editorialbot set v1.0.0 as version
Done! version is now v1.0.0
@editorialbot recommend-accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1093/jnci/22.1.173 is OK
- 10.1093/oxfordjournals.aje.a112581 is OK
- 10.1016/0021-9681(66)90062-2 is OK
- 10.1097/EDE.0000000000000457 is OK
- 10.1111/j.2517-6161.1983.tb01242.x is OK
- 10.2307/2533848 is OK
- 10.7326/M16-2607 is OK
- 10.1111/rssb.12348 is OK
- 10.1007/978-0-387-87959-8 is OK
- 10.1002/(SICI)1097-0258(19981015)17:19<2265::AID-SIM918>3.0.CO;2-B is OK
- 10.1037/h0037350 is OK
- 10.1214/12-aos1058 is OK
- 10.1159/000315883 is OK
- 10.3102/10769986011003207 is OK
MISSING DOIs
- None
INVALID DOIs
- None
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right::page_facing_up: Download article
If the paper PDF and the deposit XML files look good in https://github.com/openjournals/joss-papers/pull/3414, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept
Thank you so much for your diligent and constructive reviews, @gcskoenig & @MichaelSchomaker!
@LucyMcGowan, congratulations!
I think I'll just go ahead and make an editorial decision to accept this now, seems fine to me. (Sorry Gunnar & Michael, I want this done before my vacation)
@fabian-s – given the reviewers left open a number of checkbox items, I would like them to confirm here that they're happy with the revisions made by the author here (and update their checklists) as appropriate.
@MichaelSchomaker @gcskoenig – can you confirm you're happy with these revisions?
We can take care of this while you're on vacation :-) 🏖️
Hi everyone, I have also been on vacation.
The paper is nice and the revisions good. Some last very minor points:
@LucyMcGowan: 1) You write: "if there are several correlated confounders that are missing, you can think of this as measuring the summary score of the independent effects" -> why not simply spell this out in the paper? 2) Why keep the phrase "rendering it inconclusive" throughout the paper? The point estimate could cross the null, but the point estimate, together with the associated CI, may be similarly suggestive of the direction of the effect across different estimates/levels of unmeasured confounding. That is, the CI contains the set of parameter values that are more compatible with the data than values lying outside the interval (given the background assumptions); a point estimate of -0.1 with a 95% CI of (-0.3, 0.1) may be similarly suggestive of a positive effect as an estimate of 0.01 with a CI of (-0.05, 0.07). I think this is an optional point, but you may consider either omitting the phrase or putting more emphasis on the descriptive and exploratory nature of the tool, rather than on the cutoff point.
@arfon: I have no other comments, or checklist items, than those just posted. I also recommend acceptance and pursuing the final editorial steps after a short reply from the author.
:wave: @gcskoenig – just waiting on your final 👍 to move ahead please?
@MichaelSchomaker I definitely hear what you’re saying re: “rendering it inconclusive.” That language came from a review of the methods several years ago, when we used to say “rendering it null” (since the interval included the null) and it was pointed out that just barely including the null was more likely inconclusive — but maybe I should just remove the “rendering…” text altogether.
Technical question, @fabian-s I archived the previous version, if I made this change would I just increment that?
@LucyMcGowan – you don't need to make a new archive of the software if you've just tweaked the language in the paper.
@LucyMcGowan : yes, maybe just drop the phrase to avoid any unintentional misuse of it from package users when writing up their results.
Thank you! I have dropped the phrase from the current draft -- please let me know if you need any other updates from me. Thank you all for making this paper better!
Thanks, I have no further comments.
@editorialbot recommend-accept
Submitting author: @LucyMcGowan (Lucy D'Agostino McGowan)
Repository: https://github.com/LucyMcGowan/tipr
Branch with paper.md (empty if default branch): joss
Version: v1.0.0
Editor: @fabian-s
Reviewers: @gcskoenig, @MichaelSchomaker
Archive: 10.5281/zenodo.6958926
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@gcskoenig & @MichaelSchomaker, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @fabian-s know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @gcskoenig
📝 Checklist for @MichaelSchomaker