Closed editorialbot closed 2 years ago
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.
For a list of things I can do to help you, just type:
@editorialbot commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@editorialbot generate pdf
Software report:
github.com/AlDanial/cloc v 1.88 T=0.20 s (472.0 files/s, 134722.0 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
R 65 3679 3808 12491
HTML 3 107 6 2847
TeX 2 101 0 768
Markdown 5 187 0 652
Rmd 3 395 761 452
YAML 12 105 15 446
SAS 1 36 22 81
JSON 1 0 0 60
Dockerfile 1 8 0 47
Bourne Shell 1 4 1 34
C/C++ Header 1 0 1 0
-------------------------------------------------------------------------------
SUM: 95 4622 4614 17878
-------------------------------------------------------------------------------
gitinspector failed to run statistical information for the repository
Wordcount for paper.md is 940
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- 10.1093/biomet/86.4.948 may be a valid DOI for title: Miscellanea. Small-sample degrees of freedom with multiple imputation
- 10.1177/0962280220932189 may be a valid DOI for title: Bootstrap inference for multiple imputation under uncongeniality and misspecification
- 10.1002/(sici)1097-0258(20000515)19:9<1141::aid-sim479>3.0.co;2-f may be a valid DOI for title: Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians
- 10.1080/10543406.2013.834911 may be a valid DOI for title: Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation
- 10.1111/rssa.12423 may be a valid DOI for title: Information-anchored sensitivity analysis: Theory and application
- 10.1002/sim.8569 may be a valid DOI for title: Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: a practical guide
- 10.1002/pst.2019 may be a valid DOI for title: The attributable estimand: a new approach to account for intercurrent events
- 10.1080/07474930008800459 may be a valid DOI for title: Bootstrap tests: How many bootstraps?
- 10.1080/19466315.2020.1736141 may be a valid DOI for title: The Use of a Variable Representing Compliance Improves Accuracy of Estimation of the Effect of Treatment Allocation Regardless of Discontinuation in Trials with Incomplete Follow-up
- 10.1111/j.1540-5907.2010.00447.x may be a valid DOI for title: What to do about missing values in time-series cross-section data
- 10.1214/aos/1043351257 may be a valid DOI for title: A unified jackknife theory for empirical best prediction with M-estimation
- 10.1080/10543406.2015.1094810 may be a valid DOI for title: On analysis of longitudinal clinical trials with missing data using reference-based imputation
- 10.1177/009286150804200402 may be a valid DOI for title: Recommendations for the primary analysis of continuous endpoints in longitudinal clinical trials
- 10.1177/2168479019836979 may be a valid DOI for title: Aligning estimators with estimands in clinical trials: putting the ICH E9 (R1) guidelines into practice
- 10.1093/biomet/58.3.545 may be a valid DOI for title: Recovery of inter-block information when block sizes are unequal
- 10.1080/19466315.2019.1689845 may be a valid DOI for title: Aligning Treatment Policy Estimands and Estimators—A Simulation Study in Alzheimer’s Disease
- 10.1080/10543406.2014.928306 may be a valid DOI for title: Comment on “Analysis of longitudinal trials with protocol deviations: A framework for relevant, accessible assumptions, and inference via multiple imputation,” by Carpenter, Roger, and Kenward
- 10.1080/10543401003777995 may be a valid DOI for title: MMRM versus MI in dealing with missing data—a comparison based on 25 NDA data sets
- 10.1177/0962280216683570 may be a valid DOI for title: Should multiple imputation be the method of choice for handling missing data in randomized trials?
- 10.1111/biom.12702 may be a valid DOI for title: On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models
- 10.1214/20-sts793 may be a valid DOI for title: Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws
- 10.1093/biomet/85.4.935 may be a valid DOI for title: Large-sample theory for parametric multiple imputation procedures
- 10.1080/10543406.2019.1684308 may be a valid DOI for title: A causal modelling framework for reference-based imputation and tipping point analysis in clinical trials with quantitative outcome
INVALID DOIs
- None
@DanielRivasMD and @JoranTiU - Please find above instructions for getting started with the reviews. The first task is to generate the checklists with the syntax mentioned above. Please let me know if you have any questions :)
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@DanielRivasMD and @JoranTiU - please feel free to generate your review checklists per the above syntax. Please let me know if you have any questions about this. Thanks!!
@fboehm I have checked the points on the checklist above and I imagine there is a way to generate this list and mark it
@DanielRivasMD - please use the command
@editorialbot generate my checklist
Please let me know if you encounter difficulties in creating the checklist. Thanks again!
@DanielRivasMD & @JoranTiU - how is the review going? Is there anything that I can help with? Thanks again!
@editorialbot
I'm sorry human, I don't understand that. You can see what commands I support by typing:
@editorialbot commands
@editorialbot commands
Hello @DanielRivasMD, here are the things you can ask me to do:
# List all available commands
@editorialbot commands
# Get a list of all editors' GitHub handles
@editorialbot list editors
# Check the references of the paper for missing DOIs
@editorialbot check references
# Perform checks on the repository
@editorialbot check repository
# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist
# Set a value for branch
@editorialbot set joss-paper as branch
# Generates the pdf paper
@editorialbot generate pdf
# Get a link to the complete list of reviewers
@editorialbot list reviewers
@fboehm Hi, my apologies that it took so long to get back to this. I have now taken a deeper look into the source code (I had already read the paper and looked at the repo superficially) and tested the functionality. Overall, I think it is a fairly complete package: extensively tested, well documented, and with a use case attached as a dataset (indicated in the checklist that I hope you can see above ^). I have only three issues with this project, and I am not entirely certain whether they fall within the JOSS criteria:
1) Community/contributor guidelines: the README only links to the issue tracker, with no suggestions on format or further documentation. 2) The paper does not present the state of the field for comparison with other software. 3) Dependencies must be installed before the package can be used and tested; I would have liked this to be documented at least, if not set up automatically.
Please do not hesitate to come back if any point is unclear, or further issues must be discussed.
Thank you, @DanielRivasMD ! I think that you make some good points in the comment. @nociale, please address the points that @DanielRivasMD made above:
Thank you!
@DanielRivasMD & @JoranTiU - how is the review going? Is there anything that I can help with? Thanks again!
It's going well :). Should finish this week :)
@fboehm Hi, I agree with @DanielRivasMD: it is a nice and fairly complete package with good documentation. The vignettes are also very helpful. In addition to the points raised by @DanielRivasMD (i.e., (i) there is only a link in the README to open issues, but no suggestions on format or further documentation; (ii) the paper does not present the state of the field for comparison with other software; and (iii) in order to use and test the package, dependencies must be installed), I had the following minor remarks:
Overall though, as I said, like the package :).
Best, Joran
@nociale - The reviews for your package are very positive. Please make the changes suggested or discuss them here in the thread. Thanks again!
Thank you, @JoranTiU and @DanielRivasMD for your timely and thorough reviews. Once the suggestions are implemented, I'll ask you to verify that you're satisfied with the updates.
You're very welcome @fboehm! 😊. And great! 😊
Hi @fboehm, @DanielRivasMD and @JoranTiU
Thanks a lot for appreciating the package and for the careful review. Please find below answers to the issues that you have pointed out.
About the issues pointed out by @DanielRivasMD and @fboehm here https://github.com/openjournals/joss-reviews/issues/4251#issuecomment-1094361800:
To inform users of recent updates to rbmi, as well as to give potential contributors more information on how to contribute to the package, we have created two additional files: NEWS.md and CONTRIBUTING.md. The first keeps track of the rbmi version history; the second contains more detailed information about the development process.
Could you please clarify what exactly is expected regarding the comparison with other software? The paper reports that we made a comparison with the SAS "5 macros" (https://www.lshtm.ac.uk/research/centres-projects-groups/missing-data#dia-missing-data). This covers only the methodology based on Bayesian multiple imputation, since that is the only method of rbmi that is also implemented in those macros. To our knowledge, no other publicly available software implements any of the other methods. Additionally, rbmi shows results consistent with the two papers mentioned in the manuscript (Wolbers et al. (2021) and Noci et al. (2021)) for all the methods implemented. In particular, in Wolbers et al. (2021) (https://arxiv.org/abs/2109.11162), where the methods were compared on a publicly available example dataset from an antidepressant clinical trial, the reported treatment-effect estimates and standard errors are very similar to those obtained by Tang (https://pubmed.ncbi.nlm.nih.gov/28407203/) on the same dataset with a different implementation. This dataset is also included in rbmi (see ?antidepressant_data for its documentation).
Since rbmi has been uploaded to CRAN, one can install it via install.packages("rbmi"); all dependencies are then installed automatically. Moreover, we have added more information on rbmi's main dependencies in the CONTRIBUTING.md file (sections "glmmTMB" and "rstan").
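As a sketch of the installation route described in this thread (the CRAN mirror URL is an assumption; the package name, dataset name, and dependency names are as stated above):

```r
# Install rbmi from CRAN; dependencies such as glmmTMB and rstan
# (see the CONTRIBUTING.md sections mentioned above) are pulled in
# automatically. The mirror URL here is an assumption.
install.packages("rbmi", repos = "https://cloud.r-project.org")

# Load the package and open the help page for the bundled example
# dataset from the antidepressant trial mentioned above.
library(rbmi)
?antidepressant_data
```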
About the issues pointed out by @JoranTiU here https://github.com/openjournals/joss-reviews/issues/4251#issuecomment-1094959617:
We already have an open issue for the runnable examples. This task is currently on our backlog because its implementation isn't trivial: most functions depend on prior functions having been executed, so standalone examples are likely to be verbose and uninformative. We are actively considering how best to proceed with this.
Thanks for suggesting the documentation enhancements at comments (2), (3), and (4). We have addressed them (this is the link to the pull request).
I am looking forward to hearing back from you. Kind regards, Alessandro
@DanielRivasMD and @JoranTiU - Please see the authors' comments above. Please evaluate the revisions and continue the discussion with the authors to ensure that a final product meets the criteria. Please ask me questions as they arise. Be sure to check off items from the checklist once you feel that the requirement is met.
thanks again!!
Hi all (@fboehm @nociale), apologies for the delay in this reply. In reply to the comments above:
1) I believe that adding such files helps enormously to make this package a collective effort. I applaud this resolution.
2) Initially, I did not review the references that the authors indicate above. In my opinion, however, they provide enough evidence for comparison between this and other implementations. I leave it to the editors to decide whether this particular aspect is consistent with the journal's policy.
3) When I installed rbmi via install.packages(), it errored due to a dependency, which is not unheard of in R. Nevertheless, I think that the newly provided documentation should suffice.
Thanks.
Hi everyone (@fboehm @nociale) !
Apologies for my late reply too, I was on holiday last week ;).
I'm very happy and very satisfied with the changes :). The runnable examples would be nice, but the vignette makes up for this to a large extent, so not having them definitely isn't a deal breaker as far as I'm concerned :).
Thanks for all the work and the great package :).
Best, Joran
@JoranTiU and @DanielRivasMD - Thank you for continuing the reviews. If you're satisfied with the changes, please check off the corresponding boxes in the checklist. If you feel that more changes are needed, please specify them here. Thanks again!
Done! :)
Me too.
@JoranTiU and @DanielRivasMD - thank you for your thorough reviews.
@nociale - the reviewers have recommended your submission for publication. The next step is for me to proofread it. I'll do that in the next few days. I'll comment here with any changes that are needed.
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@nociale - the manuscript looks great. I have no edits to offer on it. The one thing that remains before proceeding is to verify that the DOIs are accurate. I'll do that now. If I need you to add a DOI, I'll indicate it below.
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- None
MISSING DOIs
- 10.1093/biomet/86.4.948 may be a valid DOI for title: Miscellanea. Small-sample degrees of freedom with multiple imputation
- 10.1177/0962280220932189 may be a valid DOI for title: Bootstrap inference for multiple imputation under uncongeniality and misspecification
- 10.1002/(sici)1097-0258(20000515)19:9<1141::aid-sim479>3.0.co;2-f may be a valid DOI for title: Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians
- 10.1080/10543406.2013.834911 may be a valid DOI for title: Analysis of longitudinal trials with protocol deviation: a framework for relevant, accessible assumptions, and inference via multiple imputation
- 10.1111/rssa.12423 may be a valid DOI for title: Information-anchored sensitivity analysis: Theory and application
- 10.1002/sim.8569 may be a valid DOI for title: Sensitivity analysis for clinical trials with missing continuous outcome data using controlled multiple imputation: a practical guide
- 10.1002/pst.2019 may be a valid DOI for title: The attributable estimand: a new approach to account for intercurrent events
- 10.1080/07474930008800459 may be a valid DOI for title: Bootstrap tests: How many bootstraps?
- 10.1080/19466315.2020.1736141 may be a valid DOI for title: The Use of a Variable Representing Compliance Improves Accuracy of Estimation of the Effect of Treatment Allocation Regardless of Discontinuation in Trials with Incomplete Follow-up
- 10.1111/j.1540-5907.2010.00447.x may be a valid DOI for title: What to do about missing values in time-series cross-section data
- 10.1214/aos/1043351257 may be a valid DOI for title: A unified jackknife theory for empirical best prediction with M-estimation
- 10.1080/10543406.2015.1094810 may be a valid DOI for title: On analysis of longitudinal clinical trials with missing data using reference-based imputation
- 10.1177/009286150804200402 may be a valid DOI for title: Recommendations for the primary analysis of continuous endpoints in longitudinal clinical trials
- 10.1177/2168479019836979 may be a valid DOI for title: Aligning estimators with estimands in clinical trials: putting the ICH E9 (R1) guidelines into practice
- 10.1093/biomet/58.3.545 may be a valid DOI for title: Recovery of inter-block information when block sizes are unequal
- 10.1080/19466315.2019.1689845 may be a valid DOI for title: Aligning Treatment Policy Estimands and Estimators—A Simulation Study in Alzheimer’s Disease
- 10.1080/10543406.2014.928306 may be a valid DOI for title: Comment on “Analysis of longitudinal trials with protocol deviations: A framework for relevant, accessible assumptions, and inference via multiple imputation,” by Carpenter, Roger, and Kenward
- 10.1080/10543401003777995 may be a valid DOI for title: MMRM versus MI in dealing with missing data—a comparison based on 25 NDA data sets
- 10.1177/0962280216683570 may be a valid DOI for title: Should multiple imputation be the method of choice for handling missing data in randomized trials?
- 10.1111/biom.12702 may be a valid DOI for title: On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models
- 10.1214/20-sts793 may be a valid DOI for title: Maximum likelihood multiple imputation: Faster imputations and consistent standard errors without posterior draws
- 10.1093/biomet/85.4.935 may be a valid DOI for title: Large-sample theory for parametric multiple imputation procedures
- 10.1080/10543406.2019.1684308 may be a valid DOI for title: A causal modelling framework for reference-based imputation and tipping point analysis in clinical trials with quantitative outcome
INVALID DOIs
- None
@nociale - please add DOIs to the cited references. I suggest removing from the bib file any references that aren't actually used in paper.md. It looks like you have 9 references in the PDF; we need accurate DOIs only for those, where available. Thanks again!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/10543406.2013.834911 is OK
- 10.1002/sim.8569 is OK
- 10.1002/9781119013563 is OK
- 10.1214/20-STS793 is OK
MISSING DOIs
- 10.1002/pst.2234 may be a valid DOI for title: Standard and reference-based conditional mean imputation
INVALID DOIs
- None
@nociale - Thank you for adding the DOIs. Can you please check to see if the above listed DOI is valid for the reference listed under "MISSING DOIs"? Please comment here with more information.
Thanks again!
Hi @fboehm
That reference under "MISSING DOIs" was recently published (May 19th, 2022). We have just added its DOI; I hope it is good now. I double-checked that the inserted DOI correctly links to the reference's web page: https://onlinelibrary.wiley.com/doi/10.1002/pst.2234.
Kind regards!
@editorialbot check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.1080/10543406.2013.834911 is OK
- 10.1002/sim.8569 is OK
- 10.1002/9781119013563 is OK
- 10.1214/20-STS793 is OK
- 10.1002/pst.2234 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@editorialbot generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
The DOIs in the PDF all resolve to the correct targets.
@nociale - great! Please make a tagged release of your package and archive it, for example, with Zenodo. Please report here the version number for the release and the DOI for the archive. Let me know if I should provide instructions for either of these tasks. Thanks again!
@nociale - are there any questions about the next steps? Once I have the archive DOI and release version number, we should be ready to publish the submission.
@fboehm My apologies for the long delay (I was on annual leave for a while). Below is the information about the version number of the release and the DOI of the archive:
I hope we addressed the tasks correctly; please let me know whether the information provided is enough or whether we need to provide anything more.
Thanks! Kind regards, Alessandro
@editorialbot set 10.5281/zenodo.6632154 as archive
Submitting author: @nociale (Alessandro Noci)
Repository: https://github.com/insightsengineering/rbmi
Branch with paper.md (empty if default branch):
Version: v1.1.4
Editor: @fboehm
Reviewers: @DanielRivasMD, @JoranTiU
Archive: 10.5281/zenodo.6632154
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@DanielRivasMD & @JoranTiU, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @fboehm know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @JoranTiU
📝 Checklist for @DanielRivasMD