openjournals / joss-reviews

Reviews for the Journal of Open Source Software
Creative Commons Zero v1.0 Universal

[REVIEW]: CRE: An R package for interpretable discovery and inference of heterogeneous treatment effects #5587

Closed: editorialbot closed this issue 9 months ago

editorialbot commented 1 year ago

Submitting author: @naeemkh (Naeem Khoshnevis)
Repository: https://github.com/NSAPH-Software/CRE
Branch with paper.md (empty if default branch): JOSS
Version: ver0.2.5
Editor: @spholmes
Reviewers: @salleuska, @carlyls
Archive: 10.5281/zenodo.10278296

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/86a406543801a395248821c08c7ec03d"><img src="https://joss.theoj.org/papers/86a406543801a395248821c08c7ec03d/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/86a406543801a395248821c08c7ec03d/status.svg)](https://joss.theoj.org/papers/86a406543801a395248821c08c7ec03d)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@salleuska & @carlyls, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review. First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @spholmes know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @salleuska

📝 Checklist for @carlyls

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.17 s (1021.3 files/s, 168117.3 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
HTML                            75           2449            270           7631
JavaScript                       9           2142           1962           7220
R                               64            482           1067           2805
Markdown                         7            165              0            621
CSS                              6             99             48            451
XML                              1              0              0            228
TeX                              1             17              0            159
YAML                             4             25              6            157
JSON                             2              1              0             95
Rmd                              3            101            326             58
Dockerfile                       1              7              1             37
SVG                              1              0              1             11
-------------------------------------------------------------------------------
SUM:                           174           5488           3681          19473
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago

Wordcount for paper.md is 1206

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/insr.12427 is OK
- 10.1214/21-aoas1579 is OK
- 10.1073/pnas.1510489113 is OK
- 10.1016/j.artint.2018.07.007 is OK
- 10.1145/2939672.2939874 is OK
- 10.1287/ijoc.2021.1143 is OK
- 10.48550/arXiv.2009.09036 is OK
- 10.1002/sim.4322 is OK
- 10.48550/arXiv.2008.00707 is OK
- 10.1214/19-BA1195 is OK
- 10.1287/ijoc.2021.1143 is OK
- 10.1145/3368555.3384456 is OK
- 10.1214/aos/1032181158 is OK
- 10.1007/978-0-387-21606-5 is OK
- 10.1007/978-1-4614-6849-3 is OK
- 10.1002/sim.8924 is OK

MISSING DOIs

- None

INVALID DOIs

- None
editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

salleuska commented 1 year ago

Review checklist for @salleuska

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

carlyls commented 1 year ago

Review checklist for @carlyls

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

Naeemkh commented 1 year ago

@spholmes, I'd like to inquire about the review status of this submission. I understand and respect the amount of time and effort that the peer review process requires, especially considering the multiple commitments and schedules of our respected reviewers. For the purpose of planning, I was wondering if a rough timeline for the completion of the review process could be provided. I realize it's a challenge to provide an exact timeframe due to the voluntary and often busy nature of reviewers' roles. However, any insights or a general idea of the timeline in these circumstances would be highly appreciated. Thank you all for your attention and work in facilitating the review process of this submission. I'm looking forward to any updates.

spholmes commented 1 year ago

Dear @carlyls and @salleuska, could you please fill in the checklist and let us know of any issues you encounter testing the software or reading the paper? The current checklists show up as incomplete. Many thanks, Susan

spholmes commented 1 year ago


Hi @Naeemkh, I'll send out a request to the reviewers to try and complete the reviews and checklists and send you any issues they have. Best wishes

carlyls commented 1 year ago

@spholmes @Naeemkh I apologize for the delay - I have been finishing up an internship and then traveling. I will finish the review by the end of this week. Thank you for checking in!

carlyls commented 1 year ago

@spholmes @Naeemkh can you point me to the relevant manuscript? I am not sure where to find that. And also, I do not see a LICENSE file - am I missing it?

Naeemkh commented 1 year ago

Hi @carlyls, sure.

Naeemkh commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

carlyls commented 1 year ago

@spholmes @Naeemkh I have finished a preliminary review of the code and the software paper. I think this package is very helpful and well-documented, and I plan to use it in the future! I have some notes below - let me know if I should put these somewhere else. When some of these are addressed, I will feel comfortable checking off more of the boxes in my checklist above. I reference the checklist box with bolded text when relevant.

Paper:

spholmes commented 1 year ago

@Naeemkh : Could you take care of addressing the little issues mentioned by Carly?

Naeemkh commented 1 year ago

Hi @carlyls, Thank you for the time you allocated to review our package. All the comments and feedback were extremely valuable and helped us a lot in further improving the package and its documentation.

We provide detailed responses to the comments, addressing each one individually. Additionally, please note that we have been developing a new package version over the past few months. Specifically, we:

- consider also internal nodes for rules generation (to facilitate the discovery of rules with different complexity, i.e. length),
- added the new `max_rules` filtering (while extracting the candidate rules from the trained forest, only the top `max_rules` most frequent rules are considered; this step is useful to pre-filter useless features, speeding up the following steps),
- added (vanilla) Stability Selection (without Error Control) for rules selection,
- added Uncertainty Quantification in estimation by bootstrapping,
- added a `predict()` generic function for treatment effect estimation via CRE,
- updated accordingly and simplified several hyper and method parameters.

For more information about this new release, please refer to https://github.com/NSAPH-Software/CRE/blob/main/NEWS.md. All examples have been updated to reflect these changes.

Detailed responses to the questions:

  1. License: Missing LICENSE file
    • The package is licensed under GPLv3, and we've included the corresponding license file.
  2. Installation Instructions: I don’t see a clear list of dependencies in the README
    • The complete list of dependencies is available in the DESCRIPTION file. We've also directed users to this information in the README.
  3. Community guidelines: the code of conduct is good, but can there be something in the README with instructions for contributing or highlighting issues?
    • The Community Guidelines can be found on the CRE website. We've also included a link to this page in the Code of Conduct section of the README.
  4. Citations: Under the readme -> Notes, can you add citations for the methods listed, like the S-learner etc?
    • We added the link to the corresponding paper in the readme for each ITE estimator.
  5. Reproducibility: (i) I can mostly reproduce the plot in the paper using their code but the ATE at the top is different – can you add a seed so that it is reproducible? (ii) In example 2, "ite_pred <- …" could you put in some code here so that the example can be fully reproduced? (iii) In example 3 hyper_params, there is not a comma after t_decay = 0.025 in the README. (iv) Also in example 3, when I run plot(cre_results), I get “Visualization not available (0 causal decision rules discovered).” Just want to make sure that is expected.
    • i. We added a seed.
    • ii. We added a customized ITE estimator for this example.
    • iii. Fixed.
    • iv. If no rules are discovered, that warning is expected. Anyway, in the new examples, cre() should always retrieve more than 1 rule.
  6. Paper general: Slightly confusing phrasing at the end of the second sentence in the Summary section
    • We modified the confusing statement. Also added a few other minor changes according to the last release.
  7. Statement of need: why reference causalTree but not the grf package?
    • We are not directly comparing our method with grf::causal_forest because we are focusing here only on interpretable methods for HTE.
  8. State of the field: I think it’s important to refer to other packages like Kunzel’s causalToolbox package (https://github.com/soerenkuenzel/causalToolbox) and Athey’s grf package (https://grf-labs.github.io/grf/REFERENCE.html). It also looks like you use the SuperLearner package? Should that be cited somewhere?
    • We added the references to all the R packages we rely on for ITE estimation and Stability Selection. We are not relying on causalToolbox.
  9. Computation Time: Would be helpful to mention expected computation time for the cre() function
    • We added a new section reporting the expected computation time of cre(). Please see Figure 2.
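For reference, the fixes above (seed, hyper-parameter comma, plotting warning) can be illustrated with a minimal usage sketch. This is a hedged sketch only: the argument names (`y`, `z`, `X`, `method_params`, `hyper_params`, `t_decay`, `max_rules`, `"aipw"`) follow the README examples discussed in this thread, but exact signatures and defaults may differ across CRE versions.

```r
library(CRE)  # assumes the CRE package is installed

set.seed(2021)  # seed added for reproducibility, per comment 5(i)

# Synthetic data; arguments shown are illustrative, see ?generate_cre_dataset
dataset <- generate_cre_dataset(n = 1000, effect_size = 2)

method_params <- list(ite_method = "aipw")  # ITE estimator choice
hyper_params  <- list(t_decay    = 0.025,   # comma after this entry,
                      max_rules  = 50)      # fixed per comment 5(iii)

cre_results <- cre(y = dataset$y, z = dataset$z, X = dataset$X,
                   method_params = method_params,
                   hyper_params  = hyper_params)

# plot() warns "Visualization not available" when 0 causal decision
# rules are discovered, per comment 5(iv)
plot(cre_results)
```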
carlyls commented 1 year ago

Thank you for the thoughtful responses and for the updates on the new package version!


salleuska commented 1 year ago

@spholmes @Naeemkh

Apologies for the delay. I traveled and moved to a new role over the summer and I had missed some notifications.

I did a preliminary review of the paper and software. I think that the package is interesting, but both the paper and the package documentation are not immediately easy to understand.

I'll make another thread for code-related issues; @Naeemkh, I have been looking at the code in the main branch in the past days, but I am assuming that the latest version is now in the JOSS branch?

Here are my suggestions for the paper and the README.md.

Software Paper

  1. The text in the Summary section seems more like a statement of need. I'd suggest revising to better highlight the package features rather than the motivations for the methodology. It is mentioned that the CRE package is a flexible implementation of the Causal Rule Ensemble method (cit). What makes the package flexible? For example, it seems that the package allows one to accommodate user-defined estimation methods for the individual treatment effects. Also, what are the available models/algorithms for estimating the other quantities? Can users specify which learners to use in the ensemble?

  2. The Algorithm section gives a brief overview of the main steps involved in the estimation algorithm of the CRE method. Although I agree it should be a summary, some of the things stated lack context, making it unclear. Here are my comments:

  3. Examples in the Usage section need some more context.

Minor

README

Note: I have updated these comments in view of your updates in the [JOSS branch](https://github.com/NSAPH-Software/CRE/tree/JOSS)

  1. Give some context to https://github.com/NSAPH-Software/CRE/tree/main#simulations ; what are those files for? What are the simulations for?

  2. What are the available options for the models? It seems that there are 3 models (a model to estimate the individual treatment effect, a model for the propensity score, and a model for outcome regression), and these are used twice, under the discovery and inference phases of the algorithm. So what models are available for each estimated quantity? Is the same model used in both phases, or can one specify different models in the two phases? Maybe a table of some sort could help to clarify this.

spholmes commented 1 year ago

Thanks so much @salleuska for your help in making the paper better and I await a note from @Naeemkh to let me know when the issues mentioned have been dealt with, especially the JOSS version of the software.

Naeemkh commented 1 year ago

Hi @salleuska, Thank you for the time you allocated to review our package. All the comments and feedback were extremely valuable and helped us a lot in further improving the package and its documentation. We report here our answers to your comments point by point.

Software Paper 1. The text in the Summary section seems more like a statement of need. I'd suggest revising to better highlight the package features rather than the motivations for the methodology. It is mentioned that the CRE package is a flexible implementation of the Causal Rule Ensemble method (cit). What makes the package flexible? For example, it seems that the package allows to accommodate user-defined estimation methods for the individual treatment effects. Also, what are the available models/algorithms for estimating the other quantities? Can users specify which learners to use in the ensemble?

2. The Algorithm section gives a brief overview of the main steps involved in the estimation algorithm of the CRE method. Although I agree it should be a summary, some of the things stated lack context, making it unclear. Here are my comments: Not all elements of the equation are defined. Here is a suggestion: Line 50 "Causal Rule Ensemble relies on the Treatment Effect linear decomposition assumption, which characterizes the Conditional Average Treatment Effect (CATE) tau(x) as the sum of M+1 distinct contributions"

I think that a high-level overview of the algorithm in pseudo-code (as the one given in Bargagli-Stoffi et al. (2023) ) would help readers to understand what the package does.

Line 55. "each observation is used for only one of the two steps (honest splitting)." -> It would be useful to mention that the user can control the proportion of data used for the discovery and the inference procedures. Also, is the splitting done at random?

What is a fit-the-fit procedure? This is non-standard terminology. I'd suggest giving a brief explanation and precise references for the details (e.g. section xx of paper abc).

Similar to above. What is the "Stability Selection algorithm"?

3. Examples in the Usage section need some more context. Line 66. The function generate_cre_dataset is a function to generate a synthetic dataset. According to which model?

Example 1. Define what y, x, z arguments are in cre() function.

Example 2. Explain what the ite argument expects and what is the default.

Example 3. No need to explain all the arguments but here it would be helpful to describe what the chunk of code does and refer to the algorithm phases (e.g. discovery vs inference).

Line 54. "CRE procedure" -> The CRE procedure

README Note: I have updated these comments in view of your updates in the [JOSS branch](https://github.com/NSAPH-Software/CRE/tree/JOSS)

Give some context to https://github.com/NSAPH-Software/CRE/tree/main#simulations ; what are those files for? What are the simulations for?

What are the available options for the models? It seems that there are 3 models: a model to estimate the individual treatment effect, a model for the propensity score, and a model for outcome regression; these are used twice, under the discovery and inference phases of the algorithm. So what models are available for each estimated quantity? Is the same model used in both phases, or can one specify different models in the two phases? Maybe a table of some sort could help to clarify this.

Thank you, The authors.

salleuska commented 1 year ago

Thanks @Naeemkh. I'll go through your replies in the next few days. Can you generate the updated pdf version of the paper?

I'll leave some issues regarding the code in the JOSS branch and reference those here.

salleuska commented 1 year ago

Hi @Naeemkh,

I have some comments on the R help for the core functions of the package. I think the format needs to be revised and more information is needed to make the package accessible to a general user. As target help documentation, I recommend looking at the help page for the base function optim.

Here is some advice:

generate_cre_dataset

  1. as for the paper, add a description of the generating process for the synthetic data; what is the model used?
  2. in the arguments description, "effect_size The effect size magnitude in" -> in what? I guess the description is missing something
  3. revise the Value section to list first the names of the variables containing the output and then the description (see help(optim))
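As an illustration of point 3, the requested Value format (output names listed first, as in help(optim)) could be written in roxygen2 along these lines; the component names here are assumptions for illustration, not the package's actual output names:

```r
#' @return A list with the following components (names illustrative):
#' \describe{
#'   \item{X}{Matrix of synthetic covariates.}
#'   \item{y}{Vector of observed outcomes.}
#'   \item{z}{Vector of binary treatment assignments.}
#'   \item{ite}{Vector of true individual treatment effects.}
#' }
```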

cre

  1. in method_params you should provide all the available models for ite_method, learner_ps and learner_y. As it is, I have no idea how to use the package; for example, 'How do I use the Poisson regression for the ite estimator?'

  2. I recommend including in the Details section a precise reference for each available model (see help(optim))

  3. for the hyper_parameters list, it seems that not all the parameters are always necessary in the list. Some of them are used only when a certain method is specified in method_params; for example, offset seems to be used only for the Poisson regression. I would suggest doing something like the control argument in help(optim), maybe grouping parameters according to whether they are "global" (e.g., max_rules or t_decay) or model-specific

  4. revise the Value section to list first the names of the variables containing the output and then the description (see help(optim))

Finally, get_logger and set_logger need a bit more explanation (e.g., what is the meaning of the levels available in set_logger?).

salleuska commented 1 year ago

Also, your answers to my points seem reasonable, but I will have to see the new version of the paper. I have one comment about this.

> The same IATE estimator is used for both the discovery and inference steps. The list of implemented IATE estimators is already available in the README. Each IATE estimator also requires an outcome learner and/or a propensity score learner. Both these models are simple classifiers/regressors. By default, XGBoost from the SuperLearner package is used for both these steps. We have just added these additional details in the main README file and the list of available IATE estimators in the paper.

I saw this in the Github readme, but I still have a couple of questions

  1. I see that for each ITE estimation method you linked the original paper; I would recommend also giving a reference for the package you are using.
  2. What are the methods available for the propensity score model and the outcome model? I saw that you have multiple options available by randomly inspecting R/check_method_params.R, but this should be clearly stated across the documentation material.
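To make the learner configuration described in the quoted reply concrete, here is a hedged sketch of a `method_params` list with explicit propensity-score and outcome learners. The option strings (e.g. `"SL.xgboost"`, the SuperLearner wrapper for XGBoost) are assumptions, not verified against the package:

```r
# Per the authors' reply: each IATE estimator can take a propensity-score
# learner (learner_ps) and an outcome learner (learner_y); XGBoost from
# SuperLearner is the stated default for both steps. Values illustrative.
method_params <- list(
  ite_method = "aipw",        # e.g. "aipw", "cf", ... (see README)
  learner_ps = "SL.xgboost",  # propensity score model
  learner_y  = "SL.xgboost"   # outcome model
)
```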
Naeemkh commented 12 months ago

Hi @salleuska

Thank you for the time you allocated to further extend your review of our package. We acknowledge the limitation in the previous documentation for the help functions. We updated it according to your comments. In particular:

Regarding generate_cre_dataset function:

1. as for the paper, add a description of the generating process for the synthetic data; what is the model used?

2. in the arguments description, "effect_size The effect size magnitude in" -> in what? I guess the description is missing something

3. revise the Value section to list first the names of the variables containing the output and then the description (see help(optim))

Regarding cre function:

1. in method_params you should provide all the available models for ite_method, learner_ps and learner_y. As it is, I have no idea how to use the package; for example, 'How do I use the Poisson regression for the ite estimator?'

2. I recommend including in the Details section a precise reference for each available model (see help(optim))

3. for the hyper_parameters list, it seems that not all the parameters are always necessary in the list. Some of them are used only when a certain method is specified in method_params; for example, offset seems to be used only for the Poisson regression. I would suggest doing something like the control argument in help(optim), maybe grouping parameters according to whether they are "global" (e.g., max_rules or t_decay) or model-specific

4. revise the Value section to list first the names of the variables containing the output and then the description (see help(optim))

Regarding the get_logger and set_logger functions:

Regarding the README documentation: 1. I see that for each ITE estimation method you linked the original paper; I would recommend also giving a reference for the package you are using.

2. What are the methods available for the propensity score model and the outcome model? I saw that you have multiple options available by randomly inspecting R/check_method_params.R, but this should be clearly stated across the documentation material.

Thank you, The authors.

Naeemkh commented 11 months ago

Hello @carlyls,

Could you please let us know if there are any outstanding items on your end that we haven’t addressed? If there aren’t any, we would appreciate it if you could complete the checklist, as it currently appears to be incomplete. Thank you!

carlyls commented 11 months ago

Hi @Naeemkh - no outstanding items left on my end. I completed the checklist. Thanks!

salleuska commented 11 months ago

@Naeemkh thanks for the answers, I will look into them in detail.

Can you make the PDF for the revised JOSS paper? I won't be able to go through the checklist until I see that!

Naeemkh commented 11 months ago

@editorialbot generate pdf

editorialbot commented 11 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

Naeemkh commented 11 months ago

Hello @salleuska,

Could you please let us know if there are any outstanding items on your end that we haven’t addressed? If there aren’t any, we would appreciate it if you could complete the checklist, as it currently appears to be incomplete. Thank you!

salleuska commented 10 months ago

Hi @Naeemkh,

I was finally able to go through the paper and software updates.

Paper

Thanks for incorporating my suggestions, especially regarding the algorithm section. I have found some minor typos that you may want to address, but the checklist for the paper is complete for me.

Minor

Software documentation

Note: Next time you submit to JOSS and revise your code, I would recommend pointing out the branch containing the updated version in your reply. It is also good practice to include a link to the commit addressing each of the reviewer's comments; it makes the process way easier.

General comment

I think that the help documentation is now way clearer, but there are still some odds and ends to wrap up in terms of style. I think this is important in order to make the package easily accessible to users in the R community.

In particular,

References:
- https://style.tidyverse.org/documentation.html
- https://developer.r-project.org/Rds.html

generate_cre_dataset

"The covariates matrix is generated"

"In summary, this function empowers researchers and data scientists to simulate complex causal scenarios with various degrees of heterogeneity and confounding, providing valuable synthetic datasets for causal inference studies."

This sentence is unnecessary and non-informative.

cre function

Thanks for clarifying the arguments' values; I think it is now clearer. As above, make sure those are rendered using backticks (`ratio_dis`, `ite_method`, etc.). This also includes all the character values these can take, e.g. `ite_method` values such as 'aipw', 'cf', etc. This would allow one to easily spot the options available.

Naeemkh commented 10 months ago

Thank you, @salleuska, for the informative review. We will go through your comments and address them in the final draft. Just for your information, since you mentioned that "the checklist for the paper is complete for me," it seems there is one unchecked box: "Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?" Please check that one as well. Thanks.

salleuska commented 10 months ago

Hi @Naeemkh,

that was on purpose. I think the software paper is now good, but as I mentioned, I think there are still some things to address for the software documentation.

Naeemkh commented 10 months ago

Hi @salleuska,

Thank you for taking the time to provide feedback. We have polished the paper and the documentation in accordance with your suggestions. Please let us know if there are any other issues. If so, please open a new issue for each individual task, as JOSS recommends. Adopting this approach will make it easy to track the issues and changes for each one.

Best regards, The authors

salleuska commented 10 months ago

Hi @Naeemkh,

thanks for addressing the typos in the paper and referencing the commits.

Please see the issues referenced above regarding the documentation; I hope it is clearer now. Again, the rationale behind these requests is to allow R users to easily spot the arguments and options of your functions. I made examples in the commits, but please make sure that the format is consistent across all the documentation. Once these are addressed, I'll be happy to check off the last box.

Naeemkh commented 10 months ago

Thanks, @salleuska. Please see the updated JOSS branch.

salleuska commented 10 months ago

Hi @Naeemkh, could you point out the issues?

salleuska commented 10 months ago

Ok, I found the reply and checked the last box. @spholmes I am done on my side.

spholmes commented 10 months ago

Thanks Sally, I will take it from here. Susan


Naeemkh commented 9 months ago

Hi @spholmes, Could you please inform us of the next steps in this submission? Thank you.

spholmes commented 9 months ago

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

Editor Tasks Prior to Acceptance

spholmes commented 9 months ago

@editorialbot generate pdf

editorialbot commented 9 months ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

spholmes commented 9 months ago

@Naeemkh : After the editorial bot checks off the DOIs and references, I will ask you to make a tagged release and archive, and report the version number and archive DOI in the review thread.

spholmes commented 9 months ago

@editorialbot check references

editorialbot commented 9 months ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1111/insr.12427 is OK
- 10.1214/21-aoas1579 is OK
- 10.1073/pnas.1510489113 is OK
- 10.1016/j.artint.2018.07.007 is OK
- 10.1145/2939672.2939874 is OK
- 10.1287/ijoc.2021.1143 is OK
- 10.48550/arXiv.2009.09036 is OK
- 10.1002/sim.4322 is OK
- 10.48550/arXiv.2008.00707 is OK
- 10.1214/19-BA1195 is OK
- 10.1287/ijoc.2021.1143 is OK
- 10.1145/3368555.3384456 is OK
- 10.1214/aos/1032181158 is OK
- 10.1007/978-0-387-21606-5 is OK
- 10.1007/978-1-4614-6849-3 is OK
- 10.1002/sim.8924 is OK
- 10.1111/j.1467-9868.2010.00740.x is OK
- 10.1198/jcgs.2010.08162 is OK
- 10.1214/18-AOS1709 is OK
- 10.2307/2290910 is OK
- 10.1002/dir.10035 is OK
- 10.1073/pnas.1804597116 is OK

MISSING DOIs

- None

INVALID DOIs

- None
spholmes commented 9 months ago

@Naeemkh: The DOIs are OK, but some references come out as incomplete in the paper. Could you complete the ones that have a journal and a year as well as a DOI? For instance: Athey, S., Tibshirani, J., & Wager, S. (2019). Generalized random forests. Ann. Statist. 47(2): 1148-1178 (April 2019). The last line is missing. Thanks.

Naeemkh commented 9 months ago

@editorialbot check references