Closed whedon closed 3 years ago
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @stulacy, @dormezil it looks like you're currently assigned to review this paper :tada:.
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, due to GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Reference check summary:
OK DOIs
- 10.1111/j.1540-5907.2007.00307.x is OK
- 10.1017/s0022381612000473 is OK
- 10.32614/RJ-2018-076 is OK
- 10.1177/1536867X1801800409 is OK
- 10.1017/psrm.2017.4 is OK
- 10.1111/ajps.12318 is OK
- 10.1016/j.scitotenv.2019.02.432 is OK
- 10.1007/s11356-019-05640-x is OK
- 10.3390/ijerph16183269 is OK
- 10.1016/j.jclepro.2019.119533 is OK
- 10.1016/j.heliyon.2020.e03747 is OK
- 10.1016/j.scitotenv.2020.137530 is OK
- 10.1177/1354816619894080 is OK
- 10.1111/ajps.12491 is OK
MISSING DOIs
- None
INVALID DOIs
- None
This was an interesting read, and the software looks useful for extracting measures of interest from ARDL models using simulation. My comments are:
1. It would help to explain how the dynamac package compares to other available software, such as ARDL or dlagM, which are both also available on CRAN.
2. Regarding the plots that the dynardl.simulation.plot function produces: 3 of these (top left, top middle, bottom middle) all look identical. Would it be possible to use an example model and scenario where these 3 plots differ, to better identify the purpose of each?
Thanks for reviewing, @stulacy
This is still waiting on @dormezil for review.
Hi @sbenthall, Just checking...we're still waiting on one more review right? No rush....
@andyphilips Yes, this is still waiting on a review from @dormezil.
@dormezil , I wonder if you could give an update on when you think you will be able to get to this?
@andyphilips It looks as though there were some issues raised by @stulacy 's review. I wonder if you could address them.
@sbenthall Reviewing this weekend.
The software works and is useful for working with ARDL models. There is a typo in the 2nd paragraph, "Next, the parameters [are] used ...."
@andyphilips can you please make the corrections based on the second review?
@sbenthall @dormezil @stulacy Thank you all for the thoughtful reviews. We outline our particular response and revisions below. We have also uploaded all relevant files to the repository.
@stulacy:
Point 1. We appreciate the opportunity to differentiate dynamac from other R packages. @stulacy is correct in that multiple packages estimate ARDL models: dynlm [-@dynlmCRAN] (which is just model estimation), ardl [-@ardlCRAN] (a front-end for dynlm that extends it to the bounds test for cointegration [which dynamac also provides]), and dlagM [-@dlagMCRAN] (which offers basic model forecasting but in a clunky ARDL(p, q) estimation framework). We credit these packages as alternatives for users while illuminating the main difference for dynamac, which really is the thrust of the paper: compelling visualizations produced by counterfactual simulation. To illustrate this difference, we've added an additional sentence to the first paragraph.
Point 2. It's correct that three of the plots look very similar (and would look very similar in any example). However, they are more finely differentiated by serving different purposes. We try to clarify these separate purposes through the text. The first panel presents Y in levels. This is interesting for an obvious reason: if the researcher wanted to predict where Y would end up as a result of a shock in X propagating through the system, s/he would just want the fitted values of Y.
But suppose the researcher instead wanted to know how much Y changed from its starting value, rather than its new level per se. This would require some calculation on the researcher's end. We aim to minimize this burden with the second panel, ``Changes from Y Mean Value,'' which is identical to the first but subtracts out the mean value of Y. This would be especially helpful if the response series re-equilibrated to a new value as a result of the shock to the X series. Instead of having to calculate whether this re-equilibration occurred, subtracting out the mean of Y gives a measure of how much Y changed from its pre-shock equilibrium mean.
The fifth plot, ``Cumulative Change in Y Value,'' is different in its calculation. For the second panel, the fitted values are calculated within each period, with the mean of Y subtracted out. This leads to an estimate of the within-period change in Y from its mean value, with confidence intervals taken from the percentiles of the fitted values within each period of the simulation. The fifth plot tracks the simulations across all time periods; the plotted estimates are the cumulative changes in Y across all of the simulations up to the corresponding period. This will almost always be more conservative in the estimation of the confidence intervals than tracking the within-period changes (without considering the individual histories of the simulations to that time period), as in the second panel. It also more closely reflects the logic of the long-run multiplier it is meant to emulate: the cumulative change in Y that results from a shock to X (rather than just the within-period position of Y relative to its mean).
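The distinction between the within-period and cumulative quantities described above can be sketched in a few lines. This is an illustrative, language-agnostic sketch written in Python, not dynamac's actual R internals; the function name and data layout are hypothetical.

```python
import statistics

def panel_quantities(sims, y_mean):
    """Illustrate the three plot quantities discussed above.

    sims   : list of simulations, each a list of fitted Y values per period
    y_mean : pre-shock mean of the response series Y
    """
    n_periods = len(sims[0])

    # Panel 5 input: within each simulation's own history, accumulate the
    # period-over-period changes in Y (starting from the pre-shock mean).
    cum_paths = []
    for sim in sims:
        path, total = [], 0.0
        for t in range(n_periods):
            prev = sim[t - 1] if t > 0 else y_mean
            total += sim[t] - prev
            path.append(total)
        cum_paths.append(path)

    levels, mean_devs, cumulative = [], [], []
    for t in range(n_periods):
        within = [sim[t] for sim in sims]
        levels.append(statistics.median(within))              # panel 1: Y in levels
        mean_devs.append(statistics.median(within) - y_mean)  # panel 2: deviation from mean
        across = [p[t] for p in cum_paths]
        cumulative.append(statistics.median(across))          # panel 5: cumulative change
    return levels, mean_devs, cumulative
```

In each case the plotted confidence intervals would come from percentiles of the same per-period collections (`within` or `across`) rather than medians; because the cumulative paths carry each simulation's full history, their spread at a given period is typically wider, matching the more conservative intervals described above.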
To clarify, we have now constructed a new example that illustrates the more conservative approach to confidence intervals between the second and fifth panels. In the text, we clarify this under the second and fifth bullet points:
``Deviations between the predicted and average values of $y_t$ (i.e., $\hat{y}_t - \bar{y}$). While this plot follows a response path identical to the plot above, if a shock dissipates and the series reverts to its mean, this is easier to observe if we do not have to subtract the stable starting value from the plot. If the series reverts to a value other than its mean, we would be especially interested in this new long-run mean in response to the shock. This is analogous to an impulse response function.''
``The cumulative response in $y_t$ to a shock in $x_t$. This helps us to understand what the "final" effect of a shock to an independent variable does to a dependent variable. An independent variable might first cause a positive response, but ultimately may be overwhelmed by a negative movement as time goes on. Unlike the other plots, this considers the cumulative histories in each individual simulation when plotting the response in $y$ over time, making the estimation of the confidence intervals more conservative. This is analogous to the traditional long-run effect of an ARDL model; however, it requires no analytical calculation.''
Finally, we'd like to note that even though the plots look similar, they are built to serve different logics and different approaches to calculating quantities of interest. They are also completely new and not found in any other package, so we wanted to provide a diverse set of approaches to practitioners without adjudicating for them which plots they should use. And we suspect that most of the time, users will choose one of the plot types to show (through dynardl.simulation.plot) rather than showing all of them, minimizing the redundancy.
Point 3 (which @dormezil also carefully noted).
Thank you for your careful reading: we amended the typo!
Thank you for your detailed response. I am satisfied my points have been addressed.
I believe the reviewers of this article now recommend acceptance.
We will begin the next stage of processing.
@whedon generate pdf
@whedon check references
PDF failed to compile for issue #2528 with the following error:
sh: 0: getcwd() failed: No such file or directory
pandoc: 10.21105.joss.02528.pdf: openBinaryFile: does not exist (No such file or directory)
Looks like we failed to compile the PDF
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
All DOIs OK; none missing or invalid (same list as above).
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
I have one comment on the paper: it might help the reader if you provide the ARDL equation early on.
To proceed with the editorial process, @andyphilips would you please:
Thanks @sbenthall!
@whedon set https://doi.org/10.5281/zenodo.4057108 as archive
OK. 10.5281/zenodo.4057108 is the archive.
@whedon set 0.1.11 as version
OK. 0.1.11 is the version.
I recommend this paper for acceptance.
@whedon accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
All DOIs OK; none missing or invalid (same list as above).
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1783
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1783, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
e.g.
@whedon accept deposit=true
@dormezil I see one unchecked box above — can you check it or verify that you are satisfied with the writing?
@andyphilips I'm taking over to finish processing your submission. Please update the metadata on your Zenodo archive so that the title and author list match your paper exactly.
@andyphilips In terms of your paper: JOSS papers are meant to be 250–1000 words, and a quick word count of your paper (including references) was over 2000. So without references it's not so long, but it is still longer than we prefer. Please consider whether there is a part you can leave out.
Also, I saw two references in which "China" was not capitalized. You can force capitalization in Bibtex by putting {} around the word for which you want to preserve capitalization, like {China}. Can you go through your references in detail to check them? I can help in more detail if needed too.
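For instance, a hypothetical BibTeX entry with the capitalization protected might look like this (the entry itself is made up, only the `{China}` trick matters):

```bibtex
@article{hypothetical2019,
  author  = {Doe, Jane},
  title   = {Air pollution and economic growth in {China}},
  journal = {A Hypothetical Journal},
  year    = {2019}
}
```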
Hi @kthyng, thanks for this. We have now done the following:
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@andyphilips I found one more capitalization to fix and did in this PR #8 if you want to merge that on in.
Regarding removing content from your paper, I should have specified that you can certainly keep the content, but maybe move it to your docs. The paper itself is meant to stay brief, but any useful information can go into the repo somewhere.
Thanks @kthyng for fixing that! I just merged pull request #8 into master.
Ok thanks, that is everything now.
@whedon accept
Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
All DOIs OK; none missing or invalid (same list as above).
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1799
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1799, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
e.g.
@whedon accept deposit=true
@whedon accept deposit=true
Doing it live! Attempting automated processing of paper acceptance...
🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨
Here's what you must now do:
Party like you just published a paper! 🎉🌈🦄💃👻🤘
Any issues? Notify your editorial technical team...
Congrats on your new publication @andyphilips! Thanks to @sbenthall for editing and to reviewers @stulacy and @dormezil — we couldn't do this without your time and expertise.
:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:
If you would like to include a link to your paper from your README use the following code snippets:
Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02528/status.svg)](https://doi.org/10.21105/joss.02528)
HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02528">
<img src="https://joss.theoj.org/papers/10.21105/joss.02528/status.svg" alt="DOI badge" >
</a>
reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02528/status.svg
:target: https://doi.org/10.21105/joss.02528
This is how it will look in your documentation:
We need your help!
Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing one (or both) of the following:
Submitting author: @andyphilips (Soren Jordan) Repository: https://github.com/andyphilips/dynamac Version: 0.1.11 Editor: @sbenthall Reviewer: @stulacy, @dormezil Archive: 10.5281/zenodo.4057108
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@stulacy & @dormezil, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @sbenthall know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Review checklist for @stulacy
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @dormezil
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper