
[REVIEW]: SSMSE: An R package for Management Strategy Evaluation with Stock Synthesis Operating Models #4937

Closed: editorialbot closed this issue 10 months ago

editorialbot commented 1 year ago

Submitting author: @k-doering-NOAA (Kathryn Doering)
Repository: https://github.com/nmfs-fish-tools/ssmse
Branch with paper.md (empty if default branch):
Version: v0.2.8
Editor: @sbenthall
Reviewers: @quang-huynh, @iagomosqueira
Archive: 10.5281/zenodo.10014307

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/9af77892379058580aced7199f3dc6dd"><img src="https://joss.theoj.org/papers/9af77892379058580aced7199f3dc6dd/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/9af77892379058580aced7199f3dc6dd/status.svg)](https://joss.theoj.org/papers/9af77892379058580aced7199f3dc6dd)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@seananderson & @quang-huynh, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review. First, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @sbenthall know.

Please start on your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @seananderson

📝 Checklist for @quang-huynh

📝 Checklist for @iagomosqueira

editorialbot commented 1 year ago

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf
editorialbot commented 1 year ago
Software report:

github.com/AlDanial/cloc v 1.88  T=0.12 s (860.1 files/s, 193990.9 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               58            566           2362           8736
Scheme                          19             18              0           7830
Markdown                         5            176              0           1004
Rmd                              8            336            612            539
CSS                              1             66              1            275
TeX                              1            271              0            275
YAML                            11             17             19            205
HTML                             1             58              8             83
-------------------------------------------------------------------------------
SUM:                           104           1508           3002          18947
-------------------------------------------------------------------------------

gitinspector failed to run statistical information for the repository
editorialbot commented 1 year ago

Wordcount for paper.md is 3415

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

editorialbot commented 1 year ago
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1371/journal.pone.0092725 is OK
- 10.1577/1548-8446(2006)31[590:TCFIE]2.0.CO;2 is OK
- 10.1126/science.1149016 is OK
- 10.1002/fsh.10131 is OK
- 10.1016/j.fishres.2020.105725 is OK
- 10.7755/TMSPO.183 is OK
- 10.1016/j.fishres.2020.105854 is OK
- 10.1139/f03-109 is OK
- 10.1016/j.fishres.2018.12.014 is OK
- 10.1016/j.fishres.2012.10.012 is OK
- 10.2760/18924 is OK
- 10.1016/j.fishres.2022.106229 is OK
- 10.1111/faf.12104 is OK
- 10.1016/j.fishres.2021.106180 is OK
- 10.1006/jmsc.2000.0737 is OK
- 10.1139/cjfas-2020-0257 is OK
- 10.1111/j.1467-2979.2011.00417.x is OK
- 10.1111/faf.12480 is OK
- 10.1016/j.fishres.2021.105924 is OK

MISSING DOIs

- None

INVALID DOIs

- None
sbenthall commented 1 year ago

👋🏼 @k-doering-NOAA @seananderson @quang-huynh this is the review thread for the paper. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread.

These checklists contain the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#4937 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@sbenthall) if you have any questions/concerns.

seananderson commented 1 year ago

Review checklist for @seananderson

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

quang-huynh commented 1 year ago

Review checklist for @quang-huynh

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

quang-huynh commented 1 year ago

@sbenthall I have published with John Walter, Nancie Cummings, and Cassidy Peterson in the past 4-5 years but the papers are not related to any work in this submission. Can I continue?

seananderson commented 1 year ago

@sbenthall Regarding potential conflicts of interest: I have published in the last ~3 years with K. Marshal but on work unrelated to this. I've published with several of the co-authors ~6 years ago on work related to this paper (ss3sim), but that should be outside the 4-year window. I also have some level of ongoing collaboration with co-authors K. Johnson, I. Taylor, and C. Wetzel, but that is unrelated to this paper or topic and would not impact my impartial scientific judgment or evaluation.

sbenthall commented 1 year ago

@quang-huynh and @seananderson -- Thank you so much for reporting these COIs.

@Kevin-Mattheus-Moerman Given the difficulty in finding qualified reviewers in this area, I would like to request that these COIs be waived. I believe this is your decision to make as track editor? Please let us know.

(See this Slack thread for discussion with @danielskatz .)

sbenthall commented 1 year ago

Hello. @Kevin-Mattheus-Moerman has weighed in on this.

@quang-huynh We can waive your COI in this case. Please proceed with your review!

@seananderson Unfortunately, we cannot waive your COI in this case. Thanks so much for reporting it. We will look for a different reviewer to replace you. If you could recommend anybody, that would be very helpful.

seananderson commented 1 year ago

OK. Other recommendations that may be less likely to have conflicts of interest:

sbenthall commented 1 year ago

Dear @ejardim,

I am reaching out as an Editor of the Journal of Open Source Software, an on-line journal of scientific research software.

The submission tracked in this issue is from a team led by Kathryn Doering (@k-doering-NOAA) and is an R package about fishery stock assessment. We have been struggling to find qualified reviewers as this is a specialized area. You were suggested as a reviewer by Sean Anderson, who was not able to review the submission because of a conflict of interest.

On behalf of the journal, I would like to ask you if you'd be willing to review this submission. The process takes place via GitHub and I'll be happy to walk you through it.

Best regards,

Sebastian Benthall @sbenthall

quang-huynh commented 1 year ago

Here is my written review:

The authors present SSMSE, an R package that facilitates use of SS3, an executable software package, for management strategy evaluation, a simulation exercise that necessitates iterative application of SS3. I am familiar with SS3 and am a somewhat frequent user of the software.

Manuscript comments

Line 20: Recommend "...that can dynamically..." It is also possible to compare static harvest strategies (fixed catches) in MSE.

Line 22: "...may specify how a stock assessment model is configured..."

Line 24: I recommend "hypothesized" over "true". If we knew the true dynamics, we wouldn't need to generate multiple OMs and evaluate misspecification.

Figure 3. Expand 'User inputs' and 'Models' bubbles to include setup of management procedures and operating models with create_om_list and sample_struct.

Figure 4. Specify that this figure describes the case study in the caption. Also state the assumption of constant M in all EMs.

The paper lists five types of uncertainty that should be considered in MSE. The example demonstrates types 1 and 2. Implicitly, one conditions a different SS3 model in advance to address model uncertainty (point 3). I couldn't figure out how to specify catch bias and implementation error in the operating model (bullets 4 and 5). Would I have to set up a model with these adjustments in the control or forecast file? I'm not sure. Example code in the manual would be useful to cover these last two bullets.

Comments related to checklist

Reproducibility: I ran the script at https://nmfs-fish-tools.github.io/SSMSE/manual/M-case-study-ex.html, which I believe replicates the cod case study. No problems or convergence issues detected, although I only ran 5 iterations instead of 100 due to runtime. The ggplot code to generate Figure 5 should be added on that web page.

Functionality documentation: Projected dynamics in the operating model can be adjusted via a list, with a template provided by SSMSE::create_future_om_list. However, there is no documentation in the help file on what information is needed and what options are possible. What are the required values for the 'pars', 'scen', 'pattern', and 'input' entries in the list? I can figure it out with pattern recognition from the examples, but that limits me to the case study.

Intimate familiarity with the SS3 model is a prerequisite to use SSMSE. To adjust natural mortality, I needed to know that the corresponding internal parameter name in this cod model is "NatM_p_1_Fem_GP_1". State where to find this in the r4ss output.

create_sample_struct seems to create a list that specifies the future data observations that will be simulated for the EM. State the options (catch, index, ages, etc.) in the help file.

Examples can be documented in the help files with URL links to the online manual.

Additional nice things to add

A simple wrapper function to generate the file structure needed for the analysis (model_runs, figures, input_models) would be helpful.

Due to long runtime, I recommend progress bars for run_SSMSE to inform users of elapsed and projected time. See the pbapply R package.

Are catch limits the only type of management advice that can be tested with SSMSE? Size limits are used to manage U.S. fisheries, but I'm not sure if they can be implemented here.

k-doering-NOAA commented 1 year ago

Thank you @quang-huynh for the thorough review! We plan to incorporate the suggestions once we have both reviews. @sbenthall, I think we are still waiting to identify an additional reviewer? Please let me know if providing more suggested reviewers would be helpful. Thanks!

sbenthall commented 1 year ago

@k-doering-NOAA Yes, please, suggesting more reviewers would be very helpful. I have reached out to several people based on your earlier recommendations and the JOSS staff, but I'm afraid the fish haven't been biting ;-)

k-doering-NOAA commented 1 year ago

@sbenthall here are 2 suggestions for reviewers:

kellijohnson-NOAA commented 1 year ago

@sbenthall here are a few more reviewer suggestions. I was on a review paper with all of them that was published in 2021, so I am unsure if that invokes a conflict of interest.

sbenthall commented 1 year ago

Hello @iagomosqueira ; would you be able to review this submission to the Journal of Open Source Software?

iagomosqueira commented 1 year ago

I would be willing to do so, but it will be all but impossible before the second half of May. That might be too late?

sbenthall commented 1 year ago

Thanks @iagomosqueira . I'll keep you in mind if we haven't found an alternate reviewer by May. But I'll keep looking for now...

kthyng commented 1 year ago

@sbenthall Looks like it has been hard to find reviewers for this submission! Any ideas at the moment?

sbenthall commented 1 year ago

@kthyng I apologize. Yes, I have had a hard time finding a stand-in reviewer for this submission.

k-doering-NOAA commented 1 year ago

@sbenthall, I am sorry it has proved challenging to find a reviewer - perhaps given we are already in April, Iago's offer to review in May would work well?

sbenthall commented 1 year ago

Thanks for your patience @k-doering-NOAA .

@iagomosqueira if you are willing to review this submission, we can accommodate your timeline! May I add you as a reviewer and expect your review in May?

iagomosqueira commented 1 year ago

Right now I could only do it by the end of June, if that is still useful.

sbenthall commented 1 year ago

Yes, @iagomosqueira . Please, that would be useful.

iagomosqueira commented 1 year ago

Sure, will do my best

sbenthall commented 1 year ago

Hello @iagomosqueira . Are you still on track to provide this review?

iagomosqueira commented 1 year ago

Yes

iagomosqueira commented 1 year ago

@editorialbot generate my checklist

editorialbot commented 1 year ago

@iagomosqueira I can't do that because you are not a reviewer

iagomosqueira commented 1 year ago

Shouldn't I be able to fill in the checklist, @sbenthall?

sbenthall commented 1 year ago

@editorialbot add @iagomosqueira as reviewer

editorialbot commented 1 year ago

@iagomosqueira added to the reviewers list!

sbenthall commented 1 year ago

You should be able to generate the checklist now @iagomosqueira I apologize for not having that ready earlier.

sbenthall commented 1 year ago

@editorialbot remove @seananderson from reviewers

editorialbot commented 1 year ago

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@editorialbot commands

iagomosqueira commented 1 year ago

Review checklist for @iagomosqueira

Conflict of interest

Code of Conduct

General checks

Functionality

Documentation

Software paper

iagomosqueira commented 1 year ago

Review

The authors present SSMSE as a package to conduct a subset of Management Strategy Evaluation (MSE) analyses in which the Operating Model (OM) is constructed around a stock assessment developed using the Stock Synthesis software. I am an MSE practitioner, familiar with the use of Stock Synthesis as a tool for conditioning OMs, and a developer of one of the MSE toolsets cited in the paper, FLR.

Package

Checklist

General checks

Documentation

Software paper

  • References: Authors in entry a4amsecite should be corrected:

author = {Jardim, JE and Scott, F and Mosqueira, I and Citores, L and Devine, J and Fischer, S and Ibaibarriaga, L and Mannini, A and Millar, C and Miller, D and Minto, C and De Oliveira, J and Osio, GC and Urtizberea, A and Vasilakopoulos, P and Kell, LT}
sbenthall commented 1 year ago

Thank you for this review @iagomosqueira !

@k-doering-NOAA do you think it will be possible to address the reviewer's concerns?

k-doering-NOAA commented 1 year ago

Thanks for the reviews, @iagomosqueira and @quang-huynh!

@sbenthall, yes, I am working with my co-authors on making changes and putting together responses. I think we should be able to finish within the next month, if that sounds reasonable.

k-doering-NOAA commented 1 year ago

@quang-huynh, thank you for the thorough review and helpful suggestions that improved our manuscript and documentation. Edits suggested from the review were made on the joss-review branch in the SSMSE repository. These will be merged into main after wrapping up the review process (unless there is a better time to merge these in; EDIT: these changes were already merged into main so that the changes are incorporated in the JOSS manuscript rendered by editorial bot). To see all differences from the main branch, the compare page is helpful: https://github.com/nmfs-fish-tools/SSMSE/compare/main...joss-review . Note that changes to the manual and readme will not be shown in the rendered readme and manual until the changes are pushed to main.

We hope that we have addressed your suggestions sufficiently to satisfy the checklist, but please let us know if there are additional changes we need to make.

Here is my written review:

The authors present SSMSE, an R package that facilitates use of SS3, an executable software package, for management strategy evaluation, a simulation exercise that necessitates iterative application of SS3. I am familiar with SS3 and am a somewhat frequent user of the software.

Manuscript comments

Line 20: Recommend "...that can dynamically..." It is also possible to compare static harvest strategies (fixed catches) in MSE.

Line 22: "...may specify how a stock assessment model is configured..."

Line 24: I recommend "hypothesized" over "true". If we knew the true dynamics, we wouldn't need to generate multiple OMs and evaluate misspecification.

Changes made in commit https://github.com/nmfs-fish-tools/SSMSE/commit/fb6b26f. Thank you for these helpful recommendations!

Figure 3. Expand 'User inputs' and 'Models' bubbles to include setup of management procedures and operating models with create_om_list and sample_struct.

Figure 4. Specify that this figure describes the case study in the caption. Also state the assumption of constant M in all EMs.

While we see value in adding create_om_list and sample_struct to Figure 3, we are concerned that adding them would overcomplicate the figure. Instead, we opted for an additional sentence in the caption highlighting the two functions (change made in https://github.com/nmfs-fish-tools/SSMSE/commit/a00f1b6). We hope this still addresses the suggestion!

For Figure 4, we edited the caption to include that the figure describes the case study and that the EMs assume constant M (https://github.com/nmfs-fish-tools/SSMSE/commit/fb6b26f).

The paper lists five types of uncertainty that should be considered in MSE. The example demonstrates types 1 and 2. Implicitly, one conditions a different SS3 model in advance to address model uncertainty (point 3). I couldn't figure out how to specify catch bias and implementation error in the operating model (bullets 4 and 5). Would I have to set up a model with these adjustments in the control or forecast file? I'm not sure. Example code in the manual would be useful to cover these last two bullets.

Implementation error can be specified using the future_om_list, as outlined in the "structure of future_om_list" section of the user manual, so there is no need to add adjustments in the control or forecast file.
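For illustration, a rough sketch of that future_om_list route (calling create_future_om_list for a template is described above; the exact entries and any arguments it needs are assumptions here and are documented in the user manual):

```r
library(SSMSE)

# Start from the template rather than building the list by hand (sketch only;
# create_future_om_list() may need arguments depending on the desired example).
future_om_list <- create_future_om_list()

# Each element of the list is expected to hold four entries:
#   pars    - character vector of SS3 parameter names (or a special value for
#             implementation error, per the manual) that the change applies to
#   scen    - which scenarios the change applies to
#   pattern - how the future values are generated (e.g., a model change or
#             random deviations)
#   input   - a data frame of years and values describing the change
str(future_om_list[[1]])
```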

For catch bias, you could create a custom management procedure and add code there to create catch bias. There is an example of a custom management procedure in the custom management procedure section of the SSMSE User Manual.

Comments related to checklist

Reproducibility: I ran the script at https://nmfs-fish-tools.github.io/SSMSE/manual/M-case-study-ex.html, which I believe replicates the cod case study. No problems or convergence issues detected, although I only ran 5 iterations instead of 100 due to runtime. The ggplot code to generate Figure 5 should be added on that web page.

Good point, we added the code to create Figure 5 in https://github.com/nmfs-fish-tools/SSMSE/commit/a99510c

Functionality documentation: Projected dynamics in the operating model can be adjusted via a list, with a template provided by SSMSE::create_future_om_list. However, there is no documentation in the help file on what information is needed and what options are possible. What are the required values for the 'pars', 'scen', 'pattern', and 'input' entries in the list? I can figure it out with pattern recognition from the examples, but that limits me to the case study.

It’s a really good point - we added a link to the SSMSE user manual in the R help documentation (https://github.com/nmfs-fish-tools/SSMSE/commit/1452b0a).

Intimate familiarity with the SS3 model is a prerequisite to use SSMSE. To adjust natural mortality, I needed to know that the corresponding internal parameter name in this cod model is "NatM_p_1_Fem_GP_1". State where to find this in the r4ss output.

We address the parameter names in the structure of future_om_list section of the SSMSE user manual, which reads:

“The first list item is named “pars”. It contains a vector of parameter name(s) to apply the change to. The names should be the same as the names in r4ss::SS_read_pars().”
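As an illustrative sketch (the directory path is a placeholder), one way to list the internal parameter labels from an SS3 model read with r4ss:

```r
library(r4ss)

# Read the SS3 report files from a model directory (path is a placeholder).
out <- SS_output(dir = "path/to/cod_om", verbose = FALSE, printstats = FALSE)

# The parameters table has a Label column with internal names such as
# "NatM_p_1_Fem_GP_1", which can then be used in the 'pars' entry of a
# future_om_list element.
head(out$parameters$Label)
```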

create_sample_struct seems to create a list that specifies the future data observations that will be simulated for the EM. State the options (catch, index, ages, etc.) in the help file.

Examples can be documented in the help files with URL links to the online manual.

The options and a link to the SSMSE manual are now stated in the help file for create_sample_struct (https://github.com/nmfs-fish-tools/SSMSE/commit/1452b0a).
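A rough usage sketch (the argument names below are assumptions based on this discussion; the help file and manual give the actual interface):

```r
library(SSMSE)

# Assumed arguments: the OM data file and the number of projection years.
sample_struct <- create_sample_struct(dat = "path/to/om/data.ss", nyrs = 6)

# Inspect which data types can be specified (catch, index, composition data, ...).
names(sample_struct)
```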

Additional nice things to add

A simple wrapper function to generate the file structure needed for the analysis (model_runs, figures, input_models) would be helpful.

We can see the appeal, but we are a little concerned about requiring people to use a particular way of organizing their code (figures and input_models in separate folders from where the model_runs take place). I’m not sure this workflow is the way everyone likes to work.

Due to long runtime, I recommend progress bars for run_SSMSE to inform users of elapsed and projected time. See the pbapply R package.

Thanks for suggesting the {pbapply} package! The other reviewer suggested expanding the options for running in parallel, so we think adding progress bars would best be done when refactoring the code to add in more parallel options. We opened an issue for the progress bar: https://github.com/nmfs-fish-tools/SSMSE/issues/151
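For reference, a generic illustration of the pbapply pattern suggested above (a stand-alone example, not SSMSE code):

```r
library(pbapply)

# A stand-in for one long-running simulation iteration.
slow_iteration <- function(i) {
  Sys.sleep(0.1)
  i^2
}

# pblapply() behaves like lapply() but prints a progress bar and an ETA,
# which is the behaviour suggested for run_SSMSE's iteration loop.
results <- pblapply(seq_len(50), slow_iteration)
```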

Are catch limits the only type of management advice that can be tested with SSMSE? Size limits are used to manage U.S. fisheries, but I'm not sure if they can be implemented here.

For now, yes, only catch limits can be used. Nathan has been working on adding size limits to SSMSE, but it is still in development. Size limits could be implemented, but this would require some careful changes to, and testing of, the current codebase. We agree that this would be useful to add.

k-doering-NOAA commented 1 year ago

@iagomosqueira, thank you so much for your review that improved our paper and source code. We hope that the edits we make are sufficient to fulfill the checklist, but please let us know if there are additional changes to make.

Edits suggested from the review were made on the joss-review branch in the SSMSE repository. These will be merged into main after wrapping up the review process (unless there is a better time to merge these in; EDIT: these changes were already merged into main so that the changes are incorporated in the JOSS manuscript rendered by editorial bot). To see all differences from the main branch, the compare page is helpful: https://github.com/nmfs-fish-tools/SSMSE/compare/main...joss-review. Note that changes to the manual and readme will not be shown in the rendered readme and manual until the changes are pushed to main.

Review

The authors present SSMSE as a package to conduct a subset of Management Strategy Evaluation (MSE) analyses in which the Operating Model (OM) is constructed around a stock assessment developed using the Stock Synthesis software. I am an MSE practitioner, familiar with the use of Stock Synthesis as a tool for conditioning OMs, and a developer of one of the MSE toolsets cited in the paper, FLR.

Package

  • Packages 'foreach' and 'doParallel' are only 'Suggests' in DESCRIPTION, but foreach() is used in package code, although its presence is checked. If option 'parallel=TRUE' is chosen, the code sets up a parallel cluster using parallel::makeCluster, when other backends could be employed, e.g. doFuture. An alternative option would be to import the 'foreach' package and leave the setup to the user. For example, a Linux user would choose a 'FORK' cluster rather than the default 'PSOCK' for efficiency reasons.

Great points about the parallel code - we changed foreach, parallel, and doParallel to Imports instead of Suggests, and removed the checks for their presence since they were no longer needed (done in commit https://github.com/nmfs-fish-tools/SSMSE/commit/619e729).

We agree that having more flexible parallel options would be helpful for the users of SSMSE. However, we think that making this change in the future would make the most sense. Nathan is working on parallel options for a different project, and we can apply his work to SSMSE at a later time, if this seems acceptable. We opened an issue (https://github.com/nmfs-fish-tools/SSMSE/issues/152).

In the meantime, there are additional ways to run parallel simulations using options outside of SSMSE; for instance, a user could use the furrr package to run scenarios in parallel (example: https://github.com/k-doering-NOAA/ssmse-afs/blob/master/code/run_parallel_example.R).
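For illustration, a minimal sketch of the kind of user-controlled backend setup discussed above (the cluster size is arbitrary, and whether SSMSE would pick up a user-registered backend is an assumption rather than current behaviour):

```r
library(doParallel)

# On Linux/macOS, FORK workers share the parent workspace and avoid the copying
# overhead of the default PSOCK clusters; Windows supports only PSOCK.
cl <- parallel::makeCluster(4, type = "FORK")
registerDoParallel(cl)

# ... foreach()-based code would run on this backend here ...

parallel::stopCluster(cl)
```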

  • Can the functions sourced in the example from here be added to the package?

The functions were added in commits https://github.com/nmfs-fish-tools/SSMSE/commit/97b2e34, https://github.com/nmfs-fish-tools/SSMSE/commit/98052da, and https://github.com/nmfs-fish-tools/SSMSE/commit/bb770f3. Documentation and tests were added as well. The example code was changed to make it clear the functions have been added to SSMSE (https://github.com/nmfs-fish-tools/SSMSE/commit/cde3ffc).

Checklist

General checks

  • Repository: The package includes binary executables for which the source code location is not specified. I could not find any way of checking what version of Stock Synthesis is included in the package.

Thanks for pointing this out. We added some information to the readme on the version of the Stock Synthesis binaries (3.30.18) in commit https://github.com/nmfs-fish-tools/SSMSE/commit/8b461cf. We also added a simple startup message that tells the user, upon loading the package, which version of SS3 is included (commit https://github.com/nmfs-fish-tools/SSMSE/commit/583d934).

  • Reproducibility: The example run takes a very long time (days, according to the author). I ran it without problems but with only 5 iterations. I suggest a shorter example, even if less complete, so that a user can run it in a reasonable time frame, e.g. less than 2 hours. The author recommended the example contained in the README.md file, but it is a bit too limited.

The goal of the example in the paper is to provide a use case for SSMSE rather than a simple example for a user to follow along with. Because the SSMSE examples run full assessment models, the run times are naturally long for a complete simulation study. We added a note to readers so they are aware of the long run times and that they can reduce the number of iterations to decrease runtime (https://github.com/nmfs-fish-tools/SSMSE/commit/0c07321).

We could potentially create a simpler example that avoids running Estimation Models for the assessment, but it would not demonstrate the full capabilities of SSMSE as well.

Documentation

  • Functionality documentation: The set of available management procedures appears to be limited to those based on running SS as the estimation method. Is the only way to run an alternative MP to write a function that takes the SS dat and ctl files as input? A brief example would be a useful way of backing up the possible extensibility of the package.

There is an option to write your own management strategy that need not run SS3 as the estimation model; see the custom management strategy section of the SSMSE user manual. We added a link from the Roxygen documentation to the manual to make this option more discoverable to users (commit https://github.com/nmfs-fish-tools/SSMSE/commit/29498e4).
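For illustration only, a hypothetical shape of such a custom management strategy function (the argument and return formats below are placeholders, not the documented SSMSE interface, which is specified in the user manual):

```r
# Placeholder sketch: skip the assessment step and return fixed future catches.
# Argument and column names here are hypothetical.
constant_catch_strategy <- function(om_dat, n_yrs_fwd = 5, fixed_catch = 100, ...) {
  last_yr <- max(om_dat[["catch"]][["year"]])
  data.frame(
    year  = last_yr + seq_len(n_yrs_fwd),
    catch = fixed_catch
  )
}
```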

Software paper

  • State of the field: I find the comparison with both OpenMSE and FLR too centered on their ability to build MSEs around Stock Synthesis, ignoring that they offer a wider range of procedures and models.
  • Line 115: "but offer limited capacity to use existing stock assessment products created using SS3 for OM development". SS3-based OMs are being used in the FLR platform in at least two MSE exercises: IOTC albacore and swordfish tuna (Sharma et al, 2020; [Brunel and Mosqueira, 2023](https://iotc.org/sites/default/files/documents/2023/03/IOTC-2023-WPM14MSE-05_0.pdf)).

Thanks for providing this input. We respect the hard work that has gone into FLR and OpenMSE, and fully agree that they have extensive capabilities. The goal of SSMSE was to be Stock Synthesis-centric, but we want to highlight the capabilities that OpenMSE and FLR have, which are more general than SSMSE's features. We made some edits to try to address this, but please let us know if it still doesn't sound right, and feel free to suggest additional edits (edit made in commit https://github.com/nmfs-fish-tools/SSMSE/commit/7d25025).

We had already cited Sharma et al. 2020 in the paper, but the references provided make no mention of FLR - for example, we did not realize that the operating models in Sharma et al. 2020 had used FLR as a framework, because FLR is not mentioned there, unfortunately. We thought from reading the paper that analysts had come up with their own bespoke frameworks for the cases using SS3 operating models.

We also think (but may have misunderstood) that SS3 is not used directly as an operating model in the FLR framework, but rather that an SS3 run is converted to an FLR operating model, which may result in some loss of complexity. While there are many situations where this is advantageous or sufficient, SSMSE took a different approach to fill a gap that we thought existed in generalized tools: we use SS3 directly as the operating model. We tried to explain this difference in the paper but may not have worded it correctly (or perhaps we have misunderstood how FLR's MSE package works). Please feel free to correct us if the language still does not accurately reflect how FLR's MSE package works.

  • References: Authors in entry a4amsecite should be corrected:
author = {Jardim, JE and Scott, F and Mosqueira, I and Citores, L and Devine, J and Fischer, S and Ibaibarriaga, L and Mannini, A and Millar, C and Miller, D and Minto, C and De Oliveira, J and Osio, GC and Urtizberea, A and Vasilakopoulos, P and Kell, LT}

Thank you for catching this, and apologies for the mistake - change made in commit https://github.com/nmfs-fish-tools/SSMSE/commit/c57bc0c.

sbenthall commented 1 year ago

@quang-huynh @iagomosqueira Can you please confirm that the changes made to the submission are satisfying, or otherwise provide more feedback to the authors?

quang-huynh commented 1 year ago

@editorialbot generate pdf

editorialbot commented 1 year ago

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

quang-huynh commented 1 year ago

How do I download the updated manuscript? The editorialbot gave me the original submission

k-doering-NOAA commented 1 year ago

@editorialbot generate pdf