pacific-hake / hake-assessment

:zap: :fish: Build the assessment document using latex and knitr

Steps for "simpler" assessment #469

Closed: andrew-edwards closed this issue 5 years ago

andrew-edwards commented 5 years ago

Here's what Chris and I can do (without the US ages that we cannot get):

Do what the Pacific Council calls a "catch-only projection". This is described briefly on page 26 of this terms of reference document: https://www.pcouncil.org/wp-content/uploads/2018/10/Stock_Assessment_ToR_2019-20_SEPT2018_Final.pdf

andrew-edwards commented 5 years ago

Notes for the write-up (and to document what we've done; also see the Google Docs readme). Note that the bridging analysis will just be point 1 below (and we can do point 5 as well):

  1. Last year's base model is 2018.40_base_model (and will be used to compare the new results against 2018's), but we started with 2018.40.25_alternative_Catch, which has the correct catch numbers (the 'alternative catch stream' in point 8 on p. 61 of the 2018 assessment). That sensitivity run (which changes <0.5% of the catch from 2007-2017) was too small to make a noticeable impact on the spawning biomass time series. (We can verify this further by doing a bridging run.)

  2. Added the total catch for 2018 and incremented the end year.

  3. Age data for 2018 are not available, so could not be added.

  4. Control file: unchanged (normally it would be updated; see point 1 above).

  5. wtatage.ss: weight-at-age data are not available for 2018. Therefore, for 2018 we used the average of the weights-at-age from 2015-2017; these years were chosen based on p. 23 of the 2018 assessment (alternative SRG run; these years were used for the forecasts). We took these averages from sensitivity run 2018.40.31. Such averages are not simple averages of the 2015-2017 vectors, but depend on the original sample sizes (which we do not have, so we cannot try, say, 2010-2017).

This '2018' weight-at-age vector is also used for the projections (to remove the inconsistency discussed on p. 23 of the 2018 assessment with respect to the alternative run). So the projections use the short-term average over 2015-2017, which makes more sense than using the full average over all years.

We have kept the -1940 values the same as in last year's base model, i.e. the average of all weights-at-age; they do not get updated because there are no new age data. The -1940 values get used for 1966-1974 and for the unfished equilibrium. We have not used the 1975-1979 average for -1940 (as we did for sensitivity 31 last year, as described in the 'inconsistency') because, as Aaron showed at the JTC meeting in December, there are very few samples for the early years (I think just one age-15 in 1975, a fat fish, that then gets used for 1976 since 1976 had no age-15s).

Thus we stuck with the 2018 base model assumption of using the average of all weights-at-age for the unfished calculations and 1966-1974. The 2015-2017 data are better sampled, and it makes sense to use this average for the projections for 2019 and 2020 (and for 2018 since we do not have 2018 data available).

We wrote code to calculate the averages in utilities.R, in the function avg.wtatage(). When testing with last year's files (e.g. the -1940 weights-at-age) we realised that the averages are not simple averages of the annual vectors: they use the sample sizes, which we do not have. However, the averages we need are already available, as noted above.
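For reference, here is a minimal sketch of the idea (this is not the actual avg.wtatage() code; the function name, argument names, and the optional sample-size weighting are illustrative assumptions):

```r
# Sketch only: average a weight-at-age matrix (rows = years, columns = ages)
# over a chosen set of years. A simple unweighted mean is shown; the 'true'
# averages in wtatage.ss are weighted by sample sizes, which we do not have.
avg_wtatage_sketch <- function(wtatage,           # numeric matrix, rownames = years
                               yrs,               # years to average, e.g. 2015:2017
                               n_samples = NULL)  # optional matrix of sample sizes
{
  rows <- rownames(wtatage) %in% as.character(yrs)
  if (is.null(n_samples)) {
    colMeans(wtatage[rows, , drop = FALSE])                   # simple average
  } else {
    w <- n_samples[rows, , drop = FALSE]
    colSums(wtatage[rows, , drop = FALSE] * w) / colSums(w)   # sample-size-weighted
  }
}
```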

That was all for 2019.00.base_model.

  1. Fecundity. For 2019.01.base_model we first corrected the (confusing) comment in wtatage.ss from:
    #Maturity x Fecundity: Fleet = -2

    to:

    # Fecundity = Maturity x Weight-at-age: Fleet = -2

Last year the SRG-requested alternative model run used time-varying fecundity. There are four components (see p. 23 of the 2018 assessment):

a) For 1975-2017 (now 1975-2018), multiply the weight-at-age matrix by the maturity ogive to give time-varying fecundity. We have done this in the new 2019.01.base_model because it makes more sense than sticking with the previous average weight-at-age, which gives a non-time-varying fecundity vector. This was discussed and agreed upon by the JTC at our planning meeting in December 2018.

b) "Set equilibrium and 1966-1974 fecundity [the -1940 row in wtatage.ss] (where empirical data are not available) to the product of weight-at-age averaged over 1975-1979 and maturity." At our JTC planning meeting, Aaron Berger showed plots of sample sizes (unavailable for this document). For instance, there was only one (if we remember correctly) age-15 fish in 1976, which then gets used as the 1975 value (see the bold value for 1975 in XXFigure 13). The same holds for age-14. If we (CG and AE) remember correctly, the sample sizes were generally low for the early years. Thus, using the average of the first five years of data for all of the early years does not seem justified. In particular, the single age-15 fish in 1976 is the heaviest fish on record (by 360 g). Our JTC notes say that "Aaron’s analysis suggests that there’s not compelling evidence that we should use those short-term averages for the pre-data years or the forecast as opposed to the status-quo of the long-term average across all years". Thus, although we are not able to present these analyses, we are using the long-term average (across all years) for the equilibrium and 1966-1974 fecundity, as in the 2018 base model.

For reference, our JTC notes say: "For 2019 allowing fecundity-at-age for the years with data seems reasonable, but Aaron’s analysis suggests that there’s not compelling evidence that we should use those short-term averages for the pre-data years or the forecast as opposed to the status-quo of the long-term average across all years."

c) "Set forecast-year fecundity (including 2017 due to current configurations in Stock Synthesis) weight-at-age to the product of maturity and mean weight-at-age over 2015-2017." Aaron's analysis did not find compelling evidence to use a short-term average for the forecasts (see the quote above). However, since the 2018 age data are unavailable for this assessment, we have used the 2015-2017 average weight-at-age for the 2018 weight-at-age, and therefore for the 2018 fecundity (this serendipitously avoids the Stock Synthesis configuration issue). It therefore seems sensible to use the same fecundities for the projections (rather than the average over all years). A sensitivity analysis (2019.01.34) will test this (it will have to use the average over all years for 2018, due to the Stock Synthesis configuration just mentioned).

d) During the 2018 SRG meeting it was further realised that: "An inconsistency in this [alternative run] is that the mean weight-at-age across all years is still used for the calculation of stock biomass in the years outside the range with empirical data (1975-2017), rather than the short-term averages (1975-1979 or 2015-2017). A brief examination of the sensitivity of the alternative run [2018.40.31] to removing this inconsistency showed relatively little change in results." Therefore, for consistency, in the new base run we use the 2015-2017 mean weight-at-age for the forecasts (but stick with the long-term average for the early years, as per (b) above). [Hence wtatage.ss has -2018 rows for both the fecundity and the weight-at-age matrices.]

So we are using time-varying fecundity as in 2018.40.29, with the 2015-2017 average as in 2018.40.31 (but not the short-term early average for the early years). Sensitivity test names have been set up on Google Drive.

To calculate the new fecundities we wrote the function fec.wtatage() in utilities.R. We verified that, starting from 2018.40.base_model, it reproduced last year's 2018.40.31 model. We then used the function to generate the fecundity matrix for the new 2019.01.base_model.
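For reference, a minimal sketch of the calculation (this is not the actual fec.wtatage() code; the names, the choice of years, and the handling of the -1940/-2018 rows are illustrative, based on the description above):

```r
# Sketch only: build the fleet = -2 (fecundity) rows of wtatage.ss as
# fecundity-at-age = maturity-at-age x weight-at-age.
#   * yearly rows (1975 onwards): each year's weight-at-age x maturity
#   * -1940 row: long-term average weight-at-age x maturity (equilibrium, 1966-1974)
#   * -2018 row: 2015-2017 average weight-at-age x maturity (forecast years)
fec_wtatage_sketch <- function(wtatage,    # numeric matrix, rownames = years, cols = ages
                               maturity)   # maturity ogive, one value per age
{
  stopifnot(ncol(wtatage) == length(maturity))
  fec_yearly   <- sweep(wtatage, 2, maturity, `*`)
  fec_longterm <- colMeans(wtatage) * maturity
  recent       <- rownames(wtatage) %in% as.character(2015:2017)
  fec_forecast <- colMeans(wtatage[recent, , drop = FALSE]) * maturity
  rbind("-1940" = fec_longterm, fec_yearly, "-2018" = fec_forecast)
}
```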

We also verified that the changes to wtatage.ss from 2018.40.base_model to 2018.40.29_fecundity_matrix_1 are the same as the changes from 2018.40.base_model to 2019.01.base_model (plus the addition of the 2018 'data'); there are minor differences in the 4th decimal place due to rounding. We also verified that the control and data files did not change from 2018.40.base_model to 2018.40.29_fecundity_matrix_1 (or to 2018.40.30_fecundity_matrix_3).
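As a rough illustration, a check like the following could be used (a hypothetical helper, not a function from the repository; it assumes both files have the same layout of non-comment rows and no trailing inline comments on data rows):

```r
# Sketch only: read the numeric rows of two wtatage.ss files (ignoring
# comment lines) and report the largest absolute difference, to confirm that
# any discrepancies are just 4th-decimal rounding.
max_wtatage_diff <- function(file_a, file_b) {
  read_rows <- function(f) {
    lines <- readLines(f)
    lines <- lines[!grepl("^\\s*#", lines)]     # drop comment lines
    as.matrix(read.table(text = lines, fill = TRUE))
  }
  a <- read_rows(file_a)
  b <- read_rows(file_b)
  stopifnot(identical(dim(a), dim(b)))
  max(abs(a - b), na.rm = TRUE)                 # largest absolute difference
}
```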

  1. These changes meant that SS3 did not converge for the new base run (2019.01.base_model), although it did converge for some of the sensitivities (see #479). Changing the INIT value of steepness (h) from 0.88 to 0.95 ensured convergence, and this was done for the new base model (2019.02.base_model) and all sensitivities (except for 2019.02.02_h_fix_high, which fixes steepness at 1; that one worked for 2019.01 and was used to figure out the problem). A sketch of the control-file edit is given after this list.

  2. Also removed years from filenames, so hake_data.SS instead of 2018hake_data.SS (and similarly for hake_control.SS), and updated starter.SS accordingly. The years are known from the directories the files sit in, and this makes updating each year easier (and makes the sensitivities easier to set up).
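For reference, the steepness change in point 1 amounts to editing one value in the control file. A rough sketch of automating that edit (hypothetical; the change can just as easily be made by hand, and the "SR_BH_steep" label and the position of the INIT value are assumptions about the Stock Synthesis control-file layout):

```r
# Sketch only: bump the INIT value of steepness (h) from 0.88 to 0.95 on the
# steepness parameter line of hake_control.SS. Assumes that line carries an
# "SR_BH_steep" comment and that 0.88 first appears in the INIT column; a
# fragile string replacement like this should always be checked by eye.
bump_steepness_init <- function(ctl_file = "hake_control.SS",
                                old_init = "0.88",
                                new_init = "0.95") {
  ctl <- readLines(ctl_file)
  i <- grep("SR_BH_steep", ctl)                     # find the steepness line(s)
  ctl[i] <- sub(old_init, new_init, ctl[i], fixed = TRUE)
  writeLines(ctl, ctl_file)
  invisible(i)                                      # line number(s) edited
}
```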

andrew-edwards commented 5 years ago

[deleting text since now updated above]

andrew-edwards commented 5 years ago

Closing this. But first, comparing hake-assessment-29-jan-2019.pdf (on the Google Drive, based on no 2018 age data) with the current base model:

| Model | Female spawning biomass (95% cred., 1000 t) | 2016 age-0 recruits (95% cred.) |
|---|---|---|
| No 2018 ages | 400-3,771 | 35-33,517 |
| With 2018 ages | 471-3,601 | 746-26,085 |

So age data obviously reduce the uncertainty, but not by as much as you might hope (I expect the projections become more uncertain, though). Presumably you can get away without age data for one year, but soon the uncertainty would become too large. I was just curious to look at the numbers (figures are tricky to compare because the scales can change).
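To put rough numbers on that, the interval widths from the table can be compared directly (a quick back-of-the-envelope check, using only the values above):

```r
# Width of each 95% credible interval, with and without the 2018 age data,
# and the percentage reduction from adding the ages.
sb_no_ages    <- 3771 - 400     # spawning biomass width, no 2018 ages (1000 t)
sb_with_ages  <- 3601 - 471     # spawning biomass width, with 2018 ages (1000 t)
rec_no_ages   <- 33517 - 35     # 2016 recruitment width, no 2018 ages
rec_with_ages <- 26085 - 746    # 2016 recruitment width, with 2018 ages

100 * (1 - sb_with_ages / sb_no_ages)    # ~7% narrower spawning biomass interval
100 * (1 - rec_with_ages / rec_no_ages)  # ~24% narrower recruitment interval
```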