pacific-hake / hake-assessment

:zap: :fish: Build the assessment document using latex and knitr
MIT License

revise selectivity write-up #323

Closed iantaylor-NOAA closed 3 years ago

iantaylor-NOAA commented 6 years ago

The selectivity write-up could use a few changes, regardless of whether or not we switch to a new parameterization (issue #310).

Among the changes that could be useful are:

aaronmberger-nwfsc commented 6 years ago

Yeah, if the selectivity at Amin (survey and fishery) is fixed at a reference level (0 in this case) in parameter space and the others are estimated relative to that reference, it should be fine, as long as the relationship of the reference level to all other estimated levels is preserved after back-transformation and rescaling (max selectivity is 1). It sounds like this is the case (as I'm sure it would be). Maybe just use the phrase 'fixed to a reference level (0 in this case)' instead of 'fixed at 0'. Or maybe it's just my confusion...
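The point above can be checked numerically: in a random-walk parameterization, selectivity is the exponential of the cumulative sum of the parameters, rescaled so the maximum is 1, so adding any constant to the fixed first parameter shifts every cumulative sum equally and cancels in the rescaling. A minimal sketch (illustrative Python, not the assessment's R code; `selex` is a hypothetical helper):

```python
import math

def selex(pars):
    """Random-walk selectivity: exp(cumsum(pars)), rescaled so max = 1."""
    cum, total = [], 0.0
    for p in pars:
        total += p
        cum.append(total)
    m = max(cum)
    return [math.exp(c - m) for c in cum]

base = [0.0, 1.2, 0.8, -0.3, 0.0]   # first parameter fixed at reference level 0
shifted = [0.5] + base[1:]          # same increments, reference level 0.5 instead

s1, s2 = selex(base), selex(shifted)
# The rescaled curves are identical: the choice of reference level cancels.
print(all(abs(a - b) < 1e-12 for a, b in zip(s1, s2)))  # → True
```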

andrew-edwards commented 6 years ago

Just a reminder that last year's Data section includes two pages (39-40) on the selectivity function; presumably that would be a starting point.

iantaylor-NOAA commented 6 years ago

The write-up of selectivity (pages 39-40) should be expanded to include discussion of the extra transformation for the time-varying parameters that was present (but undocumented) in the 2017 model and is no longer available in SS version 3.30 (illustrated in the figure below, previously shared by email).

I previously stated that the slope was about 0.135, but taking the derivative of the equation involved reveals that, ignoring the tiny added constants, it's actually 1/7 ≈ 0.1428. This value implies that the old phi = 0.20 should be converted to phi = 0.20 * 7 = 1.40, rather than the 1.50 value found by trial and error and used in previous conversions to 3.30. Indeed, an SS 3.30 model with phi = 1.40 matches our 2017 model (which used 3.24) a tiny bit better than earlier attempts, so I will use that value in the updated bridging steps, starting with model 2018.31.

[Figure: selectivity_transformation_illustration]
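The conversion arithmetic can be spelled out: assuming the old transformation is approximately linear with slope 1/7, a deviation multiplier phi on the transformed scale corresponds to phi divided by that slope (i.e. phi times 7) on the untransformed 3.30 scale. A quick check:

```python
# Approximate slope of the (near-linear) 3.24 parameter transformation,
# ignoring the tiny added constants, as described above.
slope = 1 / 7
print(round(slope, 4))      # → 0.1429

# Converting the old phi to the untransformed (3.30) scale.
old_phi = 0.20
new_phi = old_phi / slope   # equivalently old_phi * 7
print(round(new_phi, 2))    # → 1.4
```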

iantaylor-NOAA commented 6 years ago

The function randWalkSelex.fn (in utilities.r) replicates the calculation of the selectivity from the parameter values, with or without deviations. In commit f697a82 I modified it to remove the parameter transformation described in my previous comment on this issue, so that the calculations will work for our new base model. If we need to represent selectivity for earlier models, we can change the new argument from transform=FALSE to transform=TRUE.
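The structure of that calculation can be sketched as follows (illustrative Python, not the actual R function in utilities.r; the `transform` branch uses a simple division by 7 as a stand-in for the old 3.24 transformation, which is an assumption here, not its exact form):

```python
import math

def rand_walk_selex(pars, devs=None, transform=False):
    """Sketch of a randWalkSelex.fn-style calculation (hypothetical port).

    pars:      random-walk increments, the first fixed at the reference level.
    devs:      optional annual deviations added to the parameters.
    transform: if True, apply a stand-in for the removed 3.24 transformation
               (approximated here by its ~1/7 slope) to represent older models.
    """
    if devs is not None:
        pars = [p + d for p, d in zip(pars, devs)]
    if transform:
        pars = [p / 7 for p in pars]  # placeholder for the old transformation
    cum, total = [], 0.0
    for p in pars:
        total += p
        cum.append(total)
    m = max(cum)
    return [math.exp(c - m) for c in cum]  # rescaled so max selectivity = 1
```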

aaronmberger-nwfsc commented 6 years ago

Ian:

is this statement still true, or has something changed in the new version of SS (I didn't think so, but thought I'd check)...

"forecasted catches are removed using selectivity averaged over the last five years"

iantaylor-NOAA commented 6 years ago

Yes, still true thanks to this line in forecast.ss:

```
#_Fcast_years:  beg_selex, end_selex, beg_relF, end_relF, beg_recruits, end_recruits  (enter actual year, or values of 0 or -integer to be rel. endyr)
 -4 0 -4 0 -999 0
```

A change in 3.30 is the addition of the 5th and 6th inputs there, which control the reference years for recruitment in models with time-varying stock-recruit parameters due to regime shifts or similar (the -999 to 0 default just says average over the whole time series). Our model has time-invariant R0, h, and sigmaR, so those new settings don't matter.
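The relative-year convention in that line can be made concrete: per the comment in forecast.ss, 0 means the model end year and a negative integer is an offset from it, so `-4 0` spans the last five years. A small sketch of the conversion (the helper name and the example end year are illustrative, not from the model files):

```python
def fcast_year(value, endyr):
    """Convert a forecast.ss relative-year input to an actual year:
    0 or a negative integer is an offset from the model end year;
    positive values are taken as actual years."""
    return endyr + value if value <= 0 else value

endyr = 2017  # illustrative end year
beg_selex, end_selex = fcast_year(-4, endyr), fcast_year(0, endyr)
print(beg_selex, end_selex)  # → 2013 2017: selectivity averaged over 5 years
```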

iantaylor-NOAA commented 6 years ago

I think this is adequately addressed to close the issue. I have a few additional ideas for further improvement but I think the minimum necessary for a complete document has been done.

kellijohnson-NOAA commented 3 years ago

Rick Methot notes, when looking at a very long forecast from the hake model that ...

> I note that this model has time-varying selectivity parameters and that the forecast selectivity is set to average over the last 5 years of the time series. This means that selectivity in the forecast is much less variable than selectivity in the time series. An alternative configuration would have the selectivity parameters continue to have random devs into the forecast period and would change the forecast selectivity to use annual calculations rather than an average of recent years. I tried this alternative and, as expected, it had a large impact on the variance patterns in the forecast.

We could set up a model run where the forecast is based on random deviates to see how much the forecasts change, but we aren't using MLE any more, so this seems pointless. We need to determine how this would change, or would need to be set, if selectivity used the 2dAR process. Do we currently do anything different for forecasting in the sensitivity run with 2dAR selectivity? Moved to a new issue.

RM also noted that the maxF in the forecast is set in the control file to be 1.5. Just something to think about: should it be increased or not? Some Fs are 0.8. I think it is fine to leave as is, given that we only do short-term forecasting.