pacific-hake / hake-assessment

:zap: :fish: Build the assessment document using latex and knitr

Trevor's comment about recruitment forecasting #810

Closed: cgrandin closed this issue 3 years ago

cgrandin commented 3 years ago

From his email:

One thing struck me this year, which is that the stock is quite likely to decline below B40 in the next 2-4 years unless there is a good recruitment event, almost no matter what the catches are set at. Which makes future projections more important than in the last 5-10 years when the stock was consistently at high levels.

What I was wondering was whether the method you use to sample recruitment deviates matches the actual distribution of recruitment deviates. i.e. I think you currently sample from a lognormal with sigma_r. But what if the actual deviates are less often likely to be large than in a lognormal (or more likely)? In other words, if you just resampled deviates from the set of estimated deviates in the past, and used those "empirical" deviates in the projections, would you get a substantially different estimate of the probability of falling below B40?
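A minimal R sketch of the comparison Trevor is asking about, assuming a vector est_devs of estimated log-scale recruitment deviates (a hypothetical stand-in here) and sigma_r = 1.4 (the value used later in this thread):

set.seed(1)
sigma_r  <- 1.4
est_devs <- rnorm(54, 0, 1.2)   # hypothetical stand-in for the estimated log-scale deviates
n_proj   <- 10000               # number of projected deviates to draw

dev_lognormal <- rnorm(n_proj, 0, sigma_r)                 # current assumption (log scale)
dev_empirical <- sample(est_devs, n_proj, replace = TRUE)  # resample "empirical" deviates

# Compare the chance of a large recruitment event under each scheme
mean(dev_lognormal > 1)
mean(dev_empirical > 1)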

cgrandin commented 3 years ago

I'm interested in how you do this in SS. I see this in the SS manual for the Forecast.ss file: [screenshot from the SS manual]

We currently have: -4 0 -4 0 -999 0

Which I think means that recruitment is averaged over the whole time series (-999, 0). Does this mean we can put a year range in here for a period of high recruitment, for example (2010, 2016)?

I'm not sure what the goal is here other than to generate a new table with smaller probabilities due to higher recruitment.

kellijohnson-NOAA commented 3 years ago

You also have to turn that on rather than have rec devs be estimated parameters. [screenshot]

cgrandin commented 3 years ago

So if 'forecast recruitment' is set to 3 then the years defined in the 'forecast years' section I mentioned above will be used for the mean in the forecast period?

cgrandin commented 3 years ago

Also, it sounds like this" 'resampled deviates from the set of estimated deviates in the past, and used those "empirical" deviates in the projections' is not possible in SS, it can only use means of year ranges for forecast recruitment.

aaronmberger-nwfsc commented 3 years ago

Here is what I did.

  1. Forecast file: Change the Forecast Recruitment flag to 0, 1, 2, or 3 depending on what you want to do. Zero is what we do now. The others are either simple multipliers on recruitment (options 1 and 2) or an average over a range of years (option 3).
  2. Forecast file: If using anything other than zero in step 1, then on the very next line input the multiplier (options 1 and 2) or the number of years included in the mean (option 3), counting back in time from the last main rec dev year (this year that would be 2018, so putting a 5 here would mean averaging over 2014-2018). If doing option 3, then additionally go to the Fcast_years line and change the existing -4 0 -4 0 -999 0 line to -4 0 -4 0 -X 0, where X is the number of years going backwards plus 1 (so for a recent 5-year average it would be: -4 0 -4 0 -6 0).
  3. Control file: If using anything other than zero in step 1, then change the "forecast recruitment phase" to a negative number (i.e., turn off estimation). A sketch of the edited lines follows this list.
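As a concrete sketch of steps 1-3 for option 3, using the values discussed in this thread (the comment labels and line positions are illustrative, not copied from an actual hake forecast.ss or control file):

# Forecast file (forecast.ss)
-4 0 -4 0 -6 0   # Fcast_years: selex, relF, recruitment; -6 0 gives the recent 5-yr mean (2014-2018)
3                # Forecast Recruitment flag (0 = current approach; 3 = mean over recent years)
5                # number of years in the mean, counting back from the last main rec dev year (2018)

# Control file
-1               # forecast recruitment phase set negative to turn off estimation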

Note that within SS, if you change the phase on forecast recruitments, everything from 2019 onwards is specified as a forecasted recruitment (in other words, anything past the main rec dev period). So these models have a few fewer estimated parameters. Also, when doing really small forecast recruitments the whole time series shifts much more than I would have expected, so I'm not sure what is going on there. It might be related to bias adjustment, so maybe that weird behavior would go away in MCMC.

In my examples from early in January, I compared (in MLE):

  1. Base model
  2. Age-1 survey model
  3. Base model with forecast recruitment averaged over 2014-2018 (5 yr) and forecast selectivity averaged over 5 years
  4. Base model with forecast recruitment averaged over 2009-2018 (10 yr) and forecast selectivity averaged over 5 years
  5. Base model with standard forecast recruitment and forecast selectivity as a random walk over 5 years

[attachment: Compare_models.pdf]

aaronmberger-nwfsc commented 3 years ago

The other set of options I was looking at this morning uses Forecast Recruitment flag 1, which means the forecasts can be just a scalar on the stock-recruitment curve. We could use this to say, for example, that the forecast years have X% below-average recruitment, such as a string of recruitment deviations 10% below average, as in the figure below. However, again, if the recruitments get too low it alters estimates of R0 (I don't know why) and significantly changes the trend (which it shouldn't). I'll have to try these in MCMC if I have time.

[figure: compare9_recruits]
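If the next-line value really is a simple multiplier on the stock-recruit prediction (as described above), then a string of forecast recruitments 10% below average would look something like this sketch (values illustrative):

1      # Forecast Recruitment flag 1: scalar times the stock-recruit prediction
0.9    # multiplier, i.e. forecast recruitment roughly 10% below the stock-recruit curve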

aaronmberger-nwfsc commented 3 years ago

The above figure was for a constant rec dev of -10%. When I decrease to an even poorer set of recruitments (-30%), I get this big change in R0. [figure: compare9_recruits]

I'm running an MCMC on this model to see if results differ.

aaronmberger-nwfsc commented 3 years ago

OK! The MCMC version of the 30% decline in recruitment is doing what I expected (see below). So just be aware that the MLE versions have some issues, likely associated with the bias adjustment stuff. [figure: compare9_recruits]

So, how do we want to proceed? I think nobody is going to be interested in increasing recruitment in the forecast period relative to the base model. So for "exploratory" runs we could decrease recruitment by 10%, 20%, and 30% relative to the base model forecast assumptions. The figure here is the 30% decrease, so the 10% and 20% lines will fall somewhere in between. Then we can just report probabilities of being/going below B40% in 2021, 2022, and 2023. I already have the 30% done, so I'll send off the 10% and 20% MCMC runs today.
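A sketch of how those probabilities could be pulled from the MCMC output, assuming a data frame post of posterior draws with relative spawning biomass columns named like Bratio_2021 (the object and column names are assumptions based on typical SS derived-quantity output):

# Probability of being below B40% in each forecast year (hypothetical object names)
yrs <- 2021:2023
p_below_b40 <- sapply(yrs, function(y) mean(post[[paste0("Bratio_", y)]] < 0.40))
names(p_below_b40) <- yrs
round(p_below_b40, 2)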

kellijohnson-NOAA commented 3 years ago

So ... in the PDF, Rec5yr means the average of the last five years?

Regarding the shift in R_0 given recruitment, my guess is that you are crashing the stock, so it has to change the time series to support the catches that you want, but that is just a guess. One thing to be cautious of is how catches are being taken. I think we support an F up to 1.5. I haven't looked at the forecast file, but we should make sure that full catches are being taken and not a percentage based on an HCR.
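A quick check along those lines could be sketched as follows, assuming post holds MCMC draws with ForeCatch_<year> columns (an assumption based on typical SS derived-quantity output) and catch_fixed is the catch level specified in the forecast file (hypothetical value):

# Check (sketch) that the full fixed catch is removed in the forecast year
catch_fixed <- 380000                      # hypothetical fixed catch (t)
summary(post$ForeCatch_2021)               # should sit at catch_fixed if the F limit is not hit
mean(post$ForeCatch_2021 < catch_fixed)    # fraction of draws where the catch was reduced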

aaronmberger-nwfsc commented 3 years ago

Yes, Rec5yr means average of the last 5 years, where "last 5 years" means 2014-2018 because you start counting backwards from the last main rec dev year.

It's not crashing the stock; the shift in R_0 was due to some bias adjustment stuff, because when running it in MCMC it does not shift R_0 and it performs more like I would have expected (i.e., it only affects the trajectory in the forecast years).

kellijohnson-NOAA commented 3 years ago

I worry about the potential number of runs that could be done here where it seems like it is a question for the MSE rather than for the stock assessment. As in, how much do you have to scale down estimates of recent recruitment to ensure that you keep the stock at a certain percentage of B_0 for the next 50 years if management measures are only set one year out.

aaronmberger-nwfsc commented 3 years ago

I can't argue with that - plus the SRG's request to look at different reference points plays into all this too.

andrew-edwards commented 3 years ago

I think this figure helps answer Trevor's question: [figure: recdevs]

It's a histogram of the base model median recdevs (on the log scale) from 1966-2019, with the normal(0, 1.4) density (i.e., the lognormal assumption on the log scale) overlaid in red. Trevor asked "whether the method you use to sample recruitment deviates matches the actual distribution of recruitment deviates?", and it seems to match fairly well. Maybe actual past deviations are a little higher than the lognormal suggests.

That can be seen a little better with a finer bin size: [figure: recdevs-0.5]

So if we resampled from the estimated deviates rather than the lognormal, it looks like we'd get slightly higher recruitments, but nothing crazy (though this is on the log scale). Don't forget these plots are just the median deviates.

Here's my code just so I don't lose it!

# Median recruitment deviations (log scale) from the base model MCMC output
x <- base.model$mcmccalcs$devmed
# Drop the early and late deviations to restrict to 1966-2019
xx <- x[-c(1:20, (length(x) - 3):length(x))]
x_val <- seq(-4, 4, by = 0.1)

# Histogram with bin width 1, overlaid with the assumed N(0, 1.4) on the log
# scale, rescaled to expected counts (density * n * bin width)
hist(xx, breaks = seq(-4, 4, by = 1), main = "Recruitment deviations from 1967-2019")
y <- dnorm(x_val, 0, 1.4) * length(xx)
lines(x_val, y, col = "red")

# Narrower bins (though sample sizes are small); bin width 0.5, so divide by 2
hist(xx, breaks = seq(-4, 4, by = 0.5), main = "Recruitment deviations from 1967-2019")
y <- dnorm(x_val, 0, 1.4) * length(xx) / 2
lines(x_val, y, col = "red")

andrew-edwards commented 3 years ago

Hmmm.... What do we mean by (p62) "(no recruitment deviation)" here?

[screenshot of the text in question on p62]

It contradicts p46 and what's in the management presentation, so I assume it should be changed.

kellijohnson-NOAA commented 3 years ago

I think that is a mistake.

andrew-edwards commented 3 years ago

Regarding my question above - or is it just average recruitment assumed for 2022 (and 2021?) because it doesn't really matter, since those fish won't show up in the projections anyway?

Else, we have to explain how the random future recruitment deviations jibe with the MCMC samples. Either each MCMC sample also has a random deviation, or we do one deviation and apply it to all samples (seems bad), or we do multiple draws for each MCMC sample (seems tedious). Though, again, I don't think it will actually impact any results, given the fish will be so small.
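A minimal sketch of the "each MCMC sample also has a random deviation" option, with sigma_r = 1.4 and hypothetical stand-ins for the per-sample stock-recruit predictions (ignoring bias adjustment for simplicity):

set.seed(42)
sigma_r    <- 1.4
n_mcmc     <- 2000                            # number of posterior samples
r_expected <- rlnorm(n_mcmc, log(2e6), 0.3)   # hypothetical per-sample S-R predictions
dev_2022   <- rnorm(n_mcmc, 0, sigma_r)       # one log-scale deviation per MCMC sample
r_2022     <- r_expected * exp(dev_2022)      # projected 2022 recruitment per sample
quantile(r_2022, c(0.025, 0.5, 0.975))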

aaronmberger-nwfsc commented 3 years ago

I don't know the answer to that question. Kelli, do you? I would have thought that recruitment forecasts are taken from the stock-recruit curve directly (thus no forecast recruitment deviations are estimated), and that the variation arises in an MLE context strictly from the assumption of sigmaR, and in an MCMC context from different posterior stock sizes that change future recruitment by being on different spots of the stock-recruitment curve (hence we have a posterior distribution for forecast recruitment).

andrew-edwards commented 3 years ago

I want to repeat my figures above using all the MCMC samples, not just the medians. That would better address the original comment, I think. For 2022.
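A sketch of that extension, assuming the full posterior is available as base.model$mcmc with recruitment-deviation columns whose names contain "RecrDev" (the object and column names are assumptions):

# Pool all MCMC samples of the recruitment deviations, not just the medians
dev_cols <- grep("RecrDev", names(base.model$mcmc), value = TRUE)
all_devs <- unlist(base.model$mcmc[, dev_cols])
hist(all_devs, breaks = 50, freq = FALSE,
     main = "All MCMC samples of recruitment deviations")
curve(dnorm(x, 0, 1.4), col = "red", add = TRUE)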

kellijohnson-NOAA commented 3 years ago

Is there still a question here for me? I feel like it is easiest to think of the forecast in terms of recruitment deviations, which are parameters in the forecast (lognormally distributed based on sigmaR) when you do MCMC, and which are fixed at zero in MLE.

aaronmberger-nwfsc commented 3 years ago

I think we can close/archive this. I was just chatting with Rick and he said he's thought about resampling rec devs but noted: "The problem is that it is not differentiable, so it only works with MCMC" - meaning it could work for us if code were developed (and if we even thought that was a reasonable way forward; I'm not really sure about that myself, but the option might be useful for explorations).

andrew-edwards commented 3 years ago

I also want to plot, say, 100 MCMC samples of the recruitment time series like I did in #747. But next year....

andrew-edwards commented 3 years ago

For 2022: these ideas weren't really discussed in the 2021 SRG meeting, but they are in the 2021 SRG report as 'recommendations' to look into. Though the resampling approach may not be possible given Aaron's comment above about differentiability, and maybe isn't needed if I do some of the plots I suggested above....

aaronmberger-nwfsc commented 3 years ago

I'm involved in a Master's student project that will be starting shortly looking at alternative ways to realistically forecast recruitment (in SS and otherwise), so we could always use that as an ongoing research item if we don't have time to do much of this ourselves before next year.