cgrandin closed this issue 3 years ago.
I'm interested in how you do this in SS. I see in the SS manual that in the Forecast.ss file we currently have:

`-4 0 -4 0 -999 0`

which I think means that recruitment is averaged over the whole time series (`-999 0`). Does this mean we can put a year range in here for a period of high recruitment, for example (2010, 2016)?
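If year ranges are allowed there, a hedged sketch of the change (assuming the usual Forecast.ss layout, where the six values are begin/end years for selectivity averaging, relative-F averaging, and recruitment averaging, in that order; check this against your SS version) would be:

```
#_Fcast_years: beg/end selex, beg/end relF, beg/end recruitment
# -999 0 means average recruitment over the whole time series;
# replacing the last pair targets a specific high-recruitment period:
-4 0 -4 0 2010 2016
```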
I'm not sure what the goal is here other than to generate a new table with smaller probabilities due to higher recruitment.
You also have to turn that on rather than have rec devs be estimated parameters.
So if 'forecast recruitment' is set to 3 then the years defined in the 'forecast years' section I mentioned above will be used for the mean in the forecast period?
Also, it sounds like this ("resampled deviates from the set of estimated deviates in the past, and used those 'empirical' deviates in the projections") is not possible in SS; it can only use means of year ranges for forecast recruitment.
Here is what I did.
Note that within SS, if you change the phase on forecast recruitments, then everything from 2019 onwards is specified as a forecasted recruitment (in other words, anything past the main rec dev period). So these models have a few fewer estimated parameters. Also, with really small forecast recruitments the whole time series shifts much more than I would have expected, so I'm not sure what is going on there. It might be related to bias adjustment, so maybe that weird behavior would go away in MCMC.
In my examples done early in January, I looked at comparing (in MLE):

- Base model
- Age-1 survey model
- Base model with forecast recruitment averaged over 2014-2018 (5 yr) and forecast selectivity averaged over 5 years
- Base model with forecast recruitment averaged over 2009-2018 (10 yr) and forecast selectivity averaged over 5 years
- Base model with standard forecast recruitment and forecast selectivity as a random walk over 5 years
The other set of options I was just looking at this morning uses Forecast Recruitment flag 1, which means your forecasts can be just a scalar on the stock-recruitment curve. We could use this to say, for example, that the forecast years have X% below-average recruitment, such as a string of recruitment deviations each reduced by 10%, as in the figure below. However, again, if the recruitments get too low it alters estimates of R0 (I don't know why) and significantly changes the trend (which it shouldn't). I'll have to try these in MCMC if I have time.
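A rough sketch of what that option could look like in Forecast.ss (hypothetical values; the exact lines and their order depend on the SS version, so treat this as illustrative only):

```
1      #_Fcast_rec_option: 1 = scalar times the stock-recruitment curve
0.9    #_Fcast_rec_val: 0.9 implies forecast recruitment 10% below the curve
```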
The above figure was for a constant rec dev of -10%. When I decrease to an even poorer set of recruitments (-30%), I get this big change in R0.
I'm running an MCMC on this model to see if results differ.
OK! The MCMC version of the 30% decline in recruitment is doing what I expected (see below). So just be aware that the MLE versions have some issues, likely associated with the bias adjustment.
So, how do we want to proceed? I think that nobody is going to be interested in increasing recruitment in the forecast period relative to the base model. So for "exploratory" runs we could decrease recruitment by 10%, 20%, and 30% relative to the base model forecast assumptions. The figure here is the 30% decrease, so the 10% and 20% lines will be somewhere in between. Then we can just report probabilities of being/going below B40% in 2021, 2022, and 2023. I already have the 30% run done, so I'll send off the 10% and 20% MCMC runs today.
So ... in the PDF, Rec5yr means the average of the last five years?
Regarding the shift in R_0 given recruitment, my guess is that you are crashing the stock so it has to change the time series to support the catches that you want, but that is just a guess. One thing to be cautious of is how catches are being taken. I think we support an F up to 1.5. I haven't looked at the forecast file, but we should make sure that full catches are being taken and not a percentage based on a HCR.
Yes, Rec5yr means average of the last 5 years, where "last 5 years" means 2014-2018 because you start counting backwards from the last main rec dev year.
It's not crashing the stock, the shift in R_0 was due to some bias adjustment stuff because running it in MCMC it does not shift R_0 and performs more like I would have expected (i.e., only affects the trajectory in the forecast years).
I worry about the potential number of runs that could be done here where it seems like it is a question for the MSE rather than for the stock assessment. As in, how much do you have to scale down estimates of recent recruitment to ensure that you keep the stock at a certain percentage of B_0 for the next 50 years if management measures are only set one year out.
— Kelli Faye Johnson, PhD, Research Fish Biologist, Northwest Fisheries Science Center, National Marine Fisheries Service
I can't argue with that. Plus, the SRG's request to look at different reference points plays into all this too.
I think this figure helps answer Trevor's question:
It's a histogram of the base model's median rec devs (on the log scale) from 1966-2019, with a Normal(0, 1.4) curve (the assumed lognormal recruitment distribution, on the log scale) overlaid in red. So Trevor asked "whether the method you use to sample recruitment deviates matches the actual distribution of recruitment deviates?", which it seems to fairly well. Maybe actual past deviations are a little higher than the lognormal suggests.
That can be seen a little better with finer bin size:
So if we resampled from the deviates rather than the lognormal, it looks like we'd get slightly higher recruitments, but nothing crazy (though this is on the log scale). Don't forget these plots use just the median deviates.
Here's my code just so I don't lose it!
```r
# Median recruitment deviations (log scale) from the base model MCMC
x <- base.model$mcmccalcs$devmed
# Drop the first 20 years and the last four (forecast) years
# to restrict to 1966-2019
xx <- x[-c(1:20, (length(x) - 3):length(x))]

x_val <- seq(-4, 4, by = 0.1)
hist(xx, breaks = seq(-4, 4, by = 1),
     main = "Recruitment deviations from 1966-2019")
# Scale the Normal(0, 1.4) density to expected counts:
# density * N * bin width (bin width = 1 here)
y <- dnorm(x_val, 0, 1.4) * length(xx)
lines(x_val, y, col = "red")

# Narrower bins (though sample sizes are small)
hist(xx, breaks = seq(-4, 4, by = 0.5),
     main = "Recruitment deviations from 1966-2019")
# Bin width is now 0.5, so halve the count scaling
y <- dnorm(x_val, 0, 1.4) * length(xx) / 2
lines(x_val, y, col = "red")
```
Hmmm.... What do we mean by (p62) "(no recruitment deviation)" here?
It contradicts p46 and what's in the management presentation, so I assume it should be changed.
I think that is a mistake.
Regarding my question above: or is it just average recruitment assumed for 2022 (and 2021?) because it doesn't really matter, since those fish won't show up in the projections anyway?
Otherwise, we have to explain how the random future recruitment deviations jibe with the MCMC samples. Either each MCMC sample also gets its own random deviation, or we do one deviation and apply it to all samples (seems bad), or we do multiple draws for each MCMC sample (seems tedious). Though, again, I don't think it will actually impact any results, given the fish will be so small.
I don't know the answer to that question. Kelli, do you? I would have thought that recruitment forecasts are taken directly from the stock-recruit curve (thus no forecast recruitment deviations are estimated), and that the variation arises strictly from the assumed sigmaR in an MLE context, and in an MCMC context from the different posterior stock sizes, which change future recruitment by landing on different spots of the stock-recruitment curve (hence we have a posterior distribution for forecast recruitment).
I want to repeat my above figures using all the MCMC samples, not just the medians. That would better address the original comment, I think. For 2022.
Is there still a question here for me? I feel like it is easiest to think of the forecast in terms of recruitment deviations, which are parameters in the forecast: lognormally distributed based on sigmaR when you do MCMC, and fixed at zero in MLE.
I think we can close/archive this. I was just chatting with Rick, and he said he's thought about resampling rec devs but noted: "The problem is that it is not differentiable, so it only works with MCMC". Meaning it could work for us if the code were developed (and we even thought that was a reasonable way forward; I'm not really sure about that myself, but the option might be useful for explorations).
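If anyone wants to prototype that outside SS, here is a minimal sketch of resampling "empirical" deviates per MCMC sample (all names and dimensions are hypothetical; `devs` stands in for a matrix of posterior rec devs with rows = MCMC samples and columns = years):

```r
set.seed(42)

# Stand-in for posterior recruitment deviations (log scale):
# 1000 MCMC samples (rows) x 54 years, e.g. 1966-2019 (columns)
devs <- matrix(rnorm(1000 * 54, mean = 0, sd = 1.4), nrow = 1000)

n_fore <- 3  # number of forecast years, e.g. 2021-2023

# For each MCMC sample, draw forecast deviates by resampling that
# sample's own past deviates with replacement
fore_devs <- t(apply(devs, 1, sample, size = n_fore, replace = TRUE))

dim(fore_devs)  # one set of n_fore forecast deviates per MCMC sample
```

This keeps each forecast draw tied to its own posterior sample, which matches the "each MCMC sample also gets its own deviation" option discussed above.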
I also want to plot, say, 100 MCMC samples of the recruitment time series like I did in #747. But next year....
For 2022: these ideas weren't really discussed at the 2021 SRG meeting, but they are in the 2021 SRG report as "recommendations" to look into. The resampling idea may not be possible, though, given the comment above about differentiability, and maybe it isn't needed if I do the plots I suggested above....
I'm involved in a Master's student project, starting shortly, that will look at alternative ways to realistically forecast recruitment (in SS and otherwise), so we could always use that as an ongoing research item if we don't have time to do much of this ourselves before next year.
From his email: