CamFreshwater / synchSalmon


Parameterizing Outcome Uncertainty #9

Closed CamFreshwater closed 5 years ago

CamFreshwater commented 5 years ago

I'm attempting to parameterize outcome uncertainty for Fraser sockeye using the TAC and catch data present in the Pacific Salmon Commission management tables. However, I've run into a few issues and want to run my plan by everyone.

First, TACs are calculated each year as abundance minus the escapement target, management adjustment, test fishing, and the aboriginal fishery exemption (AFE). For now I'm excluding the AFE, even though it's technically catch, because it seems to be treated separately in these management tables. Alternatively, I could recalculate TAC and catch to include the AFE.
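
A back-of-envelope version of that bookkeeping in R, with hypothetical variable names and illustrative numbers (the AFE excluded, as described above):

run_size <- 1e6                 # in-season abundance estimate
escapement_target <- 4e5
management_adjustment <- 1e5
test_fishing <- 1e4
tac <- run_size - escapement_target - management_adjustment - test_fishing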

Second, TACs vary over the year and frequently drop to zero by the end of the season as forecasted abundances fail to materialize. Since catch occurs throughout the season, this means that outcome uncertainty will not only be variable but also likely biased. I'm not sure using end-of-season TACs is appropriate, because a late-season drop to zero seems to be a post-season recognition that the TAC should have been zero rather than a management decision intended to constrain catch at the time.

One alternative would be to use the TAC and catch achieved at the 50% migration date. Note that we are already parameterizing forecast uncertainty based on run size estimates at the 50% point, under the assumption that it marks a point when in-season management actions can still be altered. In my mind these values may be a more reasonable proxy for outcome uncertainty because they represent decisions made with real, but incomplete, information; however, we're still selecting a value somewhat arbitrarily. Thoughts?

seananderson commented 5 years ago

To throw something else into the mix, in our 2015 Ecol. Appl. paper we based our approach on Pestes et al. 2008, using a beta distribution. The problem with a lognormal distribution is that it could end up implementing a realized catch that exceeds abundance, right? At small values I'm sure it would be fine. The beta will give you a number between 0 and 1 centered on the TAC, so to use it you would have to express the TAC as a fraction of true abundance. The SD of the beta would then have to be either based on past work (e.g., Pestes et al.), eyeballed from real data (which is what they did), or estimated by fitting a beta distribution.
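
A minimal sketch of that approach (function and argument names are mine; the mean/SD pair is converted to shape parameters by the standard method of moments):

beta_ou_catch <- function(tac, abundance, sigma = 0.1) {
  mu <- tac / abundance  # TAC expressed as a fraction of true abundance
  alpha <- mu^2 * ((1 - mu) / sigma^2 - 1 / mu)
  # realized catch is bounded between 0 and true abundance
  abundance * rbeta(1, shape1 = alpha, shape2 = alpha * (1 / mu - 1))
}

set.seed(42)
beta_ou_catch(tac = 8000, abundance = 20000)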

CamFreshwater commented 5 years ago

Good point on beta vs. lognormal distribution. It will be easy enough to switch to a beta distribution unless anyone feels strongly about sticking with lognormal. I've already converted the retrospective TAC and catch estimates to harvest rates and have been playing around with different SD values for the beta. By eyeballing histograms I've found a range that seems plausible (~0.06, slightly lower than the 0.1 in Pestes et al.).

Unfortunately the underlying data are still making me pause. Specifically, even mid-season TAC values are often zero, yet realized harvest rates in those years are still at least 0.02-0.1. So it seems as though we should include a bias in outcome uncertainty. This is easy enough for a lognormal, and I believe it can be done for a beta using a non-centrality parameter (though I'm not sure about the best way to estimate what that should be).
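
For the lognormal case, that could be as simple as shifting the mean of the deviate (the bias and SD values here are illustrative, not estimated from the management tables):

# a positive mean on the log scale pushes realized HR above the target
realized_hr <- function(target_hr, bias = 0.05, sigma = 0.1) {
  target_hr * exp(rnorm(1, mean = bias, sd = sigma))
}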

However, it doesn't seem realistic to implement the same bias in every year. In the observed data, years with zero TAC are the primary reason for the positive bias, and zero-TAC years are relatively rare in the simulated data. Is it unnecessarily complex to specify different outcome uncertainties depending on whether the fishery is open or closed? Alternatively, we could just introduce a slightly weaker bias than the observed data would suggest. I can't imagine either option will make a huge difference, but I'd like to make this as robust as possible.

carrieholt commented 5 years ago

I used a beta distribution for outcome uncertainty in harvest rates in Holt and Peterman 2008, but stopped after that because the distribution generates implausible shapes at very low and very high mean harvest rates (e.g., below ~5% and above ~95%). In that paper I input means and variances for realized harvest rates to derive the beta parameters, but it generated U-shaped distributions on occasion. Just be careful. Also, we do want to apply OU to TAC rather than harvest rates, correct, since TAC is what is controlled, not harvest rates? I thought you wanted to avoid switching back and forth too much? I can dig back in my notes to when we last switched OU...
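
A quick way to see when this happens (a sketch with made-up values): with a mean-and-SD parameterization, extreme means push the method-of-moments shape parameters below 1, which is exactly the J- or U-shaped case:

beta_shapes <- function(mu, sigma) {
  alpha <- mu^2 * ((1 - mu) / sigma^2 - 1 / mu)
  c(shape1 = alpha, shape2 = alpha * (1 / mu - 1))
}

beta_shapes(mu = 0.50, sigma = 0.10)  # both > 1: unimodal, well-behaved
beta_shapes(mu = 0.04, sigma = 0.10)  # shape1 < 1: density spikes at 0
beta_shapes(mu = 0.04, sigma = 0.15)  # both < 1: U-shaped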

CamFreshwater commented 5 years ago

You're correct that OU is currently being applied to TAC because we would otherwise have to do multiple transformations in the TAM rule calculation. When we simplified it, I double-checked that the parameterization we selected based on harvest rate was consistent with catches, and it was, but this clearly wouldn't be the case with a beta distribution and we'd have to switch back to a more complicated TAM rule function. There's nothing necessarily wrong with that, I think, but it will make the methods (and code) a little more cumbersome to explain. So I guess the question is whether the benefits of using a beta distribution outweigh the increased complexity.

Regardless of distribution, though, if we want to reparameterize outcome uncertainty so that it's derived from recent data, we should probably address the bias issue. Does anyone have a preference on:

  1. Keeping OU centered on 0.
  2. Using two distributions, one for years where TAC = 0 and one where TAC > 0.
  3. Using one distribution with a weak positive bias, intermediate between estimates that include and exclude TAC = 0 years.

If not, I'll go with option 3.

seananderson commented 5 years ago

@CamFreshwater , I wouldn't get too fancy with introducing bias here. Unless we think this is a main feature driving the conclusions of the paper (I doubt it is, given the point of the paper), it would just be a distraction. (That decision might be different in the context of using this model for other purposes in real life.)

@carrieholt , good point on the U shape. Based on the following, parameterized in terms of a mean and standard deviation, I think it should be fine as long as the standard deviation is kept small (< 0.1):

library(manipulate)  # interactive sliders; requires RStudio

x <- seq(0.01, 0.99, length.out = 300)

manipulate({
  # method-of-moments conversion from mean (mu) and SD (sigma) to beta shape
  # parameters; shape2 follows from shape1 via alpha * (1 / mu - 1)
  alpha <- mu^2 * (((1 - mu) / sigma^2) - (1 / mu))
  y <- dbeta(
    x = x,
    shape1 = alpha,
    shape2 = alpha * (1 / mu - 1))
  plot(x = x, y = y, type = "l", ylim = c(0, max(y)))},
  mu = slider(0.02, 0.98, 0.8),
  sigma = slider(0.05, 1, 0.08))

(must be run in RStudio)

I agree switching back and forth between TAC and harvest rate isn't ideal, and the lognormal will be fine as long as TAC is far from true abundance and the SD isn't too big, so either could work here. As TAC approaches true abundance, though, using something capped at 100% becomes important.
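
A quick numerical check of that point (illustrative values): with a target at 90% of true abundance, lognormal draws can exceed 100% of abundance while beta draws cannot:

set.seed(1)
mu <- 0.9
sigma <- 0.1
alpha <- mu^2 * ((1 - mu) / sigma^2 - 1 / mu)
max(mu * exp(rnorm(1e4, mean = 0, sd = sigma)))                 # can exceed 1: catch > abundance
max(rbeta(1e4, shape1 = alpha, shape2 = alpha * (1 / mu - 1)))  # never exceeds 1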

CamFreshwater commented 5 years ago

In case anyone wants to explore this further, I just pushed an Rmd to the repo that walks through the different options, i.e., applying normal OU to TAC, normal OU to HR, and beta OU to HR. The first two options converge on one another, which is good, and the third basically demonstrated to me that it will be feasible to adjust the model to accept a beta distribution (but it will involve back-calculations, since we need to do all the TAM bookkeeping with TACs).

As expected, realized catches for beta and normal are similar at moderate ERs, but diverge at lower/higher levels. For this particular paper I don't think there's going to be a huge impact on performance metrics when picking one over the other. However, I can't easily parameterize OU using observed Fraser sockeye TACs and catches (bimodal and messy), but I can using target vs. realized ERs. If that's a reason to adjust the model to apply OU to ERs (rather than TACs), I would lean towards using a beta distribution to be more robust at very low exploitation rates.

seananderson commented 5 years ago

The beta is going to break down right at 0 (as would the lognormal). You'd have to turn zeros into some small number or use a ZOIB (zero-one-inflated beta). I don't think you want the complexity of using a ZOIB here!
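
The nudge could be as simple as this (the floor value is arbitrary):

target_er <- c(0, 0.02, 0.45)
adj_er <- pmax(target_er, 1e-4)  # keeps rbeta()/dbeta() away from exactly 0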

carrieholt commented 5 years ago

The Rmd says "Simulate the harvest process under the following constraints: 1. Always start with a target ER." However, in our case aren't we starting with a TAC, since the ER from the TAM rule is converted to a TAC before removing US catch (& test fishery, etc.), which happens before OU? Also, I noticed that the beta distribution of single-CU catches has a very long left tail (in the last histogram). Is this realistic? When you parameterize the beta OU for FR CUs, check that the tails look realistic.

CamFreshwater commented 5 years ago

For the first point, it sort of depends on how you define start. In this case what I meant is that even with the TAM rule the first step is selecting an exploitation rate based on the FRPs (i.e., minimum, maximum, or something in between based on an escapement goal). In other words, regardless of whether your HCR is fixedER or TAM, you begin by transforming an exploitation rate into a TAC.
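
That common first step, as a one-liner with hypothetical names:

tac <- target_er * run_size_estimate  # ER chosen by the HCR, converted to a TAC before OU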

As for the long tails, they do seem to be realistic. Below are histograms of realized harvest rates and realized catches over the past ~10 years for the Fraser aggregate as a whole, as well as for the late MU.

[image: histograms of realized harvest rates and catches for the Fraser aggregate and the late MU]

ann-marieH commented 5 years ago

Have I mentioned/explained the role of "LAERs" as part of the TAM rule?


CamFreshwater commented 5 years ago

Good point. The LAERs are in the model's TAM rule so that some exploitation is simulated even when the TAC should theoretically be 0; however, I didn't account for them when parameterizing outcome uncertainty.
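
As a rough sketch of that mechanism (the function name and LAER value are hypothetical):

apply_laer <- function(tam_er, laer = 0.1) {
  max(tam_er, laer)  # some exploitation occurs even when the TAM rule implies TAC = 0
}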

At this point I don't think it's worth trying to parameterize OU in catches (rather than ERs), because I've already tweaked the model to draw from beta distributions. That being said, I'll double-check how we're parameterizing the beta distribution for outcome uncertainty on ER by adjusting the target ERs.

Thanks for the flag!