I think the Row g (default HR) 2018 catch value is
catch.limit.quantiles["median"]
which is 725,984 t.
There are some notes somewhere (the readme maybe) on how to calculate all the values. Just flagging for now.
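As a side note, here is a minimal sketch of how that quantile could be computed, assuming out is the data frame of MCMC posteriors used later in this thread and that the forecast-catch column is named ForeCatch_2018 (both names are assumptions, not the repo's actual code):

# Median and interval of the 2018 forecast catch across MCMC samples
# (object and column names assumed; hypothetical reconstruction).
catch.limit.quantiles <- quantile(out$ForeCatch_2018,
                                  probs = c(0.025, 0.5, 0.975))
names(catch.limit.quantiles) <- c("lower", "median", "upper")
catch.limit.quantiles["median"]  # ~725,984 t in this run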
Based on Tables g and h, the plan is to update the following:
Involves editing forecast-catch-levels.r, which explains it all nicely. Hopefully....
ss3 -mceval
is starting the whole MCMC from scratch (it takes a while). Thanks.
ss3 -mceval should only take about a minute and not re-do any estimation. If you change the starter file to have
0 # run display detail (0,1,2)
you minimize the output reported, but it should look something like the following (I complained about the extra messages, and the next version of SS will have the "finished..." messages removed):
mceval counter: 1199
finished benchmark for STD quantities
finished forecast for STD quantities
process STD
mceval counter: 2000
finished benchmark for STD quantities
finished forecast for STD quantities
process STD
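By the way, if you'd rather script that starter-file change than edit it by hand, here's a minimal base-R sketch (the file name and the flag's comment text are assumed to match the snippet above; this isn't code from our repo):

# Set "run display detail" to 0 in starter.ss so that -mceval prints
# minimal output; matches on the flag's comment text.
starter <- readLines("starter.ss")
i <- grep("run display detail", starter)
starter[i] <- "0 # run display detail (0,1,2)"
writeLines(starter, "starter.ss")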
Your top line is what I have - it actually just hangs, but it is running along according to my Task Manager (sorry, I should have been more specific). It was taking longer than I'd expected. I copied all the files from mcmc/ into the new folder - that should be good, yes?
I get
reading from STARTER.SS
reading from data file
Data read sucessful 999
reading forecast file
then it hangs.
Got it. That's likely an issue with an incorrectly formatted forecast file. I'm not sure why it would have worked previously and not now, but if you open echoinput.sso in the directory where it is running, you can follow how SS is interpreting the input (search for "forecast" within the big file to get to that section).
The only thing that should be changing is the fixed catches at the very bottom, which should appear as
(example text copied from 2018.40_base_model/mcmc/forecasts/forecast-year-2020/03-350000/forecast.ss):
#_Year Seas Fleet Catch_or_F
2018 1 1 350000
2019 1 1 350000
2020 1 1 350000
-9999 1 1 0
Thanks. I didn't have a -9999.
Good that you figured it out. The -9999 line replaces the old requirement to input a value for "Number of forecast catch levels to input".
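In case it helps, here's a hedged sketch of a helper that writes that block programmatically (the function is hypothetical, not something in our scripts; the layout just follows the example above):

# Append a fixed-catch block, terminated by the -9999 sentinel, to a
# forecast file. One row per year; catch is recycled if a single value.
write.fixed.catch <- function(fore.file, yrs, catch, seas = 1, fleet = 1) {
  lines <- c("#_Year Seas Fleet Catch_or_F",
             paste(yrs, seas, fleet, catch),
             paste(-9999, seas, fleet, 0))
  cat(lines, file = fore.file, sep = "\n", append = TRUE)
}
write.fixed.catch("forecast.ss", 2018:2020, 350000)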
Okay, I've done the default HR. End up with
> head(out[,c("ForeCatch_2018","ForeCatch_2019","ForeCatch_2020")])
ForeCatch_2018 ForeCatch_2019 ForeCatch_2020
1 725983 600991 2.42711e+05
2 725983 600991 5.94064e+05
3 725983 520943 9.17780e-02
4 725983 600991 1.07505e+01
5 725983 600991 8.51849e+05
6 725983 600991 6.93928e+05
But I thought the second column would be constant (because I've now set the 2019 catch to the median of the HR value). I also get one value of -10^32 in the final column. I'm going to re-run all this anyway to double check, but if that sounds normal/crazy then let me know (don't spend long on it).
I suspect that row 3 in what you showed is a case where that MCMC sample of the population can't support a catch of 725983 in 2018 AND a catch of 600991 in 2019, so it just removes as much as possible in 2019 and goes to essentially 0 in 2020.
I don't think it makes sense to filter those results out, but if the resulting tables or figures look strange as a result of this issue, we may want to make a note of the potential reason why.
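If we ever want to see how many samples are affected, a quick hedged check along these lines would do it (column names assumed from the output above):

# Flag MCMC samples whose realized 2019 catch falls short of the
# requested fixed catch (1 t tolerance for rounding).
fixed.2019 <- 600991
short <- which(out$ForeCatch_2019 < fixed.2019 - 1)
out[short, c("ForeCatch_2018", "ForeCatch_2019", "ForeCatch_2020")]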
Thanks - that makes sense. I hadn't looked at the 2020 catch for row 3, but yes, it's essentially 0.
The -10^32 is a bit strange, but again it won't matter for calculating medians and quantiles. It was actually in the second column (not the third, as I said), and is for MCMC sample 143:
> out[143,c("ForeCatch_2018","ForeCatch_2019","ForeCatch_2020")]
ForeCatch_2018 ForeCatch_2019 ForeCatch_2020
143 725983 -1.75009e+32 -9.99999e-07
Not going to worry about it. Thanks.
Doing this again to verify it, I get
> median(out$ForeCatch_2020)
538263.5
whereas the first time I got
532476.5
I expected exactly the same answer as there's no fitting going on (so no MCMC). Could it be due to different recruitment assumptions? Forecasts for 2018 and 2019 are the same (recruitments not kicking in yet).
Rather than 'different recruitment assumptions' I should have said 'different realised recruitments'. And these occur because I didn't set a seed.
I'm a little lost on what process you just repeated that got different results.
I would have thought that any two runs of -mceval should produce identical results if they have the same input files, or if you're repeating the same process to change the input files.
I'm going to run again anyway (need to round the 0.5 tons). Will let you know.
A useful exercise for me to be doing (even though it would have been quicker for one of you guys to do it!).
When I repeated my earlier process I got the same answer as when I did it the second time (both different from the first time). The first time I ran things I wasn't quite sure exactly what to do; having documented what I did and repeated it, I now get repeatable results. I just made an error before.
So if no catch is specified it does the default harvest rule, yes? I think the caption for Table g, row h, needs that added (and maybe some further rewording).
In forecast-catch-levels.r we now have (commit f3be4b2 but I haven't pushed yet):
list(c(725984, 600991, 538263), "Default Harvest Policy", "07-default-hr"),
Previously we had the description in the form
"Default Harvest Policy: 725,984 t"
but I took that out because that value is only valid for the first year's catch. The text appears in the legend of Figure i.
Seems logical to me to take it out, but let me know if it should stay.
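For context, a sketch of the surrounding structure in forecast-catch-levels.r (the list name and the first two entries are illustrative assumptions; only the last entry is the actual line quoted above, and 03-350000 is the directory referenced earlier in this thread):

catch.levels <- list(
  list(c(0.01, 0.01, 0.01), "No Fishing", "01-0"),            # placeholder
  list(c(350000, 350000, 350000), "350,000 t", "03-350000"),  # see path above
  list(c(725984, 600991, 538263), "Default Harvest Policy", "07-default-hr"))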
Once I've pushed the new catch levels, don't forget to do
source("all.r")
before doing
build(run.fore=TRUE)
I only did the latter, so it was building tables with the old catch levels but labelling some of them with the new ones. Thoroughly confused me....
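So, for the record, the safe sequence after pulling new catch levels (function names as used above; build's positional arguments are inferred from the build(T,T) comment further down):

# Reload all R sources (including forecast-catch-levels.r) first, then
# rebuild with forecasts so the tables use the new catch levels.
source("all.r")
build(run.fore = TRUE)
# build(T, T, T), used below, presumably also re-runs retrospectives
# and extra-mcmc (argument order assumed).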
I just pushed 12 commits (some of which I referred to above). Tables g and h look correct I think, with a calculated 100% prob in the right place (I think just for row 5, 50% column of Table h).
However, Tables i and j seem to have funny catch values, for some reason changing from the ones I defined. I won't get this figured out now (heading home). Not sure why it doesn't use the correct values (but I haven't delved into the code). I may just need to rebuild more than I thought. I have merged and may re-run everything from scratch at home (or at least build(T, T, T)).
Thanks Andy for working through this iterative, and sometimes not straightforward, process of forecasting with specific catch values!
My build(T,T,T) didn't finish properly last night, but I didn't look into why. Probably me - maybe I didn't do all the steps in the readme.
I think extra-mcmc doesn't need to be run again, so build(T,T) would be enough. If it fails on the loading part (i.e. the forecasting and retrospectives ran correctly), try build() to re-run the creation of the RData files.
Thanks, trying that. The error I got last night was:
create.rdata.file: RData file found in ../../models/2018.40_base_model. Deleting...
run.extra.mcmc.models: Running model for each mcmc sample to get additional mcmc outputs...
Error in scan(file = file, what = what, sep = sep, quote = quote, dec = dec, :
line 2001 did not have 226 elements
This was after I merged everyone's changes (so it could be due to mine). Will update....
I think "line 2001 did not have 226 elements" implies that either posteriors.sso or derived_posteriors.sso didn't get deleted before a new -mceval was run, so the file has more than 2000 rows (and the additional rows may be a different length).
I don't know why those files sometimes get wiped out and other times not, but manually deleting them from any folder where -mcevals were run should correct this. Or we can try to build those into our scripts.
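A hedged sketch of building that deletion into a script (the directory layout is assumed from the forecast path quoted earlier in this thread):

# Remove stale posterior files from every forecast folder before
# re-running -mceval, so the .sso files can't accumulate extra rows.
stale <- c("posteriors.sso", "derived_posteriors.sso")
for (d in list.dirs("mcmc/forecasts", recursive = TRUE)) {
  f <- file.path(d, stale)
  file.remove(f[file.exists(f)])
}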
build()
seems to have worked okay. Chris was writing another comment (about that happening occasionally), but I went to chat to him so I think he didn't finish it!
Tables weren't quite right because I'd missed a change of index - fixed in commit 02d9a86. Still get catches changing in Table j though. Working on it.
This all works now, and Chris fixed #368. I'll close, though Aaron/Ian may want to check my calculations and notes in https://github.com/cgrandin/hake-assessment/blob/master/doc/r/forecast-find-decision-table.r
But catch numbers are now correct in tables and there are some 50% and 100% values that make sense, so I'm pretty sure the calculations are good.
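As a quick sanity check on those probabilities, something like the following works directly on the posterior samples (column names assumed from SS's derived posteriors; this isn't the code in forecast-find-decision-table.r):

# P(spawning biomass declines from 2019 to 2020) under one catch level.
mean(out$SSB_2020 < out$SSB_2019)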
I did look into this more but didn't come to any conclusive explanations of the weird values. I also looked at posteriors for fixed catches from the 2017 model and see that there were some mismatched values there too, but it seems that they were all below the fixed catch, rather than above, so didn't mess up the table.
Currently they're just set the same as last year's. Also update the caption of Table g concerning the justification for the constant catch value in Row e (if we keep it at last year's value then it doesn't need explaining).