Closed jeffeaton closed 3 months ago
Suggest we merge this now; eventually we will remove all dependency on eppasm. *Question: to avoid warnings, we could either export `create_beers()` or copy that code into here.*
Could we add the function we need into the beers package? Is there something different about the method? Would what eppasm does be useful for others?
`eppasm::create_beers()` treats the last value as an open-ended group rather than an additional five-year group.
It is doing the equivalent of:

```r
eppasm_beers <- function(x) {
  c(beers::beers_sub_ordinary(x[-length(x)]), x[length(x)])
}
```
To verify:

```r
> set.seed(1)
> x <- rnorm(7)
> Amat <- eppasm:::create_beers(7)
> eppasm_beers <- function(x) {
+   c(beers::beers_sub_ordinary(x[-length(x)]), x[length(x)])
+ }
> all.equal(eppasm_beers(x), c(Amat %*% x))
[1] TRUE
```
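Since the snippet above needs the beers package installed, here is a base-R sketch of the same "keep the open-ended last group intact" pattern. Note `uniform_split()` is a stand-in for illustration only (an even split, not the real Beers ordinary coefficients):

```r
## Stand-in subdivision: splits each five-year group evenly into five
## single-year values. NOT the Beers method -- illustration only.
uniform_split <- function(x) rep(x / 5, each = 5)

## eppasm-style handling: subdivide all groups except the last, then
## append the last value unchanged as the open-ended group.
open_ended_aware <- function(x, split = uniform_split) {
  c(split(x[-length(x)]), x[length(x)])
}

x <- c(100, 120, 90, 40)   # three five-year groups plus an open-ended group

length(uniform_split(x))     # 20: every group subdivided, including the last
length(open_ended_aware(x))  # 16: last value passed through as-is
```

The difference in output length is the whole point: plain subdivision treats the final value as one more five-year group, while the eppasm-style wrapper preserves it as a single open-ended count.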
We should be mindful that our tests didn't identify that difference between leapfrog and frogger.

I had a bit of a further look at this. I think the frogger tests check that the model produces something, but not that it produces the same thing as leapfrog for the whole simulation, so it makes sense this impact was missed by the tests. We'll get some tests against PJNZ files written.
Yep agreed, we have some reference tests that check exactly this in `tests/testthat/test-reference.R`, but these are skipped at the moment because of known differences between leapfrog & frogger (though I can't remember the details of what the difference was). Are you expecting them to be identical after this PR? If so, we should remove the skip.

Note: just changed the base branch temporarily whilst we wait for #43 to be merged.
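Unskipping could look roughly like the skeleton below. The reference file path, object names, and `run_model()` arguments are hypothetical placeholders, not the actual contents of `tests/testthat/test-reference.R`:

```r
library(testthat)

test_that("frogger reproduces the leapfrog reference simulation", {
  ## Delete this skip once the models are aligned:
  skip("known differences between leapfrog & frogger")
  ## Hypothetical placeholders -- regenerate the reference data and pass
  ## the current parameter list to run_model():
  reference <- readRDS(test_path("reference", "leapfrog_output.rds"))
  actual <- run_model(parameters)
  expect_equal(actual, reference, tolerance = 1e-10)
})
```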
Yes, I have just checked and they should be aligned now. The reference data needs to be updated, and the call to `run_model()` needs to be updated to the current parameter list.
Merge after #43
This addresses: `eppasm::create_beers()` and `beers::beers_sub_ordinary()`.
I am slightly confused about the last bullet. In the leapfrog code, it uses `eppasm:::create_beers()`: https://github.com/mrc-ide/leapfrog/blob/21386731f0fc4e1932b11ed05cd5273478b49eaa/R/spectrum-inputs.R#L178-L181

In frogger, that function is replaced by `beers::beers_sub_ordinary()`: https://github.com/mrc-ide/frogger/blob/a783f8e6a228bf441522316c9ea05bab5ff4f7dc/R/spectrum_inputs.R#L254-L256

But I can't find in the git blame where that change came from.

Edit: On further review, I see the change to `beers::beers_sub_ordinary()` is described in this PR: https://github.com/mrc-ide/frogger/pull/48 (sorry).