pratikunterwegs opened this issue 9 months ago
I'm not a fan of (1) as it's lossy. For simplicity, I would likely go with (2) but there could also be ...
I'd still lean towards (2) for its simplicity, but could be nudged towards (3) or (4) if you thought this was something users would want.
If you went with confidence intervals, I'd consider adding upper/lower CI arguments to the function signature (with defaults).
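As a rough sketch of what those arguments might look like (hypothetical wrapper and argument names, not an existing {finalsize} signature; the body uses the homogeneous-mixing final-size equation $z = 1 - e^{-R_0 z}$ purely as a placeholder):

```r
# Hypothetical summary wrapper if option (1) were chosen:
# quantile bounds exposed as arguments with sensible defaults.
final_size_summary <- function(r0_samples, ci_lower = 0.025, ci_upper = 0.975) {
  # placeholder: solve z = 1 - exp(-R0 * z) by fixed-point iteration
  # (homogeneous special case, standing in for the real model)
  sizes <- vapply(r0_samples, function(r0) {
    z <- 0.5
    for (i in seq_len(100)) z <- 1 - exp(-r0 * z)
    z
  }, numeric(1))
  c(
    mean  = mean(sizes),
    lower = unname(quantile(sizes, ci_lower)),
    upper = unname(quantile(sizes, ci_upper))
  )
}
```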
Tagging @Bisaloo / @chartgerink for a whole-system overview, as it would be good to land on a consistent approach (where possible) across the whole ecosystem of packages.
Thanks @TimTaylor - I think (2) is then the best option. (3) would make filtering and summarising (a bit) more tedious, while (4) would add a hard dependency.
I think (2) works well for {finalsize}, as the data.frame size is restricted to $N \times M$ rows for $N$ samples of $R_0$ and $M$ demography-susceptibility groups (in contrast, the {epidemics} output has multiple timepoints as well, making it much longer).
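As a sketch of that $N \times M$ long format under option (2) (hypothetical column names and placeholder values, not the actual {finalsize} output):

```r
# Hypothetical long-format output for option (2):
# N samples of R_0 crossed with M demography-susceptibility groups.
r0_samples <- c(1.5, 2.0, 2.5)                  # N = 3
groups     <- c("[0,20)", "[20,65)", "65+")     # M = 3

# expand.grid() gives one row per (R_0 sample, group) combination
out <- expand.grid(r0 = r0_samples, demo_grp = groups)
out$p_infected <- runif(nrow(out))  # placeholder for the computed final sizes
nrow(out)  # N * M = 9 rows
```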
IMO, if we move towards passing a list of `susceptibility`, `p_susceptibility`, or contact matrices, similar to 'scenarios' in {epidemics}, a nested `<data.table>` would be the way to go.
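A minimal sketch of the nested `<data.table>` idea, assuming list-columns hold the per-scenario matrices (illustrative column names only; the commented line shows how results might be attached):

```r
library(data.table)

# One row per scenario; a list-column holds that scenario's contact matrix
scenarios <- data.table(
  scenario       = c("pessimistic", "optimistic"),
  r0             = c(2.0, 1.3),
  contact_matrix = list(matrix(1, 2, 2), matrix(1, 2, 2))
)

# Results could then be attached as a further list-column, e.g.
# (hypothetical call, assuming a vectorised final_size()):
# scenarios[, result := Map(final_size, r0 = r0, contact_matrix = contact_matrix)]
```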
Thanks @TimTaylor for the tag.
My main question is: What need does this solve for whom?
I know there is an implicit need you know of. If we are doing agile development, a clearly articulated underlying user story makes it easier to meaningfully contribute. I have to fill in a lot of gaps now.
In case the user story is more along the lines of "As a researcher, I want to provide multiple values of $R_0$, so that I can generate data in one function run that I can process further" I would opt for option 2.
If the user story is more along the lines of "As a researcher, I want to model average estimates of final size given a set of $R_0$, so that I can use these estimates in policy papers directly" I would opt for option 1.
If the need and benefit is completely different, I dunno what I'd prefer.
PS: "Lossy" here means loss of information, like in compression algorithms @TimTaylor?
In effect, yep: going from $N$ outputs to 3 (lower, mean, upper) and then not being able to go the other way.
> My main question is: What need does this solve for whom?
@chartgerink the user requirement is laid out in this Discussion. This is updated in the issue text.
Since the included code snippet also tends towards option (2), we'll provisionally go with that one.
More generally, since our packages are relatively new and have few users (that I know of), we try to anticipate user requirements within dedicated discussion groups, and raise relevant issues.
There is no urgency to implement this in a very short time frame. Let's take a couple of days to let this important design decision simmer; this will allow us to calmly think through the implications of each solution and avoid potential implement/revert cycles in the future.
That's fine by me. Am I correct in understanding that this is mostly to do with the return type, or does it also relate to the inputs? If only the return type, I can get working on the internal changes for now. No rush either way.
Thanks for these suggestions. I agree that we should avoid (1): in general, I don't think we should provide summary statistics as the output of a modular simulation model. If the user puts a vector of 10 $R_0$ values into a simulation function, I think they should get 10 sets of results out by default. (2) seems OK as an option, although I think it would still be useful to have some cross-package functions that minimise the user effort (i.e. lines of code, format wrangling) required to achieve what they want.
In case it's useful, here are some common use cases I'd anticipate for this vector functionality in {finalsize} (and other packages):
Some of these are no doubt relevant to other packages too, so it would be nice to have consistency for users across packages (e.g. if they've got pipelines set up for {epidemics}, they can just drop the $R_0$ vector into {finalsize} and use the same summarisation functions on the output).
Thanks, just to clarify (for myself mostly), in {finalsize}:
- Scenarios: this issue would allow passing
Personally, for the above bullet point, as a user of a vectorised {finalsize} I'd define some kind of object to store my scenario parameters (maybe a data.frame, with `scenario` as a column), then pass the `R_0` column to {finalsize}, then attach the output to the storage data.frame in some way. This is obviously trickier when dealing with more complex scenarios/outputs, in which case a list of vectors may be more sensible, especially if we're standardising this step elsewhere...
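The workflow described above might look roughly like this (hypothetical output columns, and `final_size()` vectorised over `r0` as proposed in this issue, so the calls are left commented):

```r
# Store scenario parameters in a data.frame, with scenario as a column
params <- data.frame(
  scenario = c("low", "medium", "high"),
  r0       = c(1.3, 1.8, 2.4)
)

# Pass the r0 column to the (proposed) vectorised final_size(), then
# attach the output back; this assumes the output carries an r0 column
# that can be merged on -- purely illustrative
# out    <- final_size(r0 = params$r0, contact_matrix = cm, ...)
# merged <- merge(params, out, by = "r0")
```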
Thanks. There will be some differences with {epidemics} in terms of how vectors of parameters are passed, then, as {finalsize} really focuses on $R_0$, but we could equally pass lists of `susceptibility` and `p_susceptibility` as well, while keeping the existing functionality to pass a single matrix, $R_0$, etc. Will make a small Gist soon.
This issue requests that `final_size()` should accept a vector of $R_0$ values in the argument `r0`. This stems from this Discussion and parallels similar changes coming to {epidemics}. Two return type options I can think of:
I think option (1) is neat and compact, but I'm happy to implement (2) or something else. Thoughts @adamkucharski, @TimTaylor?