DARPA-ASKEM / program-milestones

Repository for materials related to program milestone hackathon and evaluation events

Epi Scenario 1: Modeling SARS-CoV-2 infections in White-tailed Deer #71

Open djinnome opened 6 months ago

djinnome commented 6 months ago

To date, COVID-19 modeling efforts have focused almost exclusively on human populations. There is now compelling evidence that SARS-CoV-2, the virus that causes COVID-19, spreads from humans to white-tailed deer (Odocoileus virginianus), between white-tailed deer, and possibly from white-tailed deer back into human populations. There is evidence that infected white-tailed deer may serve as a reservoir for nearly extinct variants of concern (e.g., the Delta variant), some of which are associated with greater clinical severity in human populations. Additional mutations to these nearly extinct variants of concern within the wildlife reservoir could make them more transmissible, in addition to causing more severe infection.

Decision makers are interested in (1) better characterizing infection dynamics within the white-tailed deer (WTD) population to understand risks such as re-importation of nearly extinct variants of concern back into human populations; and (2) understanding the potential efficacy of interventions aimed at decreasing the SARS-CoV-2-infected deer population, in order to reduce the future likelihood of transmission from the deer population back to the human population, which, were it to happen at any meaningful level, could drive new COVID-19 waves. To support this, they have asked you to find a model of COVID-19 in the WTD population that already supports, or can be modified to support, these types of interventions.

You have identified three compartmental models of SARS-CoV-2 transmission within the WTD population that are relevant to your task. Two are published as pre-prints, and one is a very recent publication. Given the novelty of these models, you want to better understand how they are similar and different in terms of their assumptions, strengths, limitations, and fit-for-purpose.

For Q1-5, use only the above 3 publications as source material.

  1. Model Extraction: Begin by extracting the three models, available at the links above. For each model, note the time to extract the model and get it into an executable state that can run a simple test simulation and get sensible results. You may choose the initial conditions and parameter values for the test simulation; they don’t need to be realistic, but the results do need to make sense given the values you choose. For workbench modelers, model extraction time may include human-in-the-loop curation, and for baseline modelers, this time may include debugging code. For each model, provide simulation results from your test simulation.

For baseline modelers, model extraction is defined as follows:

- Writing out or capturing the equations describing the model, or drawing out the model structure. (You can write them out by hand, but be sure to capture the image for the work product.)
- Writing out definitions of all variables and parameters, with units
- Finding default values for parameters, initial values for variables, and whatever else is needed to initiate/run the model
- If the model is not already installed on the VM, finding and installing code to run it, or producing your own code to run the model. The code should be deposited in your work product SharePoint folder.

For workbench modelers, model extraction is defined as follows:

- Ingesting the model from the source paper or code into the workbench
- Capturing the set of equations describing the model in the workbench
- Gathering definitions of all variables and parameters, with units
- Gathering default values for parameters, initial values for variables, and whatever else is needed to initiate/run the model
- Ensuring the model is executable in the workbench
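
As a concrete illustration of what a passing test simulation might look like, here is a minimal sketch of an SIR-type model in Python with SciPy. The model structure, parameter values, and initial conditions are all made up for illustration; they are not taken from the three papers.

```python
# Hypothetical test simulation for an extracted WTD compartmental model.
# All rates and initial values below are illustrative placeholders.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """Basic SIR right-hand side: S' = -beta*S*I/N, I' = beta*S*I/N - gamma*I, R' = gamma*I."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N,
            beta * S * I / N - gamma * I,
            gamma * I]

beta, gamma = 0.4, 0.1        # made-up transmission and recovery rates (1/day)
y0 = [9990.0, 10.0, 0.0]      # 10 infectious deer in a population of 10,000
sol = solve_ivp(sir, (0, 120), y0, args=(beta, gamma))

# Sanity checks a "sensible results" test simulation should pass:
S, I, R = sol.y
assert abs((S + I + R)[-1] - sum(y0)) < 1e-3   # population is conserved
assert R[-1] > R[0]                            # recovered compartment grows
```

Plotting `S`, `I`, and `R` against `sol.t` then gives the test simulation plots requested in the work product.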

  2. Model Comparison: Do a model comparison based on key differences in assumptions, strengths, limitations, and distinguishing characteristics. Based on this information, rank each model in terms of its relevance and fit-for-purpose in this context.

| Model | Distinguishing characteristics | Assumptions | Strengths | Limitations | Rank fit-for-purpose (1 = most suitable; 3 = least suitable), with reasoning |
|---|---|---|---|---|---|
|   |   |   |   |   |   |
  3. Structural Model Comparison: Now perform a structural model comparison between each pair of models. By structural comparison, we seek to understand how compartments and transition pathways overlap or diverge between models. Feel free to create diagrams or use equations in your response.
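
One lightweight way to make the structural comparison concrete is to represent each model as sets of compartments and transition edges and then take set differences. The compartments and edges below are illustrative placeholders, not the actual structure of the three models.

```python
# Hypothetical pairwise structural comparison: each model is reduced to its
# set of compartments and its set of directed transition edges.
model_a = {"compartments": {"S", "I", "R"},
           "transitions": {("S", "I"), ("I", "R")}}
model_b = {"compartments": {"S", "E", "I", "R"},
           "transitions": {("S", "E"), ("E", "I"), ("I", "R")}}

shared = model_a["compartments"] & model_b["compartments"]       # overlap
only_b = model_b["compartments"] - model_a["compartments"]       # divergence
shared_edges = model_a["transitions"] & model_b["transitions"]

print("shared compartments:", sorted(shared))
print("extra in model B:", sorted(only_b))       # e.g. an exposed/latent stage
print("shared transitions:", sorted(shared_edges))
```

The same diff, drawn as two overlaid compartment diagrams, is an equally valid way to present the answer.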

  4. Gap Analysis: Referring to the high-level decision maker objectives you have been given, what are the key gaps between these existing models and your modeling needs (if any)?

  5. Model Selection: Based on Q1-4, select the model you think is the most appropriate starting point for supporting decision makers, and explain your reasoning. (Note that there is no single right answer – this scenario is about selecting a model and being able to justify your choice with evidence.)

For Q6 and beyond, you may use any additional materials you find in the literature or associated with the 3 models linked at the beginning of the scenario.

  6. Find Parameters: Find relevant parameter values for the model chosen in Q5, and fill in the following information about sources and quality. You may use any of the papers linked in this scenario, as well as any other literature on SARS-CoV-2 or human/WTD population dynamics. If relevant, you may include multiple rows for the same parameter (e.g. perhaps you find different values from different reputable sources), with a ‘summary’ row indicating the final value or range of values you decide to use. If there are required parameters for your model that you can’t find sources for in the literature, you may find data to calibrate your model with, or make reasonable assumptions about what sensible values could be (with rationale).

| Parameter | Parameter Definition | Parameter Units | Parameter Value or Range | Uncertainty Characterization | Sources | Modeler Assessment on Source Quality |
|---|---|---|---|---|---|---|
|   |   |   |   |   |   |   |
  7. Model Forecast: Now assume you’re at the start of a COVID-19 wave in WTD in New York state at the beginning of November 2021. Set appropriate initial conditions, parameterize your model with the information you found in Q6, and make a 2-month forecast of COVID-19 dynamics in WTD. At the very least, your forecast should include the susceptible, infectious, and recovered populations. If the model you are using has additional components, include those in the forecast as well.

  8. Model Modification (Stratification): If the model you selected cannot distinguish between multiple variants, stratify it to demonstrate that it can simulate at least three variants (alpha, delta, and omicron).
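
A minimal sketch of what variant stratification could look like, assuming a simple SIR backbone: the single infectious compartment is split into one copy per variant, each with its own transmission rate. All rates and initial values are illustrative placeholders, not calibrated values.

```python
# Hypothetical variant-stratified SIR: I is split into I_alpha, I_delta,
# I_omicron, each with its own (made-up) transmission rate.
import numpy as np
from scipy.integrate import solve_ivp

VARIANTS = ["alpha", "delta", "omicron"]
BETAS = np.array([0.3, 0.45, 0.6])   # illustrative per-variant transmission rates (1/day)
GAMMA = 0.1                          # shared recovery rate (1/day)

def stratified_sir(t, y):
    """State vector y = [S, I_alpha, I_delta, I_omicron, R]."""
    S = y[0]
    I = y[1:-1]
    N = y.sum()
    forces = BETAS * I / N           # per-variant force of infection
    dS = -S * forces.sum()
    dI = S * forces - GAMMA * I
    dR = GAMMA * I.sum()
    return np.concatenate([[dS], dI, [dR]])

y0 = np.array([9970.0, 10.0, 10.0, 10.0, 0.0])   # 10 infections per variant
sol = solve_ivp(stratified_sir, (0, 60), y0)
final = sol.y[:, -1]
```

The same stratification pattern extends to cross-immunity or variant-specific severity by giving each variant its own recovery or waning parameters.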

  9. Interventions:

     a. Brainstorm three different interventions you could implement in your model that would lower the infectious population in your forecast from Q7. For this problem, you can ignore the feasibility of implementing the interventions in the real world and make hypothetical assumptions (e.g., you can assume treatments or vaccines exist even if they do not in reality). For each intervention, how would you implement it in your chosen model? Which model components are involved?

     b. Implement one of the interventions you considered, redo the forecast from Q7, and comment on the impact of the intervention compared to the baseline forecast in Q7.
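
As one hypothetical example of part (b): a culling- or treatment-style intervention can be modeled as an extra removal rate applied to the infectious compartment after an intervention start day. All parameter values here are placeholders chosen for illustration.

```python
# Hypothetical intervention sketch: infectious deer are removed at an extra
# rate `nu` once the intervention starts on day `t_start`.
import numpy as np
from scipy.integrate import solve_ivp

def sir_with_removal(t, y, beta, gamma, nu, t_start):
    S, I, R = y
    N = S + I + R
    extra = nu if t >= t_start else 0.0   # removal active only after start day
    return [-beta * S * I / N,
            beta * S * I / N - (gamma + extra) * I,
            (gamma + extra) * I]

y0 = [9990.0, 10.0, 0.0]
t_eval = np.linspace(0, 60, 61)
baseline = solve_ivp(sir_with_removal, (0, 60), y0, t_eval=t_eval,
                     args=(0.4, 0.1, 0.0, 14))    # no intervention
treated = solve_ivp(sir_with_removal, (0, 60), y0, t_eval=t_eval,
                    args=(0.4, 0.1, 0.05, 14))    # removal begins on day 14

peak_base = baseline.y[1].max()
peak_treated = treated.y[1].max()
```

Comparing `peak_base` and `peak_treated` (and the full trajectories) is one way to quantify the intervention's impact relative to the Q7 baseline.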

| Question | Inputs | Tasks | Outputs |
|---|---|---|---|
| Q1 | Linked papers | Extract equations; extract parameter values; iterate/curate extraction and execute the model until a test simulation gives reasonable results | 3 extracted models with all variables and parameters defined, with units; test simulation plots; time to do model extraction; time to execute extracted model and plot results |
| Q2-3 | Extracted models | Model comparison based on assumptions, limitations, strengths, defining characteristics; structural model comparison | Completed comparison table; time to complete table; time to do structural model comparison |
| Q4 | Results from Q1-3 | Identify gaps in the candidate models | Explanation of gaps between candidate models and decision-maker objectives |
| Q5 | Results from Q1-4 | Select a model | Selected model, with explanation |
| Q6 | Chosen model from Q5 | Find parameters, from publications or other sources | Completed parameter table; time to complete table |
| Q7 | Chosen model from Q5; parameters from Q6 | Parameterize model; set initial conditions; create 2-month forecast | Forecast results that include susceptible, infected, and recovered deer populations; time to generate forecast |
| Q8 | Parameterized model from Q7 | Stratify model by 3 variants | Stratified model; time to stratify model |
| Q9 | Selected model from Q5; parameters from Q6 | Brainstorm potential interventions and how they would be implemented; implement one intervention and compare results with Q7 | Forecast results that include susceptible, infected, and recovered deer populations, with intervention implemented; time to generate forecast |

Decision-maker Panel Questions

What is your confidence that the modeling team selected a model and associated data appropriate for the decision-making questions under consideration? Select a score on a 7-point scale.

1. Very Low
2. Low
3. Somewhat Low
4. Neutral
5. Somewhat High
6. High
7. Very High

Explanation: Decision makers evaluate whether the results include an appropriate model, with relevant parameters, starting values, etc.

The decision-maker confidence score should be supported by the answers to the following questions:

- Did modelers clearly communicate key differences between the chosen model and other candidate models? What kinds of information were provided to help you understand the differences? What kinds of information would you have wanted?
- Are results traceable to sources, and did modelers assess source quality?
- Is the model chosen for the scenario appropriate/fit-for-purpose for the given problem? Were assumptions and data associated with the model clearly communicated?