impactlab / caltrack

Shared repository for documentation and testing of CalTRACK methods
http://docs.caltrack.org

What are the stages for methodological inspections, and what benchmark values would be most valuable to develop? #34

Closed: matthewgee closed this issue 7 years ago

matthewgee commented 7 years ago

This issue is in response to the discussion among beta testers that the monthly analysis summary statistics we used during the first round of beta tests were too noisy a signal for identifying issues with the specification. We need a sequence of outputs testers can produce that makes it easier to identify problems with the spec at various stages of testing. To resolve this issue, we should identify the specific stages of analysis at which outputs should be produced, and a list of the outputs for each stage.

houghb commented 7 years ago

@matthewgee Can you write a description of this issue? I'm not clear exactly what this issue is referencing...

matthewgee commented 7 years ago

@houghb I updated the description. Let me know if that helps. To get things started, here's what I propose for the five stages of outputs that would be helpful to generate.

  1. Summary statistics of prepared data
  2. Summary statistics of fit models in stage one
  3. Summary statistics of selected models in stage one
  4. Summary statistics of stage two values
  5. Summary statistics of aggregation step on stage two values
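For concreteness, here is a minimal sketch (Python/pandas, since the spec itself is language-agnostic) of the kind of summary statistics a tester might report for the first two of these stages. The column names (`usage`, `tempF`) and model attributes (`r_squared`, `cvrmse`) are placeholders for illustration, not part of the CalTRACK spec.

```python
import pandas as pd


def prepared_data_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Stage 1: summary statistics of prepared (cleaned) usage data.

    Assumes a DataFrame with hypothetical columns 'usage' (e.g. daily kWh)
    and 'tempF' (average daily temperature); adjust to whatever the actual
    data preparation step outputs.
    """
    return pd.DataFrame({
        "n_periods": [len(df)],
        "n_missing_usage": [df["usage"].isna().sum()],
        "mean_usage": [df["usage"].mean()],
        "std_usage": [df["usage"].std()],
        "mean_tempF": [df["tempF"].mean()],
    })


def fitted_model_summary(models) -> pd.DataFrame:
    """Stage 2: summary statistics across candidate model fits.

    Assumes each element of `models` exposes hypothetical attributes
    `r_squared` and `cvrmse`; substitute whatever fit metrics the
    stage-one models actually report.
    """
    metrics = pd.DataFrame({
        "r_squared": [m.r_squared for m in models],
        "cvrmse": [m.cvrmse for m in models],
    })
    return metrics.describe()
```

The later stages (selected models, stage-two values, aggregation) would follow the same pattern: a small table of distributional statistics that testers can compare across implementations without exchanging raw data.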

Before we spend time coming up with a list of summary statistics for each of these modeling stages, let's first decide on two key things for this issue:

  1. Does it still make sense to recommend/dictate a set of test statistics that get generated by testers, given the goals of the daily methods tests?
  2. If it makes sense, are these the right five stages at which to generate outputs? Are some of these stages largely redundant or undifferentiated, and could therefore be removed? Are there missing stages of analysis that need to be added?