CDCgov / multisignal-epi-inference

Python package for statistical inference and forecasting of epidemiological models using multiple signals
https://cdcgov.github.io/multisignal-epi-inference/

Clean up Rt infections renewal model #204

Closed · damonbayer closed this 1 week ago

damonbayer commented 1 week ago

Cleaning up this model in preparation for https://github.com/CDCgov/multisignal-epi-inference/issues/202.

codecov[bot] commented 1 week ago

Codecov Report

Attention: Patch coverage is 92.85714% with 1 line in your changes missing coverage. Please review.

Project coverage is 92.51%. Comparing base (2eeddc6) to head (8d82b48). Report is 2 commits behind head on main.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| model/src/pyrenew/latent/infectionswithfeedback.py | 66.66% | 1 Missing :warning: |

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main     #204      +/-   ##
==========================================
- Coverage   94.76%   92.51%   -2.26%
==========================================
  Files          40       40
  Lines         898      895       -3
==========================================
- Hits          851      828      -23
- Misses         47       67      +20
```

| Flag | Coverage Δ |
|---|---|
| unittests | 92.51% <92.85%> (-2.26%) :arrow_down: |

Flags with carried forward coverage won't be shown.


damonbayer commented 1 week ago

Main comment: Why is the test image being removed?

1. The test was failing, and I can't get the image I produce locally to match the one produced in CI.
2. I don't really understand the point of the test.
3. If we want to keep a version of the test, we should just evaluate the underlying data the figure is based on, not the actual image file.
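A minimal sketch of what point 3 could look like as a pytest-style test. The function, sample values, and expected numbers below are illustrative stand-ins, not code from the repo:

```python
# Sketch: compare the numbers the figure would be drawn from, not the
# rendered image file. All names and values here are hypothetical.
import numpy as np


def compute_infections():
    """Stand-in for the model code that produced the plotted data."""
    rt = np.array([1.2, 1.1, 0.9, 0.8])  # illustrative Rt trajectory
    i0 = 10.0  # illustrative initial infections
    return i0 * np.cumprod(rt)


def test_infections_data_instead_of_image():
    observed = compute_infections()
    expected = np.array([12.0, 13.2, 11.88, 9.504])
    # Numerical comparison with tolerance, instead of pixel comparison.
    np.testing.assert_allclose(observed, expected, rtol=1e-6)
```

This kind of test stays stable across matplotlib or font changes, which are a common cause of image-comparison failures between local runs and CI.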
AFg6K7h4fhy2 commented 1 week ago

> Main comment: Why is the test image being removed?
>
> 1. The test was failing, and I can't get the image I produce locally to match the one produced in CI.
> 2. I don't really understand the point of the test.
> 3. If we want to keep a version of the test, we should just evaluate the underlying data the figure is based on, not the actual image file.

Agree on point 3 unless there is something I am missing @gvegayon .

gvegayon commented 1 week ago

> > Main comment: Why is the test image being removed?
> >
> > 1. The test was failing, and I can't get the image I produce locally to match the one produced in CI.
> > 2. I don't really understand the point of the test.
> > 3. If we want to keep a version of the test, we should just evaluate the underlying data the figure is based on, not the actual image file.
>
> Agree on point 3 unless there is something I am missing @gvegayon .

I 100% agree with both points, so instead of removing the test completely, I would drop in a replacement test that checks the data underlying the figure.

damonbayer commented 1 week ago

I'm just not yet sure what a good test would be. We don't test anything related to the posterior anywhere else, so I would be more inclined to push "develop tests related to posterior inference" to a separate issue.

gvegayon commented 1 week ago

How about saving the data that it generates now? Just like the image approach, but with model outcomes instead of an image.
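A rough sketch of that golden-file ("snapshot") idea, analogous to the stored reference image but for arrays. The paths, file name, and tolerance are assumptions for illustration, not repo conventions:

```python
# Sketch of a golden-file test for model outcomes: save a reference array
# once, then compare future runs against it. Paths/tolerances are hypothetical.
from pathlib import Path

import numpy as np

GOLDEN = Path("tests/golden/rt_infections.npy")


def save_golden(samples: np.ndarray, path: Path = GOLDEN) -> None:
    """Run once to record the current model output as the reference."""
    path.parent.mkdir(parents=True, exist_ok=True)
    np.save(path, samples)


def check_against_golden(samples: np.ndarray, path: Path = GOLDEN) -> None:
    """Compare a new run against the stored reference."""
    expected = np.load(path)
    # A loose tolerance absorbs small numerical drift across package versions.
    np.testing.assert_allclose(samples, expected, rtol=1e-4)
```

The tolerance choice is the key knob: too tight and minor dependency updates break the test (the concern raised below); too loose and real regressions slip through.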

damonbayer commented 1 week ago

I'm willing to try it, but I worry that we could get test failures based on very minor package updates. I'm also not sure if the RNG would lead to the same output on all platforms.

There is a Teams thread about this somewhere (for Stan). I'll try to find it.

Edit: https://teams.microsoft.com/l/message/19:064fb29dee654148b647dc8b9bab0782@thread.tacv2/1717450231741?tenantId=9ce70869-60db-44fd-abe8-d2767077fc8f&groupId=4f34eba7-8fcb-4a26-9b34-a434ea777f0c&parentMessageId=1717450231741&teamName=CFA-Predict&channelName=Help&createdTime=1717450231741

damonbayer commented 1 week ago

Based on the Teams discussion I referenced, I believe this could be merged (but I would need an approving review), and we can defer a wider discussion of testing posterior inference to some other time and place.

Perhaps it would be a good topic for an STF team meeting? @dylanhmorris

dylanhmorris commented 1 week ago

I agree on both counts.

dylanhmorris commented 1 week ago

> Perhaps it would be a good topic for an STF team meeting? @dylanhmorris

Can you open an issue for this in the team materials repo @damonbayer?