Closed: damonbayer closed this pull request 1 week ago
Attention: Patch coverage is 92.85714% with 1 line in your changes missing coverage. Please review.
Project coverage is 92.51%. Comparing base (2eeddc6) to head (8d82b48). Report is 2 commits behind head on main.
| Files | Patch % | Lines |
|---|---|---|
| model/src/pyrenew/latent/infectionswithfeedback.py | 66.66% | 1 Missing :warning: |
Main comment: Why is the test image being removed?
1. The test was failing, and I can't get the image I produce locally to match the one produced in CI.
2. I don't really understand the point of the test.
3. If we want to keep a version of the test, we should just evaluate the underlying data the figure is based on, not the actual image file.
Agree on point 3 unless there is something I am missing @gvegayon .
I 100% agree with both, so instead of removing the test completely, I would drop in a replacement test that checks the data underlying the figure.
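One way such a replacement test could look. This is a minimal stdlib sketch: `simulate_series` is an illustrative stand-in for whatever pyrenew code produces the plotted values, and the tolerance is an assumption, not a project convention.

```python
import math
import random


def simulate_series(seed: int, n_days: int) -> list[float]:
    """Illustrative stand-in for the model code behind the figure."""
    rng = random.Random(seed)
    total, out = 0.0, []
    for _ in range(n_days):
        total += rng.expovariate(1.0)
        out.append(total)
    return out


# Compare the numbers, not the pixels: a relative tolerance absorbs
# small numerical differences that make byte-for-byte image tests brittle.
observed = simulate_series(seed=42, n_days=5)
expected = simulate_series(seed=42, n_days=5)
assert all(math.isclose(a, b, rel_tol=1e-6) for a, b in zip(observed, expected))
print("figure data matches")
```

Testing the data rather than the rendered file sidesteps font, backend, and platform differences entirely.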
I'm just not yet sure what a good test would be. We don't test anything related to the posterior anywhere else, so I would be more inclined to push "develop tests related to posterior inference" to a separate issue.
How about saving the data it generates now? Just like the image approach, but with model outcomes.
I'm willing to try it, but I worry that we could get test failures from very minor package updates. I'm also not sure the RNG would produce the same output on all platforms.
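A hedged sketch of the snapshot idea. It uses the stdlib `random` module (which is seed-stable) as a stand-in; `generate_outcomes`, the seed, and the tolerance are all illustrative, and real posterior samples could still vary across platforms or package versions.

```python
import json
import math
import random


def generate_outcomes(seed: int, n: int) -> list[float]:
    """Illustrative stand-in for model outcomes (not pyrenew code)."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]


# "Save" a baseline once; in practice this would be a small file
# checked into the repo, regenerated deliberately when the model changes.
baseline_snapshot = json.dumps(generate_outcomes(seed=123, n=4))

# Later runs are compared against the saved baseline within a tolerance,
# so tiny floating-point drift does not fail the test outright.
rerun = generate_outcomes(seed=123, n=4)
loaded = json.loads(baseline_snapshot)
assert all(math.isclose(a, b, rel_tol=1e-9) for a, b in zip(rerun, loaded))
print("snapshot matches")
```

The tolerance is the knob here: too tight and the test breaks on minor updates, too loose and it stops catching regressions.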
There is a Teams thread about this somewhere (for Stan). I'll try to find it.
Based on the Teams discussion I referenced, I believe this could be merged (but I would need an approving review), and we can defer a wider discussion of testing posterior inference to some other time and place.
Perhaps it would be a good topic for an STF team meeting? @dylanhmorris
I agree on both counts.
Can you open an issue for this in the team materials repo @damonbayer?
Cleaning up this model in preparation for https://github.com/CDCgov/multisignal-epi-inference/issues/202.