opensafely-core / interactive-templates

Code to generate the reports generated by OpenSAFELY Interactive

Update dependencies 2024-07 #368

Closed. StevenMaude closed this 1 week ago.

StevenMaude commented 1 week ago

This should fix #362, but needs reviewing post-merge to confirm that the security alerts are cleared.

Dependabot fails to generate updates for several dependencies, so let's update everything manually.

StevenMaude commented 1 week ago

This doesn't yet update the test dependencies because that's a little more gnarly:

Edit: this was an active decision made in #40. But maybe this doesn't apply following the move of interactive into job-server?

StevenMaude commented 1 week ago

So I think my questions here reduce to:

StevenMaude commented 1 week ago

Oh, the analysis tests are specifically for actions running with the Python image. But that also raises the question: we have two versions of the Python image; which one should be used, or both?

That also relates to #12.

StevenMaude commented 1 week ago

Running locally with the v2 Python image using Python 3.10, this takes much longer to run (more than 10 times longer) and results in:

```
FAILED tests/test_measures.py::test_calculate_total_counts - AssertionError: assert True == 'total'
FAILED tests/test_measures.py::test_calculate_group_counts - AssertionError: assert True == '2022-01-01'
```

Edit: I think this is related to changes in pandas that made the .all() behaviour less quirky; see pandas-dev/pandas#12863. According to the documentation, even in the v1.0.3 version of pandas in the latest image, .all() is supposed to return whether all elements of the Series are True, but it actually ends up returning the value itself. We can fix this by checking equality with .eq instead of relying on that quirky behaviour.
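
For illustration, a minimal sketch of that fix pattern; the series here and its values are made up, and the real assertions in tests/test_measures.py may look different:

```python
import pandas as pd

# Made-up stand-in for the grouping column used in the tests.
groups = pd.Series(["total", "total", "total"], dtype=object)

# Older pandas could return the reduced value itself from .all() on an
# object-dtype Series, so an assertion like `groups.all() == "total"`
# happened to pass. Newer pandas returns a proper boolean, so the same
# assertion fails with `assert True == 'total'`.

# Version-independent check: compare element-wise with .eq, then reduce.
assert groups.eq("total").all()
```

The explicit .eq comparison states the intent directly and behaves the same way across pandas versions.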

The slowness of the tests is then down to Hypothesis generating examples, I think. With the fix, the tests run quickly on both Python 3.8 and Python 3.10.

StevenMaude commented 1 week ago

In conclusion: