Closed: Jiaming1999 closed this pull request 3 years ago.
Thank you for this proposed fix. You're correct that the generator used for the fixture will exhaust its data on multiple runs; this is by design. It means that running the tests multiple times in the same session will fail, because no data is re-loaded from the fixture. It's a corner case that I don't believe needs addressing in the tests.
I appreciate you pointing this out and proposing a fix. I also appreciate you sharing how you came to this fix and sharing flake-finder with us. However, I'm closing this pull request, as the proposed change could introduce other issues with the tests (namely, that we would keep a metrics list that is not representative of what is actually being returned).
Thank you.
Tool: pytest-flakefinder (https://github.com/dropbox/pytest-flakefinder). To run, use a command like the one below.
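The flags here are taken from the plugin's README; the test path is just a placeholder for the affected test file.

```
pytest --flake-finder --flake-runs=10 path/to/test_file.py
```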
Problematic test: test_multiple_same_key
Error: AssertionError, because len(metrics) == 0.
Possible cause: When the test is rerun, it is still the same session and the thermostats_multiple_same_key generator is already done iterating, so metrics is empty.
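A minimal sketch of the failure mode, assuming a session-scoped fixture that returns a generator (the fixture body, records, and assertion below are invented for illustration):

```python
import pytest

@pytest.fixture(scope="session")
def thermostats_multiple_same_key():
    # Stand-in for the real fixture, which is assumed to yield parsed records.
    return (record for record in [{"temp": 20}, {"temp": 21}])

def test_multiple_same_key(thermostats_multiple_same_key):
    metrics = list(thermostats_multiple_same_key)
    # First run: two records. Any rerun in the same session (e.g. under
    # flake-finder) receives the cached, already-exhausted generator, so
    # metrics == [] and the assertion fails.
    assert len(metrics) > 0
```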
My fix:
(old one) Change the fixture scope to function so that every time the test runs, a new iterable thermostats_multiple_same_key is returned.
(new one) I feel it might not be proper to change the scope of the fixture, so instead I made metrics a global variable so it is not flushed within the same session.
Both changes pass under pytest and pytest-flakefinder; rough sketches of each are shown below.
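These sketches reuse the invented fixture and test from the sketch above; only the scope change and the module-level list reflect the actual proposals.

```python
import pytest

# (old one) Function scope: the fixture is rebuilt for every test invocation,
# so each repeated run gets a fresh generator.
@pytest.fixture(scope="function")
def thermostats_multiple_same_key():
    return (record for record in [{"temp": 20}, {"temp": 21}])  # body assumed
```

```python
import pytest

# (new one) Keep the session scope, but accumulate results in a module-level
# list so data collected on the first run is still visible on reruns.
@pytest.fixture(scope="session")
def thermostats_multiple_same_key():
    return (record for record in [{"temp": 20}, {"temp": 21}])  # body assumed

metrics = []

def test_multiple_same_key(thermostats_multiple_same_key):
    metrics.extend(thermostats_multiple_same_key)  # exhausted (empty) on reruns
    assert len(metrics) > 0
```

As noted in the review above, the second approach keeps a metrics list that may not reflect what the current run actually returned.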