teutoburg closed this 3 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 75.02%. Comparing base (4228950) to head (9f3c668). Report is 4 commits behind head on main.
I didn't know it was possible to xfail parameterized tests like that.
Yeah, me neither until last week, when I just googled to see if it could be done 😅 I might go over our whole test suite in a separate PR, check for anything that XPASSes consistently, and remove the xfail from those, or parametrize where needed...
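For anyone else who hadn't seen it: individual parameter sets can carry their own marks via `pytest.param`. A minimal standalone sketch (not taken from our suite):

```python
import pytest


@pytest.mark.parametrize(
    "dit, ndit",
    [
        (1.0, 1),
        # Only this combination is expected to fail; a precise reason
        # makes it easy to find once the underlying issue is fixed.
        pytest.param(None, None,
                     marks=pytest.mark.xfail(reason="dit/ndit not given")),
    ],
)
def test_has_exposure(dit, ndit):
    assert dit is not None and ndit is not None
```

Running with `pytest -rxX` lists xfailed and xpassed cases in the summary, which is handy for spotting consistent XPASSes.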
Is the dit-and-ndit-in-kwargs-while-also-having-autoexposure case missing?
Indeed, I think I missed that. Thanks for spotting it, I'll add.
Maybe we could add Quantization to the optical train and, in each test, call `_should_apply()` to verify that it is properly applied or not?
Seems useful 👍
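To make the suggestion concrete, here is a hypothetical sketch of such a check. `Quantization` and `_should_apply` below are simplified stand-ins following the discussion above, not the actual ScopeSim implementation:

```python
# Hypothetical stand-in for the real effect class; the actual ScopeSim
# logic for when quantization applies may differ.
class Quantization:
    def __init__(self, include=True):
        self.include = include

    def _should_apply(self, dit=None, ndit=None, autoexposure=False):
        # Quantization needs a defined exposure: either explicit
        # dit & ndit, or an AutoExposure effect that derives them.
        return self.include and (
            autoexposure or (dit is not None and ndit is not None))


def test_quantization_applied_with_explicit_exposure():
    assert Quantization()._should_apply(dit=1.0, ndit=1)


def test_quantization_skipped_without_exposure():
    assert not Quantization()._should_apply()
```

Each test in the class could then assert `_should_apply()` against the expected outcome for its dit/ndit/autoexposure combination.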
In general I think once we manage to find a better solution to the whole "cmds" thing (i.e. #387 et al.), a lot of these things might be smoother as well. But it should be possible to solve the issue at hand without that for now...
Added the missing combinations and Quantization. All tests that pass without Quantization also pass with it. It's harder to tell which of the xfail ones would trip on Quantization once they otherwise pass, but we'll see that when we implement the missing stuff.
I tried to keep the `reason` in the xfail as precise as possible (as far as I can tell why they're failing), so that should be a good starting point for summarizing what's actually not working (getting closer to TDD, I guess).
One could argue that we should also test Quantization when AutoExposure is not included. It wasn't much work to add, so I did...
Even after some recent improvements to AutoExposure (#424, #396), there are still some inconsistencies, which are also preventing #426 from moving forward.
To solve this systematically, I came up with the following tree of possible combinations:
(using bestagons for decisions instead of diamonds to limit vertical extent of the chart...)
I created `TestDitNdit` in `test_basic_instrument` to (hopefully) cover all of this. I'm open to discussion about the location of this test class, but since it's IMO more of an integration test (simulating the full optical train rather than an individual effect), I decided to put it here. It might also be worth considering putting the `AutoExposure` effect in the basic instrument test package, maybe in a separate mode (e.g. `autoimg`) instead of hacking it in as is done currently. Oh, and (as commented), all the `["!OBS.xy"]` meddling should really be done with context managers using `unittest.mock.patch.dict`, but for some (nested mapping) reason, I couldn't get that to work 🙄

Anyway, some of these tests currently xfail, which is expected. I'll put the actual solution(s) in a separate PR; this is just so we at least know what's working and what isn't...
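On the `patch.dict` point: the likely culprit is that `unittest.mock.patch.dict` only shallow-copies and restores the top-level mapping, so in-place changes to a nested dict leak out of the context. A simplified plain-dict sketch (not the actual `cmds` object with its `"!OBS.xy"`-style bang keys):

```python
from unittest.mock import patch

cmds = {"OBS": {"dit": 1.0, "ndit": 1}}

# Replacing a top-level key is correctly undone on exit, because
# patch.dict restores the original top-level contents...
with patch.dict(cmds, {"OBS": {"dit": 2.0, "ndit": 4}}):
    assert cmds["OBS"]["dit"] == 2.0
assert cmds["OBS"]["dit"] == 1.0

# ...but mutating a nested dict in place survives the context,
# because the nested dict object itself is never copied.
with patch.dict(cmds):
    cmds["OBS"]["dit"] = 2.0
assert cmds["OBS"]["dit"] == 2.0  # change leaked out of the patch
```

So for nested mappings, one would have to patch each inner dict separately (or deep-copy and restore manually), which may be why the context-manager approach didn't work here.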