AstarVienna / ScopeSim

A telescope observation simulator for Python.
GNU General Public License v3.0

Test all cases of DIT & NDIT vs. exptime and AutoExposure #428

Closed by teutoburg 3 months ago

teutoburg commented 3 months ago

Even after some recent improvements to AutoExposure (#424, #396), there are still some inconsistencies, which are also preventing #426 from moving forward.

To solve this systematically, I came up with the following tree of possible combinations:

```mermaid
flowchart TD
  ditkw{{"dit & ndit in kwargs"}} -- yes --> done1["Just use those"]
  expkw{{"exptime in kwargs"}}
  ae1{{"AutoExposure in
        optical train?"}} -- yes --> ae1b["`Get dit & ndit from AutoExposure using exptime from kwargs`"]
  ditobs{{"dit & ndit in !OBS"}} -- yes --> done2["Use dit & ndit
                                                   from !OBS"]
  expobs{{"exptime in !OBS"}} -- no  --> ValueError
  ae2{{"AutoExposure in
        optical train?"}} -- no --> ValueError
  ae2b["`Get dit & ndit from AutoExposure using exptime from !OBS`"]
  ditkw  -- no  --> expkw
  expkw  -- yes --> ae1
  expkw  -- no  --> ditobs
  ditobs -- no  --> expobs
  expobs -- yes --> ae2
  ae2 -- yes --> ae2b
  ae1 -- no --> ValueError
```

(using bestagons for decisions instead of diamonds to limit vertical extent of the chart...)
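
The same tree in plain Python, as a rough sketch (the function name, the plain-dict arguments, and the `autoexposure` callable are illustrative stand-ins, not the actual ScopeSim interfaces):

```python
def resolve_dit_ndit(kwargs, obs, autoexposure=None):
    """Illustrative sketch of the decision tree above, not ScopeSim code.

    ``kwargs`` and ``obs`` stand in for the user keywords and the !OBS
    properties; ``autoexposure`` is a callable exptime -> (dit, ndit),
    or None if no AutoExposure effect is in the optical train.
    """
    if "dit" in kwargs and "ndit" in kwargs:
        return kwargs["dit"], kwargs["ndit"]        # just use those
    if "exptime" in kwargs:
        if autoexposure is not None:
            return autoexposure(kwargs["exptime"])  # dit & ndit from AutoExposure
        raise ValueError("exptime given, but no AutoExposure in optical train")
    if "dit" in obs and "ndit" in obs:
        return obs["dit"], obs["ndit"]              # use dit & ndit from !OBS
    if "exptime" not in obs:
        raise ValueError("neither dit & ndit nor exptime found anywhere")
    if autoexposure is None:
        raise ValueError("exptime in !OBS, but no AutoExposure in optical train")
    return autoexposure(obs["exptime"])             # dit & ndit from AutoExposure
```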

I created TestDitNdit in test_basic_instrument to (hopefully) cover all of this. I'm open to discussion about the location of this test class, but since it's imo more of an integration test (simulating the full optical train rather than an individual effect), I decided to put it here. It might also be worth considering putting the AutoExposure effect into the basic instrument test package, maybe in a separate mode (e.g. autoimg), instead of hacking it in as is done currently.

Oh, and (as commented) all the ["!OBS.xy"] meddling should really be done with context managers using unittest.mock.patch.dict, but for some (nested mapping) reason, I couldn't get that to work 🙄
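
For the record, the pattern I had in mind looks roughly like this (a flat-dict sketch only; the real cmds object is a nested mapping, which is exactly where it fell apart):

```python
from unittest.mock import patch

# Flat dict standing in for the !OBS properties; illustrative only.
obs = {"!OBS.dit": None, "!OBS.ndit": None, "!OBS.exptime": 60}

# patch.dict restores the original contents when the block exits,
# which is what the manual ["!OBS.xy"] assignments do by hand.
with patch.dict(obs, {"!OBS.dit": 1, "!OBS.ndit": 60}):
    assert obs["!OBS.dit"] == 1    # patched inside the context
assert obs["!OBS.dit"] is None     # restored afterwards
```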

Anyway, some of these tests currently xfail, which is expected. I'll put the actual solution(s) in a separate PR; this is just so we at least know what's working and what isn't...

codecov[bot] commented 3 months ago

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 75.02%. Comparing base (4228950) to head (9f3c668). Report is 4 commits behind head on main.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #428      +/-   ##
==========================================
+ Coverage   74.83%   75.02%   +0.19%
==========================================
  Files          56       56
  Lines        7823     7823
==========================================
+ Hits         5854     5869      +15
+ Misses       1969     1954      -15
```

☂️ View full report in Codecov by Sentry.

teutoburg commented 3 months ago

> I didn't know it was possible to xfail parameterized tests like that.

Yeah, me neither until last week, when I just googled to see if this can be done 😅 I might go over our whole test suite in a separate PR and check for anything that XPASSes consistently, then remove the xfail from those, or parametrize where needed...
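
For anyone else who hasn't seen it, marking individual parametrize cases as xfail looks roughly like this (a generic pytest sketch, not the actual TestDitNdit parameters):

```python
import pytest

@pytest.mark.parametrize(
    "dit, ndit",
    [
        (1, 10),  # expected to work
        pytest.param(
            None, None,
            marks=pytest.mark.xfail(reason="not implemented yet"),
        ),
    ],
)
def test_dit_ndit_combinations(dit, ndit):
    # Placeholder body; the real tests simulate the full optical train.
    assert dit is not None and ndit is not None
```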

> Is the dit-and-ndit-in-kwargs-while-also-having-autoexposure case missing?

Indeed, I think I missed that. Thanks for spotting it, I'll add it.

> Maybe we could add Quantization to the optical train and in each test call _should_apply() and verify that it is properly applied or not?

Seems useful 👍

In general I think once we manage to find a better solution to the whole "cmds" thing (i.e. #387 et al.), a lot of these things might be smoother as well. But it should be possible to solve the issue at hand without that for now...

teutoburg commented 3 months ago

Added the missing combinations and Quantization. All tests that pass without Quantization also pass with it. It's harder to tell which of the xfail cases would also trip on Quantization once they otherwise pass, but we'll see that when we implement the missing stuff.

I tried to keep the reason in each xfail as precise as possible (as far as I can tell why they're failing), so that should be a good starting point for summarizing what's actually not working (getting closer to TDD, I guess).

teutoburg commented 3 months ago

> I could say that we should also test the Quantization when the AutoExposure is not included

It wasn't much work to add this, so I did...