That's not good. It looks like the issue is in comparing the 64-bit representation of the literal 0.7 with the 32-bit representation stored in the value:
```python
In [8]: star['COLOR1'] == np.float32(0.7)
Out[8]: True

In [9]: star['COLOR1'] == 0.7
Out[9]: False

In [10]: star['COLOR1'] == np.float64(0.7)
Out[10]: False

In [15]: np.float32(0.7) == 0.7
Out[15]: False
```
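The underlying cause: 0.7 has no exact binary representation, and it rounds differently at 32-bit and 64-bit precision. When the comparison promotes the float32 value to float64, the two roundings no longer agree:

```python
import numpy as np

# 0.7 rounds differently at 32- and 64-bit precision, so promoting
# the float32 value back to float64 does not recover the literal.
c32 = np.float32(0.7)
print(np.float64(c32))         # 0.699999988079071
print(np.float64(0.7))         # 0.7
print(np.float64(c32) == 0.7)  # False
```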
So we probably need to scrub the code for this and use:

```python
np.isclose(star['COLOR1'], 0.7, atol=1e-6, rtol=0)
```
There might also be a one-to-one correlation with COLOR1_ERR being -9999? If so, that might be a more direct test for "color1 is missing".
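For illustration, here is a minimal sketch of both tests. These helpers are hypothetical (not existing chandra_aca API), and the COLOR1_ERR test rests on the speculative one-to-one correlation above:

```python
import numpy as np

def color1_is_missing(star):
    """Hypothetical helper: treat COLOR1 as missing when it matches
    the 0.7 sentinel to within float32 rounding error."""
    return np.isclose(star['COLOR1'], 0.7, atol=1e-6, rtol=0)

def color1_is_missing_via_err(star):
    """Speculative alternative: COLOR1_ERR == -9999 is an exact test
    with no floating-point pitfall, if the correlation with the 0.7
    sentinel really is one-to-one."""
    return star['COLOR1_ERR'] == -9999

# Works for a single row (dict-like) or a whole table column:
star = {'COLOR1': np.float32(0.7), 'COLOR1_ERR': np.float32(-9999)}
print(color1_is_missing(star), color1_is_missing_via_err(star))  # True True
```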
Fortunately we've only used such a star 90 times in the mission and only 8 times since 2016:001, almost all ERs.
I was specifically looking at why the aca_lts_eval code looks to have overestimated (by a tiny bit) the allowed warmest temperature for 20757 in today's products. It looks like a combination of this issue and the caching used (I'm not really letting the dark current evolve over the year).
I haven't plugged it into star_probs, but I'm guessing this is also why the aca_lts_eval code overestimated the acquisition probability for 20765 for the 96.5 roll catalog.
Yes, I just saw this problem while doing an explicit side evaluation of the catalog probabilities.
I'm concerned that the aca_lts_eval code, when run against
https://github.com/sot/chandra_aca/blob/310a532159e737279b578a193f6d3727f55d056a/chandra_aca/star_probs.py#L260
might not be handling the fetched 0.700 stars correctly on the Python side.
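For concreteness, a minimal sketch of the kind of change implied, assuming the linked code compares color against the 0.7 literal with `==` (the `mask_0p7` name is mine, not from star_probs):

```python
import numpy as np

# COLOR1 comes out of the AGASC as float32; downstream code often
# works in float64, so the sentinel arrives as 0.699999988079071.
color = np.array([np.float32(0.7), np.float32(1.2)], dtype=np.float64)

# Exact equality misses the sentinel after the float32 round-trip:
print(color == 0.7)                               # [False False]

# Tolerant comparison catches it:
print(np.isclose(color, 0.7, atol=1e-6, rtol=0))  # [ True False]
```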