sot / chandra_aca

Chandra Aspect Camera Tools
https://sot.github.io/chandra_aca
BSD 2-Clause "Simplified" License

Add spline acq probability model and refactor to handle multiple models #52

Closed taldcroft closed 6 years ago

taldcroft commented 6 years ago

To do:

taldcroft commented 6 years ago

I think I've addressed your comments, but please have another look and don't be afraid to be picky.

jeanconn commented 6 years ago

My remaining comments aren't liens on the code. I'm also running a set of aca_lts_eval observations with it and will comment if I see anything truly unexpected.

jeanconn commented 6 years ago

Oh, and the only nit-picky thing overall that I note is that some of the global response of the models is captured in those global variables, but other, very specific behaviors, like the response to stars brighter than 8.5 mag, still require the user to dig into the code, and most of that is not strongly captured in the docs. This seems fine for this very specific application.

jeanconn commented 6 years ago

This seems more unstable than I had hoped.

In [60]: import numpy as np; from chandra_aca.star_probs import t_ccd_warm_limit

In [61]: mags = np.array([6.3, 10.0, 10.0, 10.3, 10.4, 10.4, 10.3, 10.5])

In [62]: t_ccd_warm_limit(mags, date='2018:119', min_n_acq=(2, 8e-3), model='spline')
Out[62]: (-11.912491655260908, 0.0080063168277299084)

In [63]: t_ccd_warm_limit(mags + .03, date='2018:119', min_n_acq=(2, 8e-3), model='spline')
Out[63]: (-21, 0.0081571640700874287)

[per phone conversation: the t_ccd_warm_limit root-finding will only work for the spline model if the bounds of the search interval are reasonable for that model. In this case, cold_t_ccd needs to be no less than -16.]

In [8]: t_ccd_warm_limit(mags, date='2018:119', min_n_acq=(2, 8e-3), cold_t_ccd=-16, model='spline')
Out[8]: (-11.913215298581532, 0.0079926808023672458)

In [9]: t_ccd_warm_limit(mags + .03, date='2018:119', min_n_acq=(2, 8e-3), cold_t_ccd=-16, model='spline')
Out[9]: (-12.134004556213828, 0.0079984321622713281)
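
Why the solver pins at cold_t_ccd can be illustrated with a toy bracketing root finder. This is a sketch under assumptions, not the chandra_aca implementation: p_fail is a made-up stand-in for the spline model's acquisition-failure probability, which mis-extrapolates below its -16 C calibration floor so that the objective has the same sign at both interval endpoints and no root can be bracketed.

```python
# Sketch (assumed behavior, not the actual chandra_aca code) of why
# t_ccd_warm_limit returns cold_t_ccd when the search interval extends
# below the spline model's calibrated range.

def p_fail(t_ccd):
    """Toy failure-probability curve: calibrated (rising with warmer
    t_ccd) above -16 C, but rising again when extrapolated colder."""
    if t_ccd < -16:
        return 0.008 + 0.0002 * (-16 - t_ccd)  # bad cold extrapolation
    return 0.006 + 0.0005 * (t_ccd + 16)       # calibrated region

def warm_limit_sketch(p_max, cold_t_ccd, warm_t_ccd=-5.0, tol=1e-4):
    """Find the warmest t_ccd with p_fail <= p_max by bisection."""
    f = lambda t: p_fail(t) - p_max
    if f(cold_t_ccd) * f(warm_t_ccd) > 0:
        # No sign change across the interval: the solver cannot
        # bracket a root and falls back to the cold bound.
        return cold_t_ccd
    lo, hi = cold_t_ccd, warm_t_ccd
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)
```

With cold_t_ccd=-21 both endpoint objectives are positive and the sketch returns -21, mimicking Out[63]; with cold_t_ccd=-16 a bracket exists and it converges near -12, mimicking Out[9].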

taldcroft commented 6 years ago

This seems more unstable than I had hoped.

Yikes, will investigate!

taldcroft commented 6 years ago

At your nudging I decided to just drop the whole date-based auto model selection. Less is more. Explicit is better than implicit.

taldcroft commented 6 years ago

After dc8bb48:

In [3]: star_probs.t_ccd_warm_limit(mags + 0.03, date='2018:119', min_n_acq=(2, 8e-3), model='spline', cold_t_ccd=-21)
/data/baffin/tom/git/chandra_aca/chandra_aca/star_probs.py:275: UserWarning: 
Spline model is not calibrated below -16 C, so take results with skepticism!
For cold temperatures use the SOTA model.
  warnings.warn('\nSpline model is not calibrated below -16 C, so take results with skepticism!\n'
Out[3]: (-21, 0.0081571640700874287)
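
The guard behind that warning can be sketched as follows. This is an assumed shape, not the actual dc8bb48 diff: SPLINE_T_CCD_FLOOR and check_spline_range are hypothetical names, with only the warning text taken from the output above.

```python
# Sketch (assumed, not the actual commit) of warning when the spline
# model is evaluated below its -16 C calibration floor.
import warnings

SPLINE_T_CCD_FLOOR = -16.0  # coldest calibrated t_ccd, per the warning text

def check_spline_range(t_ccd, model='spline'):
    """Warn if t_ccd is below the spline model's calibrated range."""
    if model == 'spline' and t_ccd < SPLINE_T_CCD_FLOOR:
        warnings.warn(
            '\nSpline model is not calibrated below -16 C, '
            'so take results with skepticism!\n'
            'For cold temperatures use the SOTA model.')
```

The computation still proceeds (the call above returns a value at -21), so the warning flags rather than blocks out-of-range use.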

taldcroft commented 6 years ago

I was just now also thinking about starcheck integration / validation (independent of your comment). I think that you should re-run 2 or 3 recent loads and then both you and Eric assess whether the results seem reasonable and expected. The analysis I did on all loads since 2017:001 should also be explicitly mentioned as validation that using the new model is not expected to significantly impact load generation / review. However, there is the real possibility of needing creep-up-and-away on a larger number of observations.

This probably merits a starcheck release twiki page even if the starcheck version itself doesn't change. Or maybe also change the starcheck version, your call. It occurred to me the other day that including versions for key dependent packages in the output might be a good thing.

jeanconn commented 6 years ago

OK.

I'm not sure in which context you'd like to bring up

https://github.com/sot/skanb/blob/master/star_selection/spline-acq-prob-model-impact.ipynb

because it seems like the output I'm generating for aca_lts_eval is more relevant (past performance is less interesting than expected future results). A bigger problem I see here is that we're simultaneously planning to raise the planning limit, so more observations would have been constrained even without the lower acq probabilities. However, since the model is already approved by SSAWG for operations, I didn't think the expected impact of the model actually needed to be handled at Load Review. Should this in fact be more of an MPCWG briefing? [I ended up sending a brief note to SOT MP to make sure we're all on the same page]

Regarding a twiki page, so far I have

https://occweb.cfa.harvard.edu/twiki/bin/view/Aspect/Starcheck-1125

Regarding "It occurred to me the other day that including versions for key dependent packages in the output might be a good thing": I've had that as an issue for a while but haven't gotten around to executing it. Your newer code to show this from testr should make it more reliable:

https://github.com/sot/starcheck/issues/159

jeanconn commented 6 years ago

I put a new version of the evaluation of the aca_lts_eval runs at

https://github.com/sot/skanb/blob/2b7cdbdc846ac98058eed0b9d5238aa372a853fa/minus_10p2_planning_limit/current_target_feasibility_minus10p2.ipynb