Exo-TiC / ExoTiC-ISM

This is a repository for the reduction pipeline detailed in Wakeford et al. (2016), ApJ. The method marginalizes across a series of stochastic models for observatory and instrument systematics. This is primarily for HST WFC3; however, it may be extended to STIS in the future.

Inflate errors using beta_noise factor #117

Open · hrwakeford opened this issue 3 years ago

hrwakeford commented 3 years ago

This issue originated in PR #116. Comments copied below: @ivalaginja: @hrwakeford is your question about inflating the errors with the beta value still relevant? I can look into it if you want.

@hrwakeford: I was thinking about it but was not sure how to test it out. I want to see whether inflating the input errors on the data points after the first fit, using the beta value from the noise calculator, would be useful. To calculate beta, you would need to add the following into the first fit loop:


        # Evaluate the systematic model at the current parameter values
        systematic_model = marg.sys_model(phase, HSTphase, sh, tmodel.m_fac.val, tmodel.hstp1.val, tmodel.hstp2.val,
                                          tmodel.hstp3.val, tmodel.hstp4.val, tmodel.xshift1.val, tmodel.xshift2.val,
                                          tmodel.xshift3.val, tmodel.xshift4.val)

        # Full model: transit model times baseline flux times systematics
        fit_model = mulimb01 * tmodel.flux0.val * systematic_model     #  Issue #36
        # Residuals between the data and the fit, normalized by the baseline flux
        residuals = (img_flux - fit_model) / tmodel.flux0.val

        # Estimate the white noise, red noise and beta scaling factor from the residuals
        white_noise, red_noise, beta = marg.noise_calculator(residuals)

Then save the beta for each systematic model in an array, and use each one individually to inflate the light curve uncertainty for that systematic model's fit. I was not sure how to do the last part: saving the betas and then telling the fit to use the new_err.
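
As a rough standalone sketch of that flow (not pipeline code: `n_systematics`, `n_points`, `img_err`, and `residuals_per_model` are made-up illustrative names, and the beta values are faked here where `marg.noise_calculator` would supply them in the pipeline), it could look something like this:

    import numpy as np

    # Standalone sketch only: the arrays below are synthetic stand-ins for the
    # quantities the first fit loop would produce.
    rng = np.random.default_rng(1)

    n_systematics = 50                             # number of systematic models in the grid
    n_points = 120                                 # data points in the light curve
    img_err = np.full(n_points, 1e-4)              # original per-point uncertainties

    # Residuals from the first fit, one row per systematic model (faked here).
    residuals_per_model = rng.normal(0.0, 1e-4, size=(n_systematics, n_points))

    # First pass: save beta for each systematic model in an array.
    beta_array = np.ones(n_systematics)
    for i in range(n_systematics):
        # In the pipeline this line would instead be:
        # white_noise, red_noise, beta = marg.noise_calculator(residuals_per_model[i])
        beta = 1.0 + abs(rng.normal(0.0, 0.2))     # stand-in for the computed beta
        beta_array[i] = beta

    # Second pass: inflate the uncertainties before refitting each systematic model.
    for i in range(n_systematics):
        new_err = img_err * max(beta_array[i], 1.0)    # inflate only, never shrink
        # ...then pass new_err to the fitter in place of img_err for model i

Clamping the scale factor at 1 is one possible design choice here, so that uncertainties are only ever inflated and never shrunk if beta happens to come out below 1.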

@ivalaginja: I remember we investigated the scaling of the errors between fits before, likely in one of the notebooks and/or tests when we were checking the correct error propagation during the migration to Python. I will try to dig that out and reconstruct it; it will just take me a little while to remind myself of the structure of the data being passed around. In any case, if we managed to do it once, I am confident we can do it again.