Closed: mheida closed this issue 8 years ago
Thanks for pointing this out. If you are interested in putting in a PR to fix this please let us know. Otherwise we will fix it as soon as we can as this is a bug.
I think this is already fixed in astropy: https://github.com/astropy/astropy/issues/4152
I'll add logic in ccdproc to handle it if astropy<1.2, in case there are users counting on the astropy LTS (1.0.x).
@mheida -- sorry for the delay. What version of astropy and ccdproc are you using? It looks like this may already be fixed (or at least when I try to reproduce it with the latest (unreleased) ccdproc it seems to be working).
Both my astropy and ccdproc are version 1.0.1
I'm having trouble reproducing the issue -- would it be a hassle to provide a code snippet that reproduces it? What I've been trying is below, in case I'm not correctly understanding when the issue arises:
import numpy as np
from ccdproc import flat_correct, CCDData
dat = CCDData(np.ones([100, 100]), unit='adu', uncertainty=np.ones([100, 100]))
# Note flat is set to 10, error is set to one.
flat = CCDData(10 * np.ones([100, 100]), unit='adu', uncertainty=1 * np.ones([100, 100]))
res = flat_correct(dat, flat)
res.uncertainty.array # Returns 1.00498756, which is sqrt(1^2+0.1^2)
@mwcraig That's not the complete formula for error propagation; you also need to multiply it by abs(dat/flat) (https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Example_formulas).
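For reference, the first-order propagation formula for a quotient q = a/b is sigma_q = |q| * sqrt((sigma_a/a)^2 + (sigma_b/b)^2). A minimal numpy sketch with the toy values from the snippet above (data = 1 +/- 1, flat = 10 +/- 1):

```python
import numpy as np

# Toy scalars matching the thread's example: data = 1 +/- 1, flat = 10 +/- 1.
dat, sig_dat = 1.0, 1.0
flat, sig_flat = 10.0, 1.0

res = dat / flat
# First-order propagation for a quotient: the relative errors add in
# quadrature, and the sum is scaled by |dat/flat|.
sig_res = abs(res) * np.sqrt((sig_dat / dat) ** 2 + (sig_flat / flat) ** 2)
print(res, sig_res)  # 0.1 and ~0.1005
```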
With astropy-master:
>>> dat = NDDataRef(np.ones([100, 100]), unit='adu', uncertainty=StdDevUncertainty(np.ones([100, 100])))
# Note flat is set to 10, error is set to one.
>>> flat = NDDataRef(10 * np.ones([100, 100]), unit='adu', uncertainty=StdDevUncertainty(np.ones([100, 100])))
>>> res = dat.divide(flat)
>>> res.uncertainty
StdDevUncertainty([[ 0.10049876, 0.10049876, 0.10049876, ...,
0.10049876, 0.10049876, 0.10049876],
[ 0.10049876, 0.10049876, 0.10049876, ...,
0.10049876, 0.10049876, 0.10049876],
[ 0.10049876, 0.10049876, 0.10049876, ...,
0.10049876, 0.10049876, 0.10049876],
...,
[ 0.10049876, 0.10049876, 0.10049876, ...,
0.10049876, 0.10049876, 0.10049876],
[ 0.10049876, 0.10049876, 0.10049876, ...,
0.10049876, 0.10049876, 0.10049876],
[ 0.10049876, 0.10049876, 0.10049876, ...,
0.10049876, 0.10049876, 0.10049876]])
@MSeifert04 -- thanks, but I think that is a slightly different case? flat_correct divides by flat.mean() at https://github.com/astropy/ccdproc/blob/master/ccdproc/core.py#L711-L712 (well, technically, it multiplies dat/flat by flat.mean).
So the case on master should be, I think, >>> res = dat.divide(flat/flat.mean()) (or maybe flat.data.mean()).
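If the flat is normalized by its mean before the division, and the mean is treated as an exact scalar, the relative error of the flat is unchanged by the normalization. A sketch under that assumption, with the same toy values:

```python
import numpy as np

# Toy values: data = 1 +/- 1, flat = 10 +/- 1; the (constant) flat's mean is 10.
dat, sig_dat = 1.0, 1.0
flat, sig_flat = 10.0, 1.0
mean = 10.0  # mean of the toy flat, treated as an exact scalar

# Dividing by an exact scalar rescales value and uncertainty alike,
# so the relative error sig_flat/flat is unchanged.
norm_flat = flat / mean     # 1.0
sig_norm = sig_flat / mean  # 0.1

res = dat / norm_flat
sig_res = abs(res) * np.sqrt((sig_dat / dat) ** 2 + (sig_norm / norm_flat) ** 2)
print(res, sig_res)  # 1.0 and ~1.005
```

For these toy numbers the uncertainty evaluates to about 1.00499.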
I get the same result as you, but I noticed that I was using a flat frame without uncertainties and apparently then the errors are propagated differently.
dat = CCDData(np.ones([100, 100]), unit='adu', uncertainty=np.ones([100, 100]))
flat2 = CCDData(10 * np.ones([100, 100]), unit='adu')
res2 = ccdproc.flat_correct(dat, flat2)
res2.uncertainty.array
array([[ 10., 10., 10., ..., 10., 10., 10.],
       [ 10., 10., 10., ..., 10., 10., 10.],
       [ 10., 10., 10., ..., 10., 10., 10.],
       ...,
       [ 10., 10., 10., ..., 10., 10., 10.],
       [ 10., 10., 10., ..., 10., 10., 10.],
       [ 10., 10., 10., ..., 10., 10., 10.]])
Is this supposed to happen? Anyway, I should probably include uncertainties in the flatfield.
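If the flat carries no uncertainty and is treated as exact, only the data's error should propagate. A sketch under that assumption (the actual ccdproc code path may differ):

```python
# Toy values: data = 1 +/- 1, flat = 10 with no uncertainty (treated as exact).
dat, sig_dat = 1.0, 1.0
flat = 10.0
norm_flat = flat / 10.0  # the constant toy flat divided by its mean

# Dividing by an exact quantity just rescales the uncertainty with the data.
sig_res = sig_dat / abs(norm_flat)
print(sig_res)  # 1.0, rather than the 10.0 reported above
```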
@mwcraig You are right, I missed that. :disappointed: Looking at the example code (@mheida) it seems to be https://github.com/astropy/astropy/issues/4152.
@mheida -- ah, that makes sense! Well, the result makes no sense, but now I understand why it is happening.
@MSeifert04 -- yep, I think that is it, thanks for tracking down the PR. I'm thinking I'll just repeat some of the logic from those fixes in _ccddata_arithmetic when the astropy version is less than 1.2.
> Anyway, I should probably include uncertainties in the flatfield.
Maybe...in practice I don't do that, FWIW.
When using flat_correct on an image with an uncertainty array, the flat field is normalized before the data is divided by it, but this normalization is not applied to the uncertainty array.
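The mismatch described above can be illustrated with plain numpy (a toy illustration of the two code paths, not ccdproc's actual implementation):

```python
import numpy as np

# Hypothetical illustration of the reported mismatch: the data is divided by
# the normalized flat, while the uncertainty is divided by the raw flat.
dat = np.ones((10, 10))
sig_dat = np.ones((10, 10))
flat = 10 * np.ones((10, 10))

res = dat / (flat / flat.mean())            # data path: normalized flat
sig_wrong = sig_dat / flat                  # uncertainty path: normalization skipped
sig_right = sig_dat / (flat / flat.mean())  # both paths normalized

print(res[0, 0], sig_wrong[0, 0], sig_right[0, 0])  # 1.0 0.1 1.0
```

The two uncertainty paths differ by exactly the factor flat.mean().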