Directly related to this, a ~10px inconsistency is apparent when the output from `fov.getChannelColRow()` is fed as input into `fov.getRaDecForChannelColRow()`:
```python
In [1]: from K2fov.fields import getKeplerFov

In [2]: c9fov = getKeplerFov(9)

In [3]: input_ra, input_dec = 265.467250, -25.898028

In [4]: ch, col, row = c9fov.getChannelColRow(input_ra, input_dec)

In [5]: output_ra, output_dec = c9fov.getRaDecForChannelColRow(ch, col, row)

In [6]: (output_ra - input_ra) * 3600.
Out[6]: 29.412958488251206

In [7]: (output_dec - input_dec) * 3600.
Out[7]: -11.650304947170298
```
`getRaDecForChannelColRow()` is being used to make the C9 footprint plots at http://k2c9.herokuapp.com, where inconsistencies at this level are indeed apparent when zooming in.
I verified that K2fov v2.0.1 shows the exact same behavior as described above.
v2.0.1 was released on Dec 4 last year, which is before @fergalm merged `kepler-twowheel` into the code, suggesting that those changes have nothing to do with it.
I also verified that the behavior is the same on Python 2 vs 3.
I tried to verify the accuracy of `fov.getChannelColRow` and `fov.getRaDecForChannelColRow` by comparing their results against the WCS in `ktwo2015127093352-c05_ffi-cal.fits` (sampling all channels at many positions across the detector). If said WCS is exact, then the accuracy of `fov.getChannelColRow` is:
Δcol = 1.6 +/- 1.8 px
Δrow = 2.1 +/- 2.0 px
and for `fov.getRaDecForChannelColRow`:
Δra: 3.1 +/- 2.2 arcsec
Δdec: 2.9 +/- 1.9 arcsec
This would suggest that `fov.getRaDecForChannelColRow` is more accurate than `fov.getChannelColRow` (at Kepler's plate scale of ~3.98 arcsec/px, ~3 arcsec is less than 1 px), but both show large sigma values, which is worrying: across 10% of the focal plane the error would be >5 px.
I have not made any effort yet to understand how accurate the FFI WCS is.
I am finding consistent results using FFIs from other campaigns, i.e. `fov.getChannelColRow` showing an error of 2 +/- 2 px, and >5 px offsets across 10% of the focal plane, compared to the WCS 'truth'.
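For reference, a minimal sketch of the comparison described above, under a few assumptions: the C5 FFI is available locally, each FITS extension carries one channel's WCS, and the extension headers include a `CHANNEL` keyword. The only K2fov call used is `getChannelColRow`:

```python
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS
from K2fov.fields import getKeplerFov

fov = getKeplerFov(5)
with fits.open("ktwo2015127093352-c05_ffi-cal.fits") as ffi:
    for hdu in ffi[1:]:  # one extension per channel
        wcs = WCS(hdu.header)
        channel = hdu.header["CHANNEL"]
        # Sample a coarse grid of pixel positions across the channel.
        cols, rows = np.meshgrid(np.linspace(100, 1000, 4),
                                 np.linspace(100, 1000, 4))
        # WCS 'truth': pixel -> sky, then K2fov: sky -> pixel.
        ra, dec = wcs.all_pix2world(cols.ravel(), rows.ravel(), 0)
        for ra_i, dec_i, c0, w0 in zip(ra, dec, cols.ravel(), rows.ravel()):
            ch, c1, w1 = fov.getChannelColRow(ra_i, dec_i)
            print("ch {}: dcol={:+6.2f} drow={:+6.2f}".format(
                channel, c1 - c0, w1 - w0))
```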
It's been a while since I've thought about this code, so it took me a while to get to grips with this problem.
TL;DR The conversion between ra/dec and col/row (and back again) is accurate to 10 pixels in the worst case, but usually much better. Accuracy is best near the CCD readouts in each channel, and for channels near the focal plane centre.
Full answer:
The conversion from ra/dec to col/row is performed as a series of steps:
ra/dec --> focal plane coords --> fractional CCD --> column/row
Fractional CCD column/row values run from 0..1. The first step of this process (the transformation to focal plane coords) assumes a common projection system for every channel. This assumption is slightly invalid because the Kepler focal plane is not flat, and each channel is covered by a field flattener lens which distorts the geometry slightly. By ignoring this effect, KeplerFov introduces a slight error into the conversion to focal plane coords, and also into the inverse transformation.
The left panel of the following plot shows the worst-case error for each channel. The worst-case error is <1 pixel for the centre modules, rising to nearly 10 pixels for the extreme modules. The panel on the right shows the error for the reverse transformation.
Within a given channel, the error scales with distance from the reference point near the CCD readout amplifier, (col, row) = (17, 25). The following figure shows the error in round-tripping a col/row value through ra/dec and back again for a poorly corrected channel, ch 10. The arrows are drawn at twice the actual error in the computed col/row for clarity.
Because of how I do the computation, the error in the conversion is essentially zero at the reference point (col, row) = (17, 25), and increases linearly with distance from the reference point. So for every CCD there is at least one pixel where the conversion error is almost zero; the further from that pixel, the larger the error becomes.
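To make that scaling concrete, a small round-trip check using only the two public calls quoted earlier in this thread (channel 10 and the (17, 25) reference point come from the discussion above; the sample positions are arbitrary):

```python
from K2fov.fields import getKeplerFov

fov = getKeplerFov(9)
ch = 10  # the poorly corrected channel discussed above

# Round-trip col/row -> ra/dec -> col/row at increasing distance
# from the reference point (col, row) = (17, 25).
for col, row in [(17, 25), (300, 300), (600, 600), (1100, 1024)]:
    ra, dec = fov.getRaDecForChannelColRow(ch, col, row)
    ch2, col2, row2 = fov.getChannelColRow(ra, dec)
    print("({:4d}, {:4d}): residual ({:+.2f}, {:+.2f}) px".format(
        col, row, col2 - col, row2 - row))
```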
The solution is to implement a distortion correction polynomial. This would involve fitting a polynomial to the distortion in, e.g., a classic Kepler FFI, and applying that correction to each ra/dec <--> col/row conversion. This would take maybe 5-10 days.
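For illustration, one way the fitting step could look; the `fit_distortion` helper below is hypothetical (not part of K2fov) and simply least-squares fits measured column residuals as a 2-D polynomial in (col, row). The same fit would be repeated for the row residuals and for every channel:

```python
import numpy as np

def fit_distortion(col, row, dcol, order=3):
    """Hypothetical helper: fit residuals dcol (e.g. measured against an
    FFI WCS) as a 2-D polynomial in (col, row) via linear least squares.
    col, row, dcol are 1-D numpy arrays of equal length."""
    # Design matrix with terms col**i * row**j for i + j <= order.
    terms = [col**i * row**j
             for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.vstack(terms).T
    coeffs, _, _, _ = np.linalg.lstsq(A, dcol, rcond=None)
    return coeffs
```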
I'm happy to implement such a fix, but I think management should decide whether the need to fix this inaccuracy justifies the work involved. I'm assigning this ticket to Tom to decide if the work is necessary.
What is the way forward on this? Should we, at minimum, increase the padding?
The default padding is currently defined in `__init__.py` as follows:
```python
# Optical distortions can cause the results from K2fov to be off by a bit.
# The padding parameter compensates for this; setting padding > 0 means
# that objects that are computed to lie a small amount off silicon will
# be considered on silicon.
DEFAULT_PADDING = 3  # pixels
```
I have now increased the padding to 6px.
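To spell out what the padding buys, a toy sketch (`edge_distance_px` is a made-up stand-in for whatever signed distance-to-edge the on-silicon test computes; it is not a real K2fov variable):

```python
DEFAULT_PADDING = 6  # pixels, raised from 3

def on_silicon(edge_distance_px, padding=DEFAULT_PADDING):
    # Negative values mean the computed position falls slightly off the
    # CCD; the padding absorbs conversion errors up to that many pixels.
    return edge_distance_px >= -padding

print(on_silicon(-4))  # rejected under the old 3 px default, accepted now
```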
I wonder if that closes the issue, or if we want to take further action.
Closing this issue as we have decided not to improve the accuracy of K2fov. Instead we will use Kepler's Matlab-based RaDec2Pix function whenever high-accuracy conversions are needed, e.g. for creating custom masks.
When plotting a campaign using K2fov's `plotPointing`, a gap of ~10-15px is apparent between channels that are expected to be contiguous. For example, coordinate (ra, dec) = (185.87, -9.44924) falls inside the "gap" between channels 33 and 34 during Campaign 10. There should be no gap between 33 and 34, however; they are on the same CCD.
This behavior affects the functionality of `isOnSilicon` (the expected result above is `True`). The impact on target management is likely non-existent because of padding and the inability to observe targets near edges. However, it may be worth understanding the behavior, as it may expose a minor bug.
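A minimal sketch to reproduce the report, using only `getKeplerFov` and `getChannelColRow` (the calls confirmed elsewhere in this thread); how an off-silicon position is reported (exception vs. out-of-range col/row) is not specified here, so both outcomes are handled:

```python
from K2fov.fields import getKeplerFov

fov = getKeplerFov(10)
ra, dec = 185.87, -9.44924  # reported to fall in the gap between ch 33 and 34

try:
    ch, col, row = fov.getChannelColRow(ra, dec)
    print("channel {} at (col={:.1f}, row={:.1f})".format(ch, col, row))
except ValueError as err:
    print("position not resolved onto silicon:", err)
```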