parkus opened this issue 9 years ago
We've talked about using techniques like this to mine the data for variables, but not as a calibration refinement. Off the top of my head, I think the remaining systematics will come from zodiacal light and errors in the flat. My first question would be how big those errors are; I bet the answer lives in one of the calibration papers. I seem to remember the quoted error in the flat, discounting the detector edges, being something like 15%. So I don't know whether this is worthwhile. My gut says we'd get more benefit from refining the flat or the error model generally.
To answer your aspect question: start with get_aspect() in dbasetools.py. Then look at the guts of source/gCalrun.py in the v1.22 branch, which includes a chunk that does more or less what you want. Also look at the guts of gFind.py, including fGetTimeRanges().
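For what it's worth, here is roughly how I'd string those pieces together. The function names come from the pointers above, but I'm assuming their import locations and signatures from memory, so treat this as pseudocode and check dbasetools.py in the branch you're using:

```python
# Rough sketch: find when a target was observed and where the boresight
# was pointing during each time range, then flag other catalog sources
# that fell inside the field of view. The exact signatures of
# fGetTimeRanges() and get_aspect() may differ in the v1.22 branch.
from gPhoton import dbasetools

band, skypos = 'NUV', [176.919, 0.255]  # example target: RA, Dec in degrees

# Contiguous time ranges during which this sky position was observed.
tranges = dbasetools.fGetTimeRanges(band, skypos)

for trange in tranges:
    # Aspect solution: per-timestep boresight RA/Dec (and roll) over trange.
    aspect = dbasetools.get_aspect(band, skypos, trange=trange)
    # Any catalog source within ~0.6 deg of the boresight (the GALEX field
    # of view is ~1.2 deg across) was on the detector during this range and
    # is a candidate for the ensemble used to estimate common systematics.
```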
I'm not really working until late this week. We can set up a teleconference to discuss this in detail next week if you want, or we can try to discuss it at the next weekly project meetup.
Flat errors will be location-dependent, so this method won't do anything to remove them (short of new schemes for making better flats). To elaborate: different sources on different parts of the detector will be affected differently by flat errors, so the effect won't be consistent across the detector and thus won't be detectable by a method like the one I was considering.
The attraction of a scheme like McQuillan's is that it doesn't require us to model any specific systematics, or even to predict what systematics might be present. But if I can justify not attempting such a scheme, it would certainly save a lot of work. The main argument I see against one is the potential presence of effects that aren't detector-wide but are likely to dominate the systematic noise. In particular, dithering over pixels with different, imperfectly measured (and thus imperfectly corrected) responses might produce a far larger signal than any global effect like gradual defocusing.
I'm actually abroad right now, so communication could be a challenge due to the time difference (I'm +10 hours from PST). However, I have internet access that should be fast enough for a voice connection. Is chatting in the 7-9 AM window a possibility within the next two weeks? We could also wait until I'm back stateside in early April, but I expect to have a good bit of time on my hands in the interim, during which I'd like to work on a systematics removal scheme if it's deemed useful.
As part of a variability analysis of the Kepler field stars, I've been thinking I would adapt the technique of McQuillan et al. (2012, A&A, 539, A137) to remove the contribution of detector-wide instrumental variations from the light curves. In brief, the technique looks for variations that are present in the light curves of multiple sources, on the assumption that if the same wiggle appears in lots of light curves, it probably doesn't have an astrophysical origin. A representative set of systematic variations is thus built up and can be fit to, and removed from, the light curve of any specific source.
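To make that concrete, here is a minimal sketch of the general idea (not McQuillan et al.'s exact algorithm; all names and array shapes are my own illustrative assumptions): stack light curves that have been resampled onto a shared time grid, pull out the leading common-mode trends with an SVD, then fit and subtract those trends from any individual curve.

```python
import numpy as np

def build_basis_trends(lightcurves, n_trends=3):
    """Return the leading n_trends common-mode trends from a stack of
    light curves on a shared time grid (shape: n_sources x n_times)."""
    # Normalize each curve to fractional variation with unit variance so
    # that bright or noisy sources don't dominate the decomposition.
    med = np.median(lightcurves, axis=1, keepdims=True)
    norm = lightcurves / med - 1.0
    norm /= np.std(norm, axis=1, keepdims=True)
    # Rows of vt are orthonormal time-series components, ordered by how
    # much of the shared variance they explain.
    _, _, vt = np.linalg.svd(norm, full_matrices=False)
    return vt[:n_trends]

def detrend(flux, trends):
    """Least-squares fit the basis trends (plus a constant) to one light
    curve and subtract the fitted systematic component."""
    design = np.vstack([trends, np.ones_like(flux)]).T
    coeffs, *_ = np.linalg.lstsq(design, flux, rcond=None)
    systematics = design[:, :-1] @ coeffs[:-1]
    return flux - systematics
```

In practice we'd also need some outlier rejection (or iteration), so that genuinely variable sources don't leak their astrophysical signal into the trend basis.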
I was about to post a question about figuring out where GALEX was pointing at any given time, so I could determine which sources were on the detector and analyze them for systematics. However, since this might just be the first of many questions related to implementing this relatively complicated scheme, I figured it would be wise to get on the same page with everyone about whether it's a worthwhile pursuit. It seems worthwhile to me, since I fear there may be systematics, not dealt with by the deadtime corrections and response maps, that it could catch. But what do others think? Are there any compelling reasons not to try implementing this?