kirxkirx / vast

Variability Search Toolkit (VaST)
http://scan.sai.msu.ru/vast/
GNU General Public License v3.0

ms function in util/ccd #10

Closed azim7091 closed 4 years ago

azim7091 commented 4 years ago

Dear Kirill,

Thanks for such a fantastic code. I am trying to calibrate my images before running VaST on them, and to do so I use the ms, md, and mk functions in the util/ccd directory. There is just one problem with the calibration. Sometimes there are no darks and flat darks taken separately (I mean with the exposure times of the light and flat frames, respectively), and in these cases I need to account for the exposure time differences between the light/flat and dark images to get the best calibration (see the formula here: http://www.bu.edu/astronomy/wp-assets/script-files/buas/oldsite/astrophotography/flat.htm).

There is no problem with subtracting bias frames (exposure time = 0) using the current ms function, but to subtract dark frames one should account for the exposures. Would it be possible to add a new function that reads the exposure times from the image headers and uses them (with a linear assumption) in the subtraction (like the formula at the link above)? Or to improve the current ms function so that it acts differently for bias subtraction and non-bias frame subtraction? I think this could improve the quality of the calibrated images significantly.
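The scaling I have in mind (the formula from the link above) can be sketched like this; the function and variable names are just an illustration in plain NumPy, not VaST code:

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Scale a master dark to a different exposure time, assuming the
    dark current grows linearly with exposure:

        scaled = bias + (dark - bias) * (t_light / t_dark)
    """
    thermal = master_dark - master_bias              # thermal signal only
    return master_bias + thermal * (t_light / t_dark)

# Toy example: a 60 s master dark scaled down for a 30 s light frame.
bias = np.full((4, 4), 100.0)                        # bias level ~100 counts
dark60 = bias + 20.0                                 # 20 counts of dark current in 60 s
dark30 = scale_dark(dark60, bias, t_dark=60.0, t_light=30.0)
print(dark30[0, 0])                                  # 100 + 20 * 0.5 = 110.0
```

The exposure times themselves would come from the FITS headers of the dark and light frames.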

Thanks Azim

kirxkirx commented 4 years ago

Dear Azim,

Thank you for this message!

The thing is that I have personal (and possibly not well-justified) reservations regarding the MaxIM DL style dark scaling and the observing strategy it encourages (fewer darks and more bias frames).

In my view, the best calibration strategy is to have a set of dark frames (median stacked with mk) for every exposure time used for light frames during the night, including flat fields. The median-stacked dark frames for a given exposure are subtracted from the light frames taken with that exposure. The dark-subtracted flat-field frames are then median stacked and used to calibrate the science images. This way bias frames are not needed at all (the bias is included in every dark). The advantage is the simplicity of the procedure. I can wrap my head around what's going on: here is a dark frame with a certain number of counts in a typical pixel; the calibration uncertainty (introduced by dark subtraction) in a pixel would be the square root of that count, and actually lower than that since I stack dark frames... kind of simple. The disadvantage is that if many different exposure times were used during the night, collecting dark frames for each exposure time may become annoying. It's even worse if very long exposure times are used (like thousands of seconds).
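Sketching this matched-exposure strategy in plain NumPy (the names are illustrative only; the real mk/ms/md are C utilities in util/ccd):

```python
import numpy as np

def median_stack(frames):
    """Median-combine a list of frames (what mk does)."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, darks_same_exp, flats, darks_flat_exp):
    """Matched-exposure calibration: no separate bias frames needed,
    because the bias level is contained in every dark frame."""
    master_dark = median_stack(darks_same_exp)       # darks matching the light exposure
    master_flat_dark = median_stack(darks_flat_exp)  # darks matching the flat exposure
    flat = median_stack([f - master_flat_dark for f in flats])
    flat /= np.mean(flat)                            # normalize flat to unit mean
    return (light - master_dark) / flat
```

With synthetic frames built from a known vignetting pattern plus a constant dark level, this recovers the input sky values exactly.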

Of course, twilight flats are typically taken with a number of different exposures. While normally I would go to the trouble of collecting a set of dark frames for each exposure time I've used for flats, a possible shortcut here is to take a set of bias frames and subtract the median-stacked bias from the flat images (instead of subtracting an actual dark frame with the correct exposure time). For many cameras the dark current is negligible if the exposure time is only a couple of seconds, so a dark frame taken with such a short exposure is almost indistinguishable from a bias frame. If bias frames are unavailable, stacking flat-field images without subtracting a dark or bias frame is not a disaster, as the typical counts on bias/dark frames (100-1000) are so much lower than the typical counts desired on flat fields (20000-40000) that they don't add much noise... except for the hot pixels, which have much higher counts.
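To put rough numbers on the "not a disaster" claim (toy figures of my own, not measurements): leaving a ~300-count bias in a ~30000-count flat perturbs the normalized flat only where it deviates from its mean, and by well under a percent:

```python
import numpy as np

S, b = 30000.0, 300.0                     # flat signal and leftover bias (toy numbers)
vign = np.linspace(0.8, 1.2, 101)         # vignetting profile with unit mean
exact = vign / vign.mean()                # flat normalized after proper bias removal
sloppy = (S * vign + b) / (S * vign + b).mean()  # bias never subtracted
err = np.max(np.abs(sloppy - exact) / exact)
print(f"max relative flat error: {err:.4f}")       # ~0.0025, i.e. 0.25%
```

Hot pixels break this argument, of course, since they sit far above the typical bias level.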

My reservation about scaling a dark frame to a different exposure is that I don't have a feel for how much noise is introduced by the scaling. It is probably some fraction (depending on the dark/light exposure ratio) of the square root of the difference between the (stacked) dark and bias frames... so it actually should be small...
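A back-of-the-envelope version of that estimate (my own simplification, not a rigorous derivation): scaling the thermal signal by a factor k = t_light / t_dark scales its noise by k as well, so for Poisson-distributed dark current the extra per-pixel noise is roughly k * sqrt(D / N), where D is the thermal counts in a single dark and N is the number of stacked darks (treating the median stack like a mean stack):

```python
import math

def scaled_dark_noise(thermal_counts, n_darks, t_light, t_dark):
    """Approximate extra per-pixel noise from scaling a stacked dark.

    Assumes Poisson noise on the thermal signal and treats the median
    stack like a mean stack -- close enough for an order-of-magnitude
    estimate."""
    k = t_light / t_dark
    return k * math.sqrt(thermal_counts / n_darks)

# Example: 200 thermal counts per dark, 9 stacked darks, 60 s dark
# scaled down to a 30 s light frame.
print(round(scaled_dark_noise(200, 9, 30, 60), 2))   # 2.36 counts
```

So for typical numbers the extra noise is indeed only a couple of counts per pixel.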

Anyway, I totally agree that a procedure implementing the "bias + scaled dark" calibration strategy (as an alternative to the "just dark" strategy) needs to be implemented in VaST, so that images taken with this strategy in mind, which have no matching-exposure darks associated with them, can be properly processed.

azim7091 commented 4 years ago

Dear Kirill,

Thanks for your explanations. Actually, I have used this formula myself (with VaST):

[Light - mk(Dark)] md [mk(Flat - mk(Dark))]

where mk means median combine and md means division. The exposure time of the light frames is 25 s, of the darks 25 s, and of the flats 5 s. I have also used MaxIm DL to compare the output results.

The results are very similar in appearance, but the pixel values are not the same; the difference is about 100 counts per pixel. I thought it might be the result of the different exposure times, but the exposures alone cannot produce such a difference. I do not know where it comes from.

kirxkirx commented 4 years ago

Right, the trick here is to use a second set of dark frames taken with a 5 s exposure and subtract them from the 5 s flats. Or use bias frames to subtract from the flats as the next-best thing (5 s darks should be very similar to bias frames thanks to the short exposure).

Anyway, I cannot guarantee that the numerical output will be exactly the same as for MaxIm DL: md divides the input image by the flat-field value normalized by the mean value of the flat field, then rounds the result to the nearest integer. I'm not sure MaxIm DL does exactly the same thing. But subtle differences in how the flat fielding is handled should not result in a noticeable difference in the photometry.
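If it helps to see where integer-level differences can come from, here is a sketch of the normalize-then-round scheme described above (not VaST's actual md code, which is a C utility in util/ccd, just the idea in NumPy):

```python
import numpy as np

def md_like_division(image, flat):
    """Divide by the flat normalized to its mean, then round to the
    nearest integer (the scheme described above)."""
    norm_flat = flat / np.mean(flat)
    return np.rint(image / norm_flat).astype(np.int32)

def float_division(image, flat):
    """The same operation without the final rounding."""
    return image / (flat / np.mean(flat))

img = np.array([[10000.0, 10001.0], [9999.0, 10002.0]])
flat = np.array([[0.98, 1.01], [1.00, 1.01]])
print(md_like_division(img, flat) - float_division(img, flat))
# The rounding changes each pixel by at most 0.5 counts: far too small
# to matter for photometry, but enough for two programs that round (or
# normalize) differently to disagree pixel by pixel.
```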

azim7091 commented 4 years ago

Thanks. Yes, it seems this difference comes from the flat division, and it should not affect the final photometry.