foxsi / foxsi-smex

Tools for scientists to investigate the capabilities of FOXSI
MIT License

Add a routine to accommodate spacecraft pointing motion across detector #44

Open ehsteve opened 8 years ago

ehsteve commented 8 years ago

Need at least a routine to accommodate spacecraft pointing motion.

LinErinG commented 8 years ago

@ehsteve @ayshih This isn't titled well. The imaging process isn't broken; it just doesn't include one of the more complicated imaging effects. To use another example, if one wanted an issue to "nail down attenuators," they would not title it "fix spectroscopy process."

Implementing this is relatively straightforward: apply one more convolution (a "PSF" representing pointing jitter) after the optics PSF convolution. Interpolating to finer pixels beforehand is trivial. The main thing holding us back is that we don't know the scale or frequency of the jitter.
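A minimal sketch of that extra convolution step, assuming numpy/scipy; the fine-pixel scale and jitter width below are placeholders, since (as noted) the actual jitter amplitude is unknown:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder values -- not measured numbers.
PIXEL_ARCSEC = 0.5          # assumed interpolated fine-pixel size
JITTER_FWHM_ARCSEC = 2.0    # assumed jitter scale; the real value is unknown

def apply_jitter_blur(image, pixel_arcsec=PIXEL_ARCSEC,
                      jitter_fwhm_arcsec=JITTER_FWHM_ARCSEC):
    """Convolve an already-PSF-convolved image with a Gaussian 'jitter PSF'."""
    sigma_pix = (jitter_fwhm_arcsec / 2.355) / pixel_arcsec
    return gaussian_filter(image, sigma=sigma_pix)

# The blur spreads flux but conserves it (source well inside the array).
img = np.zeros((64, 64))
img[32, 32] = 1.0
blurred = apply_jitter_blur(img)
```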

If the concern is not jitter but instead slow thermal motions that spread the flux on adjacent pixels but which will be "removed" from the image via aspect knowledge, it's my sense that these are not significant on an imaging or spectroscopy time scale (i.e. ten seconds). Solar drift, if we're not tracking the Sun, will be significant on the order of a few seconds, but it's not hard to add that smearing. Again, I think we need much more firm information about what motions we expect before implementing it in our software. For reference, the NuSIM software, which is an extensive, robust application for NuSTAR purposes, does not implement these effects.

LinErinG commented 8 years ago

Based on my above comment, I changed the issue title to better reflect the issue.

ayshih commented 8 years ago

Well, neither title really does it for me. =P

We need to enhance our imaging chain to simulate the variety of effects that will degrade our angular resolution past the PSF, including:

- boom dynamics (in principle removed by the aspect system)
- measurement uncertainties in the SPS and in the metrology system
- detector-pixel discretization

Since our aspect system is in principle designed to measure at a fast-enough frequency to subtract out the boom dynamics*, we expect to be dominated by the measurement uncertainties in the SPS (to determine the optics pointing) and the metrology system (to determine where the detector pixels are in the focal plane). At this stage, we can certainly model each of these as Gaussian blurs with the current estimates of performance, but I think it would behoove us to put some thought into the best places in the imaging chain to implement each blurring.
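If each measurement uncertainty is modeled as a Gaussian blur, one consideration is that independent Gaussian errors add in quadrature, so applying both blurs at the same stage of the chain is equivalent to a single combined blur. The sigma values below are placeholders, not actual SPS or metrology performance numbers:

```python
import math

# Placeholder 1-sigma uncertainties (arcsec) -- assumptions, not estimates.
SIGMA_SPS = 1.0        # optics-pointing knowledge from the SPS
SIGMA_METROLOGY = 0.5  # detector-pixel location knowledge from metrology

# Independent Gaussian errors combine in quadrature:
sigma_total = math.hypot(SIGMA_SPS, SIGMA_METROLOGY)
```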

As for detector-pixel discretization, I want to make sure that we avoid the "easy" approach of simulating this by simply discretizing a final simulated image at the detector-pixel size, because that's only appropriate for an integrating imager. Since we count individual photons and have ample pointing jitter, we will usually be noticeably better: our 3.7-arcsec pixels are effectively equivalent to a 2.5-arcsec-FWHM Gaussian blur. Put another way, we are only hurting ourselves if we show blocky images at the 3.7-arcsec level (except when we are attempting very-low-count imaging).
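The 2.5-arcsec figure follows from matching variances: a photon position randomized uniformly over a pixel of width p has standard deviation p/sqrt(12), and the equivalent Gaussian FWHM is 2*sqrt(2*ln 2) (about 2.355) times that. A quick check with the 3.7-arcsec pitch:

```python
import math

PIXEL_ARCSEC = 3.7  # detector pixel pitch from the discussion

# Uniform (top-hat) distribution of width p has sigma = p / sqrt(12).
sigma = PIXEL_ARCSEC / math.sqrt(12)
# Gaussian FWHM = 2 * sqrt(2 * ln 2) * sigma, about 2.355 * sigma.
fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma
# fwhm comes out near 2.5 arcsec, consistent with the number quoted above.
```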

*I brought up the complication of boom dynamics in a recent discussion, not because I thought it would be beyond the capabilities of our aspect system to correct for, but rather to push back against the suggestion that we could make use of averaging multiple measurements from an aspect sensor to beat down the random noise from, e.g., dark current.

ayshih commented 8 years ago

I should mention that handling the detector-pixel discretization correctly given pointing jitter would mean that it's no longer correct to add Poisson noise as the last step, as is currently done, because there is no practical choice of imaging-pixel size for which the counting statistics are uncorrelated between neighboring pixels. I don't have an estimate offhand for how much that might distort the simulations.
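One way to keep the counting statistics consistent is to work photon by photon rather than adding Poisson noise to a finished image: each photon draws its own jitter offset and then lands in exactly one detector pixel, so neighboring-pixel correlations come out automatically. A 1-D sketch with assumed numbers (the helper name and jitter value are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_list_image(expected_counts, sample_positions, jitter_sigma,
                      nbins, pixel_size):
    """Simulate counts photon by photon: each photon gets its own jitter
    offset, then is binned into one discrete detector pixel (1-D sketch)."""
    n = rng.poisson(expected_counts)
    x = sample_positions(n) + rng.normal(0.0, jitter_sigma, size=n)
    edges = np.arange(nbins + 1) * pixel_size
    counts, _ = np.histogram(x, bins=edges)
    return counts

# Point source centered in detector pixel 5 of a 10-pixel strip:
center = 5 * 3.7 + 3.7 / 2
counts = photon_list_image(
    expected_counts=1000,
    sample_positions=lambda n: np.full(n, center),
    jitter_sigma=1.5,   # assumed jitter amplitude, arcsec
    nbins=10,
    pixel_size=3.7)     # detector pixel pitch, arcsec
```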

LinErinG commented 8 years ago

@ayshih I agree with all of this. But do we know enough about the pointing jitter to actually model this? If we do have concrete information on the amplitudes and frequencies of the pointing jitter, I'd like to see it. Perhaps you could email around some information? If the frequencies are higher than our ~10 Hz aspect measurement, then we don't know the motions are happening and our image will get blurred out. If they're slower than the cadence over which we image, then they don't matter. So there's only a certain frequency range in which aspect knowledge can actually help us.

If we choose to model this in the software, here's my implementation proposal. The last step needs more thought and input.

1. After PSF convolution, apply a Gaussian blur to simulate the pointing jitter. (Call this Transform1.)
2. Discretize into our native pixel size and apply Poisson noise. This puts the random measurement factor in the actual basis in which we do the measurement (detector pixels).
3. Rebin to much smaller pixels and attempt to undo Transform1 by applying the inverse (which is under-constrained, but we leverage the fact that we know exactly what we did in Step 1). This simulates applying the pointing knowledge gleaned from aspect/metrology.

Doing something like step 2 is absolutely necessary, because we're going to measure things in terms of which pixel each photon lands in, and Poisson noise is a huge factor in understanding what our images can and can't show us. But Transform1 should contain whatever additional information we have that can help us (like pointing vs. time, depending on how complex we get with it).
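Steps 1 and 2 of that proposal might look like the following sketch, with assumed numbers for the fine-to-detector pixel ratio and the Transform1 width; the under-constrained inverse in step 3 is deliberately left as a placeholder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
BLOCK = 8                # fine pixels per detector pixel (assumption)
JITTER_SIGMA_FINE = 4.0  # Transform1 width in fine pixels (assumption)

def simulate(flux_fine):
    """Sketch of the proposed chain: blur -> discretize -> Poisson."""
    # Step 1 (Transform1): Gaussian blur standing in for pointing jitter.
    t1 = gaussian_filter(flux_fine, sigma=JITTER_SIGMA_FINE)
    # Step 2: rebin to detector pixels and draw Poisson counts there,
    # so the noise lives in the basis we actually measure in.
    h, w = t1.shape
    det = t1.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).sum(axis=(1, 3))
    counts = rng.poisson(det)
    # Step 3 (not implemented): rebin counts to finer pixels and apply an
    # under-constrained inverse of Transform1 using the known aspect
    # solution -- this is the part that needs more thought.
    return counts

flux = np.zeros((64, 64))
flux[32, 32] = 5000.0
counts = simulate(flux)   # 8x8 detector-pixel count image
```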

But a parting thought is that this all may be a moot point while we are not including charge sharing, which will be a dominant factor in how the images look! For the short term, I will just keep applying the 2-pixel smooth so that we don't see the sharp pixel edges.