lucabaldini opened this issue 1 year ago
From Fabio:
I'm not sure whether this is just a hiccup of low statistics or spectral shape, but it seems that the background-subtracted MDP may worsen with weights at low energy.
Steps to reproduce with the attached pcubes (which refer to the latest magnetar, for which we have a source not much brighter than the background):

from ixpeobssim.binning.polarization import xBinnedPolarizationCube

def pcube(det, reg, w):
    file = f'ixpe02002899_det{det}_evt2_v02_rej_csolbar{reg}_{w}_PC_EBINS.fits'
    return xBinnedPolarizationCube.from_file_list([file])
Print the MDP_99 in the first energy bin (2.0-3.5 keV) for the source alone:

for _d in [1, 2, 3]:
    _uwgt = pcube(det=_d, reg='src', w='uwgt').MDP_99[0]
    _wgt = pcube(det=_d, reg='src', w='wgt').MDP_99[0]
    print(f'*** det{_d}_src: uwgt {_uwgt:.4f} vs wgt {_wgt:.4f}')
*** det1_src: uwgt 0.4565 vs wgt 0.4359
*** det2_src: uwgt 0.4759 vs wgt 0.4519
*** det3_src: uwgt 0.4915 vs wgt 0.4728
So, for the source alone, the MDP improves with weights.
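For reference, a minimal sketch (assuming the weighted MDP formula from Kislat et al. 2015, not the ixpeobssim implementation): the weights reduce the effective number of counts N_eff = (sum w)^2 / sum(w^2), but can more than compensate by raising the effective modulation factor, which is presumably what the numbers above reflect.

import numpy as np

# Sketch of the weighted MDP_99 (assumed form, following Kislat et al. 2015):
# MDP_99 ~ 4.29 / (mu * sqrt(N_eff)), with N_eff = (sum w)**2 / sum(w**2).
def mdp99(mu, weights):
    weights = np.asarray(weights, dtype=float)
    n_eff = weights.sum()**2 / (weights**2).sum()
    return 4.29 / (mu * np.sqrt(n_eff))

# Toy comparison (purely illustrative numbers): equal weights vs. a spread of
# weights paired with a slightly higher effective modulation factor.
rng = np.random.default_rng(0)
w = rng.uniform(0.2, 1.0, size=100000)
print(mdp99(mu=0.30, weights=np.ones_like(w)))  # unweighted
print(mdp99(mu=0.35, weights=w))                # weighted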
If we now subtract the background:
area_scale = 0.000314159265358979 / 0.01248783079801943

for _d in [1, 2, 3]:
    _uwgt_src = pcube(det=_d, reg='src', w='uwgt')
    _uwgt_bkg = pcube(det=_d, reg='bkg', w='uwgt')
    _uwgt_bkg *= area_scale
    _uwgt_src -= _uwgt_bkg
    _wgt_src = pcube(det=_d, reg='src', w='wgt')
    _wgt_bkg = pcube(det=_d, reg='bkg', w='wgt')
    _wgt_bkg *= area_scale
    _wgt_src -= _wgt_bkg
    print(f'*** det{_d}_src: {_uwgt_src.MDP_99[0]:.4f} vs {_wgt_src.MDP_99[0]:.4f}')
this prints:
*** det1_src: 0.5455 vs 0.5519
*** det2_src: 0.5840 vs 0.5758
*** det3_src: 0.6111 vs 0.6171
and now the weighted MDP improves only for det2.
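As an aside, a possible reading of area_scale above (an assumption, since the regions are not described here): it is the ratio of the source-region area to the background-region area, and the numerator is exactly pi * 0.01**2, i.e. a circular source region of radius 0.01 in whatever units the regions are defined in.

import math

# Hypothetical reconstruction of area_scale (assumption, not stated above):
# source-region area over background-region area, used to rescale the
# background cube before subtracting it from the source cube.
src_radius = 0.01                      # circular source region
src_area = math.pi * src_radius ** 2   # = 0.000314159265358979
bkg_area = 0.01248783079801943         # background-region area, same units
area_scale = src_area / bkg_area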
Cyg X-3 (which is much brighter than the background) works as one would naively expect (pcube here).
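To make the source/background interplay explicit, a minimal sketch (standard counting-statistics MDP formula, not the ixpeobssim implementation) of how the background counts in the source region enter the MDP; when the background term is comparable to the source term, as for the magnetar above, the weighted vs. unweighted comparison is much more delicate than in the Cyg X-3 case.

import numpy as np

# Standard (unweighted) background-limited MDP_99 sketch: src_counts are the
# source counts in the source region, bkg_counts the background counts
# expected in the same region, mu the effective modulation factor.
def mdp99_with_bkg(mu, src_counts, bkg_counts):
    return 4.29 * np.sqrt(src_counts + bkg_counts) / (mu * src_counts)

# Toy numbers (purely illustrative): background comparable to the source
# vs. background negligible with respect to the source.
print(mdp99_with_bkg(mu=0.3, src_counts=5.e3, bkg_counts=4.e3))  # faint source
print(mdp99_with_bkg(mu=0.3, src_counts=1.e6, bkg_counts=4.e3))  # bright source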
And from Adam:
Hi Steve,
I tried this out today, but my results aren't good. When I run with weightings, I find that the MDP does indeed reduce for each DU compared with the unweighted equivalent. I then run weighted pcubes for source and background regions, combine the DUs and subtract the background. The resulting "background subtracted" result also has lower MDP than its unweighted equivalent. However, the polarisation degree itself is now a little lower, such that the overall signal to noise is slightly lower than it was for the unweighted analysis. I've attached contour plots for the weighted and unweighted analyses (contours are 1, 2 and 3 sigma for two parameters of interest; i.e. the outer contour is 99%). They are indistinguishable from one another (wpcube is with weightings; cpcube is without).
My exact numbers after combining DUs and subtracting background are:
No weights                 With weights
PD(%)  = [3.1359463]       PD(%)  = [2.7932966]
dPD(%) = [1.5182124]       dPD(%) = [1.3688865]
SNR    = 2.06555           SNR    = 2.0405611
MDP(%) = [4.6054792]       MDP(%) = [4.152501]
As an example of the raw pcube output (i.e. no code written by me has been used), here is the equivalent for the DU1 source region only:
No weights                 With weights
PD(%)  = 1.480672          PD(%)  = 1.603723
dPD(%) = 2.511624          dPD(%) = 2.27723
SNR    = 0.5895            SNR    = 0.70424
MDP(%) = 7.618024          MDP(%) = 6.90714
Something similar happens for each DU: the SNR goes up, MDP down, everything as you would want.
So I’m not sure what is going on. It seems to me that pcube is working fine, and the problem is something I’m doing downstream. Except I also checked that. My code gives exactly the same numbers for source region only as come out of xpbinview. So I know I’m combining DUs correctly. Maybe I’m subtracting background incorrectly, but I’m just subtracting Stokes parameters scaled by the area of the source and background regions.
So maybe it's best to leave the paper as it is for now whilst I get to the bottom of this, given how close to submission we are. I'm sure lots of other people have background-subtraction code, so maybe we should see what others get, to track down whether there's a bug in my routines.
As I say, as far as I can tell, pcube itself is working like a dream, but I just wanted to flag that something strange is going on that I can’t understand.
Cheers
Adam
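For clarity, a minimal sketch (not Adam's actual code, and not the ixpeobssim API) of the downstream procedure described above: sum the Stokes parameters over the DUs, subtract the area-scaled background, and form the polarization degree from the normalized Stokes parameters.

import numpy as np

# Hypothetical helper: I_src, Q_src, U_src and I_bkg, Q_bkg, U_bkg are
# sequences of per-DU Stokes sums, area_scale is the source-to-background
# region-area ratio and mu the effective modulation factor.
def subtract_and_combine(I_src, Q_src, U_src, I_bkg, Q_bkg, U_bkg, area_scale, mu):
    I = np.sum(I_src) - area_scale * np.sum(I_bkg)
    Q = np.sum(Q_src) - area_scale * np.sum(Q_bkg)
    U = np.sum(U_src) - area_scale * np.sum(U_bkg)
    qn, un = Q / I, U / I
    pd = np.sqrt(qn**2 + un**2) / mu   # polarization degree
    return I, Q, U, pd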
@Fabio Before we resort to running an ensemble of Geant4 simulations with representative source and background spectra (which will take a lot of time), do you have a sense of whether I might be doing something wrong in subtracting the background? If you sum the Stokes parameters by hand, assigning negative weights to the background events (which is what Fabian originally envisioned), do you get the same answer as the background-subtracted pcube?
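Concretely, something like the sketch below (a plain-numpy illustration, not the ixpeobssim machinery), where the background-region events simply enter the Stokes sums with negative, area-scaled weights.

import numpy as np

# Illustrative helper: q and u are the per-event Stokes parameters
# (2*cos(2*phi), 2*sin(2*phi)), w the event weights, is_bkg a boolean mask
# flagging background-region events, area_scale the source-to-background
# region-area ratio.
def stokes_sums(q, u, w, is_bkg, area_scale):
    sign = np.where(is_bkg, -area_scale, 1.0)
    I = np.sum(sign * w)
    Q = np.sum(sign * w * q)
    U = np.sum(sign * w * u)
    return I, Q, U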
See also the message from Adam, who is reporting a different (but maybe related) potential issue. Is there any way to make a sensible comparison with XSPEC and see whether weights improve things in that case, as Adam seems to suggest?
I am trying to wrap my head around whether this is just a bug in ixpeobssim or something counter-intuitive in how a weighted analysis interacts with the background, and I am not sure...
@lucabaldini I noted something that I can't understand at lines 496-497 of kislat2015.py, in xStokesAnalysis.normalized_stokes_parameters:
QN_ERR[mask] = QN[mask] * numpy.sqrt(Q_REL_ERR**2. + I_REL_ERR**2.)
UN_ERR[mask] = UN[mask] * numpy.sqrt(U_REL_ERR**2. + I_REL_ERR**2.)
This means that, when QN or UN is negative, its error is negative as well. Is this correct? This function is called when one subtracts the background; I'm wondering whether it is related to the current issue.
The behavior reported by Adam looks like what I would expect: in his case the best estimate decreases, which can be due to statistical fluctuations, but the uncertainties decrease as expected.
Interesting---this is something that escaped me. I think we definitely want to fix this along the lines of
QN_ERR[mask] = abs(QN[mask]) * numpy.sqrt(Q_REL_ERR**2. + I_REL_ERR**2.)
UN_ERR[mask] = abs(UN[mask]) * numpy.sqrt(U_REL_ERR**2. + I_REL_ERR**2.)
but I would be surprised if this changed something downstream, as errors are always summed in quadrature.
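Just to spell out why I do not expect a downstream change, a trivial check (illustration only):

import numpy as np

# The sign of QN_ERR / UN_ERR is irrelevant for anything that adds errors
# in quadrature, e.g. the error on the polarization degree.
qn_err = -0.03   # negative error, as produced before the abs() fix when QN < 0
un_err = 0.02
print(np.hypot(qn_err, un_err))   # identical to np.hypot(abs(qn_err), un_err)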
Would you mind adding the abs() and rerunning your cubes to see if this changes anything?
I'll go ahead and create a pull request to fix this anyway.
I noted something that I can't understand at lines 496-497 of kislat2015.py in xStokesAnalysis.normalized_stokes_parameters:

QN_ERR[mask] = QN[mask] * numpy.sqrt(Q_REL_ERR**2. + I_REL_ERR**2.)
UN_ERR[mask] = UN[mask] * numpy.sqrt(U_REL_ERR**2. + I_REL_ERR**2.)

This means that, when QN or UN is negative, its error is negative as well.
This should be fixed in version 30.3.1, see https://github.com/lucabaldini/ixpeobssim/pull/706
I reran the code with the fix. MDP_99 does not change (as expected), while QN_ERR and UN_ERR are unchanged in absolute value but now positive (also as expected).
I am using this issue to collect feedback on the model-independent weighted analysis with low statistics.