cyriltasse opened 7 years ago
Yeah, it is allowed to fit Gaussians smaller than the restoring beam, so this can happen (especially with low-S/N sources). You can freeze the size of the Gaussians to the beam with `fix_to_beam = True` if you know all sources are point sources.
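A minimal sketch of that option (the image and catalog filenames here are placeholders):

```python
import bdsf

# Force every fitted Gaussian to have the shape of the restoring beam;
# appropriate when the field contains only unresolved (point) sources.
img = bdsf.process_image('myimage.fits', fix_to_beam=True)

# Write out the resulting Gaussian catalog (format and filename are examples).
img.write_catalog(outfile='myimage_cat.fits', format='fits', catalog_type='gaul')
```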
But isn't pybdsf fitting for `Gaussian(Total_flux_deconv, Sig_maj_deconv, Sig_min_deconv) * restoring_beam`, where `*` is a convolution? In that case, how is it even possible? Or maybe it's fitting for `Gaussian(Total_flux, Sig_maj, Sig_min)` and then, given `restoring_beam`, computing `Total_flux_deconv`, `Sig_maj_deconv`, `Sig_min_deconv`, and `Peak_flux_deconv`, thresholding `Sig_maj_deconv` and `Sig_min_deconv`? I ask because I don't get consistent values between `Total_flux/Peak_flux` and `Sig_maj_deconv*Sig_min_deconv/(restoring_beam**2)`. I'm very confused...
Yes, the second method is used: it fits the source, then calculates the total flux as `peak*size[0]*size[1]/(bm_pix[0]*bm_pix[1])`, where `bm_pix` is the restoring beam size and `size` is given by `Sig_maj` and `Sig_min` (i.e., not the deconvolved size).
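To make that relation concrete, a small numeric sketch (all numbers are made up):

```python
# Fitted Gaussian FWHMs in pixels (Sig_maj, Sig_min), i.e. the
# convolved sizes, not the deconvolved ones.
size = (5.2, 4.1)

# Restoring beam FWHMs in pixels (bm_maj, bm_min).
bm_pix = (5.0, 4.0)

peak = 1.0  # peak flux in Jy/beam

# Total flux follows from the ratio of the fitted area to the beam area.
total = peak * size[0] * size[1] / (bm_pix[0] * bm_pix[1])
print(total)  # 1.066: total > peak because the fit is larger than the beam

# Conversely, if the fitted size comes out smaller than the beam
# (which is allowed), the total flux is smaller than the peak flux.
small = peak * 4.5 * 3.8 / (bm_pix[0] * bm_pix[1])
print(small)  # 0.855
```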
OK, sorry, I made a mistake above: I meant `Sig_maj*Sig_min/(restoring_beam**2)`, and by `Sig` one should understand the FWHM. I'll double-check, but I wasn't finding `total = peak*size[0]*size[1]/(bm_pix[0]*bm_pix[1])`.
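For reference, this is the kind of consistency check being described, reading a PyBDSF catalog with astropy (the filename and beam values are placeholders, and the column names assume the standard PyBDSF catalog output, where `Maj`/`Min` are the fitted FWHMs in degrees):

```python
from astropy.table import Table

# Load a PyBDSF FITS catalog (filename is a placeholder).
cat = Table.read('myimage_cat.fits')

# Ratio of total to peak flux from the catalog fluxes...
flux_ratio = cat['Total_flux'] / cat['Peak_flux']

# ...and from the fitted (convolved) sizes relative to the beam.
# Example beam FWHMs, converted from arcsec to degrees.
bmaj, bmin = 6.0 / 3600.0, 5.0 / 3600.0
size_ratio = (cat['Maj'] * cat['Min']) / (bmaj * bmin)

# These two ratios should agree when the convolved sizes are used;
# substituting the deconvolved sizes (DC_Maj, DC_Min) breaks the
# agreement, which is the inconsistency described above.
print(flux_ratio / size_ratio)
```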
Simulating an image with a point source, I was very surprised to see that the estimated peak flux could be greater than the total flux... Is that expected behaviour?
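One way to see how this can happen: inject a beam-sized Gaussian into noise and fit it; the noisy fit can come out narrower than the beam, so the implied total flux drops below the peak. A self-contained sketch of that effect using scipy rather than PyBDSF itself (all parameters are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

# A point source: a circular Gaussian at exactly the beam size
# (FWHM 5 pixels), peak 1.0 Jy/beam, plus noise at S/N ~ 10.
fwhm_beam = 5.0
sig_beam = fwhm_beam / 2.3548
y, x = np.mgrid[0:64, 0:64]
model = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * sig_beam ** 2))
image = model + rng.normal(0.0, 0.1, model.shape)

def gauss2d(xy, peak, x0, y0, sig):
    """Circular 2-D Gaussian evaluated on flattened coordinates."""
    xx, yy = xy
    return peak * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sig ** 2))

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), image.ravel(),
                    p0=(1.0, 32.0, 32.0, sig_beam))
peak_fit, sig_fit = popt[0], popt[3]

# Total flux implied by the ratio of fitted area to beam area:
# whenever the noisy fit comes out narrower than the beam,
# the total flux is smaller than the peak flux.
total_fit = peak_fit * (sig_fit / sig_beam) ** 2
print(peak_fit, total_fit)
```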