jmangum opened this issue 6 years ago
New version has this:

```
In [2]: tbl['ampguess', 'peak', 'amplitude']
Out[2]:
<Table length=10>
   ampguess          peak         amplitude
    float64         float64        float64
--------------- --------------- ---------------
   2.1532895565   4.35684013367   2.36861851215
 0.208429336548  0.472078025341  0.187586402893
  1.99721932411   3.98094463348   2.18141410208
  1.10862857103   2.46066641808   0.99776571393
0.0367745868862 0.0618809685111 0.0330971281976
 0.666082799435  0.994039654732   0.71930501079
  1.80831575394   3.64649200439   1.98914732933
 0.514871001244   1.06097972393  0.463383901119
            nan             nan             nan
  3.05733430386   5.95134210587   3.16472439548
```
`ampguess` is the peak intensity in the map minus the background. Is this what you need?
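(As an editorial aside: given those columns, a screening step might look like the minimal sketch below. Nothing here is part of gaussfit_catalog; comparing `ampguess` to `amplitude` follows the heuristic discussed later in this thread, and the 25% tolerance is arbitrary and would need tuning against known-bad fits.)

```python
import numpy as np

# `tbl` is the astropy Table shown above. Compare the background-
# subtracted map peak (`ampguess`) to the fitted Gaussian amplitude;
# a ratio far from 1 hints that the fit landed on a different peak.
# The 25% tolerance is arbitrary and shown only for illustration.
ratio = tbl['ampguess'] / tbl['amplitude']
suspect = ~np.isfinite(ratio) | (np.abs(ratio - 1) > 0.25)
print(tbl[suspect])
```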
Thanks Adam. I am not sure if this is what you were referring to back in October when we discussed how to identify bad gaussian fits. Here is the relevant part of the email thread on this subject:
> Hi Jeff,
>
> There really is no substitute for by-eye checking unless you want to spend time developing heuristic tests to quantify it.
>
> In that image, the top left is the original data, top right is the model fit, bottom left is the residual, and bottom right is the original image again. The white circles are the centroid and (I think...) 1, 2, 3 sigma contours of the model. The red ellipse is the beam positioned at exactly the source center.
>
> One heuristic test I can suggest, based on this image, is to compare the peak intensity within the central beam to the peak intensity of the model. If they are far off, it means the model has identified a different peak.
> On Wed, Oct 25, 2017 at 5:47 PM, Jeff Mangum jmangum@nrao.edu wrote:
>
>> Hi Adam,
>>
>> I am working through the results from gaussfit_catalog on our NGC253 and NGC4945 ALMA data and doing LTE column density calculations on the derived integrated intensities. Barring actually looking at each fit, I was wondering if there is a good way to identify poor fits. As an example of how just looking at the output .ipac file can lead one astray, attached is a fit to the H2CO 3(03)-2(02) integrated intensity image (from CubeLineMoment) and the resultant gaussfit_catalog output png for one of the sources in the catalog (source number 6). Looking at the .ipac file, other than the fact that the deconvolution failed (which it often does), and that the uncertainties on the fitted centroid and FWHM are perhaps a bit on the large side, I don't see any serious red flags for the fit to source number 6. Looking at the png file, though, suggests that this fit is just plain bogus and should be thrown out, as there is no peak for this source in this line. Is there something in the .ipac file that would have tipped me off to this bad fit?
This is an example of a good fit:
This is source 6, a questionable-to-bad fit:
And here's the table:
```
|Name| amplitude| center_x| center_y| fwhm_major| fwhm_minor| pa|deconv_fwhm_major|deconv_fwhm_minor|deconv_pa| chi2| chi2_n| e_amplitude| e_center_x| e_center_y| e_fwhm_major| e_fwhm_minor| e_pa| ampguess| peak|success|
|char| double| double| double| double| double| double| double| double| double| double| double| double| double| double| double| double| double| double| double| char|
| | | deg| deg| arcsec| arcsec| deg| | | | | | | deg| deg| arcsec| arcsec| deg| | | |
|null| null| null| null| null| null| null| null| null| null| null| null| null| null| null| null| null| null| null| null| null|
1 0.0330971281976 11.8876535628 -25.2906894676 1.55463655872 0.969706850759 90.873796339 nan nan nan 4.79905500803 0.0151389747887 0.0128356433253 48.6772034601 85.3686348873 25.0350717427 11.6029546133 18.5427479072 0.0367745868862 0.0618809685111 True
2 0.187586402893 11.8845808051 -25.2886822473 1.41931315381 0.739570465643 153.825261167 nan nan nan 167.147362639 0.528947350124 0.0884778952978 41.817737274 28.2966091056 11.8282734925 5.40346837008 7.71721248973 0.208429336548 0.472078025341 True
3 nan 11.8830655703 -25.2913971382 1.51676917076 0.871051311493 82.154258728 nan nan nan 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 nan nan False
4 2.18141410208 11.8867727741 -25.2892229739 1.32984523682 0.685733112691 163.383251059 nan nan nan 1068.98125104 3.43723874933 0.238630079435 0.606727905731 0.347243722092 0.163079245224 0.073649784648 0.1103204833 1.99721932411 3.98094463348 True
5 2.36861851215 11.887383827 -25.2888052687 1.22941980177 0.736365378632 165.96163434 nan nan nan 1163.57937468 3.69390277675 0.247884593802 0.518989848134 0.327784712474 0.13341493468 0.0734636030933 0.13424605502 2.1532895565 4.35684013367 True
6 0.99776571393 11.8882214716 -25.2880535066 1.38008029732 0.65328848362 147.436840736 nan nan nan 1658.91379361 5.23316654136 0.331369593078 68.7417755404 49.3293709719 20.5264305465 7.03302070636 9.0050297043 1.10862857103 2.46066641808 True
7 3.16472439548 11.8888129301 -25.2877294591 1.28099562938 0.65328848362 174.79273498 nan nan nan 10207.0030293 32.1987477264 22196101.1986 5948133.94914 565170.38285 8814786.47171 6970.14151037 52977.896397 3.05733430386 5.95134210587 True
8 1.98914732933 11.8901868618 -25.2869735305 1.22993147125 0.749323884691 167.798818787 nan nan nan 628.464484344 1.99512534712 0.180656570325 0.45578977783 0.28701698683 0.116974410241 0.0651900883286 0.121538734599 1.80831575394 3.64649200439 True
9 0.71930501079 11.8914471746 -25.2864124987 1.12072516078 0.65328848362 165.298965782 nan nan nan 123.497445356 0.397097895035 0.125986289619 0.808753533942 0.314707495538 0.203430721324 0.0567815465898 0.0788992689132 0.666082799435 0.994039654732 True
10 0.463383901119 11.892284475 -25.2866275582 1.32758941291 0.672688199193 143.586419521 nan nan nan 898.701680423 2.83502107389 0.223141697036 67.6208185512 58.4659221356 20.5635046564 9.48179709038 14.3521478867 0.514871001244 1.06097972393 True
```
So I guess the answer is 'no', there's nothing obvious in place. The chi^2 is a bit of a hint that the fit wasn't great, but sources 4 and 5 have similar chi^2 values yet excellent fits.
Wanted to pump some life into this feature request for gaussfit_catalog. Extracting an idea from @keflavich above regarding how to distinguish between good and bad gaussian fits when looking only at the .ipac file output:
"One heuristic test I can suggest, based on this image, is to compare the peak intensity within the central beam to the peak intensity of the model. If they are far off, it means the model has identified a different peak."
To better identify poor gaussian fits from gaussfit_catalog, add a comparison between the peak intensity within the central beam and the model-derived peak intensity to the output table.
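As a hedged sketch of what consuming such output could look like: the existing columns only approximate the heuristic (`ampguess` is the background-subtracted map peak, not the peak within the central beam, which is exactly why a dedicated column is being requested here). The file name, the `peak_to_model` column name, and the tolerance below are all assumptions for illustration.

```python
import numpy as np
from astropy.table import Table

# Hypothetical post-processing of a gaussfit_catalog .ipac output file
# (the file name is a placeholder).
tbl = Table.read('gaussfit_output.ipac', format='ascii.ipac')

# Approximate the heuristic with existing columns: background-subtracted
# map peak vs. fitted Gaussian amplitude. A dedicated column measuring
# the peak within the central beam would replace `ampguess` here.
tbl['peak_to_model'] = tbl['ampguess'] / tbl['amplitude']

# Arbitrary 25% tolerance; failed fits (NaN) are flagged as well.
bad = ~np.isfinite(tbl['peak_to_model']) | \
      (np.abs(tbl['peak_to_model'] - 1) > 0.25)
print(tbl['Name', 'peak_to_model', 'chi2_n', 'success'][bad])
```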