keflavich opened this issue 1 year ago
@d-l-walker I assigned this to you because you've already done the QA. But we need someone to do the reimaging still.
spw33+35 continuum reduction was done incorrectly; it used the uncorrected data column. Re-running.
Wild divergence in spw31 (HNCO) at the band edge
I am wondering whether this feature already existed before continuum subtraction. Because it appears in only one field, it would be odd if it were due to instrumental issues. Maybe it would help to make a cube without continuum subtraction?
Just the notes from earlier: this divergence in spw31 is present at a low level throughout the entire cube and in the dirty image. Setting cyclefactor=2 and "halving" the clean threshold to 18 mJy did not remove it from the cube. Looking back at spw35, though, it seems to have normal divergence, which is good.
I can see what happens on a cube without contsub next.
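For reference, the kind of tclean override being tried here might look like the sketch below. This is hedged: the MS name is one of this MOUS's EBs, the image name is a placeholder, and the real pipeline invocation carries many more parameters.

```python
# Hedged sketch of the tclean overrides discussed above; cyclefactor and
# threshold mirror the values tried in the thread, everything else is a
# placeholder, not the actual pipeline call.
tclean_overrides = dict(
    cyclefactor=2,        # later raised as high as 5 for spw35
    threshold="18mJy",    # the "halved" clean threshold tried for spw31
)
try:
    from casatasks import tclean  # only available inside a CASA 6 environment
    tclean(
        vis="uid___A002_Xfe83cd_X286_target.ms",      # one EB of this MOUS
        imagename="Sgr_A_star_sci.spw31.cube.test",   # placeholder name
        specmode="cube",
        **tclean_overrides,
    )
except ImportError:
    pass  # outside CASA, just record the intended parameters
```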
I think we concluded today that the sinusoidal features in frequency are not caused by tclean divergence but are present in the UV data.
Thanks for the clarification. There are 3 executions running consecutively. I am a bit worried this problem was there from beginning to end. I took a look at the bandpass plots (per antenna) in the weblog and they look fine to me, but maybe this feature only shows up after averaging all the data. I could also take a look at the uv-data.
Perhaps we could request a QA3?
Update on spw35: after trying cyclefactor=2 (upper right) and 3 (lower left), cyclefactor=4 (upper left) has improved the divergence, but it's still present in a couple of channels. Will try cyclefactor=5 and see how that goes next week.
cyclefactor=5 seems to have done the trick. Pull request for spw35 incoming.
Checked weblogs and didn't find anything obvious that would cause this... Indeed as @d-l-walker pointed out, there is some contamination in the Tsys spectra, but I don't think this would have caused the issue. You can also see this in the findcont stage so indeed I think this is a problem and worth sending to ALMA.
https://github.com/ACES-CMZ/reduction_ACES/issues/255#issuecomment-1513843423
I've just had a look at the actual MSes, and the issue with SPW 31 is definitely a data issue. Here's the amp vs. frequency for this SPW in plotms from EB uid___A002_Xfe83cd_X286_target.ms (the other 2 EBs are fine).
I guess we dig into this further to see if we can isolate the cause, but we should definitely submit a ticket about this.
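The plotms inspection described above could be reproduced with something like this sketch (CASA environment assumed; the SPW selection and averaging settings are my assumptions, not necessarily what was actually used):

```python
# Sketch of an amp-vs-frequency inspection of the suspect EB. The spw id,
# averaging, and color axis are illustrative assumptions.
plot_kwargs = dict(
    vis="uid___A002_Xfe83cd_X286_target.ms",
    xaxis="freq",
    yaxis="amp",
    spw="31",           # assumption: the SPW label used in the thread
    avgtime="1e8",      # heavy time averaging to bring out the ripple
    avgscan=True,
    coloraxis="corr",   # separating XX from YY is what exposed the bad data
)
try:
    from casaplotms import plotms  # ships with CASA 6
    plotms(**plot_kwargs)
except ImportError:
    pass  # outside CASA, just record the intended call
```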
Flagging correlation='YY' seems to fix the issue.
Note that this is likely throwing away plenty of good data -- all of the bad data were YY, but not all YY points were bad data.
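A minimal sketch of that flagging step, assuming the standard CASA flagdata task (the SPW restriction is my assumption; as noted above, this discards the good YY data along with the bad):

```python
# Hedged sketch: flag the YY correlation for the affected EB only.
flag_kwargs = dict(
    vis="uid___A002_Xfe83cd_X286_target.ms",
    mode="manual",
    correlation="YY",
    spw="31",            # assumption: restrict to the affected SPW
    flagbackup=True,     # keep a flag backup so this can be undone
)
try:
    from casatasks import flagdata  # only available inside CASA
    flagdata(**flag_kwargs)
except ImportError:
    pass  # outside CASA, just record the intended call
```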
Perhaps we can re-image SPW 31 with this flagged EB for the time-being?
Dirty image of SPW 31 with correlation='YY' flagged for EB uid___A002_Xfe83cd_X286_target.ms looks much more reasonable, no crazy divergence.
I'll set a full clean of this running now.
Are you using stokes='pseudoI' with tclean? By default tclean does not consider the other polarizations if one polarization is flagged... https://casadocs.readthedocs.io/en/stable/api/tt/casatasks.imaging.tclean.html#stokes
@xinglunju huh, I wasn't aware of that, thanks!
No, I'm not doing that. But the resulting image looks fine (peak intensity below) ...
I can re-run with that option enabled just to be safe.
@d-l-walker the images look nice!
Is the rms level higher than expected, though? Without stokes='pseudoI', for SPW31 we are using 4/6 of the data (or 2 out of the 3 execution blocks), and with pseudoI we will use 5/6, so the rms may differ by ~10%.
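The ~10% figure follows from thermal noise scaling as one over the square root of the amount of data; a quick back-of-the-envelope check:

```python
import math

# rms scales as 1/sqrt(amount of data): compare using 4/6 of the data
# (2 of 3 EBs, no pseudoI) against 5/6 (YY-flagged EB included via pseudoI).
rms_without_pseudoI = 1.0 / math.sqrt(4 / 6)
rms_with_pseudoI = 1.0 / math.sqrt(5 / 6)
improvement = 1.0 - rms_with_pseudoI / rms_without_pseudoI
print(f"rms improvement: {improvement:.1%}")  # → rms improvement: 10.6%
```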
Yeah I agree that's likely what's happening based on the warning you linked. I set the clean re-running earlier, so I'll check when it's finished in a few hours and compare it with the image without pseudoI.
The following has been submitted as a Helpdesk ticket 22864:
We have found the following problems with the 12m data for the ACES field "af":
We would appreciate your help diagnosing (and ideally solving) the problem.
Helpdesk ticket response:
The QA3 for this MOUS is approved: https://jira.alma.cl/browse/PRTSPR-70155 However, take into account that the non-uniform PB is due to the EB uid___A002_Xfe62c1_X10de7 that was partially observed. You are asking to check the data of uid___A002_Xfe83cd_X286, which at the moment does not seem to me to be an issue. I will update you further as soon as possible.
while spw31 is in QA3, spw33 also had divergence. https://github.com/ACES-CMZ/reduction_ACES/pull/408/commits/c76e57abb8bb04d7813caa992841ddfec1ac27ff proposes a change to fix that.
Cont QA (spw22_25,spw33_35): I'm assuming most of this is free-free filaments, but there does appear to be some slight line contamination in addition to the filaments in both images here too.
I'm just making a note that uid___A001_X15a0_X15a.s38_0.Sgr_A_star_sci.spw35.cube.I.iter1.image.pbcor.statcont.contsub.fits still has divergence. @djeff1887 fixed this and added cyclefactor=5 to the override commands, but the existing version used 1.5 (according to header history).
Typically we would have to re-run and re-statcont SPW 35 to fix this; however, we still need to download the post-QA3 data and re-run the PL (@keflavich), so no action is needed yet. Note that SPW 35 still diverged in the QA3 PL run.
If @djeff1887 has a cleaned version that's good to go on disk, and it's unaffected by QA3 (wasn't that for another window?), we can just move that over - @djeff1887 where is it?
@keflavich it's here: /orange/adamginsburg/sgrb2/d.jeff/X15a/calibrated/working/spw35cycle5/
Trying to run the new pipeline, got this failure:
CASA 6.4.1.12 -- Common Astronomy Software Applications [6.4.1.12]
*** ALMA scriptForPI ***
Found more than one piperestorescript:
['member.uid___A001_X15a0_X15a.hifa_calimage_selfcal.casa_piperestorescript.py', 'member.uid___A001_X15a0_X15a.hifa_calimage.casa_piperestorescript.py']
ERROR: non-unique piperestorescript
@d-l-walker this looks like a bug?
@keflavich the data were re-reduced using a newer version of the PL for QA3 (v6.5.4.9), so I think you'll need to re-run with this version.
It looks like there's a script relating to selfcal. This is present in the new PL, but not supported for mosaics and so wasn't performed here, so I'm not 100% sure what the difference would be between these two. I'll look into it.
@d-l-walker is the required CASA version stored anywhere in machine-readable format?
@keflavich it should be in the QA2 report, but that's a .pdf file, so not the most easily machine-readable. Probably somewhere in the weblog files, I think you can find it in the base index.html file. I'm not sure if it's given anywhere else.
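If scraping the weblog turns out to be the only option, a hypothetical helper like the one below could pull the version string out of the HTML. The pattern (a "CASA Version x.y.z" string somewhere in the page) and the index.html location are both assumptions, since the markup varies between pipeline versions:

```python
import re

def find_casa_version(html_text):
    """Return the first 'CASA Version x.y[.z...]' string found, else None.

    Hypothetical sketch: the exact weblog markup is an assumption.
    """
    match = re.search(r"CASA\s+[Vv]ersion[:\s]+(\d+(?:\.\d+)+)", html_text)
    return match.group(1) if match else None

# Usage on the weblog's top-level page (path is an assumption; glob it first):
# with open("pipeline-*/html/index.html") as f:
#     print(find_casa_version(f.read()))
```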
the CASA version isn't the issue, the problem is still the same:
CASA <3>: %run -i member.uid___A001_X15a0_X15a.scriptForPI.py
*** ALMA scriptForPI ***
Found more than one piperestorescript:
['member.uid___A001_X15a0_X15a.hifa_calimage_selfcal.casa_piperestorescript.py', 'member.uid___A001_X15a0_X15a.hifa_calimage.casa_piperestorescript.py']
An exception has occurred, use %tb to see the full traceback.
SystemExit: ERROR: non-unique piperestorescript
Should we be using the selfcal version or the non-selfcal version? And, why were the data distributed with invalid file names?
the QA2 PDF does not mention selfcal
Yeah, there won't be any selfcal here as it's not supported for mosaics, so I guess we just use the non-selfcal version? No idea why the selfcal script is present, I'll do some digging.
OK, I think that indicates some kind of error in the data distribution.
I moved all the selfcal files to a subdirectory of the script directory called bad/ and triggered the scriptForPI.
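The workaround amounts to moving the selfcal restore script aside so scriptForPI sees exactly one piperestorescript. A sketch (file names are taken from the error message above; the bad/ location mirrors the comment):

```python
import os
import shutil

def pick_selfcal_scripts(script_names):
    """Return the restore scripts that mention selfcal."""
    return [s for s in script_names if "selfcal" in s]

scripts = [
    "member.uid___A001_X15a0_X15a.hifa_calimage_selfcal.casa_piperestorescript.py",
    "member.uid___A001_X15a0_X15a.hifa_calimage.casa_piperestorescript.py",
]
for name in pick_selfcal_scripts(scripts):
    if os.path.exists(name):  # only act when run inside the script directory
        os.makedirs("bad", exist_ok=True)
        shutil.move(name, os.path.join("bad", name))
```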
Pipeline run finished with:
2024-04-30 09:14:53 INFO: Selecting representative target source Sgr_A_star for data set uid___A002_Xfe62c1_X10de7.ms |
2024-04-30 09:14:53 INFO: Selecting representative target source Sgr_A_star for data set uid___A002_Xfe83cd_X286.ms |
2024-04-30 09:14:53 INFO: Selecting representative target source Sgr_A_star for data set uid___A002_Xfe83cd_X125c.ms |
2024-04-30 09:14:54 INFO: Saving context: pipeline-20240429T155240.context |
Imaging pipeline was used. Will not create uid___A002_Xfe62c1_X10de7.ms.split.cal |
Linking MS uid___A002_Xfe62c1_X10de7.ms into directory "calibrated" |
Imaging pipeline was used. Will not create uid___A002_Xfe83cd_X286.ms.split.cal |
Linking MS uid___A002_Xfe83cd_X286.ms into directory "calibrated" |
Imaging pipeline was used. Will not create uid___A002_Xfe83cd_X125c.ms.split.cal |
Linking MS uid___A002_Xfe83cd_X125c.ms into directory "calibrated" |
Done. Please find results in directory "calibrated".
Should be OK, maybe?
Continuum reclean. 25+27 in the top left, 33+35 in top right, all windows in bottom left
I've highlighted some sources that show up in 25+27 but not 33+35. The noise appears higher in 33+35. Both of these suggest to me that there's a problem with the continuum identification.
@keflavich I'm just checking on the status of the post-QA3 cleaning, and the data still look like pre-QA3? SPW 31 still has the crazy divergence/ringing. The file that I grabbed was ~/member.uid___A001_X15a0_X15a/calibrated/working/uid___A001_X15a0_X15a.s38_0.Sgr_A_star_sci.spw31.cube.I.iter1.image.pbcor.statcont.contsub.fits, which was created 2 days ago.
Not sure what's going on here. I feel like you downloaded this previously and it was still the pre-QA3 data ... 🤔
Yep, this is still the pre-QA3 data. Maybe I was supposed to keep the selfcal version?
At this point, I think we need to file a ticket, but it would be helpful if we could get someone else to try to restore these data locally and confirm my confusion.
I'd be happy to try, but I need access to the data. Can you do that? Or does Steve need to add me as a delegate?
Any update on fixing the QA3 issue with the reduction rerun @d-l-walker ?
@ashleythomasbarnes it's still running the imaging PL steps. It's on SPW 29 now, which for some reason seems to be taking a very long time, and SPW 31 is up next. I'll kill it now and force only a SPW 31 clean to speed things up.
@ashleythomasbarnes I can confirm that everything looks good
Left = post-QA3 image, Right = old image (note: it looks worse because I degraded the resolution to speed it up)
@keflavich not sure what happened on your end, but I'm guessing some issue with the bookkeeping of old and new data. Do you want to try again? Or do you want me to re-do this on my end? (I'll have to re-run the imaging steps again to image at full res, and to account for our internal changes such as fitorder=0)
Thanks @d-l-walker, this looks good! Are we then okay to close the ticket?
Mosaicking the HNCO chunk of this is still failing, even in micro tests
I have no explanation yet.
@keflavich did you ever manage to resolve the issue with the QA3 data for SPW 31 (HNCO)? Previously it was still looking like the broken pre-QA3 data despite using the new data.
As noted above, the cube in the delivered product folder looks good. These are the MSs that should be used for imaging the cube:
['uid___A002_Xfe62c1_X10de7_targets_line.ms', 'uid___A002_Xfe83cd_X286_targets_line.ms', 'uid___A002_Xfe83cd_X125c_targets_line.ms']
This MS naming convention will be different to the pre-QA3 data due to the use of the newer PL during QA3.
Sgr_A_st_af_03_TM1 uid://A001/X15a0/X15a
[x] Observations completed?
[x] Delivered?
[x] Downloaded? (specify where)
[x] Weblog unpacked
[ ] Weblog Quality Assessment?
[ ] Imaging: Continuum
[ ] Imaging: Lines
Product Links:
Reprocessed Product Links: