ratt-ru / ratt-interferometry-course

Repository for developing an interferometry course for new students

Lecture: Calibration & Imaging #4

Open o-smirnov opened 9 years ago

Trienko commented 9 years ago

I have not yet started on the RIME lecture; the fundamentals are taking me a bit longer than I thought.

modhurita commented 9 years ago

I need some help/guidance in developing my lectures; I am still reading up and having trouble figuring out how to plan/organize my lectures.

This is the plan for the lectures from the email @o-smirnov sent (maybe we should also start thinking about the order/exact times for the lectures?):

  • Roger, Radio science, 45m
  • Kshitij, Radio science, 45m (coordinate topics split between yourselves)
  • Trienko, Fundamentals of interferometry, 90m including practicals
  • Griffin, Technologies: receiver to correlator, 45m
  • Modhurita, Calibration & imaging, 2x90m including practicals (practicals will take longer hence the extra time)
  • Sphe & Ian will assist with development of practicals.

My lectures will be for 2x90 minutes, with practicals. Does that mean 3 hours of lectures, with demos from the python notebooks interspersed?

At last Thursday's meeting, Oleg said the lectures have to be about general principles, with particular emphasis on RATT-interest areas. I am planning to base my lectures on NRAO's summer school slides and the published lectures from the Synthesis Imaging in Radio Astronomy book.

I have identified the following relevant presentations:

  1. Polarization in Interferometry
  2. Calibration
  3. Imaging and Deconvolution
  4. Advanced Calibration Techniques
  5. Wide Field Imaging I: Full Beam Imaging & Surveys
  6. Wide Bandwidth Imaging

And these are the relevant lectures from the Synthesis Imaging in Radio Astronomy book:

  1. Calibration and Editing
  2. Polarization in Interferometry
  3. Imaging
  4. Deconvolution
  5. Self-Calibration
  6. High Dynamic Range Imaging
  7. Special Problems in Imaging
  8. Imaging with Non-Coplanar Arrays
  9. Multi-Frequency Synthesis

All of these aren't going to be covered in equal depth, but I am feeling rather overwhelmed by the amount of material. I have finished reading only a part of these materials, and have some questions/comments:

  1. Do I talk about amplitude and phase calibration like in the scalar formulation, or do I talk about solving for 2x2 gain matrices like in the Measurement Equation formalism? Or do both?
  2. Do I talk about the separate Jones terms (G, B, D, E) and explain what they physically mean and how to solve for them? Or will @Trienko be covering this part?
  3. Should I present the direction-dependent calibration setup in the ME formalism (following the RIME 2 paper)?
  4. Imaging: I am planning to talk about data reliability weights, tapering function, uniform and natural weighting, gridding, FFT, aliasing, dirty image, dirty beam - mostly qualitatively/pictorially, with as little math as possible.
  5. Deconvolution: Talk about non-uniqueness of solutions, regularization constraints used to find plausible images, how a suitable CLEAN beam is chosen, describe the simplest form of CLEAN (Högbom CLEAN), and mention other variants of CLEAN without explaining how they work (a rough sketch of the imaging and CLEAN steps follows this list).
  6. Should I say anything about polarization calibration? As parallel- and cross-hand gains are simultaneously solved for in the ME framework, will this be unnecessary?
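
To give an idea of what the practical for points 4 and 5 could look like, here is a rough numpy sketch; everything in it is invented for illustration (the toy sky, the uv coverage and the CLEAN parameters), so it is a sketch of the idea rather than the actual practical:

```python
import numpy as np

# Toy sky and uv coverage (both invented for illustration).
npix = 128
sky = np.zeros((npix, npix))
sky[40, 50] = 1.0                      # two point sources (Jy)
sky[80, 90] = 0.5

rng = np.random.default_rng(1)
nvis = 3000
u = rng.integers(-50, 51, nvis)        # uv points in units of one grid cell
v = rng.integers(-50, 51, nvis)

# "Observed" visibilities: sample the FT of the true sky at the uv points.
sky_uv = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(sky)))
vis = sky_uv[u + npix // 2, v + npix // 2]

# Gridding with natural weighting (just accumulate), adding the conjugate
# (-u, -v) points so the dirty image comes out real.
grid = np.zeros((npix, npix), dtype=complex)
wt = np.zeros((npix, npix))
np.add.at(grid, (u + npix // 2, v + npix // 2), vis)
np.add.at(grid, (-u + npix // 2, -v + npix // 2), np.conj(vis))
np.add.at(wt, (u + npix // 2, v + npix // 2), 1.0)
np.add.at(wt, (-u + npix // 2, -v + npix // 2), 1.0)

# Dirty image and dirty beam (PSF), normalised to unit PSF peak.
dirty = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real
psf = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(wt))).real
dirty /= psf.max()
psf /= psf.max()

# Högbom CLEAN: repeatedly find the peak of the residual and subtract a
# scaled, shifted copy of the dirty beam (circular shift; fine for a toy).
residual, model = dirty.copy(), np.zeros_like(dirty)
gain, niter, threshold = 0.1, 500, 0.02
for _ in range(niter):
    iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
    peak = residual[iy, ix]
    if np.abs(peak) < threshold:
        break
    model[iy, ix] += gain * peak
    residual -= gain * peak * np.roll(np.roll(psf, iy - npix // 2, axis=0),
                                      ix - npix // 2, axis=1)

print("dirty peak %.2f, residual rms %.3f" % (dirty.max(), residual.std()))
```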

That's all I have for now. For RATT-specific issues, I also have to add high dynamic range imaging, wide-field imaging/W-projection, wide-band imaging/multi-frequency synthesis, antenna primary beams/A-projection, AW-projection, etc.

o-smirnov commented 9 years ago

My lectures will be for 2x90 minutes, with practicals. Does that mean 3 hours of lectures, with demos from the python notebooks interspersed?

Yep, that's the thinking.

All of these aren't going to be covered in equal depth, but I am feeling rather overwhelmed by the amount of material.

Indeed that's a LOT, and I don't see why we need to explain it all. Rather think of it as follows: what did you need to learn in order to understand the 3C147 pipeline? This is what you need to convey, and not much more than that.

Check also my RIME courses here: https://github.com/ska-sa/waterhole/tree/master/doc/Timba/Courses

You raise good questions -- let's go over them tomorrow.

Cheers, Oleg

Trienko commented 9 years ago

Hi Modhurita,

You do raise good points here. I would like to make the following comments which may assist.

  1. I know Griffin said he would deal with the primary beam and polarization.
  2. I intend to make a lecture on the RIME, but Fundamentals II is taking me a bit longer than I expected; I am beginning to realize that I do not understand interferometry [Oleg's first rule of interferometry]. As soon as I am done with it I will start on the RIME lecture. I do think, however, that you should also make a general lecture on calibration, which explains first-generation calibration (which uses calibrator sources) and self-calibration (a toy sketch of the calibrator-transfer idea follows this list). But this is merely a suggestion. Maybe stay in Stokes I. I will require some assistance with the RIME lecture, as I am lacking practical experience in this regard (applying it in MeqTrees), which would make my lecture very theoretical. If you feel that you wish to handle the RIME lecture yourself, then that is also fine with me.
  3. In terms of imaging and deconvolution, I agree completely with the topics that you have identified.
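
To make the first-generation idea a bit more concrete, here is a toy sketch of transferring calibrator solutions to a target; the antenna count, gains and fluxes are all made up, and it is deliberately noise-free, scalar and point-source-only:

```python
import numpy as np

# Toy first-generation calibration: per-baseline gain transfer from a
# calibrator scan to a target scan (no noise, scalar gains, point sources).
nant = 7
p, q = np.triu_indices(nant, k=1)                 # baseline antenna pairs

rng = np.random.default_rng(42)
g = rng.uniform(0.8, 1.2, nant) * np.exp(1j * rng.uniform(-0.5, 0.5, nant))

S_cal = 5.0                                       # 5 Jy calibrator at phase centre
V_cal = g[p] * np.conj(g[q]) * S_cal              # observed calibrator visibilities

V_target_true = np.full(p.size, 1.0 + 0j)         # 1 Jy target, also a point source
V_target = g[p] * np.conj(g[q]) * V_target_true   # observed target visibilities

# Solve on the calibrator (its model is known), then apply to the target.
G_baseline = V_cal / S_cal
V_target_corrected = V_target / G_baseline

print(np.allclose(V_target_corrected, V_target_true))   # True
```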

Let's discuss further tomorrow.

modhurita commented 9 years ago

I have uploaded the completed calibration lecture here. It is probably too elementary (much simpler than the NRAO summer school lecture slides and @o-smirnov's 3GC3 RIME talk), but that is by design - I tried to keep it as simple as possible, with as little math as possible, and focused on RATT-interest aspects of calibration.

o-smirnov commented 9 years ago

Good work! Nice and clear and pitched at the right level. Some comments on @modhurita's lecture:

modhurita commented 9 years ago

I have incorporated @o-smirnov's suggestions and uploaded a new version of the talk, with some additional minor changes/typos fixed.

  • slide on no propagation effects: strictly speaking, there's always a complex phase in e_x e_y due to propagation through free space

I changed it to "Amplitude and direction of electric vector remain unchanged during propagation" - I think that's accurate, since the direction remains unchanged but the sense changes while the electric vector oscillates?

IanHeywood commented 9 years ago

This is really nice work, the slides are very clear.

Slide 31 though: maybe the accompanying narrative will explain this but shouldn't that be the effect of correcting for the primary beam on the noise across the field of view? The beam response itself doesn't affect the thermal noise which is just jitter in the visibility measurement in the complex plane, and so is direction independent.

modhurita commented 9 years ago

I have been thinking about @IanHeywood's comment about the figure in slide 31 - what exactly does that image represent? The noise in the visibility is additive and direction-independent, thus the noise in the image is constant over the field of view. In the image on that slide, is the noise boosted by the primary beam to illustrate that different sources were detected at different SNRs in the source finding step performed on the apparent image? That is, the noise map scaled by the beam has no physical meaning, but reflects the SNR at which the sources were detected in the apparent image? Is "Effect of correcting for the primary beam on the noise across the field of view", as per Ian's comment above, a better heading for the slide?

modhurita commented 9 years ago

I have divided my presentation into two lectures - one on RIME, and another on calibration. There is some overlap between the two (some slides are in both lectures) because I felt that was necessary for providing context for the calibration lecture.

@SpheMakh and I discussed the practicals - he says the students should know the calibration procedure (that it is a least squares minimization process, and the concepts of residual visibilities, residual image, etc.) before he starts the calibration exercises. I agree, but the calibration procedure is on slide 19, and I am reluctant to place it earlier - it is the most complicated/dense slide of the lecture, and I think it should not be any earlier. It would either have to be moved to just after slide 2 to expand on the simple introduction to calibration on that slide, or after slide 11 in which the structure of the G-Jones matrix is introduced, because Sphe plans to start the calibration exercises with a G-Jones example.

Therefore, it seems it would be best if I finish my lecture before Sphe begins his tutorial session. He estimates it will probably take several hours, and plans to illustrate the calibration process for the following cases:

  1. Simulation (of visibilities) with G=1 (trivial case), followed by calibration with:
    1. G not taken into account (i.e., not solved for/applied).
    2. G taken into account (i.e., solved for/applied).
  2. Simulation with a non-trivial G (a scalar toy of this case is sketched at the end of this comment), followed by calibration with:
    1. G not taken into account.
    2. G taken into account.
  3. Simulation with trivial G, non-trivial E, followed by calibration with:
    1. E not taken into account, dEs not solved for or applied.
    2. E not taken into account, dEs solved for and applied.
    3. E taken into account, dEs not solved for or applied.
    4. E taken into account, dEs solved for and applied.

The MS will be simulated beforehand by Sphe and will be a small MS (probably a KAT-7 one).

Does this plan sound ok, or is this all too much?
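
For reference, I imagine case 2 boiling down to something like the scalar toy below, just to illustrate the residual-visibility idea; the gains are made up and are simply applied rather than solved for, whereas Sphe's actual exercise will of course use a proper MS and a solver:

```python
import numpy as np

# Toy version of case 2: corrupt model visibilities with a non-trivial G,
# then look at the residuals with and without the gains taken into account.
nant = 7
p, q = np.triu_indices(nant, k=1)

rng = np.random.default_rng(0)
g_true = rng.uniform(0.7, 1.3, nant) * np.exp(1j * rng.uniform(-1.0, 1.0, nant))

model = np.full(p.size, 1.0 + 0j)                  # 1 Jy point source model
data = g_true[p] * np.conj(g_true[q]) * model      # "observed" corrupted data

def rms(x):
    return np.sqrt(np.mean(np.abs(x) ** 2))

# G not taken into account: residuals against the uncorrupted model are large.
resid_no_G = data - model

# G taken into account: apply the (here, known) gains to the model first.
resid_with_G = data - g_true[p] * np.conj(g_true[q]) * model

print("residual rms without G: %.3f, with G: %.3g" %
      (rms(resid_no_G), rms(resid_with_G)))
```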

modhurita commented 9 years ago

@o-smirnov, I have uploaded the revised lectures here. Can you please check if these look okay - I moved the slides on the structure of Jones matrices to the RIME lecture, and also added slides on commutation, polarization bases, etc. to that lecture.
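
For the commutation slides, something like this tiny numerical example might be worth putting in a notebook (all numbers invented; linear feeds; brightness matrix written without worrying about the factor-of-1/2 convention). It shows that a diagonal G-Jones and a rotation-like Jones do not commute, so the order of the chain changes the predicted visibilities; in the RIME the order is fixed by the physical signal path.

```python
import numpy as np

# Toy 2x2 RIME for one baseline: V_pq = J_p B J_q^H, with a Jones chain
# built from a diagonal gain and a rotation (all numbers invented).
I, Q, U, V = 1.0, 0.2, 0.05, 0.0
B = np.array([[I + Q, U + 1j * V],
              [U - 1j * V, I - Q]])

def G(gx, gy):                       # diagonal (per-feed) gain Jones
    return np.diag([gx, gy])

def rot(chi):                        # rotation-like Jones (e.g. parallactic angle)
    return np.array([[np.cos(chi), -np.sin(chi)],
                     [np.sin(chi),  np.cos(chi)]])

Gp, Gq = G(1.1 * np.exp(0.3j), 0.9), G(1.0, 1.05 * np.exp(-0.2j))
Pp, Pq = rot(0.4), rot(0.5)

# Chain order matters: G P != P G in general (they do not commute).
Jp_a, Jq_a = Gp @ Pp, Gq @ Pq
Jp_b, Jq_b = Pp @ Gp, Pq @ Gq

V_a = Jp_a @ B @ Jq_a.conj().T
V_b = Jp_b @ B @ Jq_b.conj().T
print(np.allclose(V_a, V_b))        # False: the two orderings give different visibilities
```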

IanHeywood commented 9 years ago

@modhurita sorry for the delay in my response.

The usual approach to primary beam correction is to divide the final image by the model of the primary beam. This brings the attenuated sources at the field edge up to their intrinsic flux densities but boosts the background noise by the same factor.
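
As a toy illustration of that noise boost (a made-up 1D Gaussian beam and noise level, nothing to do with your actual map):

```python
import numpy as np

# 1D toy: flat image-plane noise, a Gaussian primary beam model, and two
# equal 1 Jy sources, one at the centre and one at the half-power point.
rng = np.random.default_rng(3)
r = np.linspace(0, 1.0, 1001)                  # radius in units of the beam FWHM
pb = np.exp(-4 * np.log(2) * r**2)             # Gaussian beam, 0.5 at r = 0.5

apparent = np.zeros_like(r)
apparent[0] += 1.0                             # 1 Jy source at the centre
apparent[500] += 1.0 * pb[500]                 # 1 Jy source attenuated to ~0.5 Jy
apparent += rng.normal(0, 0.01, r.size)        # flat 10 mJy image-plane noise

pb_corrected = apparent / pb                   # primary beam correction

# Source fluxes are restored, but the noise far from the centre is boosted.
print("half-power source after correction: %.2f Jy" % pb_corrected[500])
print("noise rms, centre vs edge: %.3f vs %.3f" %
      (pb_corrected[10:200].std(), pb_corrected[800:].std()))
```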

The SNR at which a source is detected is probably more easily measured by taking the ratio of its apparent brightness to a measurement of the local noise, which, if everything is behaving itself, will be uniform across the map. I guess the complication arises for things in sidelobes, as they become apparently variable and do not have a constant SNR over time. But I don't think your map captures this in any case, as it looks like an average map over the duration of the observation, and the azimuthal variation has been averaged out.

I think the far field noise pattern you have produced is interesting to see but I'm not sure what the practical application of it is, although I’d love to hear about one if I’m just being blinkered. In my opinion the only reason to care about things in sidelobes is to mitigate them during calibration if they happen to be affecting your science goal (or extreme dynamic range goal). If you're doing a wide area survey with numerous pointings and there is something of interest in a sidelobe then it's better studied just by looking at it in the relevant neighbouring pointing. Similarly if it's a targeted observation to study an object of interest then it's not going to be in the sidelobe in the first place.

Trienko commented 9 years ago

I have added a calibration notebook which explains how to perform calibration using LM and StEFCal. This might be useful for students (i.e. @JSKenyon and @sjperkins) who need to implement these algorithms or variants of them from first principles. I do not discuss the LM algorithm itself, although this can be done in a separate notebook.

See attached calibration notebook : http://nbviewer.ipython.org/github/ratt-ru/ratt-interferometry-course/blob/master/ipython-notebooks/Calibration.ipynb
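
For anyone who wants the gist before opening the notebook, a minimal scalar StEFCal-style iteration looks roughly like this (my own condensed sketch following Salvini & Wijnholds, not the notebook code itself; the simulated data are made up):

```python
import numpy as np

# Minimal scalar StEFCal-style loop on simulated data (unpolarized, no noise).
nant = 7
rng = np.random.default_rng(7)
g_true = rng.uniform(0.8, 1.2, nant) * np.exp(1j * rng.uniform(-1, 1, nant))

M = np.ones((nant, nant), dtype=complex)          # 1 Jy point source model
np.fill_diagonal(M, 0)                            # ignore autocorrelations
R = np.outer(g_true, g_true.conj()) * M           # "observed" visibility matrix

g = np.ones(nant, dtype=complex)                  # starting guess
for it in range(1, 201):
    g_prev = g.copy()
    Z = g_prev[:, None] * M                       # model columns scaled by current gains
    g = (np.einsum('ip,ip->p', R.conj(), Z) /
         np.einsum('ip,ip->p', Z.conj(), Z))      # per-antenna least-squares update
    if it % 2 == 0:                               # average every other iteration (StEFCal trick)
        g = 0.5 * (g + g_prev)
    if np.linalg.norm(g - g_prev) / np.linalg.norm(g) < 1e-8:
        break

# Up to an overall phase the gains are recovered: compare the baseline products.
print(np.allclose(np.outer(g, g.conj()) * M, R))   # True
```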

o-smirnov commented 9 years ago

That's a pretty awesome notebook!


IanHeywood commented 9 years ago

Seconded, really nice work.