I'll preface my response with (a) the imaging inside LSL is a little convoluted and (b) the ImgWPlus class is based heavily off the Img and ImgW classes in the aipy.img module. It might be helpful to look at what Img/ImgW do when interpreting what is below.
The image/grid dimensions are determined from round(size/res). The l and m values associated with this should range from max([-1, -1/(2*res)]) to min([1, 1/(2*res)]). For values of res less than 0.5 you max out the range of l and m, so the field of view is "limited" to the entire visible hemisphere of the sky. As the value of res increases you begin to limit the field of view. The pixel size in the image is based on the change of the (l,m) values at the center of the field. The step size should be the full (un-max()'d or min()'d) range of l/m divided by the image/grid size. That works out to something like 1/size.
The (u,v) coordinates of the grid span +/-0.5 of the grid dimensions. Ignoring the round() from above, this corresponds to -size/(2*res) to size/(2*res) wavelengths. The values used in imageIDI.py should be appropriate for most LWA observations to get an image that covers the entire visible hemisphere.
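If it helps, the relations above can be collected into a small sketch (this is just a summary of the arithmetic, not LSL internals; the function and variable names here are made up):

```python
import numpy as np

def grid_parameters(size, res):
    """Derive image/grid quantities from the size (wavelengths) and res parameters."""
    n_pix = int(round(size / res))            # image/grid dimension in pixels
    l_max = min(1.0, 1.0 / (2 * res))         # l,m clipped to the visible hemisphere
    l_min = max(-1.0, -1.0 / (2 * res))
    pixel_step = (1.0 / res) / n_pix          # un-clipped l/m range over grid size, ~1/size
    uv_extent = size / (2 * res)              # (u,v) span is +/- uv_extent wavelengths
    return n_pix, (l_min, l_max), pixel_step, uv_extent

# With the imageIDI.py defaults (size=175, res=0.5) this gives a 350-pixel grid,
# l,m spanning [-1, 1], and a (u,v) span of +/-175 wavelengths.
print(grid_parameters(175, 0.5))
```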
That said, what are you getting for images?
Thanks for the quick reply!
I may be missing something conceptually here. In imageIDI.py, size=350//2=175 and res=0.5. It follows from your points above that this gives a (u,v) span of [-175, 175] wavelengths, which for a 55 MHz signal is ~[-955 m, 955 m]. The LWA's longest baseline is ~110 m, right? Wouldn't it make more sense to grid the visibilities onto a grid of approximately that size?
I have a recording at ~5 MHz from LWA-SV, and what I'm seeing is that the raw visibilities (as returned from fxc.FXMaster) have a (u,v) range of ~[-2, 2] wavelengths and are then being gridded onto a grid spanning [-175, 175] wavelengths.
I usually think of approaching the problem of gridding in the image plane rather than the (u,v) plane. Since the final image is what you care about, you need to perform the gridding in (u,v) such that you get the right field of view and pixel scale (degrees/pixel) in the image. This "image parameters" approach is what most other software, e.g., CASA and wsclean, use when imaging.
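For concreteness, the "image parameters" relations amount to the standard small-angle identities below (this is a generic sketch, not code from any particular package):

```python
import numpy as np

# uv cell size      du    = 1 / FOV                  (field of view in radians)
# uv half-span      u_max = 1 / (2 * pixel_scale)    (pixel scale in radians)
fov = np.pi                       # ~180 deg: the whole visible hemisphere
pixel_scale = np.radians(0.5)     # example: half a degree per pixel

du = 1.0 / fov                    # ~0.32 wavelengths per uv cell
u_max = 1.0 / (2 * pixel_scale)   # ~57 wavelengths
n_pix = int(round(2 * u_max / du))  # grid/image size in pixels (~360)
print(du, u_max, n_pix)
```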
Could you post one of your 5 MHz images?
Sorry for the delay - I've been simulating the signal so I can clearly show what is happening. We have a model for the signal which matches both the visibilities and resulting images for our dataset, so I'll use the model here. The model is vis = e^(j*2*pi*(u*l + v*m)), so we can choose (l,m), do the gridding, then compare the image (l,m) with the input.
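For reference, a minimal version of that model in code (the names here are mine, not from the actual simulation) might look like:

```python
import numpy as np

def point_source_vis(u, v, l, m):
    """Model visibilities vis = exp(j*2*pi*(u*l + v*m)) for baselines (u, v) in wavelengths."""
    return np.exp(2j * np.pi * (u * l + v * m))

# Example: a source placed at l = 0.3, m = -0.2, sampled on a few (u, v) points
u = np.array([0.5, 1.0, -1.5])
v = np.array([-0.8, 1.2, 0.4])
vis = point_source_vis(u, v, l=0.3, m=-0.2)
```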
Knowing that the frequency is 5 MHz, the maximum extent in (u,v) for LWA-SV baselines is about 2 wavelengths (in v). So, following the docstring for setting the size parameter, any size greater than 4 should span the entire baseline set (note that the outrigger was excluded for this); with the outrigger the max extent is ~6 wavelengths at 5 MHz, leading to a minimum size of 12.
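The arithmetic behind those numbers, as a quick sketch (the baseline length and frequency are the approximate values quoted above):

```python
# Back-of-the-envelope minimum `size` (in wavelengths) needed to span the array.
freq = 5e6                          # Hz
c = 299792458.0
max_baseline_m = 110.0              # approximate longest LWA-SV core baseline
max_uv = max_baseline_m * freq / c  # ~1.8 wavelengths at 5 MHz
min_size = 2 * max_uv               # grid must span -max_uv .. +max_uv
print(max_uv, min_size)             # ~1.8, ~3.7 -> consistent with "size greater than 4"
```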
An image resulting from size=10 and res=0.1 is below.
Then, in an effort to figure out how these parameters are working, I tried some other imaging parameters, and finally the default imaging parameters from imageIDI.py.
The default imaging parameters in that latest image "seem" closest, but the image (l,m) results are consistently offset from the input.
As well, the gridded visibilities don't make sense in the default case. I've made a 3D scatter plot of u versus v versus np.angle(vis) and attached it. You can click on the legend to select and deselect datasets. It is apparent that the red ungridded data has a very similar structure to the gridded visibilities up to ~size=20, but beyond that the visibility structure doesn't exist. In the size=175 case, all of the available ungridded visibilities sit between very few grid points, so I don't think the convolutional gridding will really work.
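For anyone wanting to reproduce that comparison, here is a rough sketch of the plot (assuming raw arrays u, v, vis and a gridded visibility array uv_grid with cell size du are already in hand; none of these names come from LSL):

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3D projection)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Raw, ungridded samples
ax.scatter(u, v, np.angle(vis), s=4, c='red', label='ungridded')

# Populated grid cells, with (u, v) recovered from the cell indices
gv, gu = np.nonzero(np.abs(uv_grid))
n = uv_grid.shape[0]
ax.scatter((gu - n // 2) * du, (gv - n // 2) * du,
           np.angle(uv_grid[gv, gu]), s=4, c='blue', label='gridded')

ax.set_xlabel('u [wavelengths]')
ax.set_ylabel('v [wavelengths]')
ax.set_zlabel('phase [rad]')
ax.legend()
plt.show()
```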
You may want to move your data into CASA and try imaging from there. You can convert the FITS-IDI files to measurement sets with the importfitsidi command in CASA and then image it with clean.
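As a rough sketch of that route (run inside a CASA session; the file names and imaging parameters below are placeholders, not recommendations):

```python
# Convert the FITS-IDI file to a measurement set, then make a dirty image.
importfitsidi(fitsidifile='observation.FITS_1', vis='observation.ms')
clean(vis='observation.ms', imagename='observation_img',
      imsize=[256, 256], cell='30arcmin', niter=0)
```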
You may also want to compare your images against the LWATV2 images/movies. They should be generated for every TBN run as long as the LASI system is up.
I didn't realize those movies existed - unfortunately we don't have them for our older datasets. We ran another observation this weekend so I collected those videos - thanks for the tip! I look forward to using those as reference.
Using the same simulation as above, I tried a series of res values for the lsl hard-coded size=175 and the results don't seem to have a trend.
The purpose of the all-sky images is to validate a visibility-domain DoA approach we're working on. So for longer observations there is a lot of overhead in exporting a FITS file for every correlator integration and then identifying the image peak.
I may try and switch to validating with simulation, but I am still trying to figure out why utils.build_gridded_image() is not returning an expected result. If it's OK I'll leave this issue open and update it with whatever I find!
Oh, I see it's a bit hard to see in that scatter plot, but as res approaches 0.5 the image max seems to get nearer the model's input value; then as it passes 0.51 it heads back the other way.
What is the MJD for the observations you are looking at now?
58846.736253154464
There is a movie for 58846. You can also find the raw LASI-SV images in .pims format here.
Thanks I'll take a look and see if there is enough resolution there to see whether it agrees with our gridded images.
If you are only interested in direction of arrival for a handful of sources you might consider the MUSIC algorithm. There are also tasks within CASA, like uvmodelfit, that can fit a source to visibility data. AIPS should also have a visibility-based model fitting tool.
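A hedged sketch of what the uvmodelfit route might look like (run inside a CASA session; the values are placeholders, and comptype='P' fits a single point source with parameters [flux in Jy, x offset, y offset]):

```python
# Fit a point-source model directly to the visibilities.
uvmodelfit(vis='observation.ms', niter=10, comptype='P',
           sourcepar=[1.0, 0.0, 0.0], outfile='fitted_source.cl')
```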
I've been unable to give this time but am planning to in December. If you would like this issue closed go ahead and do so and if I come across an error I'll issue a PR! Otherwise I'll update here as I go.
Based on a comparison with wsclean, it appears there is an error somewhere in the lsl gridder that only manifests at low (< 8 MHz) frequencies. Reopening this at @jaycedowell's request.
I switched the gridding kernel over to a Kaiser-Bessel window like what wsclean uses, and added a correction to the image for the kernel. Give b46d6a4 in gridding-fix a shot.
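For anyone curious what a Kaiser-Bessel gridding kernel plus image-plane correction looks like in general terms, here is a generic sketch (illustrative only, not the actual change in b46d6a4; the width and beta values are arbitrary):

```python
import numpy as np

def kb_kernel(x, width=7, beta=8.6):
    """Kaiser-Bessel convolution kernel evaluated at offsets x (in grid cells)."""
    arg = 1.0 - (2.0 * x / width)**2
    arg = np.clip(arg, 0.0, None)
    return np.i0(beta * np.sqrt(arg)) / np.i0(beta)

def kb_image_correction(n_pix, width=7, beta=8.6):
    """Image-plane taper to divide out: the (normalized) FFT of the gridding kernel."""
    x = np.arange(width) - width // 2
    kern = np.zeros(n_pix)
    kern[:width] = kb_kernel(x, width, beta)
    kern = np.roll(kern, -(width // 2))           # center the kernel on sample 0
    corr = np.abs(np.fft.fftshift(np.fft.fft(kern)))
    return corr / corr.max()
```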
That works much better! Great thank you @jaycedowell . Closing this issue now.
Great, I'm glad to hear that. These fixes will be in the next release of LSL which I hope to get out "soon".
When gridding visibility data, the lsl.imaging.utils build_gridded_image function builds a lsl.imaging.utils ImgWPlus instance. For arguments it takes size and res. The ImgWPlus docstring says:

[ImgWPlus docstring quoted in the original issue]

I read this as saying that, knowing the frequency of my observed signal, I find the size of the LWA in wavelengths and that is my size parameter. A few problems or points of requested clarity:
- If I increase/decrease res and check the resulting field_of_view, it does not change (this is unexpected based on the docstring as well as the SIW textbook section 7.3.1). If I increase/decrease size, it does (which is expected, I think).
- The imageIDI.py script just fixes both res and size, implying that these are not observation-specific parameters.

My output all-sky images do not look correct and I'm trying to understand why. Can you clarify what these parameters do?
Thanks for your time.