Closed o-smirnov closed 4 years ago
@o-smirnov I've added a `--min-nchan` option to the BDA mode. You need to pull the xova `bda` branch and redo the pip install so that it takes the latest from the codex-africanus `time-and-channel-bda` branch too.
Closing, we can re-open if this needs to still be addressed in xova/codex BDA code.
OK, this part actually is functionally suspicious. When I run DDF on the BDA data from #13, splitting the band into 8 subbands, it reports the following in the gridder:
Normally this would indicate that no (unflagged) data points were found in the lower three subbands. With the non-BDA MS, it finds data at all subbands. This is probably why deconvolution falls over.
This raises an interesting issue that we hadn't considered. Say our band is 1-2 GHz with 1000 channels, and channel BDA (on some very short baseline) has averaged these into two huge channels, 1-1.5 and 1.5-2 GHz. These channels will be marked as having centres of 1.25 and 1.75 GHz. Now say we run an imager with ten subbands. The gridder for the first subband (1-1.1 GHz) will not see the short baseline at all, since the channel centre of 1.25 GHz lies outside the subband. This happens because gridders are written under the assumption that channel width << subband width.
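A toy sketch of the selection logic described above (the array names and subband edges are illustrative, not actual DDFacet code):

```python
import numpy as np

# Toy numbers from the example above: a 1-2 GHz band where channel BDA
# on a short baseline has produced two huge channels.
bda_chan_centre = np.array([1.25e9, 1.75e9])  # Hz
bda_chan_width = np.array([0.5e9, 0.5e9])     # Hz

# First of ten imaging subbands: 1.0-1.1 GHz
sb_lo, sb_hi = 1.0e9, 1.1e9

# Centre-based selection, as done by a gridder that assumes
# channel width << subband width: neither channel centre falls in the
# subband, so this baseline is invisible to the first subband even
# though its first channel fully covers it.
selected = (bda_chan_centre >= sb_lo) & (bda_chan_centre < sb_hi)
print(selected)  # [False False]
```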
The correct thing to do is for the gridder to consider channel width properly, and grid a data point whenever its channel interval overlaps the subband. DDFacet can be fixed to do this fairly easily.
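With the same toy numbers, the overlap-based selection would look something like this (again a sketch, not the actual DDFacet fix):

```python
import numpy as np

# Same toy BDA channels as before
bda_chan_centre = np.array([1.25e9, 1.75e9])  # Hz
bda_chan_width = np.array([0.5e9, 0.5e9])     # Hz
chan_lo = bda_chan_centre - bda_chan_width / 2
chan_hi = bda_chan_centre + bda_chan_width / 2

# First imaging subband: 1.0-1.1 GHz
sb_lo, sb_hi = 1.0e9, 1.1e9

# Two intervals overlap iff each one starts before the other ends.
# The first BDA channel (1.0-1.5 GHz) now correctly gets gridded
# into this subband.
overlaps = (chan_lo < sb_hi) & (chan_hi > sb_lo)
print(overlaps)  # [ True False]
```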
Anyway, that's not the whole problem here: the long baselines should have BDA channels narrow enough that each of the 8 subbands is overlapped, so something else is going on.
But as a workaround, @sjperkins, could you add a "minimal channel count" option to xova? This would specify the minimum number of channels to BDA down to. We could align this with the imaging subbands and dodge the nasty gridder issues for now.
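The intent of such an option could be sketched like this. The function name `clamp_chan_factor` and its signature are hypothetical, purely for illustration of the constraint, and not how xova actually implements `--min-nchan`:

```python
def clamp_chan_factor(nchan_in, suggested_factor, min_nchan):
    """Cap a BDA channel-averaging factor so that at least
    min_nchan output channels survive per baseline."""
    # Largest averaging factor that still leaves >= min_nchan channels
    max_factor = max(1, nchan_in // min_nchan)
    return min(suggested_factor, max_factor)

# e.g. 1000 input channels; the BDA criterion on a very short
# baseline suggests averaging 500 channels together, but we image
# in 8 subbands, so keep at least 8 output channels.
print(clamp_chan_factor(1000, 500, 8))  # 125
```

With the factor capped this way, every baseline retains at least one BDA channel per imaging subband, which is exactly what lets the centre-based gridder see all baselines in all subbands.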