EoRImaging / FHD

Fast Holographic Deconvolution
BSD 2-Clause "Simplified" License

Read pyuvdata-formatted beamfits beam models #301

Open rlbyrne opened 1 year ago

rlbyrne commented 1 year ago

This PR allows the user to specify the beam model by supplying the path to a *.beamfits file.

rlbyrne commented 1 year ago

An example beam fits file is available here: https://github.com/rlbyrne/rlb_LWA/blob/1db84dcbec1071b2039886c48130047bc1009b1d/LWAbeam_2015.fits
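For context, a beamfits file like the one linked above can be read with pyuvdata's `UVBeam` class. This is only a sketch: it assumes pyuvdata is installed, and the `load_beam` helper and the local path are illustrative, not part of FHD.

```python
# Sketch: loading a pyuvdata-formatted beamfits file with UVBeam.
# UVBeam.read_beamfits is part of pyuvdata's public API; the helper
# function and file path below are hypothetical examples.
try:
    from pyuvdata import UVBeam
except ImportError:  # pyuvdata may not be installed in every environment
    UVBeam = None

def load_beam(path):
    """Read a beamfits file and return the populated UVBeam object."""
    if UVBeam is None:
        raise RuntimeError("pyuvdata is required to read beamfits files")
    beam = UVBeam()
    beam.read_beamfits(path)  # raises on a malformed or missing file
    return beam
```

In practice one would call `load_beam("LWAbeam_2015.fits")` and inspect attributes such as `beam_type` and `data_array` to confirm the model looks sensible before handing it to FHD.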

nicholebarry commented 1 year ago

I've implemented a memory reduction for imported beamfits files, matching the one already used for normal beam generation (a two-line change).

Memory usage is now below 80 GB for a typical MWA run, compared to below 60 GB for an equivalent run using normal beam generation. The job's memory profile still shows a spike during beam generation; if I can track that down, the beamfits path can be just as efficient as a normal run.

[Screenshot: memory usage profile of the job, 2023-05-24]
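One generic way to localize a transient spike like the one described above is allocation tracing. The sketch below uses Python's standard-library `tracemalloc`; it is purely illustrative (FHD itself is IDL), and `generate_beam` is a hypothetical stand-in for the beam-generation step.

```python
# Illustrative only: locating a transient memory spike with tracemalloc.
# generate_beam is a stand-in for the real beam-generation step.
import tracemalloc

def generate_beam(n=256):
    # Allocate a large scratch buffer (the transient spike), then
    # return a much smaller result, mimicking the profile shape.
    scratch = bytearray(n * n * 8)
    return len(scratch)

tracemalloc.start()
result = generate_beam()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
# peak far exceeding current points at a transient allocation
# inside generate_beam rather than in the retained data.
```

A large gap between `peak` and `current` confirms the spike is a temporary working buffer, which is exactly the kind of allocation that can often be chunked or reused to flatten the profile.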

The resulting image is wrong, however. I'm using a beamfits of the MWA produced by pyuvdata. It looks like the beam (right) is all 1's within the horizon (data on the left). This could be a problem with the beamfits I produced, or a problem in the code. More investigation is required (advice welcome).

[Screenshot: data (left) and resulting beam (right), 2023-05-25]

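The "all 1's within the horizon" symptom described above is easy to test for directly on the beam array. This is a minimal numpy sketch; `flat_within_horizon` is a hypothetical helper, not part of FHD or pyuvdata, and the toy beams are made up for illustration.

```python
# Sketch: detect whether a power beam is suspiciously flat (== 1)
# everywhere above the horizon. Hypothetical helper for the
# diagnostic described above; not part of FHD or pyuvdata.
import numpy as np

def flat_within_horizon(power, zenith_angle_deg, atol=1e-6):
    """Return True if the beam power is ~1 at all points above the horizon.

    power and zenith_angle_deg are matching arrays of beam samples;
    the horizon sits at a zenith angle of 90 degrees.
    """
    above_horizon = zenith_angle_deg < 90.0
    return bool(np.allclose(power[above_horizon], 1.0, atol=atol))

# Toy beams sampled along zenith angle:
za = np.linspace(0.0, 90.0, 91)
flat_beam = np.ones_like(za)             # reproduces the suspect behaviour
tapered_beam = np.cos(np.radians(za))**2  # a physically tapered toy beam
```

Here `flat_within_horizon(flat_beam, za)` returns True while the tapered beam returns False, so running the same check on the imported MWA beam would distinguish a bad beamfits file from a read-in bug further downstream.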
rlbyrne commented 1 year ago

@nicholebarry Thank you for the memory speedup!

I'm afraid I don't understand the images you posted. How are you finding that the images are wrong? I would expect that the pyuvdata beam wouldn't be exactly equal to the MWA beam contained within FHD.

rlbyrne commented 1 year ago

Unfortunately the "Reduce mem" commit seems to have introduced a bug. Here is an example image made without that commit:

[Image: 458_peeled_calibrated_natural_Dirty_I_no_beam_speedup]

and with the commit:

[Image: 458_peeled_calibrated_natural_Dirty_I_with_beam_speedup]

I'm going to revert that commit for now, but we can revisit whether a similar kind of memory speedup is possible.

nicholebarry commented 10 months ago

Fixed the problem with the memory reduction. There is now some push towards consolidating the beamfits code, since pyuvdata_beam_import is quite small. @rlbyrne can you check that the change does not affect LWA data?

I need to rerun my test: the beam is ever so slightly off, but my run didn't include the recent commits. I'll set that up now and should have results soon.

rlbyrne commented 10 months ago

@nicholebarry Thank you. I will run an LWA test. Please let me know what you find with your test.

rlbyrne commented 10 months ago

I just tested the latest version of the branch on LWA data and the results look good. @nicholebarry are there any more tests we need to run?