Currently the calibration routine uses more memory than it needs to. A full-band LWA measurement set only just squeaks by at 93% memory use on my machine.
Counting in units of the size of the data:
1 for the data itself
1 for the model
2 for the data packed into square matrices
2 for the model packed into square matrices
That's a factor of 6, so for a 5 GB dataset we're looking at 30 GB of memory, which is way too much.
The last two points are where we can improve, because those matrices can be computed one frequency channel at a time as we iterate, instead of all at once at the beginning. Note also that the factor of 2 arises because we populate both the upper and lower triangular parts of each matrix.
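A minimal sketch of the per-channel approach (in Python/NumPy for illustration; the array names, shapes, and the `packed_matrix` helper are all hypothetical, not the actual calibration code):

```python
import numpy as np

# Hypothetical sizes: nchan frequency channels, cross-correlation
# baselines between nant antennas.
nant, nchan = 8, 4
nbl = nant * (nant - 1) // 2
rng = np.random.default_rng(0)
data = rng.normal(size=(nchan, nbl)) + 1j * rng.normal(size=(nchan, nbl))

# Baseline index -> (antenna1, antenna2) for the upper triangle.
ant1, ant2 = np.triu_indices(nant, k=1)

def packed_matrix(vis):
    """Pack one channel's visibilities into a square Hermitian matrix.

    Both triangles are populated (this is where the factor of 2 comes
    from), but only for a single channel at a time, so peak memory is
    O(nant^2) rather than O(nchan * nant^2).
    """
    M = np.zeros((nant, nant), dtype=complex)
    M[ant1, ant2] = vis
    M[ant2, ant1] = np.conj(vis)  # mirror into the lower triangle
    return M

for chan in range(nchan):
    M = packed_matrix(data[chan])
    # ... run the per-channel calibration solve on M here ...
```

The point is simply that the packed matrix is a temporary inside the channel loop, so its memory is reused each iteration instead of being preallocated for every channel up front.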
I don't think this problem affects the master branch.