One additional interesting point: repeating the runs in the following manner induces a memory leak only on the first iteration:
```python
from jwst.group_scale import GroupScaleStep
from jwst.dq_init import DQInitStep
from jwst.saturation import SaturationStep
from jwst.superbias import SuperBiasStep
from jwst.refpix import RefPixStep
from jwst.linearity import LinearityStep
from jwst.dark_current import DarkCurrentStep
from jwst.charge_migration import ChargeMigrationStep
from jwst.jump import JumpStep
from jwst.ramp_fitting import RampFitStep

steps = [
    GroupScaleStep,
    DQInitStep,
    SaturationStep,
    SuperBiasStep,
    RefPixStep,
    LinearityStep,
    DarkCurrentStep,
    ChargeMigrationStep,
    JumpStep,
    RampFitStep,
]

# Run each step 10 times, feeding the last result into the next step
for step in steps:
    for i in range(10):
        result = step.call(file)
        if i == 9:
            file = result
```
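To see on which iteration the jump happens, the same loop can record the process resident set size (RSS) after each call. This is a sketch, not from the thread: `psutil` is an assumption (any RSS probe would do), and `steps` and `file` are taken from the snippet above.

```python
# Sketch: same loop as above, but printing RSS after each step.call
# to confirm the jump happens only on the first iteration.
# psutil is assumed; "steps" and "file" come from the snippet above.
import os

import psutil

proc = psutil.Process(os.getpid())
for step in steps:
    for i in range(10):
        result = step.call(file)
        rss_gib = proc.memory_info().rss / 2**30
        print(f"{step.__name__} iteration {i}: RSS = {rss_gib:.2f} GiB")
        if i == 9:
            file = result
```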
Pinging @kmacdonald-stsci for initial impressions.
In an initial analysis using the shell script the reporter provided, I verified that memory consumption increases during OLS_C as it loops over the pixels, though this is somewhat expected.
Upon entering the C extension, memory increases as the return arrays are allocated. For each pixel processed, some memory is allocated to keep track of the computed segments, which are kept around to fill in the return arrays. The list of segments is ultimately freed, but more memory is allocated to return the data from ramp fitting.
The spike at the end comes after returning from ramp fitting and is most likely caused by the read noise variance recalculation.
I am now doing a more detailed analysis of memory consumption, running only the ramp fit step.
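One way to separate Python-level allocations from allocations made directly inside the C extension is `tracemalloc`, which only sees memory requested through Python's allocator; RSS growth that `tracemalloc` does not account for points at raw `malloc` calls in the extension. A sketch (not from the thread; `psutil` and the input file name are assumptions):

```python
# Sketch: compare tracemalloc's view of Python-level allocations with
# the process RSS. tracemalloc does not see raw malloc/free inside C
# extensions, so a large gap suggests the growth is at the C level.
import os
import tracemalloc

import psutil
from jwst.ramp_fitting import RampFitStep

proc = psutil.Process(os.getpid())
tracemalloc.start()

result = RampFitStep.call("ramp_uncal.fits")  # hypothetical input file

current, peak = tracemalloc.get_traced_memory()
print(f"python-level: current {current / 2**20:.1f} MiB, peak {peak / 2**20:.1f} MiB")
print(f"process RSS: {proc.memory_info().rss / 2**30:.2f} GiB")
tracemalloc.stop()
```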
Thanks for taking a look. Any hunches for the gradual increase from repeated runs? It looks like just under 1 GB accumulates and isn't freed on each run.
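One quick check for that (a sketch, not from the thread): force a garbage collection between runs and see whether RSS drops. If it does not, the ~1 GB is likely held by live references or by C-level allocations rather than by uncollected cycles.

```python
# Sketch: check whether the per-run growth is reclaimable by the garbage
# collector. If RSS stays flat after gc.collect(), the memory is held by
# live objects or by the C extension itself. Input file is hypothetical.
import gc
import os

import psutil
from jwst.ramp_fitting import RampFitStep

proc = psutil.Process(os.getpid())
for i in range(3):
    RampFitStep.call("ramp_uncal.fits")
    before = proc.memory_info().rss
    gc.collect()
    after = proc.memory_info().rss
    print(f"run {i}: RSS {before / 2**30:.2f} GiB before collect, "
          f"{after / 2**30:.2f} GiB after")
```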
xref for the jwst issue: https://github.com/spacetelescope/jwst/issues/8668
Repeated runs of `Detector1` with `OLS_C` show a steady increase in memory usage. The increase goes away if the algorithm is switched to `OLS`. These runs are described in more detail in the linked jwst issue.
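For reference, a sketch of how the algorithm can be switched per run via the ramp_fit step's `algorithm` parameter (the steps-override spelling is assumed from the jwst documentation; the input file name is hypothetical):

```python
# Sketch: run Detector1 with ramp fitting switched to the pure-Python
# OLS implementation instead of the default OLS_C.
from jwst.pipeline import Detector1Pipeline

result = Detector1Pipeline.call(
    "uncal.fits",  # hypothetical input file
    steps={"ramp_fit": {"algorithm": "OLS"}},
)
```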