Open csnazell opened 1 month ago
I suspect this is due to a shortcut taken during development: the solution object returned by the ODE solver is stored directly in the output struct of the simulation frame.
The original MATLAB code captured a matrix of size (27 hrs / time step) × (# differential equations) — typically 540 × (# differential equations) — and used linear interpolation between those points. In porting to Julia it was decided to just capture the solution object and rely on the solution's default interpolation, which the docs imply is algorithm-specific. In hindsight this is likely the source of the memory consumption: the memory benchmarking indicates that consumption depends on the algorithm being used.
Assuming linear interpolation of the output matrix is accurate enough, replicating the MATLAB behaviour should substantially reduce memory consumption.
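If we do go this route, DifferentialEquations.jl can be asked to store only a fixed output grid and to skip the dense, algorithm-specific interpolant via the `saveat` and `dense=false` solver options. A minimal sketch — the ODE, initial conditions, and step size here are placeholders, not the actual clock + phenology model:

```julia
using OrdinaryDiffEq   # provides RODAS5P and solve

# Placeholder ODE standing in for the clock + phenology right-hand side
f(u, p, t) = -0.5 .* u
prob = ODEProblem(f, [1.0, 0.5], (0.0, 27.0))

# 540 steps over 27 h (saveat = 0.05 h), matching the MATLAB
# (540 × n_eqs) output matrix. dense=false drops the stepper's internal
# interpolation data, which is where most of the solution-object memory goes.
sol = solve(prob, RODAS5P(); saveat = 27.0 / 540, dense = false)

# With dense output disabled, sol(t) falls back to linear interpolation
# between the saved points (per the DifferentialEquations.jl docs),
# which is exactly the MATLAB behaviour we'd be replicating.
u_mid = sol(13.5)
```

If the downstream code only ever samples on the grid, `sol.u` can be copied into a plain matrix and the solution object discarded entirely.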
This sounds like a good idea, as the very high memory usage will cause more serious issues, especially during parameter estimation.
Regarding the linear interpolation: I remember the original code had an issue at the 24h timepoint which introduced errors. Is there a way to avoid this when replicating that behaviour here?
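One guess, which would need confirming against the original MATLAB bug: if the 24h error comes from interpolating across a discontinuity at t = 24 (e.g. a photoperiod switch in the clock model), the `tstops` solver option forces the integrator to step exactly onto that time, so no step — and hence no interpolation interval — spans it. Hypothetical sketch with a placeholder ODE:

```julia
using OrdinaryDiffEq

f(u, p, t) = -0.5 .* u                      # placeholder ODE
prob = ODEProblem(f, [1.0], (0.0, 27.0))

# tstops=[24.0] makes the solver hit t = 24 h exactly; with saveat = 0.05
# that time is also on the output grid, so the saved matrix contains the
# state at the discontinuity rather than a value interpolated across it.
sol = solve(prob, RODAS5P(); saveat = 0.05, dense = false, tstops = [24.0])
```

If the discontinuity also changes the right-hand side, a callback at t = 24 (or splitting the integration into two `solve` calls at that point) would be the more robust fix.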
During review it was observed that the memory consumption of a RODAS5P-based clock + phenology simulation is very high (145 Gb of allocations, measured using BenchmarkTools.jl).
Identify where memory is being allocated and optimise memory usage to improve performance.
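To attribute the allocations, BenchmarkTools.jl reports a memory estimate per benchmarked expression, so comparing a default (dense) solve against a `saveat`/`dense=false` solve isolates the solution object's share of the 145 Gb. Sketch with a placeholder ODE, not the real model:

```julia
using OrdinaryDiffEq, BenchmarkTools

f(u, p, t) = -0.5 .* u                      # placeholder for the real model
prob = ODEProblem(f, [1.0, 0.5], (0.0, 27.0))

# Allocation estimates for the two storage strategies; the difference is
# the cost of keeping the dense, algorithm-specific interpolant around.
b_dense  = @benchmark solve($prob, RODAS5P())
b_sparse = @benchmark solve($prob, RODAS5P(); saveat = 0.05, dense = false)

println("dense:  ", memory(b_dense),  " bytes")
println("sparse: ", memory(b_sparse), " bytes")
```

Running the same comparison per algorithm would also confirm the observation above that consumption is algorithm-dependent.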