Closed — svlad-90 closed this issue 4 years ago.
The analysis was performed. Unfortunately, there is nothing to optimize.
From a "cache fill in speed" perspective, when dlt message reaches the plugin's cache, it's payload is being:
All the above operations make collecting the cache visibly slower. Still, there is no way to achieve "fast search + low RAM consumption" without them. The identified bottleneck in one of the real projects was a decoder plugin, whose implementation can hardly be changed.
From a RAM consumption perspective: as the cache stores the decoded and unpacked payload, its size is bigger than the initial size of the message within the QDltMsg. Still, that allows us to achieve a faster search, as the preparation of each message is done only once, when it reaches the cache.
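The decode-once idea above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the plugin's actual implementation: `RawMsg` and `decodePayload` stand in for `QDltMsg` and the decoder-plugin pass, and the cache key is an invented message id.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in for QDltMsg: packed, undecoded payload bytes.
struct RawMsg {
    uint64_t id;
    std::vector<uint8_t> payload;
};

// Stand-in for the (potentially slow) decoder-plugin pass.
std::string decodePayload(const RawMsg& msg) {
    std::string out;
    for (uint8_t b : msg.payload) {
        out += std::to_string(b) + ' ';
    }
    return out;
}

class DecodedCache {
public:
    // Decode on first access only; every later search over the same
    // message reuses the already-decoded string.
    const std::string& get(const RawMsg& msg) {
        auto it = cache_.find(msg.id);
        if (it == cache_.end()) {
            it = cache_.emplace(msg.id, decodePayload(msg)).first;
        }
        return it->second;
    }
    std::size_t size() const { return cache_.size(); }

private:
    std::unordered_map<uint64_t, std::string> cache_;
};
```

The trade-off is exactly the one described in this note: the cache holds the expanded string form of every payload (more RAM), so repeated searches never pay the decode cost again.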
The RAM overhead is roughly 2x the size of the opened DLT file. E.g. for a DLT file of ~420 MB, expect on the order of 840 MB of additional RAM consumption.
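As a back-of-the-envelope check of the ~2x figure (the factor is the one reported above; the helper function is illustrative, not part of the plugin):

```cpp
#include <cassert>

// Estimate cache RAM consumption from the opened DLT file size,
// using the empirically observed overhead factor (~2x).
constexpr double expectedCacheMb(double fileMb, double overheadFactor = 2.0) {
    return fileMb * overheadFactor;
}
```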
Seems to be a fair price for fast search.
Anyway, it is good that this analysis was done, as now we understand why the plugin works this way. This task can be closed with this covering note as its outcome.
Fix mismatch between cache-size consumption measurement inside the plugin and "Task manager"'s RAM consumption