I tested this on dev. Here's the memory usage before (< 1pm) and after (> 1pm):
In the before graph, the drop around 12:30 is the end of aggregate-known, and the final spike before that is loading serials from Let's Encrypt. We have enough certs from LE now that we were getting consistent OOM failures there.
In the after graph, the initial bump at 1:05 is in aggregate-crls. It has the same issue as aggregate-known, but the number of serials is much smaller, so I'm going to leave it alone for now. The main point is that the memory usage in aggregate-known is now negligible.
The memory usage in the last half hour of both images is when we're actually generating the filter. We haven't seen OOM issues there, but I'm running a heap profiler on a local instance to see if I can reduce the memory usage at all.
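For anyone who wants to poke at the filter-generation memory locally, here's a minimal sketch of the kind of heap profiling I'm doing, using Python's built-in `tracemalloc`. This assumes the step being profiled is Python; `generate_filter` is just a placeholder for the real entry point, not an actual function in this repo.

```python
# Minimal heap-profiling sketch using Python's built-in tracemalloc.
# `generate_filter` is a placeholder standing in for the real
# filter-generation entry point.
import tracemalloc


def generate_filter():
    # Dummy workload: allocate a bunch of serial-sized byte strings.
    return [bytes(32) for _ in range(100_000)]


tracemalloc.start()

result = generate_filter()

# Report the top allocation sites by line number.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)

# Current and peak traced memory since tracemalloc.start().
current, peak = tracemalloc.get_traced_memory()
print(f"current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
```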
Resolves #280