So far it appears the spike happens during initial dataset loading.
Attempts to smooth it out have had limited success.
We may need to load and process the dataset piecewise, so we don't keep two or three huge duplicates of it sitting in memory at once; see the sketch below.
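A minimal sketch of what piecewise loading could look like, assuming a tabular dataset readable by pandas (the path, chunk size, and `featurize` step are hypothetical stand-ins for our actual pipeline, not code from the repo):

```python
import pandas as pd


def featurize(chunk: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical per-chunk processing step; stands in for whatever
    transformation we currently apply to the full dataset at once."""
    return chunk


def load_dataset_piecewise(path: str, chunk_size: int = 100_000) -> pd.DataFrame:
    """Load and process the dataset chunk-by-chunk so that only one
    chunk's worth of raw + processed rows is resident at a time."""
    processed_chunks = []
    # chunksize makes read_csv return an iterator of DataFrames
    # instead of materializing the whole file in memory
    for raw_chunk in pd.read_csv(path, chunksize=chunk_size):
        processed_chunks.append(featurize(raw_chunk))
        del raw_chunk  # drop the raw copy before reading the next chunk
    # one concatenation at the end, rather than holding a full raw copy
    # alongside a full processed copy for the whole run
    return pd.concat(processed_chunks, ignore_index=True)
```

The point is that peak memory scales with one chunk plus the accumulated output, rather than two or three full copies of the dataset.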
Complementary to #63.
During big runs on the cluster we get infrequent RAM spikes up to 40 GB. I suspect they happen during reporting, but I'm not sure.
In any case, we should find the bottleneck and ease it; requesting 40 GB that we mostly don't use is quite wasteful.
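One way to pin down whether loading or reporting drives the spike, as a sketch using the standard-library `tracemalloc` (the wrapped function is whatever run step we want to instrument; nothing here is in the codebase yet):

```python
import tracemalloc


def report_peak_memory(fn, *args, **kwargs):
    """Run fn while tracing allocations, then print the peak and the
    top allocation sites, to localize the RAM spike."""
    tracemalloc.start()
    result = fn(*args, **kwargs)
    current, peak = tracemalloc.get_traced_memory()
    print(f"peak traced memory: {peak / 1e9:.2f} GB")
    # top 10 allocation sites by total size
    for stat in tracemalloc.take_snapshot().statistics("lineno")[:10]:
        print(stat)
    tracemalloc.stop()
    return result
```

Wrapping the dataset-loading call and the reporting call separately with this should tell us which one asks for the 40 GB.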