Closed henrygouk closed 3 years ago
@henrygouk I tried your configuration but I did not get a negative average memory value. I used the HIGGS dataset. Can you please confirm that this is the same configuration?
EvaluatePrequential -l (meta.LimAttClassifier -n 2) -s (ArffFileStream -f higgs.arff -c 1)
learning evaluation instances,evaluation time (cpu seconds),model cost (RAM-Hours),classified instances,classifications correct (percent),Kappa Statistic (percent),Kappa Temporal Statistic (percent),Kappa M Statistic (percent),model training instances,model serialized size (bytes),ensemble size,change detections,[avg] model training instances,[err] model training instances,[avg] model serialized size (bytes),[err] model serialized size (bytes),[avg] tree size (nodes),[err] tree size (nodes),[avg] tree size (leaves),[err] tree size (leaves),[avg] active learning leaves,[err] active learning leaves,[avg] tree depth,[err] tree depth,[avg] active leaf byte size estimate,[err] active leaf byte size estimate,[avg] inactive leaf byte size estimate,[err] inactive leaf byte size estimate,[avg] byte size estimate overhead,[err] byte size estimate overhead
98050.0,185.615550822,0.0024677503362180073,98050.0,66.3,32.46330573079914,31.919191919191924,29.35010482180294,98050.0,5.1391256E7,378.0,16.0,95398.35449735449,14728.311061809321,125446.2010582011,148413.3826626157,174.86772486772483,210.28406571212855,87.93386243386242,105.14203285606428,87.93386243386242,105.14203285606428,12.679894179894168,4.023230683749194,0.0,0.0,0.0,0.0,1.0,0.0
I end up with a mean memory usage of -335.88 when running the following configuration:
EvaluatePrequential -l (meta.LimAttClassifier -n 2) -s (ArffFileStream -f /home/henry/Datasets/higgs/HIGGS.arff -c 1)
Where HIGGS.arff is the popular HIGGS dataset. The problem does not appear to be solely with the GUI, as the results buffer also contains negative values in the "model serialized size (bytes)" column.
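For context, a minimal sketch of how a "model serialized size (bytes)" figure is typically obtained in Java: serialize the model to an in-memory buffer and count the bytes. This is an illustrative assumption, not MOA's actual implementation; the class and method names here are hypothetical. The point it demonstrates is that the raw measurement of a single object is always non-negative, so a negative value in the results must come from arithmetic done on the measurement afterwards (for example a baseline subtraction or an overflowing accumulator), which is where the bug would need to be traced.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;

public class SerializedSizeSketch {

    // Serialize an object to a byte buffer and report its length.
    // Returning long (not int) avoids overflow for very large models.
    static long serializedSizeBytes(Serializable obj) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(obj);
        }
        return buffer.size();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for ensemble state: a list of weight arrays.
        ArrayList<double[]> model = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            model.add(new double[100]);
        }
        long size = serializedSizeBytes(model);
        System.out.println("serialized size (bytes): " + size);
        // The raw measurement cannot be negative; a negative figure in
        // the output implies later arithmetic (averaging, subtraction,
        // or overflow) produced it.
        System.out.println("negative? " + (size < 0));
    }
}
```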