MOA is an open source framework for Big Data stream mining. It includes a collection of machine learning algorithms (classification, regression, clustering, outlier detection, concept drift detection and recommender systems) and tools for evaluation.
Dear @all,
In the implementation of LearnNSE available in MOA 2017.06, there is a chance for the sbkt variable to get very close to zero, leading to the computation of the log of infinity when calculating the ensemble weights:
this.ensembleWeights.add(Math.log(1.0 / sbkt));
This led to problems on the Gaussian benchmark suggested by the original author of Learn++.NSE: http://users.rowan.edu/~polikar/research/NSE/
As a workaround, one of the original authors of NSE checks whether "sbkt" is smaller than 0.01; if so, the value is set to 0.01. It can be seen in: https://github.com/gditzler/IncrementalLearning/blob/master/src/learn_nse.m
Check the condition:
if net.beta(net.t,net.t)<net.threshold, net.beta(net.t,net.t) = net.threshold; end
Applying the same threshold in the MOA version of LearnNSE seems to solve the problem.
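To illustrate, here is a minimal Java sketch of how the clamp could be applied before the weight computation in MOA. The class and method names (NseWeightClamp, ensembleWeight) are hypothetical, not MOA's actual code; the 0.01 threshold comes from the MATLAB reference implementation linked above.

```java
public class NseWeightClamp {

    // Threshold from the reference MATLAB implementation (net.threshold).
    static final double THRESHOLD = 0.01;

    // Hypothetical helper mirroring the weight computation in MOA's LearnNSE:
    // clamp sbkt before taking the log so the weight stays finite.
    static double ensembleWeight(double sbkt) {
        if (sbkt < THRESHOLD) {
            sbkt = THRESHOLD;
        }
        return Math.log(1.0 / sbkt);
    }

    public static void main(String[] args) {
        // Normal case: sbkt well away from zero.
        System.out.println(ensembleWeight(0.5));
        // Without the clamp, 1.0 / 1e-300 overflows toward infinity
        // and the weight becomes +Infinity; with it, the weight is log(100).
        System.out.println(ensembleWeight(1e-300));
    }
}
```

Without the guard, Math.log(1.0 / sbkt) returns positive infinity as sbkt approaches zero, which then poisons the ensemble weights; the clamp simply caps the weight at log(1 / 0.01).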
Best regards.