
Unsupervised Anomaly Detection in Stream Data with Online Evolving Spiking Neural Networks #6


Gashongore commented 4 years ago

Paper title: Unsupervised Anomaly Detection in Stream Data with Online Evolving Spiking Neural Networks

• Authors/Affiliations: Piotr S. Maciąg, Marzena Kryszkiewicz, Robert Bembenik, Jesus L. Lobo, Javier Del Ser; Warsaw University of Technology, Poland

• Paper: https://arxiv.org/abs/1912.08785

• Tags: [Evolving Spiking Neural Networks] [Anomaly detection] [Outliers detection] [Online learning] [Time series data]

• What is it? Online evolving Spiking Neural Networks for Unsupervised Anomaly Detection (OeSNN-UAD)

• How is it great compared to the related works?
  - The distinctive feature of an OeSNN is its evolving repository of output neurons: in the training phase, the repository is updated with a new output neuron created for each new input sample presented to OeSNN.
  - The size of the output-neuron repository in OeSNN is limited: older neurons are removed from the repository and replaced with new ones.

• The key technical differentiator: The authors derived and used an important property of the eSNN neuronal model, namely that the actual post-synaptic potential thresholds of all output neurons have the same value. This property removes the need to recalculate these thresholds when output neurons are updated during learning and speeds up the classification of the input stream data. (A rough sketch of this repository/threshold mechanism is given after the criteria list below.)

• How did they validate the advantages? They used two benchmarks, the Numenta Anomaly Benchmark and the Yahoo Anomaly Dataset, and demonstrated the experimental performance of the proposed model/algorithms.

• Are there any discussions around the proposal?
  - The paper states that it works with data streams, but the authors actually used time-series data with a sliding-window strategy, which does not satisfy all of the following criteria for anomaly/outlier detection in data streams:

  1. Predictions must be made online; i.e., the algorithm must identify state Xt as normal or anomalous before receiving the subsequent Xt+1. ---> Not satisfied: the proposed approach does not do this (see the toy stream example below for what per-sample online prediction would look like).
  2. The algorithm must learn continuously without a requirement to store the entire stream. ---> Partially satisfied: the proposed algorithm learns from a large window of input data, which has to be stored.
  3. The algorithm must run in an unsupervised, automated fashion, i.e., without data labels or manual parameter tweaking. ---> Satisfied.
  4. Algorithms must adapt to dynamic environments and concept drift, as the underlying statistics of the data stream are often non-stationary. ---> Satisfied.
  5. Algorithms should detect anomalies as early as possible. ---> Not satisfied: the approach did not perform well on this criterion.
  6. Algorithms should minimize false positives and false negatives (this is true for batch scenarios as well). ---> I suspect the model was overfit during training, which would explain the claimed performance improvement.
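
To make the repository and threshold points above concrete, here is a rough sketch. This is not the authors' code: the class name, the Euclidean-distance merge rule, and all parameter values (`max_size`, `sim_threshold`, `threshold_fraction`, `mod`) are simplifying assumptions of mine. The only property it tries to illustrate is that, since every candidate neuron is built with the same rank-order weighting `mod**order`, the maximal post-synaptic potential is the same for all output neurons, so a single firing threshold can be precomputed instead of being recalculated whenever the repository is updated.

```python
# Rough sketch only (not the OeSNN-UAD reference implementation).
from collections import deque
import numpy as np


class EvolvingRepository:
    """Bounded repository of output neurons sharing a single precomputed firing threshold."""

    def __init__(self, max_size=50, sim_threshold=0.1, threshold_fraction=0.6,
                 mod=0.9, n_inputs=10):
        self.sim_threshold = sim_threshold
        self.mod = mod
        self.n_inputs = n_inputs
        self.neurons = deque(maxlen=max_size)    # oldest neuron is dropped when the repository is full
        # All candidate neurons use the same rank-order weights mod**order, so the
        # maximal PSP is identical for every neuron and the threshold is computed once.
        psp_max = sum(self.mod ** (2 * k) for k in range(n_inputs))
        self.firing_threshold = threshold_fraction * psp_max

    def _candidate(self, firing_order, value):
        # Rank-order weighting: input neurons that fire earlier get larger weights.
        weights = np.array([self.mod ** o for o in firing_order], dtype=float)
        return {"w": weights, "value": float(value), "count": 1}

    def update(self, firing_order, value):
        """Create a candidate neuron for the new sample, then merge it or insert it."""
        cand = self._candidate(firing_order, value)
        for n in self.neurons:
            if np.linalg.norm(n["w"] - cand["w"]) <= self.sim_threshold:
                # Merge with a sufficiently similar existing neuron (running averages).
                k = n["count"]
                n["w"] = (n["w"] * k + cand["w"]) / (k + 1)
                n["value"] = (n["value"] * k + cand["value"]) / (k + 1)
                n["count"] = k + 1
                return
        self.neurons.append(cand)                # evicts the oldest neuron when the deque is full

    def predict(self, firing_order):
        """Return the value stored by the most activated neuron if its PSP crosses the shared threshold."""
        if not self.neurons:
            return None
        x = np.array([self.mod ** o for o in firing_order], dtype=float)
        psps = [float(n["w"] @ x) for n in self.neurons]
        best = int(np.argmax(psps))
        return self.neurons[best]["value"] if psps[best] >= self.firing_threshold else None
```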
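
And a toy run of that sketch on a synthetic stream, to make criteria 1 and 2 concrete: every value is classified using only past samples, before the next value arrives, and memory stays bounded by the repository size rather than growing with the stream. The GRF-style encoding and the fixed error cutoff below are, again, my own simplifications rather than the paper's procedure.

```python
import numpy as np

def grf_firing_order(x, n_inputs=10, lo=0.0, hi=1.0):
    """Toy GRF-style encoding: the input neuron whose centre is closest to x fires first."""
    centers = np.linspace(lo, hi, n_inputs)
    distances = np.abs(centers - x)
    return np.argsort(np.argsort(distances))     # rank 0 = earliest-firing input neuron

rng = np.random.default_rng(0)
repo = EvolvingRepository(max_size=50, n_inputs=10)

stream = 0.5 + 0.05 * rng.standard_normal(500)   # mostly "normal" values
stream[300] = 0.95                               # one injected anomaly

for t, x in enumerate(stream):
    order = grf_firing_order(x)
    pred = repo.predict(order)                   # decision made before x_{t+1} is seen (criterion 1)
    if pred is not None and abs(pred - x) > 0.2: # crude fixed cutoff, for illustration only
        print(f"t={t}: value {x:.2f} flagged as anomalous (prediction {pred:.2f})")
    repo.update(order, x)                        # learn from the sample; memory stays bounded (criterion 2)
```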