lamtung16 / ML_ChangepointDetection


Changepoint detection

Publication

Deep Learning Approach for Changepoint Detection: Penalty Parameter Optimization

Changepoint Detection Dynamic Programming Algorithms

Optimal Partitioning (OPART)

To implement the algorithm, we need to know the definition of this loss function:

$$L(x, t_1, t_2) = \min_{\mu} \sum_{i=t_1}^{t_2} (x_i - \mu)^2 = \sum_{i=t_1}^{t_2}\left(x_i - \frac{\sum_{j=t_1}^{t_2} x_j}{t_2 - t_1 + 1}\right)^2 = \sum_{i=t_1}^{t_2} x_i^2 - \frac{\left(\sum_{i=t_1}^{t_2} x_i\right)^2}{t_2 - t_1 + 1}$$

Pseudo code (last changepoint algorithm):
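A minimal Python sketch of this last-changepoint dynamic program, assuming a 1-D numpy array `x` and a fixed penalty $\lambda$ (`opart` and `segment_cost` are illustrative names, not the repository's API):

```python
import numpy as np

def segment_cost(csum, csum2, a, b):
    """L(x, a, b) in O(1) from prefix sums (1-based inclusive indices)."""
    s = csum[b] - csum[a - 1]
    n = b - a + 1
    return (csum2[b] - csum2[a - 1]) - s * s / n

def opart(x, penalty):
    """Optimal partitioning: minimize total segment loss + penalty per segment."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    csum = np.concatenate(([0.0], np.cumsum(x)))      # prefix sums of x_i
    csum2 = np.concatenate(([0.0], np.cumsum(x**2)))  # prefix sums of x_i^2
    F = np.full(n + 1, np.inf)   # F[t] = best cost of x[1..t]
    F[0] = -penalty              # cancels the penalty of the first segment
    last = np.zeros(n + 1, dtype=int)  # last[t] = optimal last changepoint
    for t in range(1, n + 1):
        for tau in range(t):     # tau in {0, 1, ..., t-1}
            c = F[tau] + segment_cost(csum, csum2, tau + 1, t) + penalty
            if c < F[t]:
                F[t], last[t] = c, tau
    cps, t = [], n               # backtrack to recover the changepoints
    while last[t] > 0:
        t = last[t]
        cps.append(t)
    return cps[::-1]
```

Each $F(t)$ considers every possible last changepoint $\tau$, so the recursion runs in $O(N^2)$ time.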

Labeled Optimal Partitioning (LOPART)

LOPART is similar to OPART, with only one difference: inside the for loop, instead of considering $\tau \in \{0, 1, \dots, t-1\}$, we consider only $\tau \in T$, the set of candidates consistent with the number of changepoints specified by all of the train labels.
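A minimal sketch of that one change, reusing `segment_cost` from the OPART sketch above; `candidates_for` is a hypothetical callable returning the label-consistent set $T$ for each $t$:

```python
import numpy as np

def lopart(x, penalty, candidates_for):
    """Same dynamic program as opart(); only the candidate set differs.
    candidates_for(t) must yield the label-consistent set T for each t;
    segment_cost() is the helper from the OPART sketch above."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    csum = np.concatenate(([0.0], np.cumsum(x)))
    csum2 = np.concatenate(([0.0], np.cumsum(x**2)))
    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        for tau in candidates_for(t):  # the only change vs. range(t) in OPART
            c = F[tau] + segment_cost(csum, csum2, tau + 1, t) + penalty
            if c < F[t]:
                F[t], last[t] = c, tau
    cps, t = [], n
    while last[t] > 0:  # backtrack as in OPART
        t = last[t]
        cps.append(t)
    return cps[::-1]
```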

Penalty Value Learning

To choose the best value of $\lambda$ (for either OPART or LOPART), we learn the best $\lambda$ on the train set, then apply that $\lambda$ to the test set.
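A minimal grid-search sketch of that train/test procedure, assuming a hypothetical `label_error(changepoints, labels)` that counts how many train labels a segmentation violates:

```python
import numpy as np

def choose_penalty(x_train, labels_train, label_error,
                   penalties=np.logspace(-2, 2, 50)):
    """Return the candidate penalty with the fewest errors on the train labels."""
    best_lam, best_err = penalties[0], np.inf
    for lam in penalties:
        cps = opart(x_train, lam)        # opart() from the sketch above
        err = label_error(cps, labels_train)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam  # then run opart(x_test, best_lam) on the test set
```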


Bayesian Information Criterion (BIC)

Set $\lambda = \log(N)$, where $N$ is the length of the sequence. For example, if $N = 100$, then $\lambda = \log(100) \approx 4.6$.
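With the `opart` sketch above, the BIC penalty is one line (hypothetical usage on a toy sequence):

```python
import numpy as np

x = np.random.default_rng(0).normal(size=100)  # toy sequence, N = 100
penalty = np.log(len(x))                       # BIC: lambda = log(N) ~ 4.6
changepoints = opart(x, penalty)
```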

Linear

A linear model predicts the value of $\lambda$ from sequence features, learned on the train set.

MMIT

Similar to CART, MMIT uses a decision tree to predict the value of $\lambda$ from sequence features.
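MMIT proper minimizes a margin-based interval loss on target intervals of acceptable $\lambda$ values; as a rough stand-in, the sketch below fits an ordinary scikit-learn CART regressor on point targets $\log\lambda$ (the feature columns and targets are made up for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy training data: one row of sequence features per labeled sequence
# (e.g. log length, log variance), with a best log(lambda) as target.
features = np.array([[4.6, 0.2], [6.2, 1.1], [5.3, 0.7], [7.0, 1.5]])
log_lambda = np.array([4.6, 6.0, 5.1, 6.8])

tree = DecisionTreeRegressor(max_depth=2).fit(features, log_lambda)
predicted_penalty = np.exp(tree.predict(features))  # lambda per sequence
```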

Changepoint detection penalty learning


Folders:

Python Files and Notebooks:

Generating Figures from Scratch:

Copyright

Unless otherwise stated, all content in this repository is licensed under the Creative Commons Attribution 4.0 International License. You are free to:

Share: copy and redistribute the material in any medium or format.
Adapt: remix, transform, and build upon the material for any purpose, even commercially.

Under the following terms:

Attribution: you must give appropriate credit, provide a link to the license, and indicate if changes were made.

For any questions regarding licensing or the use of this repository, please contact Tung L Nguyen.