The current temporal anomaly detection method builds its baseline by averaging all eigenvalues at each time point. This gives disproportionate weight to trailing eigenvalues (see this issue in the ornet repo: https://github.com/quinngroup/ornet/issues/18 ).
Which alternative weighting scheme would work better depends on what the variance-explained plots of the data actually look like. Compute these plots either directly from the eigenvalue data (`e[i] / sum(e)`) or by performing PCA on the graph Laplacian data ( https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html ) and checking the `explained_variance_ratio_` attribute of the fitted model.
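The direct eigenvalue route can be sketched as below. This is a minimal NumPy example with synthetic stand-in data, since the shape of the real eigenvalue arrays isn't specified here; the `eigvals` array and its dimensions are assumptions for illustration.

```python
import numpy as np

# Hypothetical stand-in for the per-frame eigenvalue data: rows are time
# points, columns are the (descending-sorted) eigenvalues at that frame.
rng = np.random.default_rng(0)
eigvals = np.sort(rng.exponential(size=(10, 50)), axis=1)[:, ::-1]

# Variance explained by each eigenvalue at each time point: e[i] / sum(e).
variance_explained = eigvals / eigvals.sum(axis=1, keepdims=True)

# Cumulative variance-explained curve per time point; each row ends at 1.
cumulative = np.cumsum(variance_explained, axis=1)
```

Plotting a row of `variance_explained` (or `cumulative`) then shows how concentrated the variance is in the leading eigenvalues, which should suggest how steeply an alternative weighting scheme ought to decay.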
I've tagged Meekail and Marcus to help out.