The "attention entropy" does essentially the following:

1. Given an input time series `x`, identify the local minima and maxima of `x`.
2. Count the number of steps between local extrema, in some way (max-min, min-max, max-max, min-min; we'd model this by having a parameter that can take on these four values).
3. Construct a new time series `y` consisting of the step counts between extrema (so `x` is drastically shortened in most cases).
4. Use `probabilities(UniqueElements(), y)` to get probabilities.
5. Plug these probabilities into the Shannon entropy formula.
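Put together, the min-max variant of the pipeline above could be sketched as follows. This is a self-contained illustration with hypothetical helper names; it does not use the actual `probabilities` machinery:

```julia
# Indices of local minima and maxima of a vector (simple three-point test).
function local_extrema(x::AbstractVector)
    minima, maxima = Int[], Int[]
    for i in 2:length(x)-1
        if x[i] < x[i-1] && x[i] < x[i+1]
            push!(minima, i)
        elseif x[i] > x[i-1] && x[i] > x[i+1]
            push!(maxima, i)
        end
    end
    return minima, maxima
end

# Step counts from each local minimum to the next local maximum (min-max spacing).
function minmax_spacings(x::AbstractVector)
    minima, maxima = local_extrema(x)
    y = Int[]
    for m in minima
        j = findfirst(>(m), maxima)
        j === nothing || push!(y, maxima[j] - m)
    end
    return y
end

# Shannon entropy of the relative frequencies of the unique spacings,
# mimicking probabilities(UniqueElements(), y) followed by the entropy formula.
function spacing_entropy(x)
    y = minmax_spacings(x)
    ps = [count(==(v), y) / length(y) for v in unique(y)]
    return -sum(p * log(p) for p in ps)
end
```

For example, `minmax_spacings([0, 1, 0, 1, 0, 1, 2, 0])` gives `[1, 2]`, and the two equally likely spacings yield an entropy of `log(2)`.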
This can be implemented as an `OutcomeSpace`. Maybe `MotifSpacing` is a good name? The method generalizes to any sort of pattern spacing; it is just a matter of encoding differently. An easy way to do so is to dispatch on `MotifSpacing(::Pattern)`, where `Pattern` could be `MinMaxSpacing`, `MaxMinSpacing`, `MaxMaxSpacing`, `MeanMeanSpacing`, `MedianMedianSpacing`, `MedianQuantileSpacing`, etc.
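A minimal sketch of what that dispatch could look like. All type and function names here are hypothetical (nothing below exists yet), and only the min-max and max-min patterns are shown:

```julia
# Hypothetical sketch of the proposed dispatch design.
abstract type Pattern end
struct MinMaxSpacing <: Pattern end
struct MaxMinSpacing <: Pattern end

# The outcome space wraps a pattern that selects how extrema are paired.
struct MotifSpacing{P<:Pattern}
    pattern::P
end

# Simple three-point tests for local extrema.
islocalmin(x, i) = x[i] < x[i-1] && x[i] < x[i+1]
islocalmax(x, i) = x[i] > x[i-1] && x[i] > x[i+1]

# Each pattern gets its own spacing method: min-max counts steps from each
# local minimum to the next local maximum, max-min the other way around.
function spacings(::MotifSpacing{MinMaxSpacing}, x)
    mins = [i for i in 2:length(x)-1 if islocalmin(x, i)]
    maxs = [i for i in 2:length(x)-1 if islocalmax(x, i)]
    return [maxs[findfirst(>(i), maxs)] - i for i in mins if any(>(i), maxs)]
end
function spacings(::MotifSpacing{MaxMinSpacing}, x)
    mins = [i for i in 2:length(x)-1 if islocalmin(x, i)]
    maxs = [i for i in 2:length(x)-1 if islocalmax(x, i)]
    return [mins[findfirst(>(i), mins)] - i for i in maxs if any(>(i), mins)]
end
```

Supporting `MaxMaxSpacing`, `MeanMeanSpacing`, and so on would then just mean adding one more `Pattern` subtype and one more `spacings` method.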
It will not be straightforward to `decode`/`encode`. However, `codify` can be implemented: it simply returns the encoded time series `y`.
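The point is that the spacing series is already a sequence of small integers, so `codify` can return it as-is with no separate decoding step. A self-contained toy version (hypothetical names again, pairing each extremum with the next one of either kind):

```julia
# Toy stand-in for the proposed outcome space type.
struct MotifSpacingSketch end

# codify would simply return the encoded series y: the step counts between
# consecutive local extrema. No decode step is defined.
function codify_sketch(::MotifSpacingSketch, x)
    ext = [i for i in 2:length(x)-1 if
           (x[i] < x[i-1] && x[i] < x[i+1]) || (x[i] > x[i-1] && x[i] > x[i+1])]
    return diff(ext)
end
```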