johannfaouzi / pyts

A Python package for time series classification
https://pyts.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Difference between SAX and KBinsDiscretizer #142

Closed janthmueller closed 1 year ago

janthmueller commented 1 year ago

Is there any difference between the SAX with an 'ordinal' alphabet and the KBinsDiscretizer?

    def transform(self, X):
        """Bin the data with the given alphabet.

        Parameters
        ----------
        X : array-like, shape = (n_samples, n_timestamps)
            Data to transform.

        y
            Ignored

        Returns
        -------
        X_new : array, shape = (n_samples, n_timestamps)
            Binned data.

        """
        X = check_array(X, dtype='float64')
        n_timestamps = X.shape[1]
        alphabet = self._check_params(n_timestamps)
        discretizer = KBinsDiscretizer(
            n_bins=self.n_bins, strategy=self.strategy)
        indices = discretizer.fit_transform(X)
        if isinstance(alphabet, str):
            return indices
        else:
            return alphabet[indices]

If not, can I reproduce the original algorithm by building a pipeline using:

or should I use the StandardScaler after PAA?

johannfaouzi commented 1 year ago

KBinsDiscretizer is different from SAX because SAX is a "row-wise" discretization (i.e., discretizing each time series independently) while KBinsDiscretizer is a "column-wise" discretization (i.e., discretizing each column independently). KBinsDiscretizer is used for the Symbolic Fourier Approximation (SFA) algorithm which discretizes the Fourier coefficients of a time series.

If you want to reproduce the original SAX algorithm, you indeed need a pipeline consisting of:

janthmueller commented 1 year ago

So, as I understand it, this means that given an input X in the form (n_samples, n_timestamps), the KBinsDiscretizer discretizes over the individual timestamps while SAX discretizes over the individual samples?

However, this would mean for the KBinsDiscretizer that it could not generate bins with n_samples = 1, which is not true according to my observation.

A further test of the equivalence of both methods across different numbers of bins and strategies suggests to me that these methods are the same. For example:

import numpy as np
from pyts.approximation import SymbolicAggregateApproximation
from pyts.datasets import load_gunpoint
from pyts.preprocessing import KBinsDiscretizer, StandardScaler

strategy = 'normal'
n_bins = 10

ts = load_gunpoint(return_X_y=True)[0]

SCALER = StandardScaler()
ts = SCALER.transform(ts)
SAX = SymbolicAggregateApproximation(n_bins=n_bins, strategy=strategy, alphabet='ordinal')
KBINS = KBinsDiscretizer(n_bins=n_bins, strategy=strategy)
same = np.all(KBINS.transform(ts) == SAX.transform(ts))
print(same)

Outputs: True

johannfaouzi commented 1 year ago

Sorry for my mistake, what I said is indeed totally wrong.

When you mentioned KBinsDiscretizer, I actually thought about MultipleCoefficientBinning, which performs the discretization column-wise and is used in Symbolic Fourier Approximation.

KBinsDiscretizer is indeed very similar to SAX; it just does not support the alphabet argument. If you look at the source code of SAX, you can see that it simply uses KBinsDiscretizer and takes care of returning symbols (instead of integers) if necessary. KBinsDiscretizer is mainly there to provide most of the preprocessing tools from scikit-learn (which are applied column-wise, feature-wise) in pyts (where they are applied row-wise, sample-wise).
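As a rough illustration of that behavior (a hand-rolled sketch with made-up names, not pyts's actual internals), the 'normal' strategy amounts to binning values against standard-normal quantile breakpoints and optionally mapping the integer bins to symbols:

```python
import numpy as np
from scipy.stats import norm

def normal_bins(X, n_bins, alphabet=None):
    """Bin each value against standard-normal quantile breakpoints.

    Hypothetical helper mirroring the idea, not pyts's real API.
    """
    # n_bins - 1 interior breakpoints at equally spaced normal quantiles.
    breakpoints = norm.ppf(np.linspace(0, 1, n_bins + 1)[1:-1])
    indices = np.searchsorted(breakpoints, X, side='right')
    if alphabet is None:
        # 'ordinal'-style output: integer bin indices in [0, n_bins)
        return indices
    # symbolic output: fancy indexing maps each bin index to a letter
    return np.asarray(alphabet)[indices]

X = np.array([[-2.0, -0.5, 0.0, 0.5, 2.0]])
print(normal_bins(X, 4))                         # integer bins
print(normal_bins(X, 4, alphabet=list('abcd')))  # symbolic bins
```

Because the Gaussian breakpoints are fixed (they do not depend on the data), this binning is the same whether you think of it as row-wise or column-wise, which is consistent with the equivalence observed above.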

Edit: most of the tools in the preprocessing module are rarely used in practice (or not really mentioned in the literature, I think). They are just there for convenience and usually just wrap the implementation in scikit-learn (transposing the input matrix, applying the scikit-learn implementation, then transposing the output matrix). It's not the case for KBinsDiscretizer because I didn't want to allow the k-means strategy for computing the bins, and I wanted to add the 'normal' strategy.
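The transpose trick described above can be sketched in a few lines (a toy column-wise standardizer stands in for a scikit-learn preprocessing tool; the function names here are illustrative):

```python
import numpy as np

def columnwise_standardize(X):
    """Standardize each column independently (the scikit-learn convention)."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def rowwise_standardize(X):
    """Standardize each row (time series) by transposing around the
    column-wise implementation, as described above."""
    return columnwise_standardize(X.T).T

X = np.array([[0., 1., 2., 3.],
              [10., 20., 30., 40.]])
Z = rowwise_standardize(X)
# Each row of Z now has mean 0 and standard deviation 1.
```

The same wrapping turns any feature-wise scikit-learn transformer into a sample-wise one for time series data.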

https://github.com/johannfaouzi/pyts/blob/24345921f09c608f75be99c7f1fcb4ea19c63676/pyts/approximation/sax.py#L94-L102

janthmueller commented 1 year ago

Thank you for clearing up the misunderstanding and for the additional info about the scikit-learn implementation!