Similar to the default LDA implementation in sklearn (see here), this PR exposes an option to standardize the feature dimensions before estimating the shrinkage gamma.
We found that this slightly improves classification performance, especially when the feature dimensions differ largely in scale. This can happen when one channel with bad impedance shows much higher variance than the other channels but cannot be rejected for some reason.
While we always standardize before estimating the shrinkage amount in our own pipelines, the default is set to False so as not to interfere with existing code bases.
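For illustration, here is a minimal sketch of the idea, not the PR's actual API: the function name `shrinkage_gamma` and the `standardize` flag are placeholders, and the Ledoit-Wolf estimator from sklearn stands in for whatever shrinkage rule the classifier uses. It only shows how z-scoring the feature dimensions before estimating gamma changes the result when one channel has a much larger scale.

```python
import numpy as np
from sklearn.covariance import ledoit_wolf_shrinkage
from sklearn.preprocessing import StandardScaler


def shrinkage_gamma(X, standardize=True):
    """Estimate the Ledoit-Wolf shrinkage gamma, optionally after
    standardizing the feature dimensions (z-scored columns).

    X : ndarray, shape (n_samples, n_features)
    """
    if standardize:
        # Rescale each feature dimension to unit variance so that a single
        # high-variance channel (e.g. bad impedance) does not dominate gamma.
        X = StandardScaler().fit_transform(X)
    return ledoit_wolf_shrinkage(X)


# Toy example: one "channel" with a much larger scale than the others.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
X[:, 0] *= 50.0  # simulate a noisy, high-variance channel
print(shrinkage_gamma(X, standardize=False))
print(shrinkage_gamma(X, standardize=True))
```

As in sklearn's LDA with `shrinkage="auto"`, the shrunken covariance would then be rescaled back to the original feature scale after gamma has been estimated on the standardized data.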