The clean_windows function seems to introduce discontinuities in the data to be used for calibration - can this affect the quality of the ASR calibration & cleaning?
I ask because I'm working with very long EEG recordings, and the calibration stage is extremely slow given the amount of reference data involved.
So I'm considering randomly dropping a fraction of the samples in the reference data to speed things up. This would introduce many discontinuities - is that a reasonable approach?
Alternatively, I could just take the first N samples of relatively clean data for calibration, but this is less preferable because of possible nonstationarity across the recording.
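For context, the subsampling I have in mind would drop whole fixed-length windows rather than individual samples, so the local temporal structure inside each retained window stays intact and only the window boundaries introduce discontinuities (similar to what `clean_windows` already does). A minimal sketch of the idea, assuming data shaped (channels, samples); the function and parameter names (`subsample_windows`, `win_sec`, `keep_frac`) are mine, not from any library:

```python
import numpy as np

def subsample_windows(data, sfreq, win_sec=1.0, keep_frac=0.5, seed=0):
    """Randomly keep a fraction of fixed-length windows from (channels, samples) data.

    Keeping whole windows rather than individual samples preserves the
    short-range temporal structure that the calibration statistics are
    computed from; only the seams between retained windows are
    discontinuous.
    """
    rng = np.random.default_rng(seed)
    win = int(win_sec * sfreq)          # window length in samples
    n_win = data.shape[1] // win        # number of complete windows
    n_keep = max(1, int(keep_frac * n_win))
    keep = rng.choice(n_win, size=n_keep, replace=False)
    keep.sort()                         # preserve chronological order
    return np.concatenate(
        [data[:, i * win:(i + 1) * win] for i in keep], axis=1
    )

# Example: 32 channels, 10 minutes at 250 Hz, keep 25% of the windows
data = np.random.randn(32, 250 * 600)
ref = subsample_windows(data, sfreq=250, keep_frac=0.25)
```

Keeping the retained windows in chronological order, and sampling them uniformly across the whole recording, would also address the nonstationarity concern that makes the first-N-samples option unattractive.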