YuanGongND / ssast

Code for the AAAI 2022 paper "SSAST: Self-Supervised Audio Spectrogram Transformer".
BSD 3-Clause "New" or "Revised" License

Target length #11

Open kremHabashy opened 1 year ago

kremHabashy commented 1 year ago

Hi Yuan,

Thanks again for this great work; I have been using both this and the original AST model for some downstream tasks. I am currently looking into some other time-series data and was wondering whether there was a particular reason you chose 10 seconds for the audio length during AudioSet pretraining. Why not 5 seconds, or 15? Did you consult any specific resources to arrive at this, or was it more arbitrary?

Thanks, Karim

YuanGongND commented 1 year ago

Hi Karim,

The main reason is that AudioSet, the primary dataset we used to pretrain the SSAST model, mostly consists of 10s audio clips. Using longer or shorter audio lengths is perfectly fine. In my opinion, when the downstream task is unknown, the longer the pretraining audio length the better, because we cut/interpolate the positional embedding to adjust the audio length between the pretraining and fine-tuning stages, and cutting should be better than interpolating. However, Transformer attention is O(n^2) in the input length, so longer inputs are more computationally expensive.

This is the code for positional embedding for different input lengths: https://github.com/YuanGongND/ssast/blob/bfc5c1ab7ddca209690f3a5aed9af2cfafb9d9eb/src/models/ast_models.py#L192-L201
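To make the cut/interpolate idea concrete, here is a minimal sketch (not the exact ssast implementation, which works on a 2D time-frequency patch grid). The function name `adjust_pos_embed` and the tensor shapes are hypothetical: given a pretrained positional embedding of shape `(1, seq_len, dim)`, we cut when the fine-tuning input is shorter and interpolate when it is longer.

```python
import torch
import torch.nn.functional as F

def adjust_pos_embed(pos_embed: torch.Tensor, target_len: int) -> torch.Tensor:
    """Adapt a (1, seq_len, dim) positional embedding to target_len patches.

    Hypothetical 1D sketch of the cut/interpolate idea: if the fine-tuning
    input is shorter than pretraining, cut (keep the first target_len
    positions); if longer, interpolate along the sequence dimension.
    """
    seq_len = pos_embed.shape[1]
    if target_len <= seq_len:
        # Cut: keep the first target_len positional embeddings unchanged.
        return pos_embed[:, :target_len, :]
    # Interpolate: stretch the embedding along the sequence dimension.
    x = pos_embed.transpose(1, 2)  # (1, dim, seq_len) for F.interpolate
    x = F.interpolate(x, size=target_len, mode="linear", align_corners=False)
    return x.transpose(1, 2)       # back to (1, target_len, dim)

# Example: pretrained on a long input, fine-tuned on shorter/longer ones.
pretrain_pos = torch.randn(1, 512, 768)
shorter = adjust_pos_embed(pretrain_pos, 256)  # cut, embeddings preserved
longer = adjust_pos_embed(pretrain_pos, 600)   # interpolated approximation
```

This illustrates why cutting is preferable when possible: the kept positions are exactly the ones seen during pretraining, while interpolation only approximates them.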

-Yuan

kremHabashy commented 1 year ago

Thank you!!