YuanGongND / ast

Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer".
BSD 3-Clause "New" or "Revised" License

Use librosa for inference.py instead of torchaudio #29

Closed AlexJian1086 closed 3 years ago

AlexJian1086 commented 3 years ago

Hi, I was going through the inference pipeline and wanted to know if there is a way to replace the Kaldi fbank implementation with the librosa library. I am hoping to run it on my Jetson device, and Kaldi relies on the MKL library, which is not suitable for ARM architectures.

I've tried multiple approaches, but the results are not the same as Kaldi's fbank implementation. Any help would be appreciated. Thank you.

@JeffC0628 @YuanGongND

AlexJian1086 commented 3 years ago

With reference to the paper's mel filterbank computation, I am using the librosa.feature.melspectrogram() function to replace the torchaudio Kaldi call in inference.py, but I am not sure how to replicate parameters such as the '25ms Hamming window every 10ms': what should hop_length, n_fft, and win_length be for librosa? Please clarify.

YuanGongND commented 3 years ago

Hi there,

Matching the outputs of librosa and torchaudio is out of the scope of this repo; you should consult the librosa or torchaudio authors. It might be hard to make them exactly the same, but you should be able to get similar output with appropriate parameters. Alternatively, you can train/fine-tune the model on librosa-generated spectrograms.

Specifically for librosa.feature.melspectrogram(): hop_length should correspond to 10 ms (160 samples at 16 kHz), win_length to 25 ms (400 samples), window should be scipy.signal.windows.hann, sr should be 16,000, and n_mels should be 128.
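
A minimal sketch of those settings in librosa (the file path is a placeholder, and the log scaling here plus Kaldi defaults such as dithering and power-of-two FFT padding will still cause small numerical differences):

```python
import librosa
import scipy.signal

sr = 16000                       # AST features assume 16 kHz audio
win_length = int(0.025 * sr)     # 25 ms window -> 400 samples
hop_length = int(0.010 * sr)     # 10 ms shift  -> 160 samples

# "sample.wav" stands in for any 16 kHz mono clip.
y, _ = librosa.load("sample.wav", sr=sr, mono=True)

mel = librosa.feature.melspectrogram(
    y=y,
    sr=sr,
    n_fft=win_length,            # Kaldi pads the FFT to a power of two (512) by default
    win_length=win_length,
    hop_length=hop_length,
    window=scipy.signal.windows.hann,
    n_mels=128,                  # 128 Mel bins, as in the paper
)
logmel = librosa.power_to_db(mel).T   # (frames, 128), mirroring the Kaldi fbank layout
```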

-Yuan

AlexJian1086 commented 3 years ago

Ah okay, thank you for the clarification. But what exactly should I fine-tune here to reproduce the AudioSet inference pipeline's results? I assume the window size, overlap, number of mel bins, etc. would remain the same as in the paper?

Also, are the fbanks computed by torchaudio.compliance.kaldi.fbank the same as those from librosa.feature.melspectrogram() and python_speech_features.base.logfbank?

YuanGongND commented 3 years ago

So the best way is to train and test on features extracted with the same toolkit. For audio event classification, you can just reuse our window size, overlap, etc., to save the time of searching for them; if your task is significantly different from audio event classification, you can consider using your own parameters.

The outputs of different toolkits might differ; you need to run experiments to confirm whether they are the same.
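
If it helps, a rough sketch of such an experiment might look like this (the file path and parameter values are assumptions based on the paper; Kaldi's dithering, pre-emphasis, and power-of-two FFT padding mean the two will not match exactly):

```python
import numpy as np
import librosa
import torch
import torchaudio

# "sample.wav" stands in for any 16 kHz mono clip.
wav, sr = librosa.load("sample.wav", sr=16000, mono=True)

# Kaldi-compatible log-Mel filterbank via torchaudio (25 ms window, 10 ms shift, 128 bins).
fbank = torchaudio.compliance.kaldi.fbank(
    torch.from_numpy(wav).unsqueeze(0),
    sample_frequency=sr,
    frame_length=25.0,
    frame_shift=10.0,
    num_mel_bins=128,
    window_type="hanning",
    htk_compat=True,
    use_energy=False,
    dither=0.0,
).numpy()                                    # (frames, 128)

# librosa counterpart with matched window, hop, and Mel-bin count.
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=400, win_length=400, hop_length=160,
    window="hann", n_mels=128, htk=True, power=2.0,
)
logmel = np.log(mel.T + 1e-10)               # (frames, 128), natural-log scale like Kaldi

# Frame counts and scaling conventions differ, so compare only the overlapping frames.
n = min(fbank.shape[0], logmel.shape[0])
print("mean abs difference:", np.abs(fbank[:n] - logmel[:n]).mean())
```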

-Yuan