
Official implementation of SpeechFormer, written in Python (PyTorch).

SpeechFormer

Paper: SpeechFormer: A Hierarchical Efficient Framework Incorporating the Characteristics of Speech
This paper was submitted to INTERSPEECH 2022.

Getting started

Install dependencies

All dependencies can be installed using pip:

python -m pip install -r requirements.txt

Our experiments were run with Python 3.6 and PyTorch 1.5. Other versions should work but have not been tested.

Prepare data

Download datasets

Note that you need to create a metadata file (in .csv format) for each dataset, recording the name, label, and split (e.g. train, dev or test) of each sample. Then set the meta_csv_file argument in ./config/xxx_feature_config.json to the absolute path of the corresponding .csv file. Example .csv files are provided in the ./metadata directory.
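As a rough illustration, the sketch below writes a metadata file with the columns described above; the sample names, labels, and column names are hypothetical, so follow the example files in ./metadata for the exact layout.

import csv

# Hypothetical entries; use the column layout of the example files in ./metadata.
rows = [
    {"name": "Ses01F_impro01_F000", "label": "neu", "state": "train"},
    {"name": "Ses01F_impro01_F001", "label": "hap", "state": "dev"},
    {"name": "Ses01F_impro01_F002", "label": "sad", "state": "test"},
]

with open("iemocap_metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "label", "state"])
    writer.writeheader()
    writer.writerows(rows)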

Extract acoustic feature
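The extraction scripts read their settings from ./config/xxx_feature_config.json. As a minimal, generic sketch of the kind of acoustic feature involved (not the repository's actual extraction script), a log-mel spectrogram can be computed with torchaudio as follows; the file path and frame parameters are assumptions.

import torch
import torchaudio

# Load a waveform (the path is hypothetical).
waveform, sample_rate = torchaudio.load("sample.wav")

# Log-mel spectrogram; n_fft, hop_length and n_mels are illustrative values,
# not the settings used in this repository.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=400, hop_length=160, n_mels=80
)
log_mel = torch.log(mel_transform(waveform) + 1e-6)
print(log_mel.shape)  # (channels, n_mels, frames)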

Train model

Set the hyper-parameters in ./config/config.py and ./config/model_config.json.
Note: the value of expand in ./config/model_config.json is [1, 1, 1, -1] for SpeechFormer-S and [1, 1, 2, -1] for SpeechFormer-B.
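For example, a small helper along these lines can switch between the two settings; the top-level key name is an assumption, so check the actual layout of ./config/model_config.json.

import json

# Minimal sketch for toggling SpeechFormer-S / SpeechFormer-B; the key
# "SpeechFormer" is an assumption about the config layout.
with open("./config/model_config.json") as f:
    model_config = json.load(f)

model_config["SpeechFormer"]["expand"] = [1, 1, 1, -1]    # SpeechFormer-S
# model_config["SpeechFormer"]["expand"] = [1, 1, 2, -1]  # SpeechFormer-B

with open("./config/model_config.json", "w") as f:
    json.dump(model_config, f, indent=4)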
Next, run:

python train_model.py

You can also pass the hyper-parameters from the command line for convenience; more details can be found in train_model.py.
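For instance, an invocation along these lines is possible; the flag names below are only placeholders, and the actual argument names are defined in train_model.py.

python train_model.py --batch_size 32 --epochs 120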