
X-Codec

Unified Semantic and Acoustic Codec for Audio Language Model.

arXiv: https://arxiv.org/abs/2408.17175

Paper

Title: Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model

Authors: Zhen Ye, Peiwen Sun, Jiahe Lei, Hongzhan Lin, Xu Tan, Zheqi Dai, Qiuqiang Kong, Jianyi Chen, Jiahao Pan, Qifeng Liu, Yike Guo, Wei Xue

Overview

Experiments on VALL-E


Highlight

You can easily apply our approach to enhance any existing acoustic codec:

For example:

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel


class Codec(nn.Module):
    def __init__(self):
        super().__init__()
        # Acoustic codec components
        self.encoder = Encoder(...)       # Acoustic encoder
        self.decoder = Decoder(...)       # Acoustic decoder
        self.quantizer = RVQ(...)         # Residual Vector Quantizer (RVQ)

        # Adding the semantic module
        self.semantic_model = AutoModel.from_pretrained(...)  # e.g., HuBERT, WavLM

        # Adding the projectors
        self.fc_prior = nn.Linear(...)    # fuses concatenated features before quantization
        self.fc_post1 = nn.Linear(...)    # recovers semantic features after quantization
        self.fc_post2 = nn.Linear(...)    # recovers acoustic features after quantization

    def forward(self, x, bw):
        # Encode the input acoustically and semantically
        e_acoustic = self.encoder(x)
        e_semantic = self.semantic_model(x).last_hidden_state.transpose(1, 2)  # (B, C, T)

        # Concatenate acoustic and semantic features along the channel dimension
        combined_features = torch.cat([e_acoustic, e_semantic], dim=1)

        # Apply the prior transformation
        transformed_features = self.fc_prior(combined_features)

        # Quantize the unified semantic and acoustic features
        quantized, codes, bandwidth, commit_loss = self.quantizer(transformed_features, bw)

        # Post-process the quantized features
        quantized_semantic = self.fc_post1(quantized)
        quantized_acoustic = self.fc_post2(quantized)

        # Decode the quantized acoustic features
        output = self.decoder(quantized_acoustic)
        return output, codes, bandwidth, commit_loss, e_semantic, quantized_semantic

    def semantic_loss(self, semantic, quantized_semantic):
        # MSE between pre-quantization semantic features and their
        # post-quantization reconstruction
        return F.mse_loss(semantic, quantized_semantic)
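
For intuition, here is a minimal sketch of how the semantic loss can enter the overall codec objective. This is not the repo's actual training loop; the loss weights and the plain L1 reconstruction term are placeholder choices (real codec training typically also uses multi-scale spectral and adversarial losses):

def total_loss(x, output, commit_loss, e_semantic, quantized_semantic,
               recon_weight=1.0, commit_weight=1.0, semantic_weight=1.0):
    # Acoustic reconstruction term (plain L1 keeps the sketch short)
    recon = F.l1_loss(output, x)
    # Semantic term, matching Codec.semantic_loss above
    semantic = F.mse_loss(e_semantic, quantized_semantic)
    return recon_weight * recon + commit_weight * commit_loss + semantic_weight * semantic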

For more details, please refer to our code.

Available models

πŸ€— links to the Huggingface model hub.

| Model name | Hugging Face | Config | Semantic Model | Domain | Training Data |
|---|---|---|---|---|---|
| xcodec_hubert_librispeech | 🤗 | 🤗 | 🤗 Hubert-base | Speech | LibriSpeech |
| xcodec_wavlm_mls (not mentioned in paper) | 🤗 | 🤗 | 🤗 Wavlm-base-plus | Speech | MLS English |
| xcodec_wavlm_more_data (not mentioned in paper) | 🤗 | 🤗 | 🤗 Wavlm-base-plus | Speech | MLS English + internal data |
| xcodec_hubert_general_audio | 🤗 | 🤗 | 🤗 Hubert-base-general-audio | General audio | 200k hours of internal data |
| xcodec_hubert_general_audio_more_data (not mentioned in paper) | 🤗 | 🤗 | 🤗 Hubert-base-general-audio | General audio | More balanced data |

Inference

To run inference, first download the model checkpoint and config from Hugging Face.
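
If you prefer to fetch them programmatically, here is a minimal sketch using huggingface_hub; the repo ID and filenames below are placeholders, so substitute the model you want from the 🤗 links in the table above:

from huggingface_hub import hf_hub_download

# Placeholder repo ID and filenames; substitute the model you want
ckpt_path = hf_hub_download(repo_id="<user>/xcodec_hubert_librispeech", filename="xcodec.pth")
config_path = hf_hub_download(repo_id="<user>/xcodec_hubert_librispeech", filename="config.yaml")
print(ckpt_path, config_path)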

python inference.py

Training

Prepare the training_file and validation_file entries in the config. Each file should list the paths to your audio files, one per line (a small helper sketch follows the example):

/path/to/your/xxx.wav
/path/to/your/yyy.wav
...
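
A small helper sketch for building such a list; it is not part of the repo, and the audio directory and output filename are placeholders:

from pathlib import Path

# Collect all .wav files under a directory and write one path per line
wavs = sorted(Path("/path/to/your/audio").rglob("*.wav"))
Path("training_file.txt").write_text("\n".join(str(p) for p in wavs) + "\n")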

Then:

torchrun --nnodes=1 --nproc-per-node=8 main_launch_vqdp.py
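
Here --nnodes=1 --nproc-per-node=8 launches eight processes on a single machine, one per GPU; set --nproc-per-node to the number of GPUs you have available.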

Acknowledgement

I would like to extend special thanks to the authors of UniAudio and DAC, from whose work our codebase is largely borrowed.

Citation

If you find this repo helpful, please consider citing it in the following format:

@article{ye2024codecdoesmatterexploring,
      title={Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model}, 
      author={Zhen Ye and Peiwen Sun and Jiahe Lei and Hongzhan Lin and Xu Tan and Zheqi Dai and Qiuqiang Kong and Jianyi Chen and Jiahao Pan and Qifeng Liu and Yike Guo and Wei Xue},
      journal={arXiv preprint arXiv:2408.17175},
      year={2024},
}