microsoft / SpeechT5

Unified-Modal Speech-Text Pre-Training for Spoken Language Processing
MIT License

Is the quantizer used when fine-tuning the pretrained backbone for downstream tasks? #6

Closed zhhao1 closed 2 years ago

zhhao1 commented 2 years ago

The quantizer and mix-up method in joint pre-training are impressive. My question is whether the quantizer is used when fine-tuning the pretrained backbone for downstream tasks or not. While reading the paper, I did not find a related statement. Thanks for answering.

mechanicalsea commented 2 years ago

Thanks for your attention to SpeechT5. The quantizer is not used when fine-tuning the pre-trained backbone for the downstream tasks.
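
To make the distinction concrete, here is a minimal, purely illustrative sketch of how a quantizer branch might be active only in pre-training and bypassed at fine-tuning time. The class name, the `pretraining` flag, and the nearest-neighbour codebook lookup are assumptions for illustration, not the actual SpeechT5 (fairseq-based) implementation described in the paper.

```python
import torch
import torch.nn as nn

class SpeechEncoderSketch(nn.Module):
    """Illustrative encoder: the quantizer branch only participates in pre-training."""

    def __init__(self, hidden_dim=768, codebook_size=100, pretraining=True):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=8, batch_first=True)
        # Vector-quantization codebook, only consulted during pre-training.
        self.codebook = nn.Embedding(codebook_size, hidden_dim)
        self.pretraining = pretraining

    def forward(self, x):
        hidden = self.encoder(x)
        if not self.pretraining:
            # Fine-tuning path: the quantizer is bypassed entirely.
            return hidden
        # Pre-training path: map each frame to its nearest codebook entry
        # (a stand-in for the quantizer used in joint speech-text pre-training).
        codebook = self.codebook.weight.unsqueeze(0).expand(hidden.size(0), -1, -1)
        codes = torch.cdist(hidden, codebook).argmin(dim=-1)
        return self.codebook(codes)

# Fine-tuning on a downstream task: the quantizer is simply dropped.
model = SpeechEncoderSketch(pretraining=False)
features = model(torch.randn(2, 50, 768))  # (batch, frames, hidden)
```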

zhhao1 commented 2 years ago

> Thanks for your attention to SpeechT5. The quantizer is not used when fine-tuning the pre-trained backbone for the downstream tasks.

Thanks for your reply.