zhhao1 closed this issue 2 years ago
Thanks for your attention to SpeechT5. The quantizer is not used when fine-tuning the pre-trained backbone for the downstream tasks.
Thanks for your reply.
The quantizer and mixup method in joint pre-training are impressive. My question is whether the quantizer is used when fine-tuning the pre-trained backbone for the downstream tasks. While reading the paper, I did not find a statement about this. Thanks for your answer.