Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Utilizing Self-supervised Representations for MOS Prediction
summary: Speech quality assessment has been a critical issue in speech processing for
decades. Existing automatic evaluations usually require clean references or
parallel ground truth data, which is infeasible when the amount of data soars.
Subjective tests, on the other hand, do not need any additional clean or
parallel data and correlate better with human perception. However, such a test
is expensive and time-consuming because crowd work is necessary. It is thus
highly desirable to develop an automatic evaluation approach that
correlates well with human perception while not requiring ground truth data. In
this paper, we use self-supervised pre-trained models for MOS prediction. We
show their representations can distinguish between clean and noisy audio.
Then, we fine-tune these pre-trained models followed by simple linear layers in
an end-to-end manner. The experimental results show that our framework
significantly outperforms the two previous state-of-the-art models on Voice
Conversion Challenge 2018 and achieves comparable or superior performance on
Voice Conversion Challenge 2016. We also conducted an ablation study to
further investigate how each module benefits the task. The experiments are
implemented with and reproducible using publicly available toolkits.
Thank you very much for your contribution!
Your judgment is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2104.03017v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.