-
I am using paddlespeech-r1.4.1; the code is:
```python
from paddlespeech.cli.tts.infer import TTSExecutor
tts = TTSExecutor()
am = "fastspeech2_mix"
voc = "hifigan_aishell3"
output = f"{am}-{voc}.wav"
tts(t…
-
(CDFSE) root@autodl-container-66ee44be9a-c385597c:~/CDFSE_FastSpeech2-main# python3 preprocess.py config/AISHELL3/preprocess.yaml
Processing Data ...
0%| …
-
The code provided by the original author generates lab files containing pinyin, but if you run the official MFA alignment directly on the pinyin, every phone in the resulting TextGrid comes out as spn, because the pinyin dictionary MFA officially provides is incorrect.
In the lab-file generation code /preprocessor/preprocessor.py, I changed `text = text.split(" ")[1::2]` to `text = text.split(" ")[0::2]`, so that the resulting lab…
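To illustrate what the slicing change selects, here is a minimal sketch using a made-up AISHELL3-style transcript line (characters interleaved with pinyin; the sample text is hypothetical, not taken from the dataset):

```python
# Hypothetical AISHELL3-style transcript line: Chinese characters
# interleaved with their pinyin syllables.
text = "广 guang3 州 zhou1 女 nv3 大 da4 学 xue2"
tokens = text.split(" ")

pinyin = tokens[1::2]  # odd positions: the pinyin syllables
hanzi = tokens[0::2]   # even positions: the characters themselves

print(pinyin)  # ['guang3', 'zhou1', 'nv3', 'da4', 'xue2']
print(hanzi)   # ['广', '州', '女', '大', '学']
```

So `[1::2]` writes pinyin into the lab files, while `[0::2]` writes the characters instead.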
-
I followed your tips on training the MFA acoustic model but cannot get labels on AISHELL3 as accurate as the ones you offered.
I see there is 'sp' in the alignment result and its position is surprisingly…
-
Can anyone share the pre-trained model for AISHELL3?
-
https://pan.baidu.com/s/1pu_XfQJnLRcQZYfawqCeNQ , extraction code: 7777
Trained on the aishell3 dataset with a Tesla V100 32G, batch size 96, for 160K steps; the loss is 0.24.
I also have two idle V100 32G machines, cloud servers I rented while chasing a bug. If anyone wants something trained, feel free to ask; they are sitting idle anyway.
-
I ran the aishell3 recipe with this [gst + xvector + tacotron2](https://github.com/espnet/espnet/blob/master/egs2/aishell3/tts1/conf/tuning/train_gst%2Bxvector_tacotron2.yaml) configuration. However, the clo…
-
Here is the environment:
MFA1.1
tts am :
fastspeech2_mix
fastspeech2_mix_ckpt_1.2.0
voc:
hifigan_aishell3
hifigan_aishell3_ckpt_0
train fun:
paddlespeech->t2s->training->trainer.py ->run
-
I converted hifigan_aishell3_ckpt_0.2.0/snapshot_iter_2500000.pdz to pdmodel/pdiparams and then to ONNX. Viewed in Netron, the input is name: logmel
type: float32[p2o.DynamicDimension.0,80]
But the hifigan_aishell3.onnx downloaded directly from git …
-
Hello, I want to know which dataset was used to train the pre-trained model in "Release", because I get good results using VCTK data but not AISHELL3.