(gpt) root@autodl-container-19b1118252-6ba73936:~/autodl-tmp/GPT-SoVITS# bash quick_start.sh
params {'input_txt_path': '/root/autodl-tmp/GPT-SoVITS/input/dehua/dehua.txt', 'save_path': 'logs/dehua', 'input_wav_path': '/root/autodl-tmp/GPT-SoVITS/input/dehua/vocal/'}
input_txt_pathhhhhhhhhh /root/autodl-tmp/GPT-SoVITS/input/dehua/dehua.txt
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.738 seconds.
Prefix dict has been built succesfully.
文本转音素已完成! (text-to-phoneme conversion finished)
Some weights of the model checkpoint at pretrained_models/chinese-hubert-base were not used when initializing HubertModel: ['encoder.pos_conv_embed.conv.weight_g', 'encoder.pos_conv_embed.conv.weight_v']
This IS expected if you are initializing HubertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing HubertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of HubertModel were not initialized from the model checkpoint at pretrained_models/chinese-hubert-base and are newly initialized: ['encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'encoder.pos_conv_embed.conv.parametrizations.weight.original1']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
CnHubert特征提取已完成! (CnHubert feature extraction finished)
/root/miniconda3/envs/gpt/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
语义特征提取完成! (semantic feature extraction finished)
hps.data {'max_wav_value': 32768.0, 'sampling_rate': 32000, 'filter_length': 2048, 'hop_length': 640, 'win_length': 2048, 'n_mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'add_blank': True, 'n_speakers': 300, 'cleaned_text': True, 'exp_dir': 'logs/dehua'}
Traceback (most recent call last):
  File "/root/autodl-tmp/GPT-SoVITS/src/train/train_sovits.py", line 457, in <module>
    main()
  File "/root/autodl-tmp/GPT-SoVITS/src/train/train_sovits.py", line 44, in main
    run(hps)
  File "/root/autodl-tmp/GPT-SoVITS/src/train/train_sovits.py", line 59, in run
    train_dataset = TextAudioSpeakerLoader(hps.data)
  File "/root/autodl-tmp/GPT-SoVITS/src/module/data_utils.py", line 59, in __init__
    for _ in range(max(2, int(min_num / leng))):
ZeroDivisionError: division by zero
Seed set to 1234
Finish writing config!
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
ckpt_path: None
[rank: 0] Seed set to 1234
Hi, I hit this ZeroDivisionError when training with quick_start. Among the generated files, only this one comes out empty. Could that empty file be the cause?
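For what it's worth, the traceback points at `int(min_num / leng)` in `TextAudioSpeakerLoader.__init__`, so the crash happens when `leng` is 0. Assuming `leng` is the number of entries parsed from the preprocessing output (a plausible reading, not confirmed against the repo), an empty file would indeed produce exactly this error. A minimal sketch of the failing expression and a guard that turns it into a readable message (the function name and the `min_num` default here are hypothetical, not from the repo):

```python
def safe_repeat_count(min_num: int, leng: int) -> int:
    """Repeat count for a small dataset, guarding against an empty filelist.

    `leng` is assumed to be the number of entries parsed from the
    preprocessing output; if that file is empty, leng == 0 and the
    original expression max(2, int(min_num / leng)) raises
    ZeroDivisionError.
    """
    if leng == 0:
        raise ValueError(
            "empty filelist: preprocessing produced no entries; "
            "check the generated files under the experiment directory"
        )
    return max(2, int(min_num / leng))
```

So yes, an empty preprocessing output would match this traceback; re-running the text/feature extraction steps and confirming that file is non-empty before training would be the first thing to try.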