[TTS] Running the training command `./run.sh` fails with AssertionError: Variable Shape not match, Variable [ create_parameter_3.w_0_moment1_0 ] need tensor with shape [] but load set tensor with shape [1] #3439
Describe the bug
I want to fine-tune a model. I entered the "/examples/other/tts_finetune/tts3" directory and ran "./run.sh", which fails with the error below.
```
check oov
get mfa result
align.py:60: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
Setting up corpus information...
Number of speakers in corpus: 1, average number of utterances per speaker: 198.0
/data/tts/paddle/PaddleSpeech/examples/other/tts_finetune/tts3/tools/montreal-forced-aligner/lib/aligner/models.py:87: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
Creating dictionary information...
Setting up training data...
Calculating MFCCs...
Calculating CMVN...
Number of speakers in corpus: 1, average number of utterances per speaker: 198.0
Done with setup.
100%|#####################################################################################################################| 2/2 [00:09<00:00, 4.61s/it]
Done! Everything took 36.63117694854736 seconds
generate durations.txt
extract feature
196 1
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 196/196 [00:13<00:00, 14.22it/s]
Done
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 196/196 [00:00<00:00, 791.87it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 8.73it/s]
Done
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 554.88it/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 7.04it/s]
Done
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 504.79it/s]
create finetune env
finetune...
rank: 0, pid: 2214138, parent_pid: 2211780
multiple speaker fastspeech2!
spk_num: 174
samplers done!
dataloaders done!
vocab_size: 306
W0731 14:28:58.117866 2214138 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.2, Runtime API Version: 12.0
W0731 14:28:58.118633 2214138 gpu_resources.cc:149] device: 0, cuDNN Version: 8.9.
I0731 14:28:58.215528 2214138 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. For Tensor contain only one element, Please modify 'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.
I0731 14:28:58.215838 2214138 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. For Tensor contain only one element, Please modify 'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.
model done!
optimizer done!
/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/nn/layer/layers.py:1897: UserWarning: Skip loading for encoder.embed.1.alpha. encoder.embed.1.alpha receives a shape [1], but the expected shape is [].
warnings.warn(f"Skip loading for {key}. " + str(err))
/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/nn/layer/layers.py:1897: UserWarning: Skip loading for decoder.embed.0.alpha. decoder.embed.0.alpha receives a shape [1], but the expected shape is [].
warnings.warn(f"Skip loading for {key}. " + str(err))
/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/nn/layer/norm.py:777: UserWarning: When training, we now always track global mean and variance.
warnings.warn(
Exception in main training loop: Variable Shape not match, Variable [ create_parameter_3.w_0_moment1_0 ] need tensor with shape [] but load set tensor with shape [1]
Traceback (most recent call last):
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/training/trainer.py", line 149, in run
update()
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/training/updaters/standard_updater.py", line 110, in update
self.update_core(batch)
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/models/fastspeech2/fastspeech2_updater.py", line 118, in update_core
optimizer.step()
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 335, in __impl__
return func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/framework.py", line 462, in __impl__
return func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 446, in step
optimize_ops = self._apply_optimize(
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 1243, in _apply_optimize
optimize_ops = self._create_optimization_pass(
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 995, in _create_optimization_pass
self._create_accumulators(
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 278, in _create_accumulators
self._add_moments_pows(p)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 231, in _add_moments_pows
self._add_accumulator(self._moment1_acc_str, p, dtype=acc_dtype)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 800, in _add_accumulator
var.set_value(self._accumulators_holder.pop(var_name))
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/framework.py", line 449, in __impl__
return func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/dygraph/tensor_patch_methods.py", line 196, in set_value
assert self.shape == list(
Trainer extensions will try to handle the extension. Then all extensions will finalize.
Traceback (most recent call last):
File "local/finetune.py", line 269, in <module>
train_sp(train_args, config)
File "local/finetune.py", line 202, in train_sp
trainer.run()
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/training/trainer.py", line 198, in run
six.reraise(*exc_info)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/six.py", line 719, in reraise
raise value
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/training/trainer.py", line 149, in run
update()
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/training/updaters/standard_updater.py", line 110, in update
self.update_core(batch)
File "/data/tts/paddle/PaddleSpeech/paddlespeech/t2s/models/fastspeech2/fastspeech2_updater.py", line 118, in update_core
optimizer.step()
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 335, in __impl__
return func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/framework.py", line 462, in __impl__
return func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 446, in step
optimize_ops = self._apply_optimize(
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 1243, in _apply_optimize
optimize_ops = self._create_optimization_pass(
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 995, in _create_optimization_pass
self._create_accumulators(
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 278, in _create_accumulators
self._add_moments_pows(p)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/adam.py", line 231, in _add_moments_pows
self._add_accumulator(self._moment1_acc_str, p, dtype=acc_dtype)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/optimizer/optimizer.py", line 800, in _add_accumulator
var.set_value(self._accumulators_holder.pop(var_name))
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/decorator.py", line 232, in fun
return caller(func, *(extras + args), **kw)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/framework.py", line 449, in __impl__
return func(*args, **kwargs)
File "/root/anaconda3/envs/paddle_env/lib/python3.8/site-packages/paddle/fluid/dygraph/tensor_patch_methods.py", line 196, in set_value
assert self.shape == list(
AssertionError: Variable Shape not match, Variable [ create_parameter_3.w_0_moment1_0 ] need tensor with shape [] but load set tensor with shape [1]
```
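For context on the failure: the `Skip loading` warnings earlier in the log show that Paddle 2.5 now expects scalar parameters such as `encoder.embed.1.alpha` (and their Adam moment accumulators like `create_parameter_3.w_0_moment1_0`) to be 0-D tensors with shape `[]`, while the pretrained checkpoint stores them with shape `[1]`. One possible workaround, sketched below and not an official PaddleSpeech fix, is to rewrite the saved state dicts so every shape-`[1]` entry becomes a scalar before resuming. The function is shown on plain numpy arrays; in practice you would load the `.pdparams`/`.pdopt` files with `paddle.load`, pass the dicts through it, and re-save with `paddle.save` (file paths and the function name are hypothetical).

```python
# Sketch of a workaround, NOT an official PaddleSpeech fix: convert every
# shape-[1] entry in a saved state dict to a 0-D array so it matches the
# shape [] that Paddle 2.5 expects when restoring optimizer accumulators.
import numpy as np

def squeeze_scalar_entries(state_dict):
    """Return a copy where each value of shape (1,) is reshaped to a scalar."""
    fixed = {}
    for name, value in state_dict.items():
        arr = np.asarray(value)
        if arr.shape == (1,):
            # [1] -> [], e.g. for *.alpha parameters and *_moment1_0 moments
            arr = arr.reshape(())
        fixed[name] = arr
    return fixed
```

Under this assumption, restoring the converted optimizer state should no longer trip the `set_value` shape assertion. A cruder alternative is to delete the stale `.pdopt` file so the Adam accumulators are recreated from scratch, at the cost of losing the saved optimizer moments.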
Environment (please complete the following information):
- OS: Ubuntu
- Python Version: 3.8.2
- PaddlePaddle Version: 2.5.0 (PaddlePaddle-gpu-2.5.0-120)
- GPU/Driver Information: NVIDIA-SMI 535.54.03, Driver Version: 535.54.03
- CUDA/cuDNN Version: CUDA Version: 12.2 (note: nvcc reports Cuda compilation tools, release 10.1, V10.1.243)