After training the model, I tried to generate the corresponding audio file for the text, but the following error occurred:
Using TensorFlow backend.
loaded model at logs-WaveNet/wave_pretrained/wavenet_model.ckpt-97500
Hyperparameters:
GL_on_GPU: True
NN_init: True
NN_scaler: 0.3
allow_clipping_in_normalization: True
attention_dim: 128
attention_filters: 32
attention_kernel: (31,)
attention_win_size: 7
batch_norm_position: after
cbhg_conv_channels: 128
cbhg_highway_units: 128
cbhg_highwaynet_layers: 4
cbhg_kernels: 8
cbhg_pool_size: 2
cbhg_projection: 256
cbhg_projection_kernel_size: 3
cbhg_rnn_units: 128
cdf_loss: False
cin_channels: 80
cleaners: uyghur_cleaners
clip_for_wavenet: True
clip_mels_length: True
clip_outputs: True
cross_entropy_pos_weight: 1
cumulative_weights: True
decoder_layers: 2
decoder_lstm_units: 1024
embedding_dim: 512
enc_conv_channels: 512
enc_conv_kernel_size: (5,)
enc_conv_num_layers: 3
encoder_lstm_units: 256
fmax: 7600
fmin: 55
frame_shift_ms: None
freq_axis_kernel_size: 3
gate_channels: 256
gin_channels: -1
griffin_lim_iters: 60
hop_size: 275
input_type: raw
kernel_size: 3
layers: 20
leaky_alpha: 0.4
legacy: True
log_scale_min: -32.23619130191664
log_scale_min_gauss: -16.11809565095832
lower_bound_decay: 0.1
magnitude_power: 2.0
mask_decoder: False
mask_encoder: True
max_abs_value: 4.0
max_iters: 10000
max_mel_frames: 900
max_time_sec: None
max_time_steps: 11000
min_level_db: -100
n_fft: 2048
n_speakers: 5
normalize_for_wavenet: True
num_freq: 1025
num_mels: 80
out_channels: 2
outputs_per_step: 1
postnet_channels: 512
postnet_kernel_size: (5,)
postnet_num_layers: 5
power: 1.5
predict_linear: True
preemphasis: 0.97
preemphasize: True
prenet_layers: [256, 256]
quantize_channels: 65536
ref_level_db: 20
rescale: True
rescaling_max: 0.999
residual_channels: 128
residual_legacy: True
sample_rate: 22050
signal_normalization: True
silence_threshold: 2
skip_out_channels: 128
smoothing: False
speakers: ['speaker0', 'speaker1', 'speaker2', 'speaker3', 'speaker4']
speakers_path: None
split_on_cpu: True
stacks: 2
stop_at_any: True
symmetric_mels: True
synthesis_constraint: False
synthesis_constraint_type: window
tacotron_adam_beta1: 0.9
tacotron_adam_beta2: 0.999
tacotron_adam_epsilon: 1e-06
tacotron_batch_size: 8
tacotron_clip_gradients: True
tacotron_data_random_state: 1234
tacotron_decay_learning_rate: True
tacotron_decay_rate: 0.5
tacotron_decay_steps: 18000
tacotron_dropout_rate: 0.5
tacotron_final_learning_rate: 0.0001
tacotron_fine_tuning: False
tacotron_initial_learning_rate: 0.001
tacotron_natural_eval: False
tacotron_num_gpus: 1
tacotron_random_seed: 5339
tacotron_reg_weight: 1e-06
tacotron_scale_regularization: False
tacotron_start_decay: 40000
tacotron_swap_with_cpu: False
tacotron_synthesis_batch_size: 1
tacotron_teacher_forcing_decay_alpha: None
tacotron_teacher_forcing_decay_steps: 40000
tacotron_teacher_forcing_final_ratio: 0.0
tacotron_teacher_forcing_init_ratio: 1.0
tacotron_teacher_forcing_mode: constant
tacotron_teacher_forcing_ratio: 1.0
tacotron_teacher_forcing_start_decay: 10000
tacotron_test_batches: None
tacotron_test_size: 0.05
tacotron_zoneout_rate: 0.1
train_with_GTA: True
trim_fft_size: 2048
trim_hop_size: 512
trim_silence: True
trim_top_db: 40
upsample_activation: Relu
upsample_scales: [11, 25]
upsample_type: SubPixel
use_bias: True
use_lws: False
use_speaker_embedding: True
wavenet_adam_beta1: 0.9
wavenet_adam_beta2: 0.999
wavenet_adam_epsilon: 1e-06
wavenet_batch_size: 8
wavenet_clip_gradients: True
wavenet_data_random_state: 1234
wavenet_debug_mels: ['training_data/mels/mel-LJ001-0008.npy']
wavenet_debug_wavs: ['training_data/audio/audio-LJ001-0008.npy']
wavenet_decay_rate: 0.5
wavenet_decay_steps: 200000
wavenet_dropout: 0.05
wavenet_ema_decay: 0.9999
wavenet_gradient_max_norm: 100.0
wavenet_gradient_max_value: 5.0
wavenet_init_scale: 1.0
wavenet_learning_rate: 0.001
wavenet_lr_schedule: exponential
wavenet_natural_eval: False
wavenet_num_gpus: 1
wavenet_pad_sides: 1
wavenet_random_seed: 5339
wavenet_swap_with_cpu: False
wavenet_synth_debug: False
wavenet_synthesis_batch_size: 20
wavenet_test_batches: 1
wavenet_test_size: None
wavenet_warmup: 4000.0
wavenet_weight_normalization: False
win_size: 1100
Constructing model: WaveNet
Initializing Wavenet model. Dimensions (? = dynamic shape):
Train mode: False
Eval mode: False
Synthesis mode: True
device: 0
WARNING:tensorflow:From /home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/models/wavenet.py:881: Print (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2018-08-20.
Instructions for updating:
Use tf.print instead of tf.Print. Note that tf.print returns a no-output operator that directly prints the output. Outside of defuns or eager mode, this operator will not be executed unless it is directly specified in session.run or used as a control dependency for other operators. This is only a concern in graph mode. Below is an example of how to ensure tf.print executes in graph mode:
sess = tf.Session()
with sess.as_default():
  tensor = tf.range(10)
  print_op = tf.print(tensor)
  with tf.control_dependencies([print_op]):
    out = tf.add(tensor, tensor)
  sess.run(out)
Additionally, to use tf.print in python 2.7, users must make sure to import
the following:
from __future__ import print_function
local_condition: (?, 80, ?)
outputs: (?, ?)
Receptive Field: (4093 samples / 185.6 ms)
WaveNet Parameters: 3.064 Million.
Loading checkpoint: logs-WaveNet/wave_pretrained/wavenet_model.ckpt-97500
Traceback (most recent call last):
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
[[{{node WaveNet_model/save/RestoreV2}} = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_WaveNet_model/save/Const_0_0, WaveNet_model/save/RestoreV2/tensor_names, WaveNet_model/save/RestoreV2/shape_and_slices)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1546, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
[[node WaveNet_model/save/RestoreV2 (defined at /home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py:83) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_WaveNet_model/save/Const_0_0, WaveNet_model/save/RestoreV2/tensor_names, WaveNet_model/save/RestoreV2/shape_and_slices)]]
Caused by op 'WaveNet_model/save/RestoreV2', defined at:
File "synthesize.py", line 103, in
main()
File "synthesize.py", line 95, in main
wavenet_synthesize(args, hparams, wave_checkpoint)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesize.py", line 79, in wavenet_synthesize
run_synthesis(args, checkpoint_path, output_dir, hparams)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesize.py", line 20, in run_synthesis
synth.load(checkpoint_path, hparams)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesizer.py", line 33, in load
sh_saver = create_shadow_saver(self.model)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py", line 83, in create_shadow_saver
return tf.train.Saver(shadow_dict, max_to_keep=20)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1102, in init
self.build()
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 406, in _AddRestoreOps
restore_sequentially)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 862, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1466, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()
NotFoundError (see above for traceback): Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
[[node WaveNet_model/save/RestoreV2 (defined at /home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py:83) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_WaveNet_model/save/Const_0_0, WaveNet_model/save/RestoreV2/tensor_names, WaveNet_model/save/RestoreV2/shape_and_slices)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1556, in restore
names_to_keys = object_graph_key_mapping(save_path)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1830, in object_graph_key_mapping
checkpointable.OBJECT_GRAPH_PROTO_KEY)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 371, in get_tensor
status)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "synthesize.py", line 103, in
main()
File "synthesize.py", line 95, in main
wavenet_synthesize(args, hparams, wave_checkpoint)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesize.py", line 79, in wavenet_synthesize
run_synthesis(args, checkpoint_path, output_dir, hparams)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesize.py", line 20, in run_synthesis
synth.load(checkpoint_path, hparams)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesizer.py", line 44, in load
load_averaged_model(self.session, sh_saver, checkpoint_path)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py", line 86, in load_averaged_model
sh_saver.restore(sess, checkpoint_path)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1562, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
[[node WaveNet_model/save/RestoreV2 (defined at /home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py:83) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_WaveNet_model/save/Const_0_0, WaveNet_model/save/RestoreV2/tensor_names, WaveNet_model/save/RestoreV2/shape_and_slices)]]
Caused by op 'WaveNet_model/save/RestoreV2', defined at:
File "synthesize.py", line 103, in
main()
File "synthesize.py", line 95, in main
wavenet_synthesize(args, hparams, wave_checkpoint)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesize.py", line 79, in wavenet_synthesize
run_synthesis(args, checkpoint_path, output_dir, hparams)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesize.py", line 20, in run_synthesis
synth.load(checkpoint_path, hparams)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/synthesizer.py", line 33, in load
sh_saver = create_shadow_saver(self.model)
File "/home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py", line 83, in create_shadow_saver
return tf.train.Saver(shadow_dict, max_to_keep=20)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1102, in init
self.build()
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 406, in _AddRestoreOps
restore_sequentially)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 862, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1466, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/nur-179/anaconda3/envs/t2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()
NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
[[node WaveNet_model/save/RestoreV2 (defined at /home/nur-179/tacotron2/Tacotron-2/wavenet_vocoder/train.py:83) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_WaveNet_model/save/Const_0_0, WaveNet_model/save/RestoreV2/tensor_names, WaveNet_model/save/RestoreV2/shape_and_slices)]]
If you suspect this is an IPython 7.16.1 bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@python.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True
Can someone tell me what the problem is? Thank you!
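In case it helps with narrowing this down, below is a minimal sketch (assuming TensorFlow 1.x and the checkpoint path shown in the log above; the filter strings are only illustrative) of how the variable names actually stored in the checkpoint could be listed and compared against the missing .../bias/ExponentialMovingAverage key:

import tensorflow as tf

# Checkpoint produced by WaveNet training (path taken from the log above)
ckpt = "logs-WaveNet/wave_pretrained/wavenet_model.ckpt-97500"

# tf.train.list_variables returns (name, shape) pairs for every tensor saved in the checkpoint
for name, shape in tf.train.list_variables(ckpt):
    # Only print entries for the first residual block or the EMA shadow variables
    if "ResidualConv1DGLU_0" in name or "ExponentialMovingAverage" in name:
        print(name, shape)

If no ExponentialMovingAverage entries show up at all, the checkpoint presumably does not contain the EMA shadow variables that create_shadow_saver tries to restore at synthesis time.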