Rayhane-mamah / Tacotron-2

DeepMind's Tacotron-2 Tensorflow implementation
MIT License

When I start to train WaveNet, AttributeError #397

Open hongor16 opened 5 years ago

hongor16 commented 5 years ago

WARNING: Logging before flag parsing goes to stderr.
W0617 17:33:04.058133 140246269011776 lazy_loader.py:50] WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:

W0617 17:33:04.397966 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/tacotron/models/modules.py:81: The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead.

Using TensorFlow backend.
W0617 17:33:04.582393 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/modules.py:539: The name tf.layers.Conv2D is deprecated. Please use tf.compat.v1.layers.Conv2D instead.

W0617 17:33:04.582602 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/modules.py:697: The name tf.layers.Conv2DTranspose is deprecated. Please use tf.compat.v1.layers.Conv2DTranspose instead.

Checkpoint_path: logs-WaveNet/wave_pretrained/wavenet_model.ckpt
Loading training data from: tacotron_output/gta/map.txt
Using model: WaveNet
Hyperparameters:
  GL_on_GPU: True
  NN_init: True
  NN_scaler: 0.3
  allow_clipping_in_normalization: True
  attention_dim: 128
  attention_filters: 32
  attention_kernel: (31,)
  attention_win_size: 7
  batch_norm_position: after
  cbhg_conv_channels: 128
  cbhg_highway_units: 128
  cbhg_highwaynet_layers: 4
  cbhg_kernels: 8
  cbhg_pool_size: 2
  cbhg_projection: 256
  cbhg_projection_kernel_size: 3
  cbhg_rnn_units: 128
  cdf_loss: False
  cin_channels: 80
  cleaners: transliteration_cleaners
  clip_for_wavenet: True
  clip_mels_length: True
  clip_outputs: True
  cross_entropy_pos_weight: 1
  cumulative_weights: True
  decoder_layers: 2
  decoder_lstm_units: 1024
  embedding_dim: 512
  enc_conv_channels: 512
  enc_conv_kernel_size: (5,)
  enc_conv_num_layers: 3
  encoder_lstm_units: 256
  fmax: 7600
  fmin: 55
  frame_shift_ms: None
  freq_axis_kernel_size: 3
  gate_channels: 256
  gin_channels: -1
  griffin_lim_iters: 60
  hop_size: 275
  input_type: raw
  kernel_size: 3
  layers: 20
  leaky_alpha: 0.4
  legacy: True
  log_scale_min: -32.23619130191664
  log_scale_min_gauss: -16.11809565095832
  lower_bound_decay: 0.1
  magnitude_power: 2.0
  mask_decoder: False
  mask_encoder: True
  max_abs_value: 4.0
  max_iters: 10000
  max_mel_frames: 900
  max_time_sec: None
  max_time_steps: 11000
  min_level_db: -100
  n_fft: 2048
  n_speakers: 5
  normalize_for_wavenet: True
  num_freq: 1025
  num_mels: 80
  out_channels: 2
  outputs_per_step: 1
  postnet_channels: 512
  postnet_kernel_size: (5,)
  postnet_num_layers: 5
  power: 1.5
  predict_linear: True
  preemphasis: 0.97
  preemphasize: True
  prenet_layers: [256, 256]
  quantize_channels: 65536
  ref_level_db: 20
  rescale: True
  rescaling_max: 0.999
  residual_channels: 128
  residual_legacy: True
  sample_rate: 22050
  signal_normalization: True
  silence_threshold: 2
  skip_out_channels: 128
  smoothing: False
  speakers: ['speaker0', 'speaker1', 'speaker2', 'speaker3', 'speaker4']
  speakers_path: None
  split_on_cpu: True
  stacks: 2
  stop_at_any: True
  symmetric_mels: True
  synthesis_constraint: False
  synthesis_constraint_type: window
  tacotron_adam_beta1: 0.9
  tacotron_adam_beta2: 0.999
  tacotron_adam_epsilon: 1e-06
  tacotron_batch_size: 32
  tacotron_clip_gradients: True
  tacotron_data_random_state: 1234
  tacotron_decay_learning_rate: True
  tacotron_decay_rate: 0.5
  tacotron_decay_steps: 18000
  tacotron_dropout_rate: 0.5
  tacotron_final_learning_rate: 0.0001
  tacotron_fine_tuning: False
  tacotron_initial_learning_rate: 0.001
  tacotron_natural_eval: False
  tacotron_num_gpus: 1
  tacotron_random_seed: 5339
  tacotron_reg_weight: 1e-06
  tacotron_scale_regularization: False
  tacotron_start_decay: 40000
  tacotron_swap_with_cpu: False
  tacotron_synthesis_batch_size: 1
  tacotron_teacher_forcing_decay_alpha: None
  tacotron_teacher_forcing_decay_steps: 40000
  tacotron_teacher_forcing_final_ratio: 0.0
  tacotron_teacher_forcing_init_ratio: 1.0
  tacotron_teacher_forcing_mode: constant
  tacotron_teacher_forcing_ratio: 1.0
  tacotron_teacher_forcing_start_decay: 10000
  tacotron_test_batches: None
  tacotron_test_size: 0.05
  tacotron_zoneout_rate: 0.1
  train_with_GTA: True
  trim_fft_size: 2048
  trim_hop_size: 512
  trim_silence: True
  trim_top_db: 40
  upsample_activation: Relu
  upsample_scales: [11, 25]
  upsample_type: SubPixel
  use_bias: True
  use_lws: False
  use_speaker_embedding: True
  wavenet_adam_beta1: 0.9
  wavenet_adam_beta2: 0.999
  wavenet_adam_epsilon: 1e-06
  wavenet_batch_size: 2
  wavenet_clip_gradients: True
  wavenet_data_random_state: 1234
  wavenet_debug_mels: ['training_data/mels/mel-LJ001-0008.npy']
  wavenet_debug_wavs: ['training_data/audio/audio-LJ001-0008.npy']
  wavenet_decay_rate: 0.5
  wavenet_decay_steps: 200000
  wavenet_dropout: 0.05
  wavenet_ema_decay: 0.9999
  wavenet_gradient_max_norm: 100.0
  wavenet_gradient_max_value: 5.0
  wavenet_init_scale: 1.0
  wavenet_learning_rate: 0.001
  wavenet_lr_schedule: exponential
  wavenet_natural_eval: False
  wavenet_num_gpus: 1
  wavenet_pad_sides: 1
  wavenet_random_seed: 5339
  wavenet_swap_with_cpu: False
  wavenet_synth_debug: False
  wavenet_synthesis_batch_size: 20
  wavenet_test_batches: 1
  wavenet_test_size: None
  wavenet_warmup: 4000.0
  wavenet_weight_normalization: False
  win_size: 1100

W0617 17:33:04.585102 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/train.py:221: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.

W0617 17:33:04.585522 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/train.py:225: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.

W0617 17:33:04.636391 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/feeder.py:75: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0617 17:33:04.638350 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/feeder.py:99: The name tf.FIFOQueue is deprecated. Please use tf.queue.FIFOQueue instead.

W0617 17:33:04.642899 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/train.py:169: The name tf.AUTO_REUSE is deprecated. Please use tf.compat.v1.AUTO_REUSE instead.

W0617 17:33:04.643126 140246269011776 deprecation_wrapper.py:119] From /home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/modules.py:206: The name tf.layers.Conv1D is deprecated. Please use tf.compat.v1.layers.Conv1D instead.

Traceback (most recent call last):
  File "train.py", line 138, in <module>
    main()
  File "train.py", line 130, in main
    wavenet_train(args, log_dir, hparams, args.wavenet_input)
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/train.py", line 346, in wavenet_train
    return train(log_dir, args, hparams, input_path)
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/train.py", line 230, in train
    model, stats = model_train_mode(args, feeder, hparams, global_step)
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/train.py", line 173, in model_train_mode
    model = create_model(model_name or args.model, hparams, init)
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/__init__.py", line 12, in create_model
    return WaveNet(hparams, init)
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/wavenet.py", line 109, in __init__
    name='input_convolution')
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/modules.py", line 376, in __init__
    name=name, **kwargs
  File "/home/zoloo/venv/Tacotron-master/wavenet_vocoder/models/modules.py", line 230, in __init__
    self._track_checkpointable(layer, name='layer')
AttributeError: 'Conv1D1x1' object has no attribute '_track_checkpointable'

cfj1996123 commented 5 years ago

Me too. Any solution to this?

menu23 commented 4 years ago

Is there an update on this? @Rayhane-mamah I tried commenting out the line `self._track_checkpointable(layer, name='layer')` in the wavenet_vocoder's `modules.py`. That got me past this error and training completed.
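For reference, the change is just this (a sketch; the exact line number may differ between revisions of the file, and there may be a second occurrence of the same call):

```python
# wavenet_vocoder/models/modules.py, around line 230 -- workaround:
# skip the checkpoint-tracking hook that newer TF versions no longer expose.
# self._track_checkpointable(layer, name='layer')
```

But during WaveNet synthesis I am now encountering a different error: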

loaded model at logs-WaveNet/wave_pretrained/wavenet_model.ckpt-500000
Hyperparameters:
  GL_on_GPU: True
  NN_init: True
  NN_scaler: 0.3
  allow_clipping_in_normalization: True
  attention_dim: 128
  attention_filters: 32
  attention_kernel: (31,)
  attention_win_size: 7
  batch_norm_position: after
  cbhg_conv_channels: 128
  cbhg_highway_units: 128
  cbhg_highwaynet_layers: 4
  cbhg_kernels: 8
  cbhg_pool_size: 2
  cbhg_projection: 256
  cbhg_projection_kernel_size: 3
  cbhg_rnn_units: 128
  cdf_loss: False
  cin_channels: 80
  cleaners: english_cleaners
  clip_for_wavenet: True
  clip_mels_length: True
  clip_outputs: True
  cross_entropy_pos_weight: 1
  cumulative_weights: True
  decoder_layers: 2
  decoder_lstm_units: 1024
  embedding_dim: 512
  enc_conv_channels: 512
  enc_conv_kernel_size: (5,)
  enc_conv_num_layers: 3
  encoder_lstm_units: 256
  fmax: 7600
  fmin: 55
  frame_shift_ms: None
  freq_axis_kernel_size: 3
  gate_channels: 256
  gin_channels: -1
  griffin_lim_iters: 60
  hop_size: 275
  input_type: raw
  kernel_size: 3
  layers: 20
  leaky_alpha: 0.4
  legacy: True
  log_scale_min: -32.23619130191664
  log_scale_min_gauss: -16.11809565095832
  lower_bound_decay: 0.1
  magnitude_power: 2.0
  mask_decoder: False
  mask_encoder: True
  max_abs_value: 4.0
  max_iters: 10000
  max_mel_frames: 900
  max_time_sec: None
  max_time_steps: 11000
  min_level_db: -100
  n_fft: 2048
  n_speakers: 5
  normalize_for_wavenet: True
  num_freq: 1025
  num_mels: 80
  out_channels: 2
  outputs_per_step: 1
  postnet_channels: 512
  postnet_kernel_size: (5,)
  postnet_num_layers: 5
  power: 1.5
  predict_linear: True
  preemphasis: 0.97
  preemphasize: True
  prenet_layers: [256, 256]
  quantize_channels: 65536
  ref_level_db: 20
  rescale: True
  rescaling_max: 0.999
  residual_channels: 128
  residual_legacy: True
  sample_rate: 22050
  signal_normalization: True
  silence_threshold: 2
  skip_out_channels: 128
  smoothing: False
  speakers: ['speaker0', 'speaker1', 'speaker2', 'speaker3', 'speaker4']
  speakers_path: None
  split_on_cpu: False
  stacks: 2
  stop_at_any: True
  symmetric_mels: True
  synthesis_constraint: False
  synthesis_constraint_type: window
  tacotron_adam_beta1: 0.9
  tacotron_adam_beta2: 0.999
  tacotron_adam_epsilon: 1e-06
  tacotron_batch_size: 32
  tacotron_clip_gradients: True
  tacotron_data_random_state: 1234
  tacotron_decay_learning_rate: True
  tacotron_decay_rate: 0.5
  tacotron_decay_steps: 18000
  tacotron_dropout_rate: 0.5
  tacotron_final_learning_rate: 0.0001
  tacotron_fine_tuning: False
  tacotron_initial_learning_rate: 0.001
  tacotron_natural_eval: False
  tacotron_num_gpus: 1
  tacotron_random_seed: 5339
  tacotron_reg_weight: 1e-06
  tacotron_scale_regularization: False
  tacotron_start_decay: 40000
  tacotron_swap_with_cpu: False
  tacotron_synthesis_batch_size: 1
  tacotron_teacher_forcing_decay_alpha: None
  tacotron_teacher_forcing_decay_steps: 40000
  tacotron_teacher_forcing_final_ratio: 0.0
  tacotron_teacher_forcing_init_ratio: 1.0
  tacotron_teacher_forcing_mode: constant
  tacotron_teacher_forcing_ratio: 1.0
  tacotron_teacher_forcing_start_decay: 10000
  tacotron_test_batches: None
  tacotron_test_size: 0.05
  tacotron_zoneout_rate: 0.1
  train_with_GTA: True
  trim_fft_size: 2048
  trim_hop_size: 512
  trim_silence: True
  trim_top_db: 40
  upsample_activation: Relu
  upsample_scales: [11, 25]
  upsample_type: SubPixel
  use_bias: True
  use_lws: False
  use_speaker_embedding: True
  wavenet_adam_beta1: 0.9
  wavenet_adam_beta2: 0.999
  wavenet_adam_epsilon: 1e-06
  wavenet_batch_size: 8
  wavenet_clip_gradients: True
  wavenet_data_random_state: 1234
  wavenet_debug_mels: ['training_data/mels/mel-LJ001-0008.npy']
  wavenet_debug_wavs: ['training_data/audio/audio-LJ001-0008.npy']
  wavenet_decay_rate: 0.5
  wavenet_decay_steps: 200000
  wavenet_dropout: 0.05
  wavenet_ema_decay: 0.9999
  wavenet_gradient_max_norm: 100.0
  wavenet_gradient_max_value: 5.0
  wavenet_init_scale: 1.0
  wavenet_learning_rate: 0.001
  wavenet_lr_schedule: exponential
  wavenet_natural_eval: False
  wavenet_num_gpus: 1
  wavenet_pad_sides: 1
  wavenet_random_seed: 5339
  wavenet_swap_with_cpu: False
  wavenet_synth_debug: False
  wavenet_synthesis_batch_size: 20
  wavenet_test_batches: 1
  wavenet_test_size: None
  wavenet_warmup: 4000.0
  wavenet_weight_normalization: False
  win_size: 1100
Constructing model: WaveNet

.....

Loading checkpoint: logs-WaveNet/wave_pretrained/wavenet_model.ckpt-500000
W0815 20:42:55.091497 140538845747008 deprecation.py:323] From /home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
	 [[{{node WaveNet_model/save/RestoreV2}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1286, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
	 [[node WaveNet_model/save/RestoreV2 (defined at /wbet/VoiceSynthAWS/Tacotron-2/wavenet_vocoder/train.py:83) ]]

Original stack trace for 'WaveNet_model/save/RestoreV2':
  File "synthesize.py", line 100, in <module>
    main()
  File "synthesize.py", line 92, in main
    wavenet_synthesize(args, hparams, wave_checkpoint)
  File "/Tacotron-2/wavenet_vocoder/synthesize.py", line 78, in wavenet_synthesize
    run_synthesis(args, checkpoint_path, output_dir, hparams)
  File "/Tacotron-2/wavenet_vocoder/synthesize.py", line 19, in run_synthesis
    synth.load(checkpoint_path, hparams)
  File "/Tacotron-2/wavenet_vocoder/synthesizer.py", line 33, in load
    sh_saver = create_shadow_saver(self.model)
  File "/Tacotron-2/wavenet_vocoder/train.py", line 83, in create_shadow_saver
    return tf.train.Saver(shadow_dict, max_to_keep=20)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 825, in __init__
    self.build()
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 837, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 875, in _build
    build_restore=build_restore)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 508, in _build_internal
    restore_sequentially, reshape)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 328, in _AddRestoreOps
    restore_sequentially)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 575, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1696, in restore_v2
    name=name)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1296, in restore
    names_to_keys = object_graph_key_mapping(save_path)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1614, in object_graph_key_mapping
    object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 678, in get_tensor
    return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "synthesize.py", line 100, in <module>
    main()
  File "synthesize.py", line 92, in main
    wavenet_synthesize(args, hparams, wave_checkpoint)
  File "/Tacotron-2/wavenet_vocoder/synthesize.py", line 78, in wavenet_synthesize
    run_synthesis(args, checkpoint_path, output_dir, hparams)
  File "/Tacotron-2/wavenet_vocoder/synthesize.py", line 19, in run_synthesis
    synth.load(checkpoint_path, hparams)
  File "/Tacotron-2/wavenet_vocoder/synthesizer.py", line 44, in load
    load_averaged_model(self.session, sh_saver, checkpoint_path)
  File "/Tacotron-2/wavenet_vocoder/train.py", line 86, in load_averaged_model
    sh_saver.restore(sess, checkpoint_path)
  File "/home/ubuntu/anaconda3/envs/tacotron/lib/python3.7/site-packages/tensorflow/python/training/saver.py", line 1302, in restore
    err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key WaveNet_model/WaveNet_model/inference/ResidualConv1DGLU_0/residual_block_causal_conv_ResidualConv1DGLU_0/bias/ExponentialMovingAverage not found in checkpoint
	 [[node WaveNet_model/save/RestoreV2 (defined at /wbet/VoiceSynthAWS/Tacotron-2/wavenet_vocoder/train.py:83) ]]

Original stack trace for 'WaveNet_model/save/RestoreV2': (identical to the stack trace shown above)
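One way to narrow this down is to list the keys actually stored in the checkpoint and check whether any ExponentialMovingAverage shadow variables were saved at all. A minimal sketch using the TF 1.x checkpoint reader (the checkpoint path is the one from my log above):

```python
import tensorflow as tf  # TF 1.x

# Print every variable name stored in the checkpoint; if no
# .../ExponentialMovingAverage keys appear, the shadow saver used at
# synthesis time has nothing to restore.
reader = tf.train.NewCheckpointReader(
    'logs-WaveNet/wave_pretrained/wavenet_model.ckpt-500000')
for name in sorted(reader.get_variable_to_shape_map()):
    print(name)
```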

JasonWei512 commented 4 years ago

Commenting out `self._track_checkpointable(layer, name='layer')` works fine for me (there are 2 such lines).

@menu23 I had this issue with tensorflow 1.14. I downgraded to tensorflow 1.10, retrained wavenet, and the error is gone.

antonkoatl commented 4 years ago

Maybe changing self._track_checkpointable(layer, name='layer') to self._track_trackable(layer, name='layer') will work (not tested). https://github.com/tensorflow/tensorflow/commit/bd36b48c555b2d46c41a179ed9f27a04806e9e66
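A version-tolerant sketch of the same idea (untested; both hooks are private TF APIs, and the rename comes from the commit linked above, where Checkpointable became Trackable):

```python
# wavenet_vocoder/models/modules.py -- untested sketch: call whichever
# private tracking hook this TensorFlow version provides.
track = getattr(self, '_track_trackable', None) or \
        getattr(self, '_track_checkpointable', None)
if track is not None:
    track(layer, name='layer')
```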

The-Randalorian commented 4 years ago

@garlicshk The fix `self._track_trackable(layer, name='layer')` does indeed seem to work. I have tested it with tensorflow-gpu 1.14.0; my model seems to be training fine after the fix.

Not sure about synthesis, though; maybe that error will go away once a model is trained with this fix. I'll check it out after a bit of training.

orascheg commented 4 years ago

@garlicshk I still have this problem on synthesis (see also https://github.com/Rayhane-mamah/Tacotron-2/issues/434). It seems it is not yet fixed there. Have you been successful?

mib32 commented 4 years ago

Do not change `_track_checkpointable`. The real answer here: use tensorflow 1.10.0 or 1.10.1.

Tanxj commented 3 years ago

@garlicshk I've tested on tf-1.15.4 and the error was fixed, thanks!