Open iDevPingu opened 6 years ago
```
[*] Checkpoint path: logs/son_2018-10-05_16-48-08/model.ckpt
[*] Loading training data from: ['datasets/son/data']
[*] Using model: logs/son_2018-10-05_16-48-08
Hyperparameters:
  adam_beta1: 0.9
  adam_beta2: 0.999
  attention_size: 128
  attention_state_size: 256
  attention_type: bah_mon
  batch_size: 32
  cleaners: english_cleaners
  dec_layer_num: 2
  dec_prenet_sizes: [256, 128]
  dec_rnn_size: 256
  decay_learning_rate_mode: 0
  dropout_prob: 0.5
  embedding_size: 256
  enc_bank_channel_size: 128
  enc_bank_size: 16
  enc_highway_depth: 4
  enc_maxpool_width: 2
  enc_prenet_sizes: [256, 128]
  enc_proj_sizes: [128, 128]
  enc_proj_width: 3
  enc_rnn_size: 128
  frame_length_ms: 50
  frame_shift_ms: 12.5
  griffin_lim_iters: 60
  ignore_recognition_level: 0
  initial_data_greedy: True
  initial_learning_rate: 0.001
  initial_phase_step: 8000
  main_data: ['']
  main_data_greedy_factor: 0
  max_iters: 200
  min_iters: 30
  min_level_db: -100
  min_tokens: 50
  model_type: single
  num_freq: 1025
  num_mels: 80
  post_bank_channel_size: 128
  post_bank_size: 8
  post_highway_depth: 4
  post_maxpool_width: 2
  post_proj_sizes: [256, 80]
  post_proj_width: 3
  post_rnn_size: 128
  power: 1.5
  preemphasis: 0.97
  prioritize_loss: False
  recognition_loss_coeff: 0.2
  reduction_factor: 5
  ref_level_db: 20
  sample_rate: 22050
  skip_inadequate: False
  speaker_embedding_size: 16
  use_fixed_test_inputs: False
filter_by_min_max_frame_batch: 0it [00:00, ?it/s]
 [datasets/son/data] Loaded metadata for 0 examples (0.00 hours)
Traceback (most recent call last):
  File "train.py", line 336, in <module>
    main()
  File "train.py", line 332, in main
    train(config.model_dir, config)
  File "train.py", line 144, in train
    data_type='train', batch_size=hparams.batch_size)
  File "/home/ick/multi-speaker-tacotron-tensorflow-master/datasets/datafeeder.py", line 104, in __init__
    n_test=self.batch_size, rng=self.rng)
  File "/home/ick/multi-speaker-tacotron-tensorflow-master/datasets/datafeeder.py", line 62, in get_path_dict
    log(' [{}] Max length: {}'.format(data_dir, max(new_n_frames)))
ValueError: max() arg is an empty sequence
```
This is what I get. What could be the problem? It seems I need to modify datafeeder.py, but I can't figure out where to start. ㅠㅠ
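For context, the crash itself is plain Python behavior: the log shows `Loaded metadata for 0 examples`, so `new_n_frames` at datafeeder.py line 62 is an empty list, and `max()` raises on an empty sequence. A minimal reproduction:

```python
# When the dataset loader finds 0 examples, the list of frame counts
# is empty, and calling max() on it raises exactly the error above.
new_n_frames = []  # what you get when no metadata was loaded
try:
    max(new_n_frames)
except ValueError as e:
    print(e)  # max() arg is an empty sequence
```

So the error in the traceback is a symptom; the real issue is that no training examples were found under `datasets/son/data`.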
Did you generate the data?
When I run `python3 train.py --data_path=datasets/son`, I get:

```
[*] MODEL dir: logs/son_2018-10-05_16-48-08
[*] PARAM path: logs/son_2018-10-05_16-48-08/params.json
['datasets/son']
 [!] Detect non-krbook dataset. May need to set sampling rate from 22050 to 20000
==================================================
[*] Checkpoint path: logs/son_2018-10-05_16-48-08/model.ckpt
[*] Loading training data from: ['datasets/son/data']
[*] Using model: logs/son_2018-10-05_16-48-08
Hyperparameters: (identical to the dump in the first post, omitted)
filter_by_min_max_frame_batch: 0it [00:00, ?it/s]
 [datasets/son/data] Loaded metadata for 0 examples (0.00 hours)
Traceback (most recent call last):
  File "train.py", line 336, in <module>
    main()
  File "train.py", line 332, in main
    train(config.model_dir, config)
  File "train.py", line 144, in train
    data_type='train', batch_size=hparams.batch_size)
  File "/home/ick/multi-speaker-tacotron-tensorflow-master/datasets/datafeeder.py", line 104, in __init__
    n_test=self.batch_size, rng=self.rng)
  File "/home/ick/multi-speaker-tacotron-tensorflow-master/datasets/datafeeder.py", line 62, in get_path_dict
    log(' [{}] Max length: {}'.format(data_dir, max(new_n_frames)))
ValueError: max() arg is an empty sequence
```
This is what I get. What could be the problem? It seems I need to modify datafeeder.py, but I can't figure out where to start. ㅠㅠ
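If you want a clearer failure message while debugging, a hedged sketch of a guard for the line that crashes in `get_path_dict` (the names `data_dir` and `new_n_frames` come from the traceback; `log_max_length` and the surrounding code are assumptions, not the repo's actual structure):

```python
def log_max_length(data_dir, new_n_frames):
    # Guard against the empty-metadata case: max() raises ValueError on
    # an empty sequence, which happens when 0 examples were loaded.
    if not new_n_frames:
        raise RuntimeError(
            "No training examples found in '{}'; run the dataset "
            "preprocessing step before train.py.".format(data_dir))
    print(' [{}] Max length: {}'.format(data_dir, max(new_n_frames)))
```

This only makes the symptom readable; the fix is still to generate the training data so that `datasets/son/data` actually contains examples.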