Reported by rossbalch · closed 2 years ago
When using the manual upload method, training fails with the following output.
Creating dataset... This usually takes around 2-3 minutes for each minute of audio (10 minutes of training audio -> 20-30 minutes)
Training...
I0820 05:51:54.979206 140026090964864 ddsp_run.py:179] Restore Dir: ./ddsp-training-2022-08-20-0413
I0820 05:51:54.979590 140026090964864 ddsp_run.py:180] Save Dir: ./ddsp-training-2022-08-20-0413
I0820 05:51:54.979815 140026090964864 resource_reader.py:50] system_path_file_exists:optimization/base.gin
E0820 05:51:54.980043 140026090964864 resource_reader.py:55] Path not found: optimization/base.gin
I0820 05:51:54.981687 140026090964864 resource_reader.py:50] system_path_file_exists:eval/basic.gin
E0820 05:51:54.981898 140026090964864 resource_reader.py:55] Path not found: eval/basic.gin
I0820 05:51:54.983518 140026090964864 ddsp_run.py:152] Operative config not found in ./ddsp-training-2022-08-20-0413
I0820 05:51:54.983788 140026090964864 resource_reader.py:50] system_path_file_exists:models/vst/vst.gin
E0820 05:51:54.983992 140026090964864 resource_reader.py:55] Path not found: models/vst/vst.gin
I0820 05:51:54.988860 140026090964864 resource_reader.py:50] system_path_file_exists:datasets/tfrecord.gin
E0820 05:51:54.989081 140026090964864 resource_reader.py:55] Path not found: datasets/tfrecord.gin
I0820 05:51:54.989506 140026090964864 resource_reader.py:50] system_path_file_exists:datasets/base.gin
E0820 05:51:54.989742 140026090964864 resource_reader.py:55] Path not found: datasets/base.gin
I0820 05:51:54.997555 140026090964864 ddsp_run.py:184] Operative Gin Config:
import ddsp
import ddsp.training as ddsp2
batch_size = 16
evaluators = [@BasicEvaluator]
frame_rate = 50
frame_size = 1024
learning_rate = 0.0003
n_samples = 64320
sample_rate = 16000

processors.Add.name = 'add'

Autoencoder.decoder = @decoders.RnnFcDecoder()
Autoencoder.encoder = None
Autoencoder.losses = [@losses.SpectralLoss()]
Autoencoder.preprocessor = @preprocessing.OnlineF0PowerPreprocessor()
Autoencoder.processor_group = @processors.ProcessorGroup()

Crop.crop_location = 'back'
Crop.frame_size = 320

evaluate.batch_size = 32
evaluate.data_provider = @data.TFRecordProvider()
evaluate.evaluator_classes = %evaluators
evaluate.num_batches = 5

FilteredNoise.n_samples = %n_samples
FilteredNoise.name = 'filtered_noise'
FilteredNoise.scale_fn = @core.exp_sigmoid
FilteredNoise.window_size = 0

FilteredNoiseReverb.n_filter_banks = 32
FilteredNoiseReverb.n_frames = 500
FilteredNoiseReverb.name = 'reverb'
FilteredNoiseReverb.reverb_length = 24000
FilteredNoiseReverb.trainable = True

get_model.model = @models.Autoencoder()

Harmonic.amp_resample_method = 'linear'
Harmonic.n_samples = %n_samples
Harmonic.name = 'harmonic'
Harmonic.normalize_below_nyquist = True
Harmonic.sample_rate = %sample_rate
Harmonic.scale_fn = @core.exp_sigmoid

OnlineF0PowerPreprocessor.compute_f0 = False
OnlineF0PowerPreprocessor.compute_power = True
OnlineF0PowerPreprocessor.crepe_saved_model_path = None
OnlineF0PowerPreprocessor.frame_rate = %frame_rate
OnlineF0PowerPreprocessor.frame_size = %frame_size
OnlineF0PowerPreprocessor.padding = 'center'

ProcessorGroup.dag = \
    [(@synths.Harmonic(), ['amps', 'harmonic_distribution', 'f0_hz']),
     (@synths.FilteredNoise(), ['noise_magnitudes']),
     (@processors.Add(), ['filtered_noise/signal', 'harmonic/signal']),
     (@effects.FilteredNoiseReverb(), ['add/signal']),
     (@processors.Crop(), ['reverb/signal'])]

RnnFcDecoder.ch = 256
RnnFcDecoder.input_keys = ('pw_scaled', 'f0_scaled')
RnnFcDecoder.layers_per_stack = 1
RnnFcDecoder.output_splits = \
    (('amps', 1), ('harmonic_distribution', 60), ('noise_magnitudes', 65))
RnnFcDecoder.rnn_channels = 512
RnnFcDecoder.rnn_type = 'gru'

sample.batch_size = 16
sample.ckpt_delay_secs = 300
sample.data_provider = @data.TFRecordProvider()
sample.evaluator_classes = %evaluators
sample.num_batches = 1

SpectralLoss.logmag_weight = 1.0
SpectralLoss.loss_type = 'L1'
SpectralLoss.mag_weight = 1.0

TFRecordProvider.centered = True
TFRecordProvider.file_pattern = ''
TFRecordProvider.frame_rate = 50

train.batch_size = %batch_size
train.data_provider = @data.TFRecordProvider()
train.num_steps = 30000
train.steps_per_save = 300
train.steps_per_summary = 300

Trainer.checkpoints_to_keep = 3
Trainer.grad_clip_norm = 3.0
Trainer.learning_rate = %learning_rate
Trainer.lr_decay_rate = 0.98
Trainer.lr_decay_steps = 10000
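(Aside: the ProcessorGroup.dag binding above corresponds to the following wiring in the ddsp Python API. This is a minimal sketch built only from the bindings shown in the config, so every keyword argument is taken from the operative config rather than verified against this particular ddsp version.)

```python
import ddsp

# Signal graph per the operative config above:
# harmonic + filtered noise -> add -> trainable reverb -> crop ('back', 320).
n_samples = 64320
sample_rate = 16000

dag = [
    (ddsp.synths.Harmonic(n_samples=n_samples, sample_rate=sample_rate,
                          scale_fn=ddsp.core.exp_sigmoid, name='harmonic'),
     ['amps', 'harmonic_distribution', 'f0_hz']),
    (ddsp.synths.FilteredNoise(n_samples=n_samples, window_size=0,
                               scale_fn=ddsp.core.exp_sigmoid,
                               name='filtered_noise'),
     ['noise_magnitudes']),
    (ddsp.processors.Add(name='add'),
     ['filtered_noise/signal', 'harmonic/signal']),
    (ddsp.effects.FilteredNoiseReverb(trainable=True, reverb_length=24000,
                                      n_filter_banks=32, n_frames=500,
                                      name='reverb'),
     ['add/signal']),
    (ddsp.processors.Crop(frame_size=320, crop_location='back'),
     ['reverb/signal']),
]
processor_group = ddsp.processors.ProcessorGroup(dag=dag)
```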
I0820 05:51:54.997716 140026090964864 train_util.py:76] Defaulting to MirroredStrategy
2022-08-20 05:51:55.684680: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0820 05:51:55.689630 140026090964864 mirrored_strategy.py:374] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
Traceback (most recent call last):
File "/usr/local/bin/ddsp_run", line 8, in
Exporting model...
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/ddsp/training/train_util.py", line 165, in get_latest_operative_config
    restore_dir, prefix='operative_config-', suffix='.gin')
  File "/usr/local/lib/python3.7/dist-packages/ddsp/training/train_util.py", line 106, in get_latest_file
    f'No files found matching the pattern \'{search_pattern}\'.')
FileNotFoundError: No files found matching the pattern './ddsp-training-2022-08-20-0413/operative_config-*.gin'.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/ddsp_export", line 8, in
Describe the bug
As described above: the script runs and asks me to authorise my Google Drive, but the prompt for choosing the folder where the files are stored never pops up; it just hangs at that point.
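If the folder-picker prompt never appears, one possible workaround sketch (assuming the stock Colab Drive API rather than the notebook's own picker widget; the folder name below is hypothetical) is to mount Drive yourself and point the notebook at the audio folder directly:

```python
# Manual Drive mount as a stand-in for the folder-picker prompt that never
# appears; drive.mount triggers the same authorisation flow in Colab.
from google.colab import drive

drive.mount('/content/drive')

# Hypothetical path; replace with wherever your training audio lives.
AUDIO_DIR = '/content/drive/MyDrive/ddsp_training_audio'
```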