Plachtaa / VITS-fast-fine-tuning

This repo is a pipeline for fine-tuning VITS for fast speaker-adaptation TTS and many-to-many voice conversion
Apache License 2.0

Training error reporting #271

Open Bohemian-self opened 1 year ago

Bohemian-self commented 1 year ago

When running the "Training" step on Colab, I wanted to continue training from the existing G_13600.pth model, but I got an error:

```
/content/so-vits-svc
The tensorboard extension is already loaded. To reload it, use:
  %reload_ext tensorboard
Reusing TensorBoard on port 6006 (pid 2446), started 0:10:00 ago. (Use '!kill 2446' to kill it.)
2023-06-10 11:15:28.020520: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-10 11:15:28.926952: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-10 11:15:34.058448: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
INFO:44k:{'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3, 'all_in_mem': False, 'vol_aug': False},
'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050},
'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 768, 'ssl_dim': 768, 'n_speakers': 1, 'speech_encoder': 'vec768l12', 'speaker_embedding': False, 'vol_embedding': False},
'spk': {'ZeroTwo': 0}, 'model_dir': './logs/44k'}
Traceback (most recent call last):
  File "/content/so-vits-svc/train.py", line 331, in <module>
    main()
  File "/content/so-vits-svc/train.py", line 53, in main
    mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 239, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/content/so-vits-svc/train.py", line 71, in run
    train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps, all_in_mem=all_in_mem)
  File "/content/so-vits-svc/data_utils.py", line 34, in __init__
    self.unit_interpolate_mode = hparams.data.unit_interpolate_mode
AttributeError: 'HParams' object has no attribute 'unit_interpolate_mode'
```

How should I fix this? All steps before this one completed without errors.
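For reference, this kind of `AttributeError` typically means the `config.json` was written by an older version of the code and lacks a field the newer code expects. A minimal sketch of the defensive pattern (using `SimpleNamespace` as a stand-in for `HParams`, with `"nearest"` as an assumed placeholder default; neither is taken from so-vits-svc itself):

```python
from types import SimpleNamespace

# Mimic an HParams-style object loaded from an older config.json
# that predates the "unit_interpolate_mode" field (hypothetical example).
data = SimpleNamespace(sampling_rate=44100, hop_length=512)

# getattr with a default avoids the AttributeError seen in the traceback;
# "nearest" is an assumed fallback value, not the project's documented default.
mode = getattr(data, "unit_interpolate_mode", "nearest")
print(mode)
```

The equivalent fix on the config side is simply adding the missing key to the `data` section of `config.json`, which is why mismatched code/config versions are the first thing to check when resuming training from an old checkpoint.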

Plachtaa commented 1 year ago

so-vits-svc is not part of this project; please file the issue in the corresponding repo.

Bohemian-self commented 1 year ago

Sorry, I filed this in the wrong place.
