UHO186 / emotion-finetuning-vits

MIT License

AttributeError: 'NoneType' object has no attribute 'unsqueeze' #1

Open · opened by 40740, 1 year ago

40740 commented 1 year ago

INFO:checkpoints:{'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0002, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 16, 'fp16_run': True, 'lr_decay': 0.999875, 'segment_size': 8192, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0}, 'data': {'training_files': 'filelists/bmsslist.txt.cleaned', 'validation_files': 'filelists/bmssval.txt.cleaned', 'text_cleaners': ['chinese_dialect_cleaners'], 'max_wav_value': 32768.0, 'sampling_rate': 22050, 'filter_length': 1024, 'hop_length': 256, 'win_length': 1024, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': None, 'add_blank': True, 'n_speakers': 0, 'cleaned_text': True}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256}, 'speakers': ['bmss', 'ja', 'eng', 'gd'], 'symbols': ['', ',', '.', '!', '?', '-', '~', '…', 'A', 'E', 'I', 'N', 'O', 'Q', 'U', 'a', 'b', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'r', 's', 't', 'u', 'v', 'w', 'y', 'z', 'ʃ', 'ʧ', 'ʦ', 'ɯ', 'ɹ', 'ə', 'ɥ', '⁼', 'ʰ', '`', '→', '↓', '↑', ' '], 'model_dir': '../drive/MyDrive/vits-finetune/checkpoints'}
2023-05-25 01:26:01.854170: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
2023-05-25 01:26:02.738299: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:jaxlib.mlir._mlir_libs:Initializing MLIR with module: _site_initialize_0
DEBUG:jaxlib.mlir._mlir_libs:Registering dialects from initializer <module 'jaxlib.mlir._mlir_libs._site_initialize_0' from '/usr/local/lib/python3.10/dist-packages/jaxlib/mlir/_mlir_libs/_site_initialize_0.so'>
DEBUG:jax._src.path:etils.epath found. Using etils.epath for file I/O.
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:554: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  warnings.warn(_create_warning_msg(
0% 0/15 [00:00<?, ?it/s]/content/emotion-finetuning-vits/utils.py:138: WavFileWarning: Chunk (non-data) not understood, skipping it.
  sampling_rate, data = read(full_path)
/content/emotion-finetuning-vits/utils.py:138: WavFileWarning: Chunk (non-data) not understood, skipping it.
  sampling_rate, data = read(full_path)
/content/emotion-finetuning-vits/utils.py:138: WavFileWarning: Chunk (non-data) not understood, skipping it.
  sampling_rate, data = read(full_path)
/content/emotion-finetuning-vits/utils.py:138: WavFileWarning: Chunk (non-data) not understood, skipping it.
  sampling_rate, data = read(full_path)
/content/emotion-finetuning-vits/utils.py:138: WavFileWarning: Chunk (non-data) not understood, skipping it.
  sampling_rate, data = read(full_path)
terminate called without an active exception
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f57365d8820>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1466, in __del__
    self._shutdown_workers()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1430, in _shutdown_workers
    w.join(timeout=_utils.MP_STATUS_CHECK_INTERVAL)
  File "/usr/lib/python3.10/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 40, in wait
    if not wait([self.sentinel], timeout):
  File "/usr/lib/python3.10/multiprocessing/connection.py", line 931, in wait
    ready = selector.select(timeout)
  File "/usr/lib/python3.10/selectors.py", line 416, in select
    fd_event_list = self._selector.poll(timeout)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 19442) is killed by signal: Aborted.
0% 0/15 [00:23<?, ?it/s]
Traceback (most recent call last):
  File "/content/emotion-finetuning-vits/train_ms.py", line 306, in <module>
    main()
  File "/content/emotion-finetuning-vits/train_ms.py", line 56, in main
    mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
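Two things in this log are worth noting before the crash itself. First, the UserWarning from dataloader.py is actionable on Colab: the VM reports a suggested maximum of 2 workers while the script asks for 8. Below is a hedged sketch of the kind of adjustment the warning asks for; the dataset here is a stand-in (in train_ms.py the real loader is built from the training filelist and also takes a collate function and a bucket sampler):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; in train_ms.py this would be the text/audio dataset
# built from hps.data.training_files.
dataset = TensorDataset(torch.zeros(64, 10))

loader = DataLoader(
    dataset,
    batch_size=16,
    num_workers=2,   # was 8 in the script; the warning suggests at most 2 on this VM
    shuffle=False,
    pin_memory=True,
)
```

Second, the RuntimeError about a DataLoader worker killed by signal Aborted is most likely collateral damage from the main process dying rather than an independent bug; the real error follows below.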

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/content/emotion-finetuning-vits/train_ms.py", line 124, in run
    train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
  File "/content/emotion-finetuning-vits/train_ms.py", line 152, in train_and_evaluate
    (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1040, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py", line 1000, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/emotion-finetuning-vits/models.py", line 464, in forward
    x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, emo)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/emotion-finetuning-vits/models.py", line 171, in forward
    x = x + self.emo_proj(emo.unsqueeze(1))
AttributeError: 'NoneType' object has no attribute 'unsqueeze'
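The last two frames show the actual failure: the text encoder in models.py does `x = x + self.emo_proj(emo.unsqueeze(1))`, but `emo` is `None`. Judging from the call captured at train_ms.py line 152, `net_g(...)` is invoked without any emotion-embedding tensor, so `emo` presumably keeps a `None` default all the way down into `enc_p`. A minimal sketch that reproduces the failure mode (the dimensions are assumptions; only the names `emo` and `emo_proj` and the failing line come from the traceback):

```python
import torch
import torch.nn as nn

emo_proj = nn.Linear(1024, 192)  # assumed: emotion-embedding dim -> hidden_channels
x = torch.zeros(2, 50, 192)      # [batch, time, hidden_channels]
emo = None                       # what the encoder actually received

# Fails before emo_proj is even called, exactly as in the traceback:
x = x + emo_proj(emo.unsqueeze(1))
# AttributeError: 'NoneType' object has no attribute 'unsqueeze'
```

In other words, the exception is a symptom; the likely thing to fix is that the training step never hands the model an emotion embedding for the batch.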

UHO186 commented 1 year ago

I have made some modifications to `models.py`. Please copy and paste the updated code, and let me know if you still run into any errors.
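The modified models.py is not quoted in the thread, so the following is only a sketch of one common way to make the encoder tolerate a missing emotion embedding, not necessarily UHO186's actual change (all sizes are assumptions, and the real encoder also has the attention stack and output projection):

```python
import math
import torch
import torch.nn as nn

class TextEncoderSketch(nn.Module):
    """Illustrative fragment; skips the attention layers of the real encoder."""

    def __init__(self, n_vocab=52, hidden_channels=192, emo_dim=1024):
        super().__init__()
        self.hidden_channels = hidden_channels
        self.emb = nn.Embedding(n_vocab, hidden_channels)
        self.emo_proj = nn.Linear(emo_dim, hidden_channels)

    def forward(self, x, x_lengths, emo=None):
        x = self.emb(x) * math.sqrt(self.hidden_channels)  # [b, t, h]
        if emo is not None:
            # broadcast the utterance-level emotion embedding over every frame
            x = x + self.emo_proj(emo.unsqueeze(1))
        # ... attention stack and stats projection would follow here ...
        return x
```

Whichever shape the real patch takes, it is still worth checking why `emo` arrived as `None` in the first place, for example whether the training loop actually loads the per-utterance emotion embeddings and passes them into `net_g`.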