effusiveperiscope / so-vits-svc

"too many values to unpack (expected 2)" when using 22050Hz files in the training Colab script. #7

Closed AmoArt closed 1 year ago

AmoArt commented 1 year ago

I'm getting an error from the 2nd training cell. I know my original wavs are 22050 Hz (that's the quality of the files extracted from the original source) and the code wants to convert them to 44100 Hz, BUT it fails with "too many values to unpack (expected 2)".

(screenshot: "too many values to unpack (expected 2)" error)

AmoArt commented 1 year ago

Got this working. The fix was to split

audio, sr = librosa.resample(audio, sr, 44100)

into two lines:

audio = librosa.resample(audio, orig_sr=sr, target_sr=44100)
sr = 44100
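For context on why the one-liner broke: librosa.resample returns only the resampled array, never an (audio, sr) tuple, so unpacking two values from it fails; on top of that, librosa 0.10 made orig_sr and target_sr keyword-only, which is why the fixed call spells them out. A minimal sketch of the corrected pattern (the path value is a hypothetical example, not taken from the Colab script):

```python
import librosa

path = "example_22050hz.wav"  # hypothetical input file

# librosa.load DOES return an (audio, sr) tuple; sr=None keeps the
# file's native rate (22050 Hz for these wavs).
audio, sr = librosa.load(path, sr=None)

# librosa >= 0.10 requires keyword arguments here, and resample returns
# only the array, so the new rate is tracked in a separate assignment.
audio = librosa.resample(audio, orig_sr=sr, target_sr=44100)
sr = 44100
```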

AmoArt commented 1 year ago

Getting a different error now, an IndexError about a list index being out of range.

(screenshot: "list index out of range" error)

And since it's coming from the hubert Python file, I think there's a problem with the file link being downloaded, as it's now giving me a different error in the 1st cell:

--2023-02-25 14:53:43--  https://github.com/justinjohn0306/diff-svc/releases/download/models/0102_xiaoma_pe.zip
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2023-02-25 14:53:44 ERROR 404: Not Found.

--2023-02-25 14:53:44--  https://github.com/justinjohn0306/diff-svc/releases/download/models/hubert.zip
Resolving github.com (github.com)... 20.205.243.166
Connecting to github.com (github.com)|20.205.243.166|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2023-02-25 14:53:44 ERROR 404: Not Found.

AmoArt commented 1 year ago

I've fixed the above errors by changing the broken 0102_xiaoma_pe.zip link to:

!wget https://github.com/MoonInTheRiver/DiffSinger/releases/download/pretrain-model/0102_xiaoma_pe.zip

and changing the hubert link to:

!mkdir -p /workspace/diff-svc/checkpoints/hubert/
!wget https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt -P /workspace/diff-svc/checkpoints/hubert/

Now I'm getting this error when trying to run the last cell:

predictor_hidden: -1, predictor_kernel: 5, predictor_layers: 5, prenet_dropout: 0.5, prenet_hidden_size: 256, pretrain_fs_ckpt: , processed_data_dir: xxx, profile_infer: False, raw_data_dir: data/raw/NamelessHero_ENG, ref_norm_layer: bn, rel_pos: True, reset_phone_dict: True, residual_channels: 384, residual_layers: 20, save_best: False, save_ckpt: True, save_codes: ['configs', 'modules', 'src', 'utils'], save_f0: True, save_gt: False, schedule_type: linear, seed: 1234, sort_by_len: True, speaker_id: NamelessHero_ENG, spec_max: [0.0], spec_min: [-5.0], spk_cond_steps: [], stop_token_weight: 5.0, task_cls: training.task.SVC_task.SVCTask, test_ids: [], test_input_dir: , test_num: 0, test_prefixes: ['test'], test_set_name: test, timesteps: 1000, train_set_name: train, use_crepe: True, use_denoise: False, use_energy_embed: False, use_gt_dur: False, use_gt_f0: False, use_midi: False, use_nsf: True, use_pitch_embed: True, use_pos_embed: True, use_spk_embed: False, use_spk_id: False, use_split_spk_id: False, use_uv: False, use_var_enc: False, use_vec: False, val_check_interval: 500, valid_num: 0, valid_set_name: valid, validate: False, vocoder: network.vocoders.nsf_hifigan.NsfHifiGAN, vocoder_ckpt: checkpoints/nsf_hifigan/model, warmup_updates: 2000, wav2spec_eps: 1e-6, weight_decay: 0, win_size: 2048, work_dir: checkpoints/NamelessHero_ENG,

| Mel losses: {'ssim': 0.5, 'l1': 0.5}
| Load HifiGAN: checkpoints/nsf_hifigan/model
Removing weight norm...
02/25 03:42:36 PM gpu available: True, used: True
| model Trainable Parameters: 33.709M
2023-02-25 15:42:36.705965: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-25 15:42:38.516544: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-25 15:42:38.516705: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-02-25 15:42:38.516727: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
02/25 03:42:39 PM NumExpr defaulting to 2 threads.

Traceback (most recent call last):
  File "/workspace/diff-svc/utils/pl_utils.py", line 59, in _get_data_loader
    value = getattr(self, attr_name)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1269, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'SVCTask' object has no attribute '_lazy_train_dataloader'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run.py", line 15, in <module>
    run_task()
  File "run.py", line 11, in run_task
    task_cls.start()
  File "/workspace/diff-svc/training/task/base_task.py", line 234, in start
    trainer.fit(task)
  File "/workspace/diff-svc/utils/pl_utils.py", line 495, in fit
    self.run_pretrain_routine(model)
  File "/workspace/diff-svc/utils/pl_utils.py", line 540, in run_pretrain_routine
    self.get_dataloaders(ref_model)
  File "/workspace/diff-svc/utils/pl_utils.py", line 1105, in get_dataloaders
    self.init_train_dataloader(model)
  File "/workspace/diff-svc/utils/pl_utils.py", line 1121, in init_train_dataloader
    if isinstance(self.get_train_dataloader(), torch.utils.data.DataLoader):
  File "/workspace/diff-svc/utils/pl_utils.py", line 62, in _get_data_loader
    value = fn(self)  # Lazy evaluation, done only once.
  File "/workspace/diff-svc/training/task/fs2.py", line 50, in train_dataloader
    train_dataset = self.dataset_cls(hparams['train_set_name'], shuffle=True)
  File "/workspace/diff-svc/training/dataset/fs2_utils.py", line 29, in __init__
    self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
  File "/usr/local/lib/python3.8/dist-packages/numpy/lib/npyio.py", line 407, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/binary/NamelessHero_ENG/train_lengths.npy'
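The final FileNotFoundError means training is looking for data/binary/NamelessHero_ENG/train_lengths.npy, which the preprocessing/binarization cell is supposed to produce; if that cell failed or was skipped, the training cell dies exactly like this. A minimal sanity check you could run before the training cell (a sketch only; the speaker name is just this thread's example):

```python
import os

# The trainer loads data/binary/<speaker>/train_lengths.npy (see the
# traceback above); if binarization never ran, the file won't exist.
speaker = "NamelessHero_ENG"
expected = f"data/binary/{speaker}/train_lengths.npy"
if not os.path.exists(expected):
    raise SystemExit(
        f"{expected} not found - re-run the preprocessing/binarization "
        "cell and check its output for errors before training."
    )
```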

HudsonHuang commented 1 year ago

> I've fixed the above errors by changing the broken 0102_xiaoma_pe.zip link ... Now I'm getting this error when trying to run the last cell: ... FileNotFoundError: [Errno 2] No such file or directory: 'data/binary/NamelessHero_ENG/train_lengths.npy'

Hi, I'm running into the same problem. May I ask how you finally solved it?