RVC-Boss / GPT-SoVITS

1 minute of voice data can also be used to train a good TTS model! (few-shot voice cloning)
MIT License

I started the container with the docker-compose.yml from GPT-SoVITS-beta0306fix2; training on CPU fails #1090

Open kevinzu007 opened 4 months ago

kevinzu007 commented 4 months ago

Version: GPT-SoVITS-beta0306fix2, AlmaLinux 9.3
Hardware: 16 cores, 32 GB RAM

Since my server has no NVIDIA GPU, I commented out the deploy: section of docker-compose.yml so the container could start (see the sketch at the end of this post), then ran docker-compose up -d. Everything else works, but GPT and SoVITS training fail with the following error:

gpt-sovits-container  | "/usr/local/bin/python" GPT_SoVITS/s2_train.py --config "/workspace/TEMP/tmp_s2.json"
gpt-sovits-container  | INFO:s1:{'train': {'log_interval': 100, 'eval_interval': 500, 'seed': 1234, 'epochs': 8, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 7.0, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 20480, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'text_low_lr_rate': 0.4, 'pretrained_s2G': 'GPT_SoVITS/pretrained_models/s2G488k.pth', 'pretrained_s2D': 'GPT_SoVITS/pretrained_models/s2D488k.pth', 'if_save_latest': True, 'if_save_every_weights': True, 'save_every_epoch': 4, 'gpu_numbers': '0'}, 'data': {'max_wav_value': 32768.0, 'sampling_rate': 32000, 'filter_length': 2048, 'hop_length': 640, 'win_length': 2048, 'n_mel_channels': 128, 'mel_fmin': 0.0, 'mel_fmax': None, 'add_blank': True, 'n_speakers': 300, 'cleaned_text': True, 'exp_dir': 'logs/s1'}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [10, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 8, 2, 2], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 512, 'semantic_frame_rate': '25hz', 'freeze_quantizer': True}, 's2_ckpt_dir': 'logs/s1', 'content_module': 'cnhubert', 'save_weight_dir': 'SoVITS_weights', 'name': 's1', 'pretrain': None, 'resume_step': None}
gpt-sovits-container  | INFO:torch.distributed.distributed_c10d:Added key: store_based_barrier_key:1 to store for rank: 0
gpt-sovits-container  | INFO:torch.distributed.distributed_c10d:Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
gpt-sovits-container  | phoneme_data_len: 2
gpt-sovits-container  | wav_data_len: 100
100% 100/100 [00:00<00:00, 77228.94it/s]
gpt-sovits-container  | skipped_phone:  0 , skipped_dur:  0
gpt-sovits-container  | total left:  100
gpt-sovits-container  | INFO:s1:loaded pretrained GPT_SoVITS/pretrained_models/s2G488k.pth
gpt-sovits-container  | <All keys matched successfully>
gpt-sovits-container  | INFO:s1:loaded pretrained GPT_SoVITS/pretrained_models/s2D488k.pth
gpt-sovits-container  | <All keys matched successfully>
gpt-sovits-container  | /usr/local/lib/python3.9/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
gpt-sovits-container  |   warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
gpt-sovits-container  | Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f1ff28c19d0>
gpt-sovits-container  | Traceback (most recent call last):
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1478, in __del__
gpt-sovits-container  |     self._shutdown_workers()
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1409, in _shutdown_workers
gpt-sovits-container  |     if not self._shutdown:
gpt-sovits-container  | AttributeError: '_MultiProcessingDataLoaderIter' object has no attribute '_shutdown'
gpt-sovits-container  | Traceback (most recent call last):
gpt-sovits-container  |   File "/workspace/GPT_SoVITS/s2_train.py", line 600, in <module>
gpt-sovits-container  |     main()
gpt-sovits-container  |   File "/workspace/GPT_SoVITS/s2_train.py", line 56, in main
gpt-sovits-container  |     mp.spawn(
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
gpt-sovits-container  |     return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
gpt-sovits-container  |     while not context.join():
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 160, in join
gpt-sovits-container  |     raise ProcessRaisedException(msg, error_index, failed_process.pid)
gpt-sovits-container  | torch.multiprocessing.spawn.ProcessRaisedException: 
gpt-sovits-container  | 
gpt-sovits-container  | -- Process 0 terminated with the following error:
gpt-sovits-container  | Traceback (most recent call last):
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
gpt-sovits-container  |     fn(i, *args)
gpt-sovits-container  |   File "/workspace/GPT_SoVITS/s2_train.py", line 254, in run
gpt-sovits-container  |     train_and_evaluate(
gpt-sovits-container  |   File "/workspace/GPT_SoVITS/s2_train.py", line 308, in train_and_evaluate
gpt-sovits-container  |     ) in tqdm(enumerate(train_loader)):
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 436, in __iter__
gpt-sovits-container  |     self._iterator = self._get_iterator()
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 388, in _get_iterator
gpt-sovits-container  |     return _MultiProcessingDataLoaderIter(self)
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 994, in __init__
gpt-sovits-container  |     super().__init__(loader)
gpt-sovits-container  |   File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 603, in __init__
gpt-sovits-container  |     self._sampler_iter = iter(self._index_sampler)
gpt-sovits-container  |   File "/workspace/GPT_SoVITS/module/data_utils.py", line 300, in __iter__
gpt-sovits-container  |     ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
gpt-sovits-container  | TypeError: can't multiply sequence by non-int of type 'float'
gpt-sovits-container  | 

Can this be fixed?
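For reference, the deploy: block mentioned above is the part of docker-compose.yml that reserves an NVIDIA GPU for the container. A rough sketch of what gets commented out on a CPU-only host (the service and image names here are assumptions for illustration, not the file's exact contents):

```yaml
services:
  gpt-sovits:
    image: breakstring/gpt-sovits:latest   # assumed image name, for illustration only
    # deploy:                              # commented out on a host without an NVIDIA GPU
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
```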

Separatee commented 3 months ago

My suggested workaround: install the Python environment manually.

Since you are already using the packaged release, you might as well create a Python 3.9.18 environment with conda and install the dependencies fresh (see the sketch below). Delete the runtime folder: it is a bundled Windows-only environment that is not usable on Linux, so it can be removed if it is still there. Then activate the virtual environment and run: python -m pip install -r requirements.txt. If conda cannot be installed on your Linux system, or for other reasons, see the article 《conda无果---浅谈自编译Python以及项目自动版本切换》 (on self-compiling Python and automatic project version switching).
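A minimal sketch of that workflow, assuming the repository is already checked out in the current directory (the environment name is arbitrary):

```bash
# create and activate a clean Python 3.9.18 environment
conda create -n gptsovits python=3.9.18 -y
conda activate gptsovits

# the bundled runtime/ folder is a Windows-only Python and is not used on Linux
rm -rf runtime

# install the project dependencies into the new environment
python -m pip install -r requirements.txt
```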

Separatee commented 3 months ago

ChatGPT-4o's take:

In Python you cannot multiply a list (ids_bucket) by a float. This is most likely because batch_size is set to a float ('batch_size': 7.0 in the logged config), which makes the computed repeat count rem // len_bucket a float as well.

ChatGPT-4o's suggested fix:

Set batch_size to an integer: batch_size should normally be an integer. In the configuration file, change 'batch_size': 7.0 to 'batch_size': 7.
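To illustrate the failure mode, here is a standalone sketch (not the project's actual sampler code; the padding expression mirrors the traceback, but the rem calculation and values are made up):

```python
# Bucket padding repeats the index list; if batch_size is a float,
# the repeat count becomes a float and list * float raises TypeError.
ids_bucket = [0, 1, 2, 3, 4]
len_bucket = len(ids_bucket)

batch_size = 7.0                 # float, as shown in the logged config
rem = batch_size - len_bucket    # 2.0 -- stays a float

try:
    padded = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[: int(rem % len_bucket)]
except TypeError as e:
    print(e)                     # can't multiply sequence by non-int of type 'float'

# Casting batch_size to int (or writing "batch_size": 7 in the JSON config)
# keeps rem an int, and the list repetition works as intended.
batch_size = int(7.0)
rem = batch_size - len_bucket    # 2
padded = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[: rem % len_bucket]
print(padded)                    # [0, 1, 2, 3, 4, 0, 1]
```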