coqui-ai / TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
http://coqui.ai
Mozilla Public License 2.0

[Bug] Failed to finetune XTTS using multiple gpus #3132

Closed yiliu-mt closed 10 months ago

yiliu-mt commented 10 months ago

Describe the bug

I tried to fine-tune XTTS using the official script:

CUDA_VISIBLE_DEVICES="0, 1, 2" python -m trainer.distribute --script train_gpt_xtts.py

However, the training fails. It works in the single-GPU case.

To Reproduce

CUDA_VISIBLE_DEVICES="0, 1, 2" python -m trainer.distribute --script train_gpt_xtts.py
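For context, the launcher appears to spawn one copy of train_gpt_xtts.py per visible GPU, adding --use_ddp=true and a per-process --rank (see the argv lists at the top of the log below). A rough, purely illustrative sketch of that behaviour, not the actual trainer.distribute code:

# Rough sketch of a per-GPU launcher, inferred from the argv lists in the log;
# NOT the actual implementation of trainer.distribute.
import os
import subprocess
import sys

gpus = os.environ.get("CUDA_VISIBLE_DEVICES", "0").split(",")
procs = []
for rank, _ in enumerate(gpus):
    cmd = [sys.executable, "train_gpt_xtts.py", "--use_ddp=true", f"--rank={rank}"]
    procs.append(subprocess.Popen(cmd))  # one training process per GPU
for p in procs:
    p.wait()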

Expected behavior

No response

Logs

['train_gpt_xtts.py', '--continue_path=', '--restore_path=', '--group_id=group_2023_11_01-100519', '--use_ddp=true', '--rank=0']
['train_gpt_xtts.py', '--continue_path=', '--restore_path=', '--group_id=group_2023_11_01-100519', '--use_ddp=true', '--rank=1']
['train_gpt_xtts.py', '--continue_path=', '--restore_path=', '--group_id=group_2023_11_01-100519', '--use_ddp=true', '--rank=2']
>> DVAE weights restored from: /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/XTTS_v1.1_original_model_files/dvae.pth
 | > Found 13100 files in /nfs2/speech/data/tts/Datasets/LJSpeech-1.1
fatal: detected dubious ownership in repository at '/nfs2/yi.liu/src/TTS'
To add an exception for this directory, call:

        git config --global --add safe.directory /nfs2/yi.liu/src/TTS
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
>> DVAE weights restored from: /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/XTTS_v1.1_original_model_files/dvae.pth
 | > Found 13100 files in /nfs2/speech/data/tts/Datasets/LJSpeech-1.1
fatal: detected dubious ownership in repository at '/nfs2/yi.liu/src/TTS'
To add an exception for this directory, call:

        git config --global --add safe.directory /nfs2/yi.liu/src/TTS
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
>> DVAE weights restored from: /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/XTTS_v1.1_original_model_files/dvae.pth
 | > Found 13100 files in /nfs2/speech/data/tts/Datasets/LJSpeech-1.1
fatal: detected dubious ownership in repository at '/nfs2/yi.liu/src/TTS'
To add an exception for this directory, call:

        git config --global --add safe.directory /nfs2/yi.liu/src/TTS
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
fatal: detected dubious ownership in repository at '/nfs2/yi.liu/src/TTS'
To add an exception for this directory, call:

        git config --global --add safe.directory /nfs2/yi.liu/src/TTS
 > Training Environment:
 | > Backend: Torch
 | > Mixed precision: False
 | > Precision: float32
 | > Current device: 0
 | > Num. of GPUs: 3
 | > Num. of CPUs: 64
 | > Num. of Torch Threads: 1
 | > Torch seed: 1
 | > Torch CUDNN: True
 | > Torch CUDNN deterministic: False
 | > Torch CUDNN benchmark: False
 | > Torch TF32 MatMul: False
[W socket.cpp:601] [c10d] The client socket has failed to connect to [localhost]:54321 (errno: 99 - Cannot assign requested address).
 > Start Tensorboard: tensorboard --logdir=/nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_10+05AM-0000000
 > Using PyTorch DDP

 > Model has 543985103 parameters

 > EPOCH: 0/1000
 --> /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_10+05AM-0000000
 > Filtering invalid eval samples!!
 > Filtering invalid eval samples!!
 > Filtering invalid eval samples!!
 > Total eval samples after filtering: 131
 > Total eval samples after filtering: 131
 > Total eval samples after filtering: 131

 > EVALUATION

 | > Synthesizing test sentences.

  --> EVAL PERFORMANCE
     | > avg_loader_time: 0.1864158809185028 (+0)
     | > avg_loss_text_ce: 0.024682655464857817 (+0)
     | > avg_loss_mel_ce: 3.357127457857132 (+0)
     | > avg_loss: 3.38181009888649 (+0)

 > BEST MODEL : /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_10+05AM-0000000/best_model_0.pth

 > EPOCH: 1/1000
 --> /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_10+05AM-0000000
 > Sampling by language: dict_keys(['en'])
 > Sampling by language: dict_keys(['en'])
 > Sampling by language: dict_keys(['en'])

 > TRAINING (2023-11-01 10:06:15)
Traceback (most recent call last):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1808, in fit
    self._fit()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1760, in _fit
    self.train_epoch()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1487, in train_epoch
    for cur_step, batch in enumerate(self.train_loader):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/_utils.py", line 644, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
TypeError: 'int' object is not iterable

 ! Run is kept in /nfs2/yi.liu/src/TTS/recipes/ljspeech/xtts_v1/run/training/GPT_XTTS_LJSpeech_FT-November-01-2023_10+05AM-0000000
Traceback (most recent call last):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1808, in fit
    self._fit()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1760, in _fit
    self.train_epoch()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1487, in train_epoch
    for cur_step, batch in enumerate(self.train_loader):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/_utils.py", line 644, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
TypeError: 'int' object is not iterable

Traceback (most recent call last):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1808, in fit
    self._fit()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1760, in _fit
    self.train_epoch()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/trainer/trainer.py", line 1487, in train_epoch
    for cur_step, batch in enumerate(self.train_loader):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/_utils.py", line 644, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/xtts/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
TypeError: 'int' object is not iterable

Environment

TTS: v0.19.1
pytorch: 2.0.1+cu117
python: 3.9.18

Additional context

Please let me know if any other information is needed. Thanks!
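One more observation, in case it helps: the bottom-most TypeError in the traceback is the generic error PyTorch raises when the index object handed to the dataset fetcher is a single int rather than a list of indices (the "data = [self.dataset[idx] for idx in possibly_batched_index]" line). A minimal, self-contained sketch of that failure mode with a toy dataset, purely illustrative and not the trainer's actual sampler code:

# Illustrative reproduction of the final TypeError only; NOT the TTS trainer's
# actual data-loading code. It shows the generic case in which torch.utils.data
# raises "'int' object is not iterable": a DataLoader gets a batch_sampler that
# yields single integer indices instead of lists of indices.
import torch
from torch.utils.data import BatchSampler, DataLoader, Dataset, SequentialSampler

class ToyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.tensor(idx)

ds = ToyDataset()

# A per-sample sampler passed where a batch sampler is expected yields ints,
# so the fetcher's `for idx in possibly_batched_index` fails exactly as above.
broken = DataLoader(ds, batch_sampler=SequentialSampler(ds))
try:
    next(iter(broken))
except TypeError as err:
    print("reproduced:", err)  # 'int' object is not iterable

# Wrapping the same sampler in BatchSampler makes it yield lists of indices.
fixed = DataLoader(ds, batch_sampler=BatchSampler(SequentialSampler(ds), batch_size=4, drop_last=False))
print(next(iter(fixed)))  # tensor([0, 1, 2, 3])

So my guess (unverified) is that the DDP code path ends up handing the DataLoader a per-sample sampler where a batch-style sampler is expected, or vice versa.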

Edresson commented 10 months ago

Hi @yiliu-mt,

Currently, XTTS fine-tuning does not support multi-GPU training, and I'm not sure when we will be able to implement this support. However, contributions are welcome; feel free to send a PR.

yiliu-mt commented 10 months ago

I see. For the moment, fine-tuning a single speaker is probably fine with just one GPU. I will look into the multi-GPU implementation in the future if a large-scale pretraining is needed. Thanks!
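For reference, the single-GPU path that works for me is running the recipe script directly on one device, without the trainer.distribute launcher, along the lines of:

CUDA_VISIBLE_DEVICES=0 python train_gpt_xtts.py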

FurkanGozukara commented 10 months ago

Can you share your venv pip freeze?

It sees 0 GPUs for me at the moment.
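As a basic sanity check on my side, I'm verifying that PyTorch can see the GPUs at all (nothing TTS-specific):

# Minimal check of GPU visibility from the training environment (illustrative).
import os
import torch

print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("torch.cuda.is_available() =", torch.cuda.is_available())
print("torch.cuda.device_count() =", torch.cuda.device_count())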

hdmjdp commented 9 months ago

Have you fixed the multi-GPU problem?

OswaldoBornemann commented 5 months ago

Have you fixed the multi-GPU problem?

Eyalm321 commented 1 month ago

Did anyone get the DDP working?