idiap / coqui-ai-TTS

🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
https://coqui-tts.readthedocs.io
Mozilla Public License 2.0

[Bug] Training XTTSv2 leads to weird training lags #54

Open NikitaKononov opened 2 months ago

NikitaKononov commented 2 months ago

Describe the bug

Hello, training XTTSv2 leads to weird training lags: training gets stuck with no errors.

Using DDP across 6x RTX A6000 with 512 GB RAM. Here is the GPU load monitoring graph: purple is gpu0, green is gpu1 (all the other GPUs behave like gpu1). [screenshot: GPU load graph with DDP]

Without DDP: [screenshot: GPU load graph without DDP]

Tried different dataset sizes (2500 hrs and 250 hrs); the result remains the same.

I think there is some kind of error in the Trainer, or maybe in the xtts scripts; I don't know where to dig. There is no swap memory usage, no CPU overload, and no RAM overload (at least according to ClearML, htop, and top), and the disk is a fast NVMe. Thank you.
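One thing that could help localize the hang (a generic diagnostic sketch, not part of this repo): Python's standard faulthandler module can periodically dump every thread's stack, so when the stall happens each rank's log shows exactly which call it is blocked in. Placed near the top of the training script:

```python
# Hedged diagnostic sketch: dump all thread stacks every 5 minutes so a
# DDP hang can be localized to the exact blocking call on each rank.
import faulthandler
import sys

# repeat=True keeps dumping until cancelled; output goes to stderr, so it
# appears in each rank's log.
faulthandler.dump_traceback_later(timeout=300, repeat=True, file=sys.stderr)
```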

To Reproduce

python -m trainer.distribute --script recipes/ljspeech/xtts_v2/train_gpt_xtts.py --gpus 0,1,2,3,4,5
python -m trainer.distribute --script recipes/ljspeech/xtts_v2/train_gpt_xtts.py --gpus 0,1
python3 recipes/ljspeech/xtts_v2/train_gpt_xtts.py
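When reproducing, one way to get more signal from the distributed backend is to enable verbose collective logging. These are standard PyTorch/NCCL environment variables, not specific to this repo, and must be set before torch.distributed initializes (e.g. at the very top of train_gpt_xtts.py):

```python
# Hedged sketch: verbose logging for the distributed backend, set before
# the process group is created.
import os

os.environ["NCCL_DEBUG"] = "INFO"                 # per-rank NCCL logs
os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"  # extra c10d sanity checks
```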

Expected behavior

No response

Logs

No response

Environment

TTS 0.24.1
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA RTX A6000               On  | 00000000:01:00.0 Off |                  Off |
| 46%   70C    P2             229W / 300W |  32382MiB / 49140MiB |     91%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A6000               On  | 00000000:25:00.0 Off |                  Off |
| 42%   68C    P2             246W / 300W |  27696MiB / 49140MiB |     77%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   2  NVIDIA RTX A6000               On  | 00000000:41:00.0 Off |                  Off |
| 38%   67C    P2             256W / 300W |  27640MiB / 49140MiB |     63%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   3  NVIDIA RTX A6000               On  | 00000000:81:00.0 Off |                  Off |
| 39%   67C    P2             245W / 300W |  27640MiB / 49140MiB |     67%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   4  NVIDIA RTX A6000               On  | 00000000:A1:00.0 Off |                  Off |
| 46%   70C    P2             239W / 300W |  27620MiB / 49140MiB |     66%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
|   5  NVIDIA RTX A6000               On  | 00000000:C2:00.0 Off |                  Off |
| 30%   31C    P8              17W / 300W |      3MiB / 49140MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

Additional context

No response

NikitaKononov commented 2 months ago

Tried num_workers=0 and >0, MP_THREADS_NUM, and so on; nothing helps. There is plenty of RAM and shared memory.
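One classic cause of this symptom in DDP is ranks iterating over different numbers of batches per epoch, which leaves the other ranks waiting forever in an all-reduce. A hedged check for that (assumes torch.distributed is already initialized; `train_loader` is a placeholder for whatever DataLoader the recipe builds):

```python
# Hedged sketch: verify every rank sees the same number of batches per
# epoch. Uneven counts are a common cause of silent DDP hangs.
import torch.distributed as dist

def check_batch_counts(train_loader):
    n = len(train_loader)
    counts = [None] * dist.get_world_size()
    dist.all_gather_object(counts, n)  # collect each rank's batch count
    if dist.get_rank() == 0 and len(set(counts)) != 1:
        print(f"Uneven batch counts across ranks: {counts}")
```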