Closed by yiliu-mt 10 months ago
Hi @yiliu-mt,
Currently, XTTS fine-tuning does not support multi-GPU training. I'm not sure when we will be able to implement this support. However, contributions are welcome; feel free to send a PR.
I see. For the moment, fine-tuning a single speaker may be feasible with just one GPU. I will look into a multi-GPU implementation later if large-scale pretraining is needed. Thanks!
Can you share the `pip freeze` output from your venv?
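For anyone else asked for the same information: one way to capture it is to dump the environment to a file (the file name and the grep patterns below are arbitrary choices, not part of any official workflow):

```shell
# Dump the exact package versions from the active virtual environment
python -m pip freeze > freeze.txt

# Optionally filter for the packages most relevant here (patterns are a guess)
python -m pip freeze | grep -i -E "tts|torch" || true
```

Attaching the full `freeze.txt` is usually more useful than the filtered view, since dependency conflicts can come from unexpected packages.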
It sees 0 GPUs for me at the moment.
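One common cause of a framework reporting 0 GPUs is an empty (but set) `CUDA_VISIBLE_DEVICES`. A quick sanity check, sketched as a small helper (the function name is mine, not from the TTS codebase; at runtime `torch.cuda.device_count()` is the authoritative answer):

```python
import os

def visible_gpu_count(env=None):
    """Count GPUs exposed via CUDA_VISIBLE_DEVICES.

    Returns None when the variable is unset (all devices visible).
    An empty string hides every GPU, which makes frameworks see 0.
    """
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return None  # unset: the driver exposes all devices
    # e.g. "0,1" -> 2 visible devices; "" -> 0 visible devices
    return len([d for d in value.split(",") if d.strip()])
```

For example, `visible_gpu_count({"CUDA_VISIBLE_DEVICES": ""})` returns 0, while an unset variable returns `None`.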
Have you fixed the multi-GPU problem?
Did anyone get the DDP working?
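For reference, PyTorch DDP jobs are usually launched with `torchrun`. Whether the XTTS trainer initializes a process group correctly is exactly what this issue is about, so treat this as a generic launch sketch under assumptions, not a confirmed recipe (the script name is a placeholder):

```shell
# Generic single-node, 2-GPU DDP launch (train_script.py is a placeholder)
torchrun --nnodes=1 --nproc_per_node=2 train_script.py

# torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE in each worker process;
# a DDP-capable script reads these to initialize its process group.
```

If the script was written for single-process training only, launching it this way typically fails during distributed initialization, which matches the behavior reported below.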
Describe the bug
I tried to fine-tune XTTS using the official script:
However, training fails in the multi-GPU case; it works with a single GPU.
To Reproduce
Expected behavior
No response
Logs
Environment
Additional context
Please let me know if any other information is needed. Thanks!