lenML / Speech-AI-Forge

🍦 Speech-AI-Forge is a project developed around TTS generation models, implementing an API server and a Gradio-based WebUI.
https://huggingface.co/spaces/lenML/ChatTTS-Forge
GNU Affero General Public License v3.0

[assistance] Confirmation on Data Format and Structure for Fine-Tuning #141

Open IrisSally opened 1 month ago

IrisSally commented 1 month ago


Hi,

I am planning to fine-tune ChatTTS using my own dataset, and I would like to confirm a few details regarding the data format and requirements.

1. Data Structure and .list File Format

Based on the documentation and examples, I have organized my data as follows:

File Structure

datasets/
└── data_speaker_a/
    ├── speaker_a/
    │   ├── 1.wav
    │   ├── 2.wav
    │   └── ... (more audio files)
    └── speaker_a.list

.list File Format

Each line in the .list file is formatted as filepath|speaker|lang|text.

Example:

speaker_a/1.wav|John|ZH|你好
speaker_a/2.wav|John|EN|Hello
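For reference, here is a minimal sketch of how such a line could be parsed (parse_list_line is a made-up helper, not part of this repo). Splitting on only the first three | characters means the transcript itself may contain |:

```python
def parse_list_line(line: str) -> dict:
    """Split a filepath|speaker|lang|text line into its four fields.

    maxsplit=3 treats only the first three '|' as separators, so the
    transcript text may itself contain '|' characters.
    """
    filepath, speaker, lang, text = line.rstrip("\n").split("|", 3)
    return {"filepath": filepath, "speaker": speaker, "lang": lang, "text": text}


# Example: parse_list_line("speaker_a/1.wav|John|ZH|你好")
# yields {"filepath": "speaker_a/1.wav", "speaker": "John", "lang": "ZH", "text": "你好"}
```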

Could you please confirm if this structure and format are correct?

2. Audio Data Specifications

I am planning to use 100 audio files, each approximately 10 seconds long, with a sampling rate of 24000 Hz for training.

Is this a suitable setup for fine-tuning the model? Are there any specific recommendations or requirements?
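As a side note, the clip specs above can be sanity-checked before training with the standard library alone. This is only a sketch assuming plain PCM WAV files; check_clip is a hypothetical helper name, and the 12-second ceiling is an arbitrary tolerance around the ~10 s target:

```python
import wave


def check_clip(path: str, expected_sr: int = 24000, max_seconds: float = 12.0):
    """Return (sample_rate, duration_s, ok) for one PCM WAV clip.

    ok is True when the sample rate matches expected_sr and the clip
    is no longer than max_seconds.
    """
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        duration = w.getnframes() / sr
    return sr, duration, (sr == expected_sr and duration <= max_seconds)
```

Running this over every path listed in the .list file would catch clips that were exported at the wrong sample rate before any training time is spent.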

Thank you for your assistance!

zhzLuke96 commented 1 month ago

First, it's important to note that the current fine-tuning code is still in an unusable state.

Regarding your question about the dataset format, your understanding is correct. The configuration you described is appropriate.

As for the dataset size, there's no precise limitation or recommended size. Modern TTS models are complex with multiple trainable modules, each potentially requiring different amounts of data and configurations. For example, simple embedding fine-tuning might only need 10 voice samples, but for fine-tuning the GPT module, the amount of data needed depends on your training objective. If you're just adding a new voice, 100 samples should be sufficient. However, if you need to train instructional capabilities or enhance prompt following, you might need more.

A simple suggestion would be: if the dataset quality is poor, it's better to have more data. If the quality is high, then even a small amount of data (less than 30 samples) could be enough.

By the way, almost all of the training code in this repository comes from this PR: https://github.com/2noise/ChatTTS/pull/680. I've only made simple modifications to adapt it and pre-test the entire forge inference system (because we've made some changes to ChatTTS and have an internal .spkv1.json speaker file format).

IrisSally commented 1 month ago

Thank you for your patient explanation and assistance. It's been very helpful.