Closed: aluminumbox closed this 4 days ago
For the first question, we will evaluate only zero-shot TTS voice cloning, since attribute-specific codecs must be trained. Regarding the second question, a single codebook is fine as long as each attribute maps to a distinct set of embeddings. For example, if embeddings 0-511 represent speaker identity and embeddings 512-1023 represent emotion, that is fine too.
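The index-range partition described in the answer could be sketched as follows. This is a minimal illustration only; the attribute names, the 1024-entry codebook size, and the exact ranges are assumptions taken from the example above, not the challenge's required layout:

```python
# Hypothetical layout: one codebook of 1024 entries, partitioned by
# index range into attributes (per the example in the answer above).
ATTRIBUTE_RANGES = {
    "speaker_identity": range(0, 512),    # embeddings 0-511
    "emotion": range(512, 1024),          # embeddings 512-1023
}

def attribute_of(token_id: int) -> str:
    """Map a codebook index to the attribute whose range covers it."""
    for name, idx_range in ATTRIBUTE_RANGES.items():
        if token_id in idx_range:
            return name
    raise ValueError(f"token id {token_id} is outside the 1024-entry codebook")

print(attribute_of(100))   # speaker_identity
print(attribute_of(700))   # emotion
```

The point is only that a single codebook can still represent multiple attributes, as long as each index range is dedicated to one attribute.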
Hi,
In the example json1-3 files, the data format in json1/3 is clear: I believe it means synthesizing specific text with a specific speaker. However, in track 2 we need to do zero-shot TTS, for example with the json2 data example below:
Question 1. The website says "For track 2, the evaluation will involve synthesis using zero-shot voice cloning". I thought this meant both specific-speaker TTS and zero-shot TTS would be evaluated, but it seems there is only zero-shot TTS? I don't see any instruction for specific-speaker TTS in track2.json.
Question 2. The website says "with codecs representing at least 3 attributes like speaker identity, content, pitch, energy etc". I thought this meant the codes need to embed at least 3 attributes, but not necessarily in 3 different codebooks. Our system uses only 1 codebook, but it involves speaker identity / emotion / content. Is this okay?
Thanks, looking forward to your reply!