@copperdong For Python usage, open an issue in the TensorFlowTTS repo: https://github.com/TensorSpeech/TensorFlowTTS. Also, my FS2 model uses phonemes, so using a text-only processor is wrong. If you're using my fork of TensorFlowTTS, this is how you do it:
text = "There’s a way to measure the acute emotional intelligence that has never gone out of style"
proc = LJSpeechProcessor(None,"english_cleaners")
ids, arpatxt = proc.processtxtph(text)
Feed the ids into the model with a batch dimension added, like this:
tf.expand_dims(tf.convert_to_tensor(ids, dtype=tf.int32), 0)
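For context, a fuller sketch of the inference call, assuming the standard upstream TensorFlowTTS FastSpeech2 signature (the fork may differ; fs2 is a placeholder name for the loaded model):

import tensorflow as tf

# phoneme IDs from the processor, with a batch dimension as above
input_ids = tf.expand_dims(tf.convert_to_tensor(ids, dtype=tf.int32), 0)
# upstream FastSpeech2 inference returns (mel_before, mel_after, durations, f0, energy)
mel_before, mel_after, durations, _, _ = fs2.inference(
    input_ids=input_ids,
    speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
    speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    f0_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
    energy_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
)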
Thank you! I found that memory keeps growing after every inference, so I made the following changes in FastSpeech2.cpp:
bool FastSpeech2::Initialize(const std::string & SavedModelFolder) {
    try {
        // Serialized ConfigProto bytes pinning TF to 1 intra-op and 1 inter-op thread
        std::vector<uint8_t> config = {0x10, 0x1, 0x28, 0x1};
        FastSpeech = new Model(SavedModelFolder, config);
    }
    catch (...) {
        FastSpeech = nullptr;
        return false;
    }
    return true;
}
The corresponding Python code that produces those bytes is:
import tensorflow as tf

# Build a single-threaded session config and serialize it to raw protobuf bytes
config = tf.compat.v1.ConfigProto(inter_op_parallelism_threads=1, intra_op_parallelism_threads=1)
serialized = config.SerializeToString()
print(list(map(hex, serialized)))  # ['0x10', '0x1', '0x28', '0x1']
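As a quick sanity check, the serialized bytes can be parsed back into a ConfigProto (standard protobuf ParseFromString) to confirm they encode the intended thread limits:

check = tf.compat.v1.ConfigProto()
check.ParseFromString(bytes([0x10, 0x1, 0x28, 0x1]))
print(check.intra_op_parallelism_threads, check.inter_op_parallelism_threads)  # prints: 1 1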
@copperdong That's interesting. I'll check it out.
Hello, thank you for your code. I tried to test the models with Python, but the generated WAV file is wrong. Can you help me check my code? Thank you!