The objective of the Speaking Portal Project is to design, develop, and deploy a lip-sync animation API for the Kukarella text-to-speech (TTS) web application. This API will serve as an animation-generating add-on to the system, so that users can both listen to and watch their avatar speak the user-provided text.
Is this related to a problem? Please describe.
In order to properly demonstrate progress to the client this Friday, we would like our demo to attempt to speak complex words.
Describe the solution you'd like
A list of complex words should be added to a test document in the repo.
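
To make the request concrete, here is a sketch of what such a test document might contain. The filename and the specific words are illustrative assumptions, not decisions made in this issue:

```
# test/complex_words.txt — candidate words for the lip-sync demo (illustrative)
Worcestershire
onomatopoeia
otorhinolaryngologist
phenomenon
squirrel
rural
sixth
anemone
```

These examples were chosen because they tend to stress both TTS pronunciation and mouth-shape (viseme) transitions; the actual list should be agreed on before the demo.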