aws-samples / amazon-sumerian-hosts

Amazon Sumerian Hosts (Hosts) is an experimental open source project that aims to make it easy to create interactive animated 3D characters for Babylon.js, three.js, and other web 3D frameworks. It leverages AWS services including Amazon Polly (text-to-speech) and Amazon Lex (chatbot).

Using synthesized audio stream #24

Closed DC2009 closed 3 years ago

DC2009 commented 3 years ago

Hi, is it possible to use already-synthesized audio streams instead of text in order to animate visemes/phonemes? Or can we control viseme/phoneme animations directly? We connect to Polly to synthesize speech from our server and provide the frontend with ready-made audio streams.

c-morten commented 3 years ago

Hi @DC2009. Unfortunately we don't currently have this as an option for hosts through the exposed feature API. Part of the reason is that it opens up room for error and confusion: since you need to create both speech audio and speechmarks, we didn't want people to assume they could pass in any audio file and get working lipsync. We also keep track of Polly usage so we can gauge how much use people are getting out of our open source hosts solution. You could certainly fork the repository and create a custom build that allows for this, though. There is only a single method you would need to override: AbstractTextToSpeechFeature._updateSpeech. At the bottom, where we wait for the speechmarks and speech audio to be synthesized, you could skip that and use pre-existing objects instead.
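To give a sense of the shape of that change (illustrative only, not the library's actual code), the idea is that the point where _updateSpeech awaits the Polly synthesis results could instead resolve with objects you already have:

// Illustrative shape only, not the library's actual implementation: where the
// stock method awaits Polly synthesis for audio and speechmarks, a custom
// build could resolve with pre-existing objects instead.
async function useExistingSpeechData(preExistingAudio, preExistingSpeechmarks) {
  const [audio, speechmarks] = await Promise.all([
    Promise.resolve(preExistingAudio),       // would otherwise be the audio synthesis promise
    Promise.resolve(preExistingSpeechmarks), // would otherwise be the speechmark synthesis promise
  ]);
  return { audio, speechmarks };
}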

You also do have full control over the visemes. You can manually blend them on and off using the AnimationFeature.setAnimationBlendWeight method.
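As a rough, untested sketch of manual viseme control (this assumes the 'Viseme' layer and 'visemes' freeBlend animation names from our sample scenes, and a (layer, animation, viseme name, target weight, transition seconds) argument order; double-check both against your own setup and the API docs):

// Drive a single viseme on and off by hand. Each viseme is a named blend
// weight on the freeBlend animation, so manual control is just setting that
// weight over a short transition time.
const layerName = 'Viseme';   // assumed layer name from the sample scenes
const animName = 'visemes';   // assumed freeBlend animation name

// Ramp the 'a' viseme fully on over 0.1 seconds...
host.AnimationFeature.setAnimationBlendWeight(layerName, animName, 'a', 1, 0.1);

// ...and later ramp it back off.
host.AnimationFeature.setAnimationBlendWeight(layerName, animName, 'a', 0, 0.1);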

DC2009 commented 3 years ago

Instead of forking the repository, we decided to extend the class HOST.aws.TextToSpeechFeature and override a few methods in order to make it work without a direct connection to Polly. I called this feature VcaFeature (after the name of our service).

I receive base64-encoded audio and a string of speech marks from our server, both generated by Polly. The audio plays correctly using Babylon.
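For context, the pre-processing on our side looks roughly like this (helper names are ours; the speech mark string is the newline-delimited JSON that Polly produces):

// Polly returns speech marks as newline-separated JSON objects; our server
// forwards them as one string, so we split and parse them before use.
function parseSpeechmarks(speechmarkString) {
  return speechmarkString
    .split('\n')
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line));
}

// The base64 audio is turned into an object URL that Babylon (or a plain
// HTMLAudioElement) can play.
function audioUrlFromBase64(base64, mimeType = 'audio/mpeg') {
  const bytes = Uint8Array.from(atob(base64), c => c.charCodeAt(0));
  return URL.createObjectURL(new Blob([bytes], { type: mimeType }));
}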

However, there is no lipsync. From what I found, LipsyncFeature listens to TextToSpeechFeature events, so I changed the name of our feature to TextToSpeechFeature. LipsyncFeature now seems to pick up EVENT.play properly, but I get the following error:

Uncaught (in promise) Error: Cannot interpolate property blendValueX to value NaN. Target value must be numeric.
    at Function.interpolateProperty (AnimationUtils.js?1148:96)
    at Blend2dState.setBlendWeight (Blend2dState.js?8163:105)
    at AnimationLayer.setAnimationBlendWeight (AnimationLayer.js?beec:272)
    at AnimationFeature.setAnimationBlendWeight (AnimationFeature.js?ebde:706)
    at eval (PointOfInterestFeature.js?dee5:869)
    at Array.forEach (<anonymous>)
    at PointOfInterestFeature.update (PointOfInterestFeature.js?dee5:803)
    at eval (HostObject.js?75e7:84)
    at Array.forEach (<anonymous>)
    at HostObject.update (HostObject.js?75e7:83)
    at r.callback (host.js:26)
    at e.notifyObservers (babylon.js:16)
    at t.render (babylon.js:16)
    at t._renderFrame (babylon.js:16)
    at t._renderLoop (babylon.js:16)

The error occurs during the host update:

  23 |  // Add the host to the render loop
  24 |  const host = new HOST.HostObject({ owner: character });
  25 |  scene.onBeforeAnimationsObservable.add(() => {
> 26 |    host.update();
     | ^
  27 |  });
  28 | 

Any idea what is going on? Is it possible to extend the TextToSpeechFeature class like that, or is it better to fork the project? We didn't want to fork the project since it's quite new and will likely change.

c-morten commented 3 years ago

Hi @DC2009. I think you're on the right track extending the TextToSpeechFeature class as opposed to forking; that's definitely a valid alternative. The error you pasted above is actually related to the PointOfInterestFeature, not LipsyncFeature or TextToSpeechFeature. Have you made any other changes along with speech and lipsync? I have seen that error occur when the object the PointOfInterestFeature is targeting has an invalid transformation matrix. I would try removing the PointOfInterestFeature temporarily to see if your lipsync works without it. If that is the case, we can close this out and open a separate issue specific to point of interest for further investigation.
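If you want to sanity-check the target directly, a quick Babylon.js check like the one below should tell you whether the matrix is the problem (poiTarget is just a placeholder for whatever node you passed as the point-of-interest target; the helper itself is a sketch, not part of the hosts API):

// Check the look target's world matrix for non-finite values. NaNs or
// Infinities here would explain the NaN blend weights in the error above.
function checkPoiTarget(poiTarget) {
  const m = poiTarget.computeWorldMatrix(true).m;
  if (Array.from(m).some(v => !Number.isFinite(v))) {
    console.warn('PointOfInterest target has an invalid world matrix:', m);
  }
}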