Very cool project. Just wondering: are there plans to add support for lip sync in the near future? I'm looking for a solution to animate the lips given a list of visemes as input, for the Web.

Hi @fabswt We don't have this on the roadmap, but it's an interesting idea! Right now we support ARKit blendshapes, so if there's anything that can translate visemes to the relevant blendshapes, it could still be doable via the emotion property.
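
For reference, here's a minimal sketch of what such a viseme-to-blendshape bridge could look like on the Web, in TypeScript. The viseme names (Oculus-style), the weight values, and the `setBlendshapes` callback are illustrative assumptions rather than this library's API; in practice the computed weights would be fed into whatever hook accepts ARKit blendshape values (e.g. the emotion property mentioned above).

```ts
// Hypothetical viseme -> ARKit blendshape mapping. Viseme keys follow the
// Oculus/Meta viseme set; blendshape keys are standard ARKit names.
// Weight values are rough guesses and would need tuning per avatar.
type BlendshapeWeights = Record<string, number>;

const VISEME_TO_BLENDSHAPES: Record<string, BlendshapeWeights> = {
  sil: {},                                                              // silence: neutral mouth
  PP:  { mouthClose: 0.8, mouthPressLeft: 0.4, mouthPressRight: 0.4 },  // p, b, m
  FF:  { mouthFunnel: 0.3, jawOpen: 0.1 },                              // f, v
  DD:  { jawOpen: 0.2, mouthShrugUpper: 0.3 },                          // d, t
  kk:  { jawOpen: 0.25 },                                               // k, g
  CH:  { mouthFunnel: 0.5, jawOpen: 0.2 },                              // ch, j, sh
  SS:  { mouthStretchLeft: 0.3, mouthStretchRight: 0.3 },               // s, z
  aa:  { jawOpen: 0.7 },                                                // a
  E:   { jawOpen: 0.35, mouthSmileLeft: 0.3, mouthSmileRight: 0.3 },    // e
  I:   { jawOpen: 0.2, mouthSmileLeft: 0.4, mouthSmileRight: 0.4 },     // i
  O:   { jawOpen: 0.5, mouthPucker: 0.5 },                              // o
  U:   { jawOpen: 0.2, mouthPucker: 0.8 },                              // u
};

/** A timed viseme, e.g. emitted by a TTS engine alongside the audio. */
interface VisemeEvent {
  viseme: string;  // key into VISEME_TO_BLENDSHAPES
  timeMs: number;  // start time relative to audio playback start
}

/**
 * Steps through timed visemes and pushes the corresponding ARKit weights.
 * `setBlendshapes` is a placeholder for however the avatar accepts
 * ARKit weights (emotion property, morph-target dictionary, etc.).
 */
function playVisemes(
  events: VisemeEvent[],
  setBlendshapes: (weights: BlendshapeWeights) => void,
): void {
  const start = performance.now();
  let index = 0;

  const tick = () => {
    const elapsed = performance.now() - start;
    // Apply every viseme whose start time has passed since the last frame.
    while (index < events.length && events[index].timeMs <= elapsed) {
      setBlendshapes(VISEME_TO_BLENDSHAPES[events[index].viseme] ?? {});
      index++;
    }
    if (index < events.length) requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

Usage would be something like `playVisemes(visemesFromTTS, weights => avatar.setEmotion(weights))`, where `avatar.setEmotion` stands in for the actual setter. Blending or easing between consecutive visemes would make the motion smoother, but this is the basic idea.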