ZiqiaoPeng / EmoTalk

This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation

How to output ARKit blendshapes? #2

Open lucasjinreal opened 1 year ago

lucasjinreal commented 1 year ago

How can I output ARKit blendshapes?

ZiqiaoPeng commented 1 year ago

Thanks for your attention. To build the 3D-ETF dataset, we collected a large amount of audio-blendshape data that enables our network to learn the mapping from audio to blendshape coefficients. Our dataset will be released later.
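
For concreteness, below is a minimal sketch of what "learning the mapping from audio to blendshape coefficients" looks like as a regression head; the feature dimension, layer sizes, and sigmoid output range are illustrative assumptions, not EmoTalk's actual architecture.

```python
# Illustrative only: a minimal audio-feature -> blendshape regression head.
# Assumes per-frame audio features (e.g. 768-dim, wav2vec2-style) and 52
# output coefficients; EmoTalk's real network is not reproduced here.
import torch
import torch.nn as nn

class BlendshapeHead(nn.Module):
    def __init__(self, feat_dim: int = 768, num_blendshapes: int = 52):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_blendshapes),
        )

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, feat_dim)
        # returns:     (batch, frames, 52), squashed to [0, 1] like ARKit weights
        return torch.sigmoid(self.decoder(audio_feats))

if __name__ == "__main__":
    head = BlendshapeHead()
    feats = torch.randn(1, 120, 768)   # 120 frames of hypothetical audio features
    coeffs = head(feats)
    print(coeffs.shape)                # torch.Size([1, 120, 52])
```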

lucasjinreal commented 1 year ago

@ZiqiaoPeng Thanks for your reply. Are your blendshapes compatible with ARKit?

ZiqiaoPeng commented 1 year ago

Yes, our blendshapes are compatible with ARKit's, and they can be used to drive characters directly via the plugin in Unreal Engine (UE).

lucasjinreal commented 1 year ago

@ZiqiaoPeng Looks promising. How did you collect the data? Can it be used directly with MetaHuman?

ZiqiaoPeng commented 1 year ago

I asked our team of animators in detail, and they said the 52-blendshape output can be used directly in MetaHuman. We are also developing a higher-precision blendshape-driving algorithm.
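
As a rough illustration of getting the 52 per-frame coefficients into a DCC/UE workflow, here is a sketch that dumps a saved coefficient array to CSV. The `.npy` path, frame rate, and column names are assumptions; the exact format a given UE or Live Link plugin expects may differ, and the ordering of EmoTalk's output should be checked against the repository.

```python
# Illustrative only: export per-frame blendshape coefficients to a CSV that
# downstream tools can ingest. File names and column naming are hypothetical.
import csv
from typing import List, Optional

import numpy as np

def export_blendshape_csv(npy_path: str, csv_path: str, fps: float = 30.0,
                          names: Optional[List[str]] = None) -> None:
    coeffs = np.load(npy_path)              # expected shape: (frames, 52)
    num_frames, num_bs = coeffs.shape
    if names is None:
        # Placeholder names; replace with the ARKit blendshape names in the
        # order the model actually emits them.
        names = [f"blendshape_{i:02d}" for i in range(num_bs)]
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time"] + names)
        for i in range(num_frames):
            writer.writerow([round(i / fps, 4)] + [float(x) for x in coeffs[i]])

# Usage (hypothetical file names):
# export_blendshape_csv("result/output.npy", "result/output_blendshapes.csv")
```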

lucasjinreal commented 1 year ago

@ZiqiaoPeng Thanks, hoping you release the code and data soon. Still, I want to ask two questions:

  1. How was the data acquired, mainly the blendshapes?
  2. Does the model also work with Chinese audio?