uezo / ChatdollKit

ChatdollKit enables you to make your 3D model into a chatbot
Apache License 2.0

Face Expression #223


xuullin commented 1 year ago

Hi, I have selected Setup VRC FaceExpression Proxy, but after running Unity the character model does not make any facial expressions. What could be causing this? I'm using phane from Booth as the character model. Also, if I want to make the model's mouth shape match different audio, how should I modify the code? What are some ideas for achieving this?

xuullin commented 1 year ago

What would be an approach to implementing, in this project, a model that captures my movements and demeanor, imitates my speaking style and actions, and can then chat with me intelligently?

uezo commented 1 year ago

Hi @xuullin, you have to take snapshots of the face expressions before using the VRC FaceExpression Proxy at runtime.

  1. [Inspector] Make face expressions as combinations of shape keys.
  2. [Inspector] Capture each expression with a name (e.g. "Angry", "Joy").
  3. [Script] Call ModelController#SetFace() in your script.
    var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
    modelController.SetFace(faces);

    Or, add a face expression to the response from your skill.

    response.AddFace("Angry", 3.0f);

See also the example for ChatGPT. https://github.com/uezo/ChatdollKit/blob/master/Examples/ChatGPT/ChatGPTSkill.cs#L33
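For reference, here is a minimal sketch of step 3 wired into a standalone component. It assumes a scene with a ModelController already configured; the class name DemoFaceTrigger and the ChatdollKit.Model import are assumptions for illustration, not taken from the snippet above.

    // Minimal sketch: trigger the captured "Angry" expression for 3 seconds
    // when the space key is pressed. Assumes a ModelController is already
    // configured in the scene; DemoFaceTrigger and the ChatdollKit.Model
    // import are assumptions for illustration.
    using System.Collections.Generic;
    using UnityEngine;
    using ChatdollKit.Model;

    public class DemoFaceTrigger : MonoBehaviour
    {
        // Assign the configured ModelController in the Inspector
        [SerializeField] private ModelController modelController;

        private void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))
            {
                // "Angry" must match the name of a face expression
                // captured in the Inspector (step 2 above)
                var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
                modelController.SetFace(faces);
            }
        }
    }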

uezo commented 1 year ago

if I want to make the model's mouth shape match different audio, how should I modify the code?

Set up uLipSync or OVRLipSync correctly. You don't need to modify the code.

xuullin commented 1 year ago

Thank you.

xuullin commented 1 year ago

Thank you very much for your reply. Could you talk about some of the connections and differences between this project and intelligent digital human generation technology? What changes would need to be made in this project to implement intelligent digital human generation?