Open xuullin opened 1 year ago
What are the ideas to implement such a model in this project, as the model captures my movements and demeanor, imitates my speaking style and actions, and can chat with me intelligently afterwards?
Hi @xuullin, you have to take snapshots of the face expressions before using the VRC FaceExpression Proxy at runtime.
// Show the "Angry" expression for 3 seconds
var faces = new List<FaceExpression>() { new FaceExpression("Angry", 3.0f) };
modelController.SetFace(faces);
Or, set the face expression on the response from your skill:
response.AddFace("Angry", 3.0f);
See also the example for ChatGPT. https://github.com/uezo/ChatdollKit/blob/master/Examples/ChatGPT/ChatGPTSkill.cs#L33
If I want to make the model's mouth shape match the audio, how should I modify the code?
Set up uLipSync or OVRLipSync correctly. You don't need to modify the code.
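To illustrate what a lip-sync component does under the hood (this is a hypothetical sketch, not the uLipSync or OVRLipSync API — those libraries analyze phonemes and map them to viseme blend shapes, which is why no code changes are needed), here is a minimal MonoBehaviour that drives a single mouth-open blend shape from the audio output level. The field names and blend shape index are assumptions that depend on your model:

```csharp
using UnityEngine;

// Illustrative only: maps speech volume to a mouth-open blend shape.
// Real lip sync maps phonemes to visemes (A/I/U/E/O) instead.
public class SimpleMouthDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;      // mesh that has a mouth-open blend shape
    public int mouthBlendShapeIndex = 0;  // index of e.g. "MouthOpen" (model-specific)
    public AudioSource voiceSource;       // the AudioSource playing the model's speech

    private readonly float[] samples = new float[256];

    void Update()
    {
        // Sample the current audio output and compute a rough loudness (RMS)
        voiceSource.GetOutputData(samples, 0);
        float sum = 0f;
        foreach (var s in samples) sum += s * s;
        float rms = Mathf.Sqrt(sum / samples.Length);

        // Map loudness to blend shape weight (0..100); 20f is an arbitrary gain
        face.SetBlendShapeWeight(mouthBlendShapeIndex, Mathf.Clamp01(rms * 20f) * 100f);
    }
}
```

In practice you would not write this yourself: attach the uLipSync (or OVRLipSync) components, map their viseme outputs to your model's blend shapes in the Inspector, and ChatdollKit's audio playback drives them automatically.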
thank you
Thank you very much for your reply. Could you talk about the connections and differences between this project and intelligent digital human generation technology? What changes would need to be made to this project to implement intelligent digital human generation?
Hi, I have selected Setup VRC FaceExpression Proxy, but after running Unity the character model does not make any facial expressions. What could be the cause? I'm using a model from Booth. Also, if I want to make the model's mouth shape match the audio, how should I modify the code? What are some ideas for achieving this?