cdfmlr / muvtuber

Makes your AI vtuber

live2d motion flow question #46

Open ilNikk opened 1 year ago

ilNikk commented 1 year ago

Hi! What is the signal flow for the lip-sync and the other motions? I tried to translate the README.md but I couldn't find anything about it. The character does not currently play any motion. Maybe this feature is only tied to the musharing_chatbot response instead of the Azure response (I use ChatGPT for responses)?

live2ddriver

2023/05/30 19:48:17 INFO fwd msg: {"motion":"idle"} -> http://localhost:51070 (chan 0xc000486240).
2023/05/30 19:48:17 WARN may be a OpenMouth after emo-motion, ignore: {"motion":"flick_head"}
2023/05/30 19:48:17 INFO fwd msg: {} -> http://localhost:51070 (chan 0xc000486240).

fwd msg: {} -> http://localhost:51070 should not be empty, right? Shouldn't it be {"motion":"flick_head"}? What decides this string?

Sometimes I get flick_head, but it's not perfectly synced and has a delay.

2023/05/30 20:11:54 INFO fwd msg: {"motion":"flick_head"} -> http://localhost:51070 (chan 0xc000114de0).
2023/05/30 20:11:54 INFO fwd msg: {"motion":"flick_head"} -> http://localhost:51070 (chan 0xc000486240).
2023/05/30 20:12:04 INFO fwd msg: {"motion":"idle"} -> http://localhost:51070 (chan 0xc000114de0).
2023/05/30 20:12:04 INFO fwd msg: {"motion":"idle"} -> http://localhost:51070 (chan 0xc000486240).

Another small problem: if I enable ReadDm (in the muvtuberdriver config file), it seems that no motion request is made, only the ChatGPT response.

cdfmlr commented 1 year ago

This blog post https://zhuanlan.zhihu.com/p/609878670 explains the initial design of muvtuber. It may help you understand how things work.

motion

What is the signal flow for the lip-sync and the other motions?

The command {"motion":"xxx"} triggers a live2d motion. There is no lip-sync (for now).
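For illustration, here is a minimal Go sketch of sending such a command, assuming live2ddriver accepts the JSON body via HTTP POST at the forwarding address shown in the logs above; the endpoint and method are assumptions, not the actual muvtuber API:

```go
// A minimal sketch: post a {"motion":"xxx"} command to live2ddriver.
// The POST method and bare path are assumptions for illustration.
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	// The command payload: {"motion":"xxx"} triggers the named motion.
	cmd := []byte(`{"motion":"flick_head"}`)

	// http://localhost:51070 is the address shown in the log lines above.
	resp, err := http.Post("http://localhost:51070", "application/json", bytes.NewReader(cmd))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("sent motion command, status:", resp.Status)
}
```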

The character does not currently play any motion.

fwd msg: {} -> http://localhost:51070 should not be empty, right? Shouldn't it be {"motion":"flick_head"}?

Motion control depends on the emotion analysis module (emotext). If no emotion is detected, it sends this empty command. The empty message does not change anything, so the live2d model stays idle. It's harmless.
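A rough Go sketch of that decision, with a hypothetical emotion-to-motion table (the emotion names, motion names, and function names here are illustrative, not the real emotext output or the actual muvtuber code):

```go
// Sketch of the motion decision described above: map a detected
// emotion to a motion command, or send {} when nothing is detected.
package main

import (
	"encoding/json"
	"fmt"
)

// motionForEmotion maps a detected emotion to a live2d motion name.
// The entries are assumptions for illustration.
var motionForEmotion = map[string]string{
	"anger":     "flick_head",
	"happiness": "tap_body",
}

// motionCommand builds the JSON forwarded to live2ddriver:
// {"motion":"xxx"} when an emotion maps to a motion, {} otherwise.
func motionCommand(emotion string) string {
	cmd := map[string]string{}
	if motion, ok := motionForEmotion[emotion]; ok {
		cmd["motion"] = motion
	}
	b, _ := json.Marshal(cmd)
	return string(b)
}

func main() {
	fmt.Println(motionCommand("anger")) // {"motion":"flick_head"}
	fmt.Println(motionCommand(""))      // {} — harmless, model stays idle
}
```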

Maybe this feature is only tied to the musharing_chatbot response instead of the Azure response (I use ChatGPT for responses)?

It works for any output text (from musharing_chatbot or ChatGPT).

lip-sync

Sometimes I get flick_head, but it's not perfectly synced and has a delay.

The live2d library we depend on does not support lip-sync for now, so I made a workaround for lip-sync, which is:

Repeating a live2d motion that contains lip movement to emulate lip-sync.

That is why it is delayed and does not look great.
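A rough Go sketch of the idea, with an assumed motion length and a stand-in sendMotion helper (not the actual muvtuber code). Because the motion is replayed on a fixed interval rather than driven by the audio, the mouth inevitably lags behind the voice:

```go
// Sketch of the workaround: while the speech plays, keep re-triggering
// a motion that includes mouth movement, so the lips appear to move.
package main

import (
	"fmt"
	"time"
)

// sendMotion stands in for forwarding {"motion":name} to live2ddriver.
func sendMotion(name string) {
	fmt.Printf("fwd msg: {\"motion\":%q}\n", name)
}

// emulateLipSync repeats a mouth-moving motion for the whole speech
// duration, then returns the model to idle.
func emulateLipSync(speech time.Duration) {
	const interval = 2 * time.Second // assumed motion length
	for elapsed := time.Duration(0); elapsed < speech; elapsed += interval {
		sendMotion("flick_head") // a motion that happens to open the mouth
		time.Sleep(interval)
	}
	sendMotion("idle") // back to idle when speech ends
}

func main() {
	emulateLipSync(6 * time.Second)
}
```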

#38 may improve this, but I haven't experimented with it yet.