guansss / pixi-live2d-display

A PixiJS plugin to display Live2D models of any kind.
https://guansss.github.io/pixi-live2d-display/
MIT License
823 stars 125 forks

When will lip sync support be added? #78

Open zonglang opened 1 year ago

zonglang commented 1 year ago

As the title says: I saw "TODO: Add lip sync API" in the code, so it seems lip sync isn't currently supported, even though the official SDK demo has this capability. May I ask whether this feature will be added?

guansss commented 1 year ago

I've never had much motivation to fill this gap, since there seem to be very few scenarios where it's useful. Strictly speaking, it isn't a built-in feature of Live2D either (it's only provided as a sample), so there's no real need to put it in this library. If you want to use it, just copy the official files into your project:

https://github.com/Live2D/CubismWebSamples/blob/2a2d4f34c6f04e301a82f642f548efb9442cbaa1/Samples/TypeScript/Demo/src/lappwavfilehandler.ts

https://github.com/Live2D/CubismWebSamples/blob/2a2d4f34c6f04e301a82f642f548efb9442cbaa1/Samples/TypeScript/Demo/src/lappmodel.ts#L515-L524
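For reference, the linked LAppWavFileHandler sample works by computing an RMS level over the current frame's PCM samples and feeding that to the lip sync parameters. A minimal TypeScript sketch of that volume estimation (the function name and the clamping here are my own assumptions, not the sample's exact API):

```typescript
// Estimate a mouth-open value from one frame of PCM samples,
// roughly mirroring the RMS computation in the official
// LAppWavFileHandler sample. `samples` is assumed normalized to [-1, 1].
export function estimateMouthOpen(samples: Float32Array): number {
  if (samples.length === 0) return 0;
  let sumOfSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    sumOfSquares += samples[i] * samples[i];
  }
  // Clamp so the result can be used directly as a parameter value.
  return Math.min(1, Math.sqrt(sumOfSquares / samples.length));
}
```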

guansss commented 1 year ago

Wait, I took a closer look. It turns out this lip sync is based on the model's own audio files; I had always assumed it was based on the user's voice input…

In that case it does seem worth adding after all. I'll reconsider it.

zonglang commented 1 year ago

> Wait, I took a closer look. It turns out this lip sync is based on the model's own audio files; I had always assumed it was based on the user's voice input…
>
> In that case it does seem worth adding after all. I'll reconsider it.

Right. The core code is just `this._model.addParameterValueById(this._lipSyncIds.at(i), value, 0.8);`, so I think copying over the logic from the official demo that reads the volume from the WAV file would do. Since I want to decouple lip sync from motions, I'm planning to update the mouth shape in the event notification you provide (`beforeModelUpdate`).
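The per-frame update described above might be sketched like this. `updateLipSync` and the `CoreModelLike` interface are illustrative names of mine; only `addParameterValueById`, the lip sync parameter IDs, and the 0.8 weight come from the snippet quoted above.

```typescript
// Sketch of applying a volume level to a model's lip sync parameters
// inside a per-frame hook. The weight 0.8 mirrors the value quoted
// from the official demo code.
interface CoreModelLike {
  addParameterValueById(id: string, value: number, weight?: number): void;
}

export function updateLipSync(
  coreModel: CoreModelLike,
  lipSyncIds: string[],
  volume: number, // current audio volume, expected in [0, 1]
  weight = 0.8
): void {
  // Clamp so out-of-range volumes don't push the parameter past its limit.
  const value = Math.max(0, Math.min(1, volume));
  for (const id of lipSyncIds) {
    coreModel.addParameterValueById(id, value, weight);
  }
}
```

In practice the `volume` argument would come from whatever measures the playing audio (for example a Web Audio `AnalyserNode`), and the call would sit inside a `beforeModelUpdate` listener so it runs before each model update.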

guansss commented 1 year ago

A follow-up: this code is under Live2D's own license, and I honestly can't make sense of that license. I'm not sure whether it can be used in a redistributed project, so this is where things are stuck for now.

fatalgoth commented 1 year ago

I made some modifications in my fork to play model audio files with lip sync. I don't know anything about the licensing unfortunately:

https://github.com/fatalgoth/pixi-live2d-display/commit/ebfb98ad91f221794e70fa94064d19aab4c2a6e8

windjackz commented 1 year ago

> I made some modifications in my fork to play model audio files with lip sync. I don't know anything about the licensing unfortunately:
>
> fatalgoth@ebfb98a

Thanks for your work. One question: are the static variables `contents` and `analysers` in `SoundManager.ts` cleaned up correctly once the audio has been removed?

954-Ivory commented 1 year ago

> I made some modifications in my fork to play model audio files with lip sync. I don't know anything about the licensing unfortunately: fatalgoth@ebfb98a
>
> Thanks for your work. One question: are the static variables `contents` and `analysers` in `SoundManager.ts` cleaned up correctly once the audio has been removed?

Can you teach me how to use this fork?

windjackz commented 1 year ago

> I made some modifications in my fork to play model audio files with lip sync. I don't know anything about the licensing unfortunately: fatalgoth@ebfb98a
>
> Thanks for your work. One question: are the static variables `contents` and `analysers` in `SoundManager.ts` cleaned up correctly once the audio has been removed?
>
> Can you teach me how to use this fork?

I haven't used it in a prod env yet. Three ways that might work:

1. Clone the fork, run `npm install`, then `npm run setup && npm run prepublishOnly`. You can then use the built `index.min.js` via CDN.
2. Clone the fork and republish it to npm as your own package.
3. Fork the project and use it as a git submodule in your project. However, you'll have to sort out the import paths, since the project uses path aliases.

954-Ivory commented 1 year ago

> I haven't used it in a prod env yet. Three ways that might work:
>
> 1. Clone the fork, run `npm install`, then `npm run setup && npm run prepublishOnly`. You can then use the built `index.min.js` via CDN.
> 2. Clone the fork and republish it to npm as your own package.
> 3. Fork the project and use it as a git submodule in your project. However, you'll have to sort out the import paths, since the project uses path aliases.

I have reviewed and tried this fork's source code. It seems incomplete.

I'm trying to implement it myself. Do you have any thoughts on that?

1. You can replace pixi-live2d-display in package.json like this:
   `"pixi-live2d-display": "git+https://github.com/fatalgoth/pixi-live2d-display.git",`
2. Run `yarn install`.
windjackz commented 1 year ago

> I haven't used it in a prod env yet. Three ways that might work:
>
> 1. Clone the fork, run `npm install`, then `npm run setup && npm run prepublishOnly`. You can then use the built `index.min.js` via CDN.
> 2. Clone the fork and republish it to npm as your own package.
> 3. Fork the project and use it as a git submodule in your project. However, you'll have to sort out the import paths, since the project uses path aliases.
>
> I have reviewed and tried this fork's source code. It seems incomplete.
>
> I'm trying to implement it myself. Do you have any thoughts on that?
>
> 1. You can replace pixi-live2d-display in package.json like this:
>    `"pixi-live2d-display": "git+https://github.com/fatalgoth/pixi-live2d-display.git",`
> 2. Run `yarn install`.

It's really a good idea, but it didn't seem to work for me until I added a prepare script:

windjackz@499d2c8

The fork's source code works, but it doesn't seem to clean up correctly.

I'll implement it myself too, and try to decouple the sounds from the JSON file.
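For context, npm and Yarn run a package's `prepare` script after installing it from a git URL, which is why a plain `git+https` dependency fails when the fork only ships sources. A sketch of what such a script might look like in the fork's package.json (the commands here just reuse the `setup` and `prepublishOnly` scripts mentioned above; the actual commit linked above may differ):

```json
{
  "scripts": {
    "prepare": "npm run setup && npm run prepublishOnly"
  }
}
```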

954-Ivory commented 1 year ago

> It's really a good idea, but it didn't seem to work for me until I added a prepare script:
>
> windjackz@499d2c8
>
> The fork's source code works, but it doesn't seem to clean up correctly.
>
> I'll implement it myself too, and try to decouple the sounds from the JSON file.

Yes, I have the same need. I want to decouple everything (the JSON file, the motion triggers) from it.

AceyKubbo commented 8 months ago

Could you guys try wav2lip + lipsync and see whether that can handle the mouth-shape binding?

954-Ivory commented 8 months ago

> Could you guys try wav2lip + lipsync and see whether that can handle the mouth-shape binding?

I already got this working half a year ago; it's not that complicated.

AceyKubbo commented 8 months ago

> Could you guys try wav2lip + lipsync and see whether that can handle the mouth-shape binding?
>
> I already got this working half a year ago; it's not that complicated.

Could you suggest a technical approach or a reference project? I'd like to study it. Thanks 🥹

954-Ivory commented 8 months ago

> Could you guys try wav2lip + lipsync and see whether that can handle the mouth-shape binding?
>
> I already got this working half a year ago; it's not that complicated.
>
> Could you suggest a technical approach or a reference project? I'd like to study it. Thanks 🥹

I've sent you an email with some reference code (though it's pretty messy, haha).

XiaoMo-Donald commented 7 months ago

> Could you guys try wav2lip + lipsync and see whether that can handle the mouth-shape binding?
>
> I already got this working half a year ago; it's not that complicated.

I need this part too. I noticed that the pixi-live2d-display source bundled in another project has a `speak` function that apparently implements speaking, but when I checked this repository there is no such `speak` function, as shown in the attached screenshot.

PoteXB commented 7 months ago

> Could you guys try wav2lip + lipsync and see whether that can handle the mouth-shape binding?
>
> I already got this working half a year ago; it's not that complicated.

Could you send it to me as well? I'd like to study it. Thanks.