huangxin168 opened this issue 3 months ago
@huangxin168 Thanks for the feedback. Could you please share the original driving video? :)
https://github.com/user-attachments/assets/190574d2-81ad-4f18-9490-9e85bb25007a
@huangxin168 Thanks for providing it. We are currently debugging the issue.
It seems that the scale is not stable, especially when the input texture is warped (e.g., by D-ID). We will continue working on optimizing this issue. @huangxin168
You can try using a realistic driving video, similar to the example videos!
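To see the instability concretely, one option is to log the per-frame scale that the motion extractor predicts for the driving video and look at the frame-to-frame differences. Below is a minimal, self-contained sketch; collecting the scale values (e.g. x_d_i_info['scale'] for each frame) is assumed to be done separately and depends on your local copy of the pipeline.

```python
# Minimal sketch for quantifying scale jitter across driving frames.
# The per-frame scale values are assumed to have been collected elsewhere;
# this helper only summarizes them.
import numpy as np

def scale_jitter_report(scales):
    """Print simple statistics on frame-to-frame scale changes."""
    s = np.asarray(scales, dtype=np.float64).ravel()
    diffs = np.abs(np.diff(s))  # per-frame scale change
    print(f"frames             : {len(s)}")
    print(f"scale min / max    : {s.min():.4f} / {s.max():.4f}")
    print(f"mean |delta scale| : {diffs.mean():.5f}")
    print(f"max  |delta scale| : {diffs.max():.5f}")  # large spikes -> visible jumping

# Toy example: a sudden jump from ~1.00 to 1.08 between two frames
scale_jitter_report([1.00, 1.01, 1.00, 1.08, 1.00, 0.99])
```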
Thanks for your debugging. Can we fix the scale during inference?
I have a similar problem with face jittering. I generated my result using the audio-driven approach mentioned in the paper. Do the authors have any suggested fix?
https://github.com/user-attachments/assets/9796339b-81ef-4193-8946-f33e35a25e6e
https://github.com/user-attachments/assets/781e7c92-aa27-4b2c-94d1-423b3620c934
It seems related to the frame image paste list: after combining the audio with the video, the frames are no longer continuous. I'm working on a new feature for driving multiple faces in a multi-face image. My result is better without audio, but with audio the result is very messy.
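If the suspicion is that the audio-muxing step changes the frame sequence, one way to rule that out is to attach the audio with a video stream copy, so the generated frames are passed through untouched. A minimal sketch, not the pipeline's own muxing code, with placeholder file names:

```python
# Re-attach an audio track without re-encoding the video frames. If the jitter
# is already present in silent.mp4, the muxing step is not the cause; if it only
# appears after a re-encode, the muxing/concat step is worth a closer look.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "silent.mp4",          # frames produced by the pipeline, no audio
    "-i", "driving_audio.wav",   # audio track to attach
    "-map", "0:v:0", "-map", "1:a:0",
    "-c:v", "copy",              # stream-copy: video frames are not touched
    "-c:a", "aac",
    "-shortest",
    "with_audio.mp4",
], check=True)
```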
Modifying the code at live_portrait_pipeline.py#L284 to scale_new = x_s_info['scale'] can solve the problem: https://github.com/KwaiVGI/LivePortrait/blob/d654a014da85e4b45d17b0c2016acc843a392149/src/live_portrait_pipeline.py#L284
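For anyone reading along, here is a self-contained sketch of what that edit changes. The relative-motion formula in the lock_to_source=False branch is paraphrased from the code around that line and may not match your local copy exactly; only scale_new = x_s_info['scale'] is the suggested edit itself.

```python
def compute_scale_new(x_s_info, x_d_i_info, x_d_0_info, lock_to_source=True):
    """Sketch of the scale update in the per-frame driving loop.

    lock_to_source=False mimics the original behaviour (source scale modulated
    by the ratio of the current driving-frame scale to the first driving-frame
    scale), which lets noise in the driving crop leak into the output as zoom
    jitter. lock_to_source=True is the fix suggested above: always reuse the
    source image's scale.
    """
    if lock_to_source:
        return x_s_info['scale']
    return x_s_info['scale'] * (x_d_i_info['scale'] / x_d_0_info['scale'])

# Toy example with scalar scales (the real pipeline works on tensors):
x_s_info, x_d_0_info = {'scale': 1.20}, {'scale': 1.00}
for noisy in [1.00, 1.03, 0.97, 1.05]:
    x_d_i_info = {'scale': noisy}
    print(compute_scale_new(x_s_info, x_d_i_info, x_d_0_info, lock_to_source=False),
          compute_scale_new(x_s_info, x_d_i_info, x_d_0_info, lock_to_source=True))
```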
https://github.com/user-attachments/assets/5fb2be08-00f3-4ddb-b9ee-402dfa57408a
Thanks for your help. I tried it and ran into a new problem with eye blinking.
The scale_new = x_s_info['scale'] fix is helpful!
s6--test_FaceTalk_170809_00138_TA_condition_FaceTalk_170913_03279_TA.mp4
test_FaceTalk_170809_00138_TA_condition_FaceTalk_170913_03279_TA.mp4
This is really impressive! How did you do the audio-driven generation? Can it be done by directly modifying the code, or do you need to build your own training framework to implement the audio-driven module? Thanks.
It doesn't seem to help; the result is still poor and keeps jumping around. @FacePoluke
https://github.com/user-attachments/assets/37760ae5-00aa-4bfe-9880-775fc704dcf9
Excellent project, thanks for sharing! After setting up the Linux environment following the installation instructions, I ran a test:
python inference.py -s assets/examples/source/s0.jpg -d pose0.mp4 --flag_crop_driving_video
The generated result is very unstable, with the shot jumping back and forth, and I'm not sure what is causing it. Could you take a look when you have time? Thanks!
https://github.com/user-attachments/assets/c43cc26c-a28e-4092-bea6-9604d4849bf8