Open sumansid opened 6 hours ago
Is there any way to smooth out the frames? When I run it, the change of frames is very visible and the cutoff from one expression to another is harsh.
Yes, in many cases, the continuity between frames is not smooth enough. Currently, we have added some smoothing loss during training and applied EMA smoothing to the inferred motion sequence during post-processing, but the results are not yet optimal. We are considering releasing the training code to facilitate community collaboration for further optimization.
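For anyone who wants to experiment before the training code is released, EMA smoothing of an inferred motion sequence can be sketched roughly like this (the function name, array shape, and `alpha` value are illustrative assumptions, not the project's actual implementation):

```python
import numpy as np

def ema_smooth(motion, alpha=0.8):
    """Exponential moving average over a motion sequence of shape (T, D).

    alpha is the weight on the current frame, in (0, 1]:
    smaller values give smoother output but more temporal lag.
    """
    smoothed = np.empty_like(motion, dtype=float)
    smoothed[0] = motion[0]
    for t in range(1, len(motion)):
        smoothed[t] = alpha * motion[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed
```

Note the trade-off: lowering `alpha` suppresses frame-to-frame jitter but can make expression changes feel delayed, which may be why the post-processed results are still not optimal.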
Cool, thanks for the reply. Are the frames not smooth because of too many motions in every frame generated by your model before running it through LivePortrait? Have you tried lip-sync only? Or lip-sync and eyes only?
Good question. We smoothed the head motions before running the LivePortrait model.
I think another possible reason for the residual inconsistencies is that, during training and inference, we treated each dimension of the motion sequence as an independent unit rather than as an integrated sequence, even though the dimensions may be correlated with each other.
We have tried lip-sync only and used head motions from real-life videos, and it works better than predicting whole motions.