LizhenWangT / FaceVerse

FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable Model from a Hybrid Dataset (CVPR2022)
BSD 2-Clause "Simplified" License
467 stars · 58 forks

Handling Video Input #8

Closed JamesBrod closed 2 years ago

JamesBrod commented 2 years ago

Hi! Really cool work! I was wondering if your offline tracking does anything special in terms of stabilisation? I've noticed that running the image-input demo frame by frame on a video produces a shakier mesh than the dedicated tracking script, and I was wondering how you handled that.

Thanks!

LizhenWangT commented 2 years ago

Hi, thank you. I think there are three main reasons:

  1. The landmarks detected by OpenSeeFace are stable across video frames.
  2. The tracking parameters for the current frame are initialized from the previous frame's optimized result, which makes the optimization much easier.
  3. The differentiable rendering loss provides stable constraints and is vital for stability.
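The warm-starting in point 2 can be illustrated with a minimal sketch (not the actual FaceVerse code; `optimize_frame` and `track_video` are hypothetical names, and a toy quadratic loss stands in for the real landmark and rendering losses):

```python
import numpy as np

def optimize_frame(target, init_params, lr=0.1, n_iters=50):
    # Toy per-frame fit: gradient descent on ||params - target||^2,
    # standing in for the landmark + differentiable-rendering objective.
    params = init_params.copy()
    for _ in range(n_iters):
        grad = 2.0 * (params - target)
        params -= lr * grad
    return params

def track_video(frame_targets, dim=3):
    # Warm-start each frame from the previous frame's solution.
    # Only the first frame starts from scratch; later frames begin
    # near the optimum, so the solver converges faster and the
    # resulting parameter sequence is temporally smoother.
    params = np.zeros(dim)
    results = []
    for target in frame_targets:
        params = optimize_frame(target, init_params=params)
        results.append(params)
    return results
```

Per-image fitting, by contrast, restarts from the same neutral initialization every frame, so independent local minima show up as frame-to-frame jitter.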
JamesBrod commented 2 years ago

Ok, great! I thought something like that was the case. Thanks for getting back to me so quickly!