xg-chu / lightning_track

[ICLR 2024] Generalizable and Precise Head Avatar from Image(s)
https://xg-chu.site/project_gpavatar/

Regarding Matting during preprocessing and stable MICA shape value #5

Closed chakri1804 closed 2 months ago

chakri1804 commented 2 months ago

Hi,

Thanks for sharing this great repo. I had a couple of doubts so just wanted to confirm these details with you

  1. Is matting during preprocessing compulsory? Do the networks expect images with a black background for optimal performance?
  2. I noticed that you predict a MICA shape for each frame and then pass it in. If I have a MICA shape precomputed for a whole video, can I use that single value across all frames, so the MICA shape doesn't change between frames? Especially since the MICA shape isn't further optimized in the lightning step or the synthesis step.

Thanks :smiley:

xg-chu commented 2 months ago

Hi,

Matting is not compulsory; it is just for more convenient use in GPAvatar. No issues were observed with other backgrounds.

It is feasible (and I think better) to compute a single MICA shape per video; the current per-frame approach is just for implementation convenience. ^_^

chakri1804 commented 2 months ago

Thanks for the reply. Also, I think it improves stability if you leave L145 uncommented in core_engine.py; otherwise the wrong keys get associated with the wrong landmarks, discarding some frames' tracking. (LMDB sorts keys lexicographically, i.e. 1049, 105, 1050, 1051, ..., whereas the frames must be in numeric order: 105, 1049, 1050, ...)
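
The key-ordering pitfall described above can be reproduced in a few lines. This is a minimal sketch, not the repo's actual code; the zero-padded key format shown as a workaround is an assumption, not necessarily what lightning_track uses:

```python
# LMDB iterates keys in lexicographic (byte) order, so plain numeric
# strings interleave: "1049" sorts before "105".
keys = ["1049", "105", "1050", "1051"]

lexicographic = sorted(keys)       # the order an LMDB cursor would return
numeric = sorted(keys, key=int)    # the order the frames actually have

print(lexicographic)  # ['1049', '105', '1050', '1051']
print(numeric)        # ['105', '1049', '1050', '1051']

# A common workaround (assumed here, not from the repo): zero-pad the
# keys so lexicographic and numeric order agree.
padded = sorted(f"{int(k):06d}" for k in keys)
print(padded)         # ['000105', '001049', '001050', '001051']
```

Either sorting the retrieved keys numerically or padding them at write time avoids associating a frame's landmarks with the wrong key.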