galib360 opened 2 years ago
I also had the same problem. The BIWI dataset should not be used for training without preprocessing the vertex values; otherwise the loss will be huge and the results will be funky.
Okay, as no response has been posted here by the author, I share my solution for those who need help:
- Generate your own template.pkl in the /BIWI folder:
- load the vertex data from each character's obj file
- normalize the vertex coordinates by centering and rescaling
- centering: subtract the mean along each of the x, y, z axes
- rescaling: divide by the maximum range of the vertices (e.g. divide by max_val - min_val)
- save the rescale factor for each character to a file (e.g. a json)
- write the result to a dictionary and save it to template.pkl
- also save one character's normalized vertices and faces to a ply file under /BIWI/templates (optional)
- Preprocess all the BIWI data as in step 1, using the saved rescale factors.
- Enjoy the nice & stable output!
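The centering/rescaling recipe above can be sketched in plain Python. This is my own hedged sketch, not code from the FaceFormer repo: the function names and the toy triangle are illustrative.

```python
def load_obj_vertices(path):
    """Read the 'v x y z' records of a Wavefront .obj file."""
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                parts = line.split()
                verts.append((float(parts[1]), float(parts[2]), float(parts[3])))
    return verts

def normalize(verts):
    """Center each axis on its mean, then divide by the widest axis range."""
    n = len(verts)
    means = [sum(v[i] for v in verts) / n for i in range(3)]
    centered = [[v[i] - means[i] for i in range(3)] for v in verts]
    ranges = [max(c[i] for c in centered) - min(c[i] for c in centered)
              for i in range(3)]
    scale = max(ranges)  # i.e. max_val - min_val on the widest axis
    normed = [[c[i] / scale for i in range(3)] for c in centered]
    return normed, means, scale

# Toy stand-in for a real BIWI template mesh:
verts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
normed, means, scale = normalize(verts)
# scale is 2.0 here (the x-axis range)
```

You would then save the dict `{character: normalized_template_vertices}` to `template.pkl` with `pickle`, dump each character's `(means, scale)` to a json, and reuse exactly those factors on every frame of that character's sequences so training targets stay in a small, consistent range.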
Hello, can you provide your processing code? Thank you very much.
Thank you for sharing! Where can I find code for this part: "load vertices data from each of the character's obj file"?
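On that question: Wavefront .obj is a plain-text format, so the vertex (`v`) and face (`f`) records can be read without any mesh library. The helper below is my own sketch, not a function from the FaceFormer repo; the faces are also parsed since the optional step writes a ply file.

```python
import os
import tempfile

def read_obj(path):
    """Parse vertices and faces from a Wavefront .obj file."""
    verts, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                verts.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":
                # 'f v1/vt1/vn1 ...' -> keep only the vertex index, 0-based
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return verts, faces

# Round-trip on a tiny synthetic mesh written to a temp file:
obj_text = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
with tempfile.NamedTemporaryFile("w", suffix=".obj", delete=False) as tmp:
    tmp.write(obj_text)
verts, faces = read_obj(tmp.name)
os.remove(tmp.name)
# verts -> [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], faces -> [(0, 1, 2)]
```

A library such as trimesh would do the same job in one call, but a hand-rolled parser avoids the dependency.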
Hello, I would like to know how to convert a .vl file to .npy. Thank you.
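The BIWI `.vl` files are commonly reported to be raw little-endian binary: a uint32 vertex count followed by `count * 3` float32 (x, y, z) values. Treat that layout as an assumption and sanity-check it against your own data; the reader below is a sketch, not code from the dataset or the repo.

```python
import os
import struct
import tempfile

def read_vl(path):
    """Read a BIWI .vl frame, assuming uint32 count + count*3 float32 coords."""
    with open(path, "rb") as f:
        (count,) = struct.unpack("<I", f.read(4))
        flat = struct.unpack(f"<{count * 3}f", f.read(count * 3 * 4))
    return [flat[i:i + 3] for i in range(0, len(flat), 3)]

# Round-trip check on two synthetic vertices:
blob = struct.pack("<I", 2) + struct.pack("<6f", 0.0, 0.0, 0.0, 1.0, 2.0, 3.0)
with tempfile.NamedTemporaryFile("wb", suffix=".vl", delete=False) as tmp:
    tmp.write(blob)
verts = read_vl(tmp.name)
os.remove(tmp.name)
# verts -> [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
```

To get a `.npy` per sequence, read every frame of the sequence this way, stack them into a float32 numpy array of shape `(num_frames, num_vertices * 3)` (or `(num_frames, num_vertices, 3)`), and write it with `np.save`.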
Hi,
This is amazing work and I am trying to train FaceFormer to reproduce the results on BIWI dataset. The repo documentation says the code for preprocessing the BIWI dataset is coming soon (written as "to do"). Will it be available soon?
However, if it comes later, could you please briefly clarify how you preprocessed the BIWI vertices? Looking at both the original dataset (which comes in .vl files) and the model's result files (.npy), I see that the vertex values are not in the same range. Did you normalize the dataset across the captured frames, for all vertices, along their respective coordinates (x, y, z)?
Any pointer would be greatly appreciated. :)