Nice work!
I ran into a problem when training the Relightable Avatar on the ZJU-MoCap dataset using the provided configs/my_zju_mocap/my_387_4v_geo.yaml.
The problem is that the rendered video only contains part of the human body, as shown here:
https://github.com/user-attachments/assets/c55cf9ca-8bbc-4829-af98-2e5b3e54aa34
I also used similar configuration files to train on the other four sub-datasets of ZJU-MoCap, and the rendering results all show the same problem. Every model's recorded PSNR is around 10, which is abnormal. I have checked all my files against the ones on GitHub, and I previously trained models on the SyntheticHuman++ dataset without issue. Could you please give me some hints on how to solve this problem?
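For reference, a PSNR around 10 dB indicates a gross mismatch rather than slight blur: on images normalized to [0, 1] it corresponds to an MSE of roughly 0.1, which is consistent with a render where a large part of the body is simply missing. A minimal NumPy sketch of the arithmetic (illustrative only, not code from this repo; the frames are hypothetical):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical frames: a mid-gray ground truth and a render where the
# lower half of the body is missing (black), mimicking the symptom.
gt = np.full((512, 512, 3), 0.5)
pred = np.zeros_like(gt)
pred[:256] = gt[:256]   # only the upper half is rendered correctly

print(psnr(pred, gt))   # ≈ 9.0 dB, the same ballpark as my abnormal readings
```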
Thank you very much!