mikeqzy / 3dgs-avatar-release

3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting

Confusion in testing the models #7

Closed: Gojo1729 closed this issue 3 months ago

Gojo1729 commented 4 months ago

Hello, I have finished setting up the environment as instructed and now want to try out the model. I was following the instructions here: https://github.com/mikeqzy/3dgs-avatar-release?tab=readme-ov-file#test-on-out-of-distribution-poses, which say to download the "preprocessed AIST++ and AMASS sequence for subjects in ZJU-MoCap" from https://drive.google.com/drive/folders/17vGpq6XGa7YYQKU4O1pI4jCMbcEXJjOI?usp=drive_link. However, I don't find any folders named CoreView or containing cam_params.json there, and I'm not sure where to get those files.

mikeqzy commented 4 months ago

Hi, you should first download and preprocess the ZJU-MoCap dataset according to the instructions here. Afterwards, you can put the pose sequence folders into the corresponding CoreView_xxx folders.
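To make the expected layout concrete, here is a minimal sanity-check sketch. The specific paths and folder names below (e.g. `CoreView_377`, `aist_demo`, `data/ZJUMoCap`) are hypothetical examples rather than names taken from the repository; the only assumptions grounded in this thread are that `cam_params.json` comes from the ZJU-MoCap preprocessing and that the downloaded pose sequence folders go inside the corresponding CoreView_xxx folder.

```python
# Hedged sketch: verify the assumed directory layout after preprocessing.
# Folder names ("data/ZJUMoCap", "CoreView_377", "aist_demo") are hypothetical
# placeholders; adjust them to your actual setup.
from pathlib import Path

subject_dir = Path("data/ZJUMoCap/CoreView_377")  # hypothetical subject folder
pose_seq_dir = subject_dir / "aist_demo"          # downloaded AIST++/AMASS pose sequence (hypothetical name)

# cam_params.json is produced by the ZJU-MoCap preprocessing step,
# not by the Google Drive pose sequence download.
assert (subject_dir / "cam_params.json").exists(), "Run the ZJU-MoCap preprocessing first"
assert pose_seq_dir.is_dir(), "Copy the downloaded pose sequence folder into CoreView_xxx"
print("Layout looks OK:", subject_dir)
```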

Gojo1729 commented 4 months ago

@mikeqzy Thanks for the info. What if I wanted to test on my own recorded video? How should I go about that?

mikeqzy commented 4 months ago

In principle our method can also work on self-captured videos, provided that accurate subject masks, SMPL estimation, and camera calibration are available. We have not tested on such data ourselves, but you can check HumanNeRF as a reference.
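If you do try self-captured data, a small script like the sketch below can help confirm that the three prerequisites mentioned above are in place before running any preprocessing. The file and folder names (`masks/`, `smpl_params.npz`, `cameras.json`) are illustrative assumptions, not the repository's actual input format; the real format would follow whatever pipeline you use for masks, SMPL fitting, and calibration (e.g. the HumanNeRF preprocessing).

```python
# Hedged sketch: check that the inputs mentioned above exist for a
# self-captured sequence. All file names here are hypothetical placeholders.
from pathlib import Path

seq = Path("data/my_capture")                 # hypothetical self-captured sequence
required = {
    "subject masks":      seq / "masks",           # per-frame foreground masks
    "SMPL estimation":    seq / "smpl_params.npz", # per-frame SMPL pose/shape
    "camera calibration": seq / "cameras.json",    # intrinsics and extrinsics
}

for name, path in required.items():
    status = "found" if path.exists() else "MISSING"
    print(f"{name:20s} -> {path} [{status}]")
```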