SimonGiebenhain / NPHM

[CVPR'23] Learning Neural Parametric Head Models
https://simongiebenhain.github.io/NPHM/

How to use a custom point cloud as inference input? #1

Open YiChenCityU opened 1 year ago

YiChenCityU commented 1 year ago

Hi, congratulations. I want to test the inference code with a point cloud as input. Could you provide some advice? Thanks very much.

SimonGiebenhain commented 1 year ago

Hi @YiChenCityU, thanks for your interest. For the "dummy_dataset", as well as our proposed test set, we already provide single-view data.

If you want to change some properties of the input for inference, you can play around with scripts.data_processing.generate_single_view_observations; I used this script to generate the input. By default it runs for every subject in the test set, but you can simply specify which subject and expression you are interested in.
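
For concreteness, the idea behind single-view generation can be sketched independently of the repo's script: cast rays from a virtual camera at a mesh and keep the first hit points. Below is a minimal sketch with trimesh; the file paths, camera placement, and resolution are placeholder assumptions, not the script's actual parameters.

```python
# Illustrative sketch only -- not the actual NPHM preprocessing script.
import numpy as np
import trimesh

mesh = trimesh.load('subject_expression.ply')   # hypothetical input mesh

res = 256
origin = np.array([0.0, 0.0, 2.5])              # camera in front of the face
xs, ys = np.meshgrid(np.linspace(-0.8, 0.8, res),
                     np.linspace(-0.8, 0.8, res))
dirs = np.stack([xs.ravel(), ys.ravel(), -np.ones(res * res)], axis=1)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = np.tile(origin, (res * res, 1))

# Keep only the first intersection per ray: the surface visible from this view.
points, _, _ = mesh.ray.intersects_location(origins, dirs, multiple_hits=False)
trimesh.PointCloud(points).export('single_view.ply')
```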

YiChenCityU commented 1 year ago

Thanks very much. What if I only have a point cloud captured with an iPhone? Do I have to provide its expression?

YiChenCityU commented 1 year ago

[Screenshots from 2023-06-07: 14-34-13, 14-35-35, 14-39-14, 14-48-04]

This is the point cloud I used, and the result did not resemble it. Do you have any suggestions? The .ply files are below:
https://drive.google.com/file/d/1UYBbR-TkRtgSKJQbuNUnMu4dwdN1kx9a/view?usp=sharing
https://drive.google.com/file/d/1A4EJbSUjuAfJ_k8FzmsimsSKBPi1QQ5k/view?usp=sharing

SimonGiebenhain commented 1 year ago

Hey, cool stuff.

The problem is very likely the coordinate system. NPHM only works if the input is in the expected coordinate system (FLAME coordinate system scaled by a factor of 4).

Therefore, you would first have to align the input point cloud with the FLAME coordinate system. A very simple approach would be a similarity transform from detected 3D landmarks to the landmarks of the FLAME template. Alternatively, you could first fit FLAME and use the resulting scale, rotation, and translation. In that case, you can also separate the head from the torso in the same way as in the NPHM preprocessing, since observations on the torso tend to confuse the inference optimization.
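
To make the landmark-based route concrete, here is a minimal sketch of a similarity (Umeyama) alignment. It assumes you already have corresponding 3D landmarks on your scan and on the FLAME template (the .npy file names are placeholders); the factor-of-4 scaling into NPHM space follows the comment above, while the torso-crop threshold is a guess to tune.

```python
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity transform (s, R, t) with s * R @ src + t ~ dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (src_c ** 2).sum(axis=1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical inputs: replace with your own data.
lms_scan = np.load('landmarks_scan.npy')    # (K, 3) landmarks on the scan
lms_flame = np.load('landmarks_flame.npy')  # (K, 3) FLAME template landmarks
points_scan = np.load('scan_points.npy')    # (N, 3) full scan point cloud

s, R, t = umeyama(lms_scan, lms_flame)
points_flame = (s * (R @ points_scan.T)).T + t
points_nphm = 4.0 * points_flame            # NPHM space = FLAME scaled by 4

# Crude torso removal; the cutoff is an assumption to adapt to your data.
points_nphm = points_nphm[points_nphm[:, 1] > -2.0]
```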

Here is an example mesh from the dataset and one of the provided point clouds to show why the model fails:

[Screenshot from 2023-06-07 12-19-07]

SimonGiebenhain commented 1 year ago

Actually, the second point cloud aligns better, but it is still noticeably off from the expected canonicalization.

[Screenshot from 2023-06-07 12-28-03]
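
A quick way to check the canonicalization for your own data is to overlay the transformed point cloud on one of the provided meshes, as in the screenshots above. A minimal sketch with trimesh; both file names are placeholders.

```python
import trimesh

ref = trimesh.load('nphm_example_mesh.ply')       # any provided NPHM mesh
pc = trimesh.load('my_canonicalized_points.ply')  # your aligned point cloud

# Red points over the reference mesh: misalignment is obvious at a glance.
cloud = trimesh.PointCloud(pc.vertices, colors=[255, 0, 0, 255])
trimesh.Scene([ref, cloud]).show()
```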

YiChenCityU commented 1 year ago

I will try. Thanks very much.

xvdp commented 1 year ago

I've been trying to unravel the description as well; I didn't get as far as YiChenCityU. It would be wonderful if you could provide a full test example... If you are concerned about identity, maybe take a point cloud of a statue...

nsarafianos commented 11 months ago

Thank you so much @SimonGiebenhain for publishing the code, and congrats on your great work!

Quick Q: I have a point cloud in .obj format (lifted from a foreground RGB-D monocular image) that is transformed to be in the exact same space as FLAME, as suggested above. How do you go about fitting NPHM to this particular point cloud?

I'm asking because the provided example uses existing identities (along with their expressions) from the dummy_data, whereas I'm interested in preserving the identity of my point cloud.

Thank you!
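
For orientation, fitting a neural parametric model such as NPHM to an observed point cloud generally boils down to optimizing identity and expression latent codes so that the predicted SDF vanishes at the observed surface points. The sketch below is a heavily simplified illustration under that assumption: the decoder is a dummy stand-in, and the latent dimensions, learning rate, and regularization weight are placeholders, not the repo's actual values.

```python
import torch

ID_DIM, EX_DIM = 512, 100                      # placeholder latent sizes

def decoder(p, z_id, z_ex):
    # Dummy stand-in for the trained NPHM SDF decoder, only so the sketch
    # runs end to end: a sphere whose radius depends on the latent codes.
    r = 1.0 + 0.01 * (z_id.mean() + z_ex.mean())
    return p.norm(dim=-1) - r

points = torch.rand(1000, 3)                   # replace with canonicalized scan
z_id = torch.zeros(1, ID_DIM, requires_grad=True)
z_ex = torch.zeros(1, EX_DIM, requires_grad=True)
opt = torch.optim.Adam([z_id, z_ex], lr=1e-3)

for step in range(500):
    opt.zero_grad()
    sdf = decoder(points, z_id, z_ex)          # predicted SDF at observed points
    # Surface points should lie on the zero level set; a small L2 prior
    # keeps the codes close to the latent mean.
    loss = sdf.abs().mean() + 1e-4 * (z_id.pow(2).sum() + z_ex.pow(2).sum())
    loss.backward()
    opt.step()
```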

Zvyozdo4ka commented 5 months ago

@SimonGiebenhain even with perfect alignment, the result did not resemble the identity.

The original files are here: https://drive.google.com/drive/folders/1cprPG_9AihL4HpYl0lOvZDz7kNbXv8kB?usp=sharing


Zvyozdo4ka commented 2 months ago

> The problem is very likely the coordinate system. NPHM only works if the input is in the expected coordinate system (FLAME coordinate system scaled by a factor of 4).

How did you get the FLAME models? What solution did you employ?

> Therefore, you would first have to align the input point cloud with the FLAME coordinate system. A very simple approach would be a similarity transform from detected 3D landmarks to the landmarks of the FLAME template.

Do you have the code for this alignment, or did you use another method to align the point cloud and FLAME?

> Alternatively, you could first fit FLAME and use the resulting scale, rotation, and translation. In that case, you can also separate the head from the torso in the same way as in the NPHM preprocessing, since observations on the torso tend to confuse the inference optimization.

Do you mean that fitting FLAME to the point cloud can give the same NPHM output?

Zvyozdo4ka commented 4 days ago

> Actually, the second point cloud aligns better, but it is still noticeably off from the expected canonicalization.
>
> [Screenshot from 2023-06-07 12-28-03]

In your work, did you align the point cloud and FLAME manually, or with an alignment algorithm?