shunsukesaito / PIFu

This repository contains the code for the paper "PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization"
https://shunsukesaito.github.io/PIFu/

Transformation Questions #147

Open Janudis opened 1 year ago

Janudis commented 1 year ago

I have some questions, mostly regarding the transformations you used.

1) In lib/data/TrainDataset.py you create the calibration matrix in get_render. You build the uv_intrinsic matrix with 1.0 / float(self.opt.loadSize // 2), where opt.loadSize = 512. However, the feature maps you sample from are 128 x 128. Wouldn't it be more correct to use opt.loadSize = 128, or does it not matter?
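
For reference, this is roughly the construction I am referring to (paraphrased from get_render; names and details may be slightly off):

```python
import numpy as np

load_size = 512  # opt.loadSize

# uv_intrinsic rescales pixel units by half the rendered image size,
# i.e. it maps [-loadSize/2, loadSize/2] to [-1, 1]
uv_intrinsic = np.identity(4)
uv_intrinsic[0, 0] = 1.0 / float(load_size // 2)
uv_intrinsic[1, 1] = 1.0 / float(load_size // 2)
uv_intrinsic[2, 2] = 1.0 / float(load_size // 2)

# the full calib is then intrinsic @ extrinsic applied to world-space points;
# my question is whether load_size here should be 128 (the feature map size) instead of 512
```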

2) I am trying to implement this with a perspective projection. Since the calibration matrix is different in my case and does not include the uv_intrinsic, I would have to apply that normalisation separately. Is the transforms parameter in lib/model/HGPIFuNet meant for this case, i.e. should I use it for the uv normalisation?
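
To make my setup concrete, here is a minimal sketch of the projection I have in mind; the calib here is my own hypothetical perspective matrix, not the one from your code:

```python
import torch

def perspective_project(points, calib):
    """points: [B, 3, N] world coordinates, calib: [B, 4, 4] perspective calibration.

    Returns [B, 3, N] with xy still in pixel units (not yet normalized to [-1, 1]).
    My question is whether the `transforms` argument of query() is the intended
    place to apply that final uv normalisation.
    """
    homo = torch.cat([points, torch.ones_like(points[:, :1, :])], dim=1)  # [B, 4, N]
    cam = torch.bmm(calib, homo)                                          # [B, 4, N]
    xy = cam[:, :2, :] / cam[:, 2:3, :]                                   # perspective divide
    return torch.cat([xy, cam[:, 2:3, :]], dim=1)                         # keep depth as z
```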

3) Your calib matrix is 4x4 with last row [0, 0, 0, 1]. That suggests the input 3D points should be in homogeneous coordinates, yet your 3D points have shape [B, 3, N]. Furthermore, the calib matrix is 4x4, while the docstring of the query function in lib/model/HGPIFuNet says:

:param points: [B, 3, N] world space coordinates of points
:param calibs: [B, 3, 4] calibration matrices for each image
:param transforms: Optional [B, 2, 3] image space coordinate transforms
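
For illustration, the only way I see to apply a 4x4 calib to [B, 3, N] points without an explicit homogeneous coordinate is to use just the top 3x4 block, roughly:

```python
import torch

def apply_calib(points, calibs):
    """points: [B, 3, N], calibs: [B, 4, 4] with last row [0, 0, 0, 1].

    Only the top 3x4 block is used, which would match the [B, 3, 4]
    shape stated in the query() docstring.
    """
    rot = calibs[:, :3, :3]     # [B, 3, 3]
    trans = calibs[:, :3, 3:4]  # [B, 3, 1]
    return torch.baddbmm(trans, rot, points)  # rot @ points + trans
```

Is that the intended interpretation?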

Thanks in advance for your time.