xiexh20 / behave-dataset

Code to access BEHAVE dataset, CVPR'22
https://virtualhumans.mpi-inf.mpg.de/behave/

intrinsic and depth back projection #15

Closed: yscoffee closed this issue 1 year ago

yscoffee commented 1 year ago

Hi, could you explain a bit why depth back-projection is not performed directly with the pixel coordinates and the intrinsic matrix? (i.e., will it be a problem if I do a normal back-projection as in KinectFusion?) e.g.:

import json
import torch

# Load the color-camera intrinsics.
fp = '/behave_dataset/calibs/intrinsics/0/calibration.json'
with open(fp, 'r') as fin:
    x = json.load(fin)
fx = x['color']['fx']
fy = x['color']['fy']
cx = x['color']['cx']
cy = x['color']['cy']
H = x['color']['height']
W = x['color']['width']

# Per-pixel normalized ray directions (z = 1); grids have shape (W, H).
ix, iy = torch.meshgrid(torch.linspace(0, W-1, W), torch.linspace(0, H-1, H))
xx = (ix - cx) / fx
yy = (iy - cy) / fy
zz = torch.ones_like(ix)
...

I saw an example here using a pre-computed table. https://github.com/xiexh20/behave-dataset/blob/953a0d981e0b2f6a34cbebac995d70daa69ada25/data/kinect_calib.py#L78
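
For completeness, here is a self-contained version of the naive pinhole back-projection above, which is what "normal back-projection as in KinectFusion" would look like. The depth tensor is a hypothetical placeholder standing in for a real frame, and indexing='ij' requires PyTorch >= 1.10:

import json
import torch

with open('/behave_dataset/calibs/intrinsics/0/calibration.json') as fin:
    c = json.load(fin)['color']
fx, fy, cx, cy = c['fx'], c['fy'], c['cx'], c['cy']
H, W = c['height'], c['width']

# Pixel-coordinate grids of shape (H, W): ix is the column (u), iy the row (v).
iy, ix = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32),
                        indexing='ij')

depth = torch.ones(H, W)                        # placeholder depth in meters
points = torch.stack([(ix - cx) / fx * depth,   # X
                      (iy - cy) / fy * depth,   # Y
                      depth], dim=-1)           # (H, W, 3) camera-space points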

xiexh20 commented 1 year ago

Hi, the precomputed point cloud table is obtained with the example here: https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/develop/examples/fastpointcloud/main.cpp#L11

It takes the distortion coefficients into account. If you do the unprojection directly, there will be some slight misalignment, especially in regions far from the image center. But this misalignment is typically not a big issue, as it is very small.
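
As a minimal sketch of this table-based approach: build a per-pixel undistorted-ray table once (the role of the xy_table in the linked fastpointcloud example), then multiply it by each depth frame. This uses OpenCV's cv2.undistortPoints to invert the Brown-Conrady distortion; the distortion-coefficient key names (k1..k6, p1, p2) are assumptions about the calibration JSON, so check your file:

import json
import cv2
import numpy as np

with open('/behave_dataset/calibs/intrinsics/0/calibration.json') as fin:
    c = json.load(fin)['color']

K = np.array([[c['fx'], 0.0, c['cx']],
              [0.0, c['fy'], c['cy']],
              [0.0, 0.0, 1.0]])
# Brown-Conrady coefficients as the Azure Kinect SDK reports them,
# reordered to OpenCV's (k1, k2, p1, p2, k3, k4, k5, k6) convention.
# Key names here are assumptions about the JSON layout.
dist = np.array([c['k1'], c['k2'], c['p1'], c['p2'],
                 c['k3'], c['k4'], c['k5'], c['k6']])

H, W = c['height'], c['width']
u, v = np.meshgrid(np.arange(W), np.arange(H))         # pixel grids, (H, W)
pix = np.stack([u, v], axis=-1).reshape(-1, 1, 2).astype(np.float64)

# undistortPoints inverts the distortion and returns normalized coordinates,
# i.e. ray directions with z = 1; this is the precomputed table.
xy = cv2.undistortPoints(pix, K, dist).reshape(H, W, 2)
table = np.concatenate([xy, np.ones((H, W, 1))], axis=-1)  # (H, W, 3)

# Per frame, with depth in meters of shape (H, W):
#   points = table * depth[..., None]   # (H, W, 3) camera-space point cloud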

yscoffee commented 1 year ago

Thanks for your explanation!