Hi,
I'm trying to project the world-coordinate points given in the hdPose3d_stage1_coco19 directory into the corresponding depth images for each of the Kinects.
The univ_time given in the pose file is matched to the closest frame using the sync tables.
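For reference, this is roughly how I do that matching; a sketch that assumes the parsed sync table exposes a per-node univ_time array (the exact JSON layout may differ from what I show here):

import json
import numpy as np

def closest_depth_frame(sync_path, kinect_n, univ_time):
    # Assumed layout: sync_table['kinect']['depth'][<node>]['univ_time'] is a
    # list of per-frame universal timestamps; adjust the keys to the actual file.
    with open(sync_path) as f:
        sync_table = json.load(f)
    times = np.asarray(sync_table['kinect']['depth'][kinect_n]['univ_time'], dtype=float)
    return int(np.argmin(np.abs(times - univ_time)))  # index of the closest frame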
In the kcalibration_xxxx files an array of sensors is given. The sensor objects in the array do not have an ID, so I assume that they are sorted such that the sensor at index 0 corresponds to KINECTNODE1, the sensor at index 1 corresponds to KINECTNODE2, and so on. Is this assumption correct?
Further, a matrix M_world2sensor is given. I assume this is the transformation that maps the coco19 points into the sensor coordinate system.
Then, an M_depth is given, which I assume transforms a point from the sensor coordinate system into the depth-camera coordinate system.
Last, the K_depth matrix is assumed to hold the intrinsics of the depth camera.
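Written out in NumPy terms, with the names from the calibration file, the projection chain I'm assuming is (whether this chain is right is exactly my question):

p_h = np.append(coord, 1.0)            # homogeneous world point from coco19
p_sensor = M_world2sensor @ p_h        # world frame -> sensor frame
p_depth = M_depth @ p_sensor           # sensor frame -> depth-camera frame
uv = K_depth @ p_depth[:3]             # apply depth intrinsics
u, v = uv[0] / uv[2], uv[1] / uv[2]    # perspective divide -> pixel coordinates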
However, I don't get plausible coordinates using the following code (distortion is not yet applied, but I don't think it has that big an effect):
import re
import numpy as np

def reproject_point(self, kinect_n: str, coord):
    """
    Find the x, y position of the given 3d point in the kinect
    :param kinect_n: kinect node ("KINECTNODE[1-10]")
    :param coord: world coordinates as a 1x3 vector
    :return: (row, column) in the given kinect depth image
    """
    # Minus 1 because the sensor array is 0-indexed
    sensor = self.sensors[int(re.search(r'\d+', kinect_n).group()) - 1]
    m_world = np.array(sensor['M_world2sensor'], dtype=float)
    m_depth = np.array(sensor['M_depth'], dtype=float)
    k_depth = np.array(sensor['K_depth'], dtype=float)
    distort_d = np.array(sensor['distCoeffs_depth'], dtype=float)  # not applied yet
    _m = np.matmul(m_depth, m_world.transpose())      # compose M_depth with the transposed world2sensor matrix
    _extrinsic = np.matmul(_m, np.append(coord, 1))   # apply to the homogeneous world point
    _extrinsic = np.matmul(np.eye(3, 4), _extrinsic)  # drop the homogeneous component
    _intrinsic = np.matmul(k_depth, _extrinsic)       # apply the depth intrinsics
    row, column = _intrinsic[0] / _intrinsic[2], _intrinsic[1] / _intrinsic[2]  # perspective divide
    return int(row), int(column)
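For completeness, I call it like this (projector is the instance holding self.sensors; the joint coordinates here are made up):

# Hypothetical call: project one joint of the first body into KINECTNODE1's depth image
row, col = projector.reproject_point('KINECTNODE1', np.array([12.3, -45.6, 170.0]))
print(row, col)  # should land inside the 512x424 depth image, but doesn't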
Can anyone spot my error, or are one or more of my assumptions incorrect?
I suspect the assumption that the indices of the sensor array correspond to the KINECTNODE numbers is incorrect; however, I can't find a mapping from the indices to the KINECTNODE numbers. It would be great if anyone knows; I sketched an empirical check below.
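As a sanity check on the index assumption, one thing I can try is to project a known joint with every sensor in the array and see which indices land inside the 512x424 Kinect v2 depth image (a sketch, assuming the projection chain above and ignoring distortion):

def candidate_sensor_indices(self, coord, width=512, height=424):
    # Project `coord` with every sensor; collect the indices whose projection
    # falls in front of the camera and inside the depth image bounds.
    hits = []
    for idx, sensor in enumerate(self.sensors):
        m_world = np.array(sensor['M_world2sensor'], dtype=float)
        m_depth = np.array(sensor['M_depth'], dtype=float)
        k_depth = np.array(sensor['K_depth'], dtype=float)
        p = m_depth @ m_world @ np.append(coord, 1.0)
        uv = k_depth @ p[:3]
        u, v = uv[0] / uv[2], uv[1] / uv[2]
        if uv[2] > 0 and 0 <= u < width and 0 <= v < height:
            hits.append(idx)
    return hits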