essamgoda opened 4 years ago
Hi @essamgoda, this code is designed to evaluate poses represented by transformation matrices (i.e. 3x4 matrices). You don't need to convert the poses to 6DoF for evaluation.
Please check this section for pose loading.
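For reference, each line of a KITTI-style pose file stores the 12 values of the 3x4 matrix in row-major order; for example, an identity pose is:

1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 1.0 0.0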
Hi @Huangying-Zhan, I'm trying to implement DeepVO like this repo. It converts poses to 6-DoF as shown above for training, and the model output is also 6-DoF. The question is how to evaluate 6-DoF results, or how to convert 6-DoF to 12-DoF so I can use the evaluation tool.
Hi @essamgoda, from the code you provided, it looks like you first convert the rotation matrix to Euler angles (3DoF) and concatenate them with the translation vector to form the 6-DoF representation. 12-DoF is actually a truncated version of the transformation matrix, which is 16DoF, i.e.
# 16DoF
R00 R01 R02 t1
R10 R11 R12 t2
R20 R21 R22 t3
0 0 0 1
# 12DoF is nothing but the first three rows of the transformation matrix, i.e.
R00 R01 R02 t1
R10 R11 R12 t2
R20 R21 R22 t3
Therefore, you don't need to convert the rotation matrix to Euler angles.
You can just concatenate the rotation matrix with the translation vector to form the 3x4 matrix
and save it in a txt file for evaluation.
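A minimal sketch of that (assuming `poses` is a list of (R, t) pairs, with R a 3x3 rotation matrix and t a 3x1 translation vector; the file name is just an example):

import numpy as np

poses = [(np.eye(3), np.zeros((3, 1)))]  # example: a single identity pose

with open("predicted_poses.txt", "w") as f:
    for R, t in poses:
        T_3x4 = np.concatenate([R, t], axis=1)               # 3x4 pose matrix
        f.write(" ".join(map(str, T_3x4.flatten())) + "\n")  # 12 values per line, row-major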
Hope this helps.
Hi @Huangying-Zhan, the proposed model predicts poses in 6-DoF format (translation vector concatenated with Euler angles); that is why I asked how to convert this format to 12-DoF for evaluation.
In the code mentioned above, the Euler angles are converted back to a rotation matrix, which is concatenated with the translation vector:
import numpy as np

def eulerAnglesToRotationMatrix(theta):
    # Rotation about the x-axis
    R_x = np.array([[1, 0, 0],
                    [0, np.cos(theta[0]), -np.sin(theta[0])],
                    [0, np.sin(theta[0]), np.cos(theta[0])]])
    # Rotation about the y-axis
    R_y = np.array([[np.cos(theta[1]), 0, np.sin(theta[1])],
                    [0, 1, 0],
                    [-np.sin(theta[1]), 0, np.cos(theta[1])]])
    # Rotation about the z-axis
    R_z = np.array([[np.cos(theta[2]), -np.sin(theta[2]), 0],
                    [np.sin(theta[2]), np.cos(theta[2]), 0],
                    [0, 0, 1]])
    # Combined rotation, R = Rz @ Ry @ Rx
    return np.dot(R_z, np.dot(R_y, R_x))

# pred is a 6-DoF pose: [tx, ty, tz, rx, ry, rz]
R = eulerAnglesToRotationMatrix(pred[3:])
t = pred[:3].reshape(3, 1)
# Build the 4x4 relative transformation matrix
T_r = np.concatenate((np.concatenate([R, t], axis=1), [[0.0, 0.0, 0.0, 1.0]]), axis=0)
# With respect to the first frame (T is initialized to np.eye(4))
T_abs = np.dot(T, T_r)
# Update the T matrix till now.
T = T_abs
But this is not working well for me.
The paper for the proposed model: DeepVO.
Thanks for your replies.
@essamgoda I see now. You can refer to the code here for the conversion.
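Putting the pieces together, a minimal end-to-end sketch (assuming `preds` is a list of relative 6-DoF predictions [tx, ty, tz, rx, ry, rz], and reusing eulerAnglesToRotationMatrix from the snippet above; this is an illustration, not the linked repo's exact code):

import numpy as np

T = np.eye(4)  # absolute pose of the first frame (identity)
with open("pred_poses.txt", "w") as f:
    f.write(" ".join(map(str, T[:3, :].flatten())) + "\n")
    for pred in preds:
        R = eulerAnglesToRotationMatrix(pred[3:])
        t = np.asarray(pred[:3]).reshape(3, 1)
        T_r = np.concatenate((np.concatenate([R, t], axis=1), [[0.0, 0.0, 0.0, 1.0]]), axis=0)
        T = np.dot(T, T_r)  # accumulate the relative pose
        f.write(" ".join(map(str, T[:3, :].flatten())) + "\n")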
Hi @Huangying-Zhan, I'm using the same code and the results are very bad when evaluating with kitti-odom-eval, although the RMSE is very good (RMSE between the 6-DoF estimates and the ground truth).
Also, if I copy the KITTI ground-truth poses for any sequence and run kitti-odom-eval on them, the Translational error is 7.3 and Rotational is Rotational error (why does this happen? I'd appreciate a pointer to any reference or tutorial).
Thanks for the help.
Hi @Aliahme5636
> I'm using the same code and the results are very bad when evaluating with kitti-odom-eval, although the RMSE is very good (RMSE between the 6-DoF estimates and the ground truth).
I have no clue at the moment. Could you provide more information?
> Also, if I copy the KITTI ground-truth poses for any sequence and run kitti-odom-eval on them, the Translational error is 7.3 and Rotational is Rotational error (why does this happen? I'd appreciate a pointer to any reference or tutorial).
If you meant that all sequences have the same translational error (7.3), could you check the reading of the files? It seems to me that the wrong files are loaded. Could you explain more about "Rotational is Rotational error"?
Hi @Huangying-Zhan, I'm using a deep learning model and the RMSE looks like this:
Translational RMSE 0.284642, Rotational RMSE 0.266223
although the eval tool outputs:
Translational error (%): 102.90453146933586, Rotational error (deg/100m): 18.62600435469482
All sequences show the same behavior when I use the KITTI ground-truth poses as both the estimate and the ground truth (the same file), like below:
sequence 03
Translational error (%): 3.0627180900916957e-14
Rotational error (deg/100m): 8.147885968710791e-08
sequence 09
Translational error (%): 2.4011952673267497e-14
Rotational error (deg/100m): 8.028623189108894e-08
"Rotational is Rotational error" is write by mistake it should (Rotational is 8.14) sorry for that
I'm trying to evaluate KITTI odometry. First I convert the poses to 6-DoF using this code;
the model output is also in the same format (6-DoF).
The question is how to evaluate the 6-DoF results.
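For reference, a minimal sketch of that 12-to-6-DoF conversion step (an illustration using the same R = Rz @ Ry @ Rx convention as the earlier snippet, not necessarily the linked code; rotationMatrixToEulerAngles here is a standard inverse of eulerAnglesToRotationMatrix):

import numpy as np

def rotationMatrixToEulerAngles(R):
    # Inverse of eulerAnglesToRotationMatrix above (R = Rz @ Ry @ Rx convention)
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    if sy > 1e-6:
        x = np.arctan2(R[2, 1], R[2, 2])
        y = np.arctan2(-R[2, 0], sy)
        z = np.arctan2(R[1, 0], R[0, 0])
    else:
        # Gimbal lock: cos(y) is ~0, so z is set to 0 by convention
        x = np.arctan2(-R[1, 2], R[1, 1])
        y = np.arctan2(-R[2, 0], sy)
        z = 0.0
    return np.array([x, y, z])

# One 12-value line from a KITTI pose file (identity pose as an example)
line = "1 0 0 0 0 1 0 0 0 0 1 0"
T = np.array(line.split(), dtype=float).reshape(3, 4)
# 6-DoF pose: translation first, then Euler angles (same ordering as pred above)
pose_6dof = np.concatenate([T[:, 3], rotationMatrixToEulerAngles(T[:, :3])])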