@princefr Can you provide the definition of the observation angle of the object?
From what I can find, the observation angle is the angle between the ray from a point on the object to the observer (in this case, the capturing camera) and the ray to the light source. Is that correct?
Hi thangt, thank you for your quick answer. For example, it's the theta_l angle in this image.
If it helps, it's the alpha value provided by the KITTI dataset.
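If I read the KITTI object devkit right, that alpha is related to the global yaw rotation_y and the object's location (x, z) in camera coordinates roughly by:

alpha = rotation_y - arctan2(x, z)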
To calculate the theta_l angle, you need to calculate 2 vectors:
- The forward direction of the object, Vobj_dir: from the object's quaternion, you can convert it into a 3x3 rotation matrix. A rotation matrix is actually a collection of the object's 3 axes: X, Y, Z. You can pick your own forward vector; in the OpenCV coordinate convention, Z is forward.
- The ray from the camera to the object's center, Vcam_obj: since the image is captured in camera space, the camera location is [0, 0, 0], so this ray is just the object's location.
To calculate the theta angle, you just need to get the angle between Vobj_dir and the right vector, which is the X axis in the OpenCV convention: [1, 0, 0].
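A minimal sketch of this in Python/NumPy; the quaternion_xyzw and location values here are made up, and the field names may differ in your annotations:

```python
import numpy as np

def quat_xyzw_to_rotmat(q):
    """Convert a quaternion in (x, y, z, w) order to a 3x3 rotation matrix."""
    x, y, z, w = q
    n = np.sqrt(x * x + y * y + z * z + w * w)
    x, y, z, w = x / n, y / n, z / n, w / n
    return np.array([
        [1 - 2 * (y * y + z * z),     2 * (x * y - z * w),     2 * (x * z + y * w)],
        [    2 * (x * y + z * w), 1 - 2 * (x * x + z * z),     2 * (y * z - x * w)],
        [    2 * (x * z - y * w),     2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ])

def angle_between(a, b):
    """Unsigned angle (radians) between two 3D vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

# Made-up annotation values: object orientation and location in camera space
quaternion_xyzw = [0.0, 0.707, 0.0, 0.707]
location = np.array([2.0, 0.5, 10.0])

R = quat_xyzw_to_rotmat(quaternion_xyzw)
v_obj_dir = R[:, 2]           # object's forward axis (Z column, OpenCV convention)
v_cam_obj = location          # ray from the camera (at [0, 0, 0]) to the object's center
right = np.array([1.0, 0.0, 0.0])

theta = angle_between(v_obj_dir, right)   # angle described above, in radians
print(np.degrees(theta))
```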
Thank you very much.
Hello, could you please tell me where you found the image? @princefr
@lx-r
He got it from the paper: https://arxiv.org/pdf/1612.00496
Meanwhile, I'm trying to piece together information on the "observation angle" in the context of the VKITTI2 dataset. What the picture above shows (theta_l being the observation angle) is not confirmed by the numbers in the pose.txt file. Context: https://europe.naverlabs.com/research/computer-vision/proxy-virtual-worlds-vkitti-2/
Hi, I've been playing with the SIDOD dataset from NVIDIA, which was generated with the same utility that you used to generate the data for DOPE.
As I understand it, in DOPE your model learns to predict the 9 keypoints (the 8 corners plus the center of the projected cuboid), and then you apply a perspective transform to recover the object's pose in 3D space without using other label information (a rough sketch of that step is below).
I'm interested in trying other approaches with this dataset, but I'm struggling to understand how I can get the observation angle of the object from the quaternion_xyzw information, the pose_transform, or the other values provided by the dataset.
Can you help me, or give me some hints about this?
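To make that keypoints-to-pose step concrete, here is a rough sketch using cv2.solvePnP; the keypoint coordinates, cuboid dimensions, and camera intrinsics below are all made up for illustration:

```python
import numpy as np
import cv2

# Made-up cuboid dimensions (width, height, depth) in meters
w, h, d = 0.2, 0.1, 0.3

# 3D corners of the cuboid in the object's local frame (8 corners + center)
object_points = np.array([
    [ w/2,  h/2,  d/2], [ w/2,  h/2, -d/2], [ w/2, -h/2,  d/2], [ w/2, -h/2, -d/2],
    [-w/2,  h/2,  d/2], [-w/2,  h/2, -d/2], [-w/2, -h/2,  d/2], [-w/2, -h/2, -d/2],
    [0.0, 0.0, 0.0],
], dtype=np.float64)

# Made-up 2D keypoints predicted by the network (pixels), same order as above
image_points = np.array([
    [400, 220], [420, 230], [405, 300], [425, 310],
    [340, 225], [360, 235], [345, 305], [365, 315],
    [382, 268],
], dtype=np.float64)

# Assumed pinhole intrinsics (fx, fy, cx, cy) and no lens distortion
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
# rvec/tvec give the object's rotation and translation in the camera frame;
# tvec is then the ray from the camera to the object's center (Vcam_obj above).
```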