Hello! Thank you for your kind remarks and your interest in DREAM!
If you are referring to the ground truth camera-to-robot pose estimate, it is obtainable from the provided dataset files. For example, the 000000.json file in the Panda-3Cam (Kinect360) dataset contains:
{
  "objects": [
    {
      "class": "panda",
      "visibility": 1,
      "location": [
        0.32121104,
        0.38771898,
        1.33385066
      ],
      "keypoints": [
        {
          "name": "panda_link0",
          "location": [
            0.32121104,
            0.38771898,
            1.33385066
          ],
          "projected_location": [
            445.9277936482035,
            392.10513834434806
          ]
        },
        {
        ...
The ground truth camera position is [0.32121104, 0.38771898, 1.33385066], which is the entry of the field objects[0]['location']. The 3D coordinates are with respect to the camera frame, in x/y/z ordering; the convention is +z is depth, +x is right, and +y is down. The units are meters. For our work, we assume the robot base link coincides with the origin of the robot coordinate frame, so this entry will generally be the same as the field objects[0]['keypoints'][0]['location'].
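As a minimal sketch in Python, assuming the JSON layout shown above (the file path here is only illustrative), reading the ground truth camera position could look like this:

import json

# Illustrative path; point this at any frame's annotation file in the dataset.
with open("panda-3cam_kinect360/000000.json") as f:
    anno = json.load(f)

obj = anno["objects"][0]

# Ground truth robot base position in the camera frame (meters),
# ordered x (right), y (down), z (depth).
cam_t_base = obj["location"]

# Generally equal to the first keypoint (panda_link0), since the base link
# is taken as the origin of the robot coordinate frame.
assert obj["keypoints"][0]["name"] == "panda_link0"
print(cam_t_base, obj["keypoints"][0]["location"])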
Note that we did not publish the orientation for the ground truth pose label in the real datasets; however, it may be recoverable given the ground truth camera position and the camera intrinsics. The synthetic datasets do include the full ground truth pose label (each annotation has a field called quaternion_xyzw). Note that the units in the synthetic datasets are centimeters.
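For the synthetic data, a hedged sketch of assembling a 4x4 camera-from-robot transform might look like the following. It assumes quaternion_xyzw is stored alongside location in the same objects[0] entry, that it expresses the rotation of the robot base frame in the camera frame, and that both fields are in centimeters; the file path is again illustrative.

import json
import numpy as np
from scipy.spatial.transform import Rotation

# Illustrative path into a synthetic dataset frame.
with open("synthetic/000000.json") as f:
    obj = json.load(f)["objects"][0]

# Synthetic annotations are in centimeters; convert the translation to meters.
t = np.array(obj["location"]) / 100.0

# Assumption: quaternion_xyzw (scalar-last) is the rotation of the robot base
# frame expressed in the camera frame.
R = Rotation.from_quat(obj["quaternion_xyzw"]).as_matrix()

# Homogeneous transform mapping points from the robot base frame to the camera frame.
T_cam_from_robot = np.eye(4)
T_cam_from_robot[:3, :3] = R
T_cam_from_robot[:3, 3] = t
print(T_cam_from_robot)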
I'll go ahead and close this issue for now, but please let me know if you have any further questions and we can re-open it to discuss!
Hi, this is fantastic work and I enjoyed reading the paper! I would also like to use the provided dataset to train my algorithm, which requires the ground-truth label of the camera-to-robot pose. Can you explain how to get the label from your dataset?
Thank you!