ArghyaChatterjee opened this issue 1 year ago
You can use the cuboid_ children of the object you want to export to get their 3D positions; follow the same workflow as for the main object. I think overall you are headed in the right direction and you are making good progress, good job. One piece of advice with 3D: take your time, make one change at a time, and then debug it. It is easy to introduce errors that are hard to recover from.
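For example, here is a minimal sketch of reading those child positions. The name pattern f"{obj_name}_cuboid_{i}" is an assumption based on how add_cuboid in utils.py names the corner transforms, and the exact nvisii calls may differ in your checkout:

import nvisii as visii

def get_cuboid_world_positions(obj_name, num_corners=9):
    # Read the world-space positions of an object's cuboid_ child transforms.
    # The naming pattern below is an assumption, not guaranteed by the repo.
    corners = []
    for i in range(num_corners):
        trans = visii.transform.get(f"{obj_name}_cuboid_{i}")  # hypothetical name pattern
        if trans is None:
            break
        m = trans.get_local_to_world_matrix()  # column-major glm-style mat4
        corners.append([m[3][0], m[3][1], m[3][2]])  # translation lives in column 3
    return corners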
Hello @TontonTremblay,
Thanks for the reply. I understood points 1 & 3 but couldn't understand point 2.
I can see that in this line, https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/single_video_pybullet.py#608, you have commented out the cuboids argument. I don't see any definition for it inside the single_video_pybullet.py file.
Inside the utils.py file, in the export_to_ndds_file function definition, it defaults to None: https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/utils.py#1103.
I do see some cuboid definitions inside the utils.py file, as add_cuboid here: https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/utils.py#913
and here: https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/utils.py#998
Lastly, I see this logic here: https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/utils.py#1233
if not cuboids is None:
    cuboid = cuboids[obj_name]
else:
    cuboid = None
I am somewhat confused, since several things are going on in parallel, and I need those cuboid keypoints with respect to the camera frame (not the world frame). To make sure everything stays on the right track, I want to be certain I am not doing anything wrong.
Also, I just wanted to confirm: did you use the OpenGL camera convention for the CenterPose predictions? I believe CenterPose uses the Objectron dataset, and by that logic the Objectron dataset also follows the OpenGL convention. If that is the case, then inside utils.py, if I comment out the 180-degree rotation of cam_matrix and export camera_projection_matrix the same way as cam_matrix, so that everything is produced in the OpenGL format, is that OK?
# camera view matrix
cam_matrix = visii.entity.get(camera_name).get_transform().get_world_to_local_matrix()
# rotate camera by 180° around x
# from: X right, Y up and Z out of the image towards the viewer
# to:   X right, Y down and Z into the image away from the viewer (= OpenCV / NDDS / ROS "optical" frame)
# cam_matrix = visii.mat4(1,  0,  0, 0,
#                         0, -1,  0, 0,
#                         0,  0, -1, 0,
#                         0,  0,  0, 1) * cam_matrix
cam_matrix_export = []
for row in cam_matrix:
    cam_matrix_export.append([row[0], row[1], row[2], row[3]])

# camera projection matrix
cam_proj_matrix = visii.entity.get(camera_name).get_camera().get_projection()
cam_proj_matrix_export = []
for row in cam_proj_matrix:
    cam_proj_matrix_export.append([row[0], row[1], row[2], row[3]])
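A minimal sketch, assuming nvisii's glm-style mat4/vec4 bindings, of what this view matrix does to a world-frame point such as a cuboid corner:

p_world = visii.vec4(1.0, 0.5, 0.25, 1.0)  # example cuboid corner in world coordinates
p_cam_gl = cam_matrix * p_world            # OpenGL camera frame: X right, Y up, -Z forward
# With the commented-out 180-degree flip applied instead, the same point would
# come out in the OpenCV/NDDS "optical" frame: X right, Y down, +Z forward.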
After that, here is how my JSON file was updated:
{
  "camera_data": {
    "camera_look_at": {
      "at": [1.0, 0.0, 0.0],
      "eye": [0.0, 0.0, 0.0],
      "up": [0.0, 0.0, 1.0]
    },
    "camera_projection_matrix": [
      [0.8151217699050903, 0.0, 0.0, 0.0],
      [0.0, 1.4491053819656372, 0.0, 0.0],
      [0.013498933985829353, 0.014036557637155056, -1.0010005235671997, -1.0],
      [0.0, 0.0, -0.10005002468824387, 0.0]
    ],
    "camera_view_matrix": [
      [0.0, 0.0, -1.0, 0.0],
      [-1.0, 0.0, 0.0, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 0.0, 1.0]
    ],
    "height": 720,
    "intrinsics": {
      "cx": 640.0,
      "cy": 360.0,
      "fx": 521.6779174804688,
      "fy": 521.6779174804688
    },
    "location_worldframe": [-0.0, 0.0, -0.0],
    "quaternion_xyzw_worldframe": [0.5, -0.5, -0.5, 0.5],
    "width": 1280
  },
  "objects": [
    {
      "bounding_box_minx_maxx_miny_maxy": [492, 533, 129, 177],
      "class": "mug",
      "local_cuboid": null,
      "local_to_world_matrix": [
        [-0.2890670597553253, -0.8582406044006348, 0.4241040050983429, -0.0],
        [0.09283934533596039, 0.41579586267471313, 0.904707133769989, -0.0],
        [-0.9527969360351562, 0.30089423060417175, -0.040514618158340454, -0.0],
        [1.481677532196045, 0.3374212980270386, 0.6008970737457275, 1.0]
      ],
      "location": [-0.3374212980270386, 0.6008970737457275, -1.481677532196045],
      "location_worldframe": [1.481677532196045, 0.3374212980270386, 0.6008970737457275],
      "name": "mug_0",
      "projected_cuboid": [
        [518.2252883911133, 149.8012661933899],
        [491.6792297363281, 167.93272018432617],
        [477.51625061035156, 136.42764329910278],
        [503.6228942871094, 117.82564401626587],
        [540.2409744262695, 165.09751796722412],
        [515.386848449707, 181.55617475509644],
        [502.17926025390625, 152.40899562835693],
        [526.6552734375, 135.54736375808716],
        [509.7786331176758, 151.17395639419556]
      ],
      "provenance": "nvisii",
      "px_count_all": 0,
      "px_count_visib": 0,
      "quaternion_xyzw": [0.013572394847869873, 0.15302777290344238, -0.21785865724086761, -0.9638134241104126],
      "quaternion_xyzw_worldframe": [-0.28967729210853577, 0.6605637073516846, 0.456277459859848, -0.5211083292961121],
      "scale": [0.12563899159431458, 0.09907500445842743, 0.09066499769687653]
    }
  ]
}
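As a sanity check on these numbers (a sketch; note that nvisii's mat4 is column-major, so the rows written into the JSON are the columns of the view matrix):

import numpy as np

# camera_view_matrix as stored in the JSON (each JSON "row" is a matrix column)
V = np.array([[ 0.0, 0.0, -1.0, 0.0],
              [-1.0, 0.0,  0.0, 0.0],
              [ 0.0, 1.0,  0.0, 0.0],
              [ 0.0, 0.0,  0.0, 1.0]])
p_world = np.array([1.481677532196045, 0.3374212980270386, 0.6008970737457275, 1.0])
p_cam = V.T @ p_world
print(p_cam[:3])  # [-0.3374..., 0.6009..., -1.4817...] -- matches the "location" field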
When we did Objectron, I am afraid I do not remember exactly how we parsed the data, so I won't be able to help there; but normally we train algorithms on OpenCV conventions to make our life a little easier on the robot.
But to answer your question: yes, if you remove the rotation you will get the camera pose in the OpenGL coordinate frame. nvisii uses OpenGL coordinate frames (and going to OpenCV is pretty simple).
In the script, when you pass the cuboid data structure, the cuboid is exported in its local frame. I did not include it in the final version of the code to avoid confusion with the projected_cuboid in the exported data, and it is one more data structure to keep track of. If you check add_cuboid in utils.py, it returns the cuboid in the object coordinate frame; you can use that to export the normalized cuboid CenterPose needs. I hope this helps. https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/nvisii_data_gen/utils.py#L996 I think the cuboids argument to export_ndds needs an object_name:cuboid dictionary.
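For concreteness, a minimal sketch of that dictionary, assuming add_cuboid returns the local-frame corners and the exporter accepts the dict via a cuboids keyword (check the signatures in your checkout):

# Sketch: collect each object's local-frame cuboid, keyed by object name.
object_names = ["mug_0"]  # whatever objects you spawned in the scene
cuboids = {name: add_cuboid(name) for name in object_names}
# Then pass cuboids=cuboids into the export function so it can fill
# local_cuboid instead of leaving it null, per the logic quoted above.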
Hello,
As you know from some other issues, I have been trying to generate ground-truth data for CenterPose using DOPE's pipeline. Here is a JSON file containing the annotation for a cup model from the Objectron dataset (generated using the methods mentioned in the original CenterPose repo):
Now, here is the data that has been generated by DOPE's pipeline (nvisii):
I have modified the nvisii script a little to add the scale, rename fields, and change the projected-cuboid orientation to match the Objectron dataset coordinate frame. Now I have three questions:
1. You can clearly see that the field missing from the synthetic data generation pipeline is the "camera_projection_matrix". Usually I would expect this in pixels, but here it seems to be in some other format. Also, since this comes from the real Objectron dataset, every camera is different and so is its projection matrix. How do I set this for a synthetic dataset where I am using the same camera throughout (see the sketch after these questions)? Attaching the camera settings for the synthetic dataset generation pipeline:
2. In the CenterPose JSON data, inside the AR_data field, there are plane_center and plane_normal values which are missing from the synthetic data generation pipeline. The keypoints_3d (actual location) values are also missing from the synthetic datasets. Are these values needed if I am trying to train CenterPose on a single mug instance that has no axis of symmetry?
3. In a vast amount of the CenterPose annotation JSON data, the z value of the "location" field is negative, like this:
-0.38022490531119946
but for the synthetic data generation pipeline, the data is more or less bounded by a volume where the z value is never negative (which also makes sense, since in front of the camera the z depth should be positive, not negative). Why the discrepancy, and how do I work through this problem?
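On question 1: that camera_projection_matrix looks like a standard OpenGL clip-space projection built from the pinhole intrinsics, not a pixel-space matrix, so for a fixed synthetic camera it can be reconstructed directly. A sketch, under one common sign convention; the near/far values of roughly 0.05 and 100 are inferred from the third and fourth columns of the matrix above:

import numpy as np

def opengl_projection(fx, fy, cx, cy, width, height, near=0.05, far=100.0):
    # OpenGL-style projection from pinhole intrinsics, written column-major the
    # way nvisii's export prints it (each printed "row" is a matrix column).
    return np.array([
        [2 * fx / width, 0.0, 0.0, 0.0],
        [0.0, 2 * fy / height, 0.0, 0.0],
        [1 - 2 * cx / width, 2 * cy / height - 1, -(far + near) / (far - near), -1.0],
        [0.0, 0.0, -2 * far * near / (far - near), 0.0],
    ])

# fx = fy = 521.678 at 1280x720 reproduces the 0.8151... and 1.4491... terms above;
# with cx = 640, cy = 360 the two small off-center terms come out as 0 rather than
# ~0.0135/0.0140, which suggests the exported camera's principal point was slightly offset.
print(opengl_projection(521.6779174804688, 521.6779174804688, 640.0, 360.0, 1280, 720))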