qpwodlsqp closed this issue 2 years ago
I think this is expected behaviour: nvisii uses a different coordinate frame than OpenCV (which DOPE ultimately uses to compute the pose), so you need to apply a rotation about the x axis. I have a few scripts that deal with this; take a look at https://github.com/NVlabs/Deep_Object_Pose/blob/master/scripts/metrics/render_json.py and check what happens to the pose when it comes from nvisii vs. the DOPE output. Sorry, I should have made it universal. In the ADD script I use, different rotations are applied depending on the provenance of the data.
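For reference, here is a minimal sketch of the x-axis flip described above, assuming the usual OpenGL-style vs. OpenCV camera-frame convention. It is illustrative only; `render_json.py` is the authoritative place to see which rotation is applied for each data source, and the function name here is hypothetical:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def nvisii_to_opencv(quat_xyzw, translation):
    """Sketch: convert a pose from the nvisii camera frame to the OpenCV
    camera frame via a 180-degree rotation about the x axis."""
    flip = R.from_euler("x", 180, degrees=True)  # 180 deg about x
    rot = flip * R.from_quat(quat_xyzw)          # rotate the orientation
    t = flip.apply(np.asarray(translation))      # rotate the translation
    return rot.as_quat(), t                      # back to xyzw quaternion
```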
Hello, I trained DOPE on a custom dataset using the script in `nvisii_data_gen`, and the model outputs the belief maps as intended. But the predicted orientations do not match the labels well during inference (the location values match well). Here are two example images I synthesized using your script: [00621.png] [00622.png]
Overlaid boxes are the model predictions (green) and the labels (red).
Here are the corresponding JSON labels: [00621.json] [00622.json]
The objects in each image have different orientations, but the `quaternion_xyzw` attributes in both labels are almost identical. Could this be a bug in the `nvisii` method used by `export_to_ndds_file` in `nvisii_data_gen/utils.py`? The version of `nvisii` on my machine is 1.1.72.
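One quick way to confirm that the two labels really encode (nearly) the same rotation is to compute the angle of the relative rotation between the two quaternions. A minimal sketch, with placeholder values standing in for the `quaternion_xyzw` entries from the JSON files above:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Placeholder xyzw quaternions; substitute the quaternion_xyzw values
# from 00621.json and 00622.json.
q1 = R.from_quat([0.0, 0.0, 0.0, 1.0])
q2 = R.from_quat([0.0, 0.0, 0.1, 0.995])

# Angle of the relative rotation q1^-1 * q2, in degrees; a value near 0
# means the two labels describe essentially the same orientation.
angle = (q1.inv() * q2).magnitude() * 180.0 / np.pi
print(f"relative rotation: {angle:.2f} deg")
```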