Closed JRauer closed 5 years ago
The coordinate systems used in UE4, OpenCV, and ROS are all different.
Because of these different coordinate systems, you need to rotate the quaternion into the frame you actually want to use. We use http://kieranwynn.github.io/pyquaternion/ to handle quaternions in DOPE. I think your problems are most likely caused by these frame differences.
There might be another problem: if you are using YCB objects, please note that the object coordinate frames in DOPE were updated from the originals. You can find these transformations in the _object_settings.json file.
You are correct that DOPE does not need the rotation information to represent the pose, since we represent it through keypoint locations. We believe this is a strength, as it avoids baking in the camera intrinsics.
I hope this helps, thank you for looking into DOPE.
@TontonTremblay Thank you for your fast reply and all the info! I will not be able to play around with the quaternions until next week but will post my solution here - if I find one.
@TontonTremblay Thank you so much for sharing your great work! I am currently working on my master's thesis, which includes training DOPE with DR data (from the dataset synthesizer) and real data I record with a robot, knowing the object's ground truth. The FAT dataset you used includes quaternion_xyzw, which is the rotation between the camera and object frames. I tried to figure out whether the rotation I get from ROS tf can be written to quaternion_xyzw directly: I place the object in UE4 in a specific pose (by giving roll/pitch/yaw in the interface), run the synthesizer plugin, and try to calculate from the exported quaternion_xyzw which angles were given in UE4, but I am not able to recover the angle values I enter in the user interface. Analyzing the plugin's source code shows that the quaternion axes are swapped and negated, but that also did not solve the problem. I guess it is something UE4-internal. Can you tell me how the transformation in the data should be given (can I just write the transformation as I get it from ROS tf?) or how the plugin calculates the transformation from the angles given in the interface?
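For what it's worth, here is a hedged sketch of the conversion I would expect between a UE4 pose (left-handed, Z-up, Y-right, centimeters) and the ROS convention (right-handed, Z-up, Y-left, meters). The sign pattern follows the usual mirror-about-the-XZ-plane rule for switching handedness; it may not match the exact swap/negation the NDDS plugin applies, so verify against the plugin source for your version.

```python
# Hedged sketch: UE4 (left-handed, Z-up, Y-right, cm) -> ROS
# (right-handed, Z-up, Y-left, m).  Signs follow the Y-mirror rule
# for a handedness change; verify against the NDDS plugin source.

def ue4_to_ros_position(x, y, z):
    # Flip Y to switch handedness, convert centimeters to meters.
    return 0.01 * x, -0.01 * y, 0.01 * z

def ue4_to_ros_quaternion(x, y, z, w):
    # Under the same Y-mirror, a rotation (w, x, y, z) maps to
    # (w, -x, y, -z): the mirrored axis with the angle negated.
    return -x, y, -z, w
```

The identity rotation and pure translations should round-trip cleanly under any correct variant of this mapping, which makes those good sanity checks.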
I analyzed the DOPE training code and noticed that you load the quaternion_xyzw, but I am not sure whether it is really used, as I am inexperienced with PyTorch. It seems to me that when training the network you only use the cuboids and their projections, which also matches my reading of your paper. Is it correct that you don't use the quaternions when training DOPE? Thank you in advance!
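To make the keypoint idea concrete, here is a minimal sketch of how the nine cuboid keypoints (eight corners plus the centroid) can be projected into the image with a pinhole model; DOPE is trained on such 2D projections, and the quaternion only enters when placing the cuboid in the camera frame beforehand. The intrinsic values and cuboid size below are illustrative, not taken from any real camera or object.

```python
import numpy as np

# Illustrative pinhole intrinsics (fx, fy, cx, cy are made-up values).
K = np.array([
    [768.0,   0.0, 480.0],
    [  0.0, 768.0, 270.0],
    [  0.0,   0.0,   1.0],
])

def project_points(points_cam: np.ndarray) -> np.ndarray:
    """Project Nx3 points (camera frame, z forward) to Nx2 pixels."""
    uvw = (K @ points_cam.T).T          # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

# A 10 cm cube centered 1 m in front of the camera, plus its centroid.
half = 0.05
corners = np.array([[sx * half, sy * half, 1.0 + sz * half]
                    for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
keypoints = project_points(np.vstack([corners, [[0.0, 0.0, 1.0]]]))
print(keypoints.shape)  # (9, 2)
```

Note how the rotation never appears explicitly here: once the corners are expressed in the camera frame, only the 2D projections remain, which is consistent with the quaternion not being a training target.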