wenbowen123 / iros20-6d-pose-tracking

[IROS 2020] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
384 stars 66 forks

confusion about camera_extrinsic_parameters_calibration #44

Closed cynthia-you closed 2 years ago

cynthia-you commented 2 years ago

Hi Bowen, I have downloaded your YCBInEOAT dataset and want to predict on my own RGBD data via BundleTrack. But when I checked cam_K.txt (which I assumed held the camera extrinsic calibration), I found that the data in the file is a 3×3 matrix. As far as I know, an extrinsic calibration matrix is affine (4×4), while an intrinsic matrix is 3×3, and the file you provided matches the intrinsic form exactly. Could you please explain? Thanks!

wenbowen123 commented 2 years ago

The cam_K.txt is the intrinsic. There is no extrinsic here. That depends on how you define the world coordinate frame.
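For readers hitting the same confusion: a 3×3 intrinsic matrix K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] maps 3D points already expressed in the camera frame to pixel coordinates, so no extrinsic is involved. A minimal sketch (the fx/fy/cx/cy values below are illustrative, not from the dataset; real values come from cam_K.txt):

```python
import numpy as np

# Illustrative intrinsics; the real 3x3 matrix is loaded from cam_K.txt,
# e.g. K = np.loadtxt("cam_K.txt").reshape(3, 3)
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, pt_cam):
    """Project a 3D point in the camera frame to pixel coordinates (u, v)."""
    uvw = K @ pt_cam            # homogeneous image coordinates
    return uvw[:2] / uvw[2]     # perspective divide

u, v = project(K, np.array([0.1, 0.05, 1.0]))
```

An extrinsic (world-to-camera) transform would only be needed once you define a world frame, which is what the reply above is pointing out.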

cynthia-you commented 2 years ago

> The cam_K.txt is the intrinsic. There is no extrinsic here. That depends on how you define the world coordinate frame.

Dr. Bowen, thank you for your guidance last time. I have been able to run 6D pose tracking on my own data, but I still have some questions:

1. My data is generated on the MuJoCo simulation platform; the camera is a simulated OpenGL frustum camera (RGB and depth are captured together), and I set the camera intrinsics to [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
2. Since BundleTrack mixes in C++ code, I don't fully understand how the RGB and depth cameras relate to each other during data collection. Do I need to change your code to set their relative transformation matrix to the identity?
3. Can you tell me where I can visualize the result (color_viz)?

wenbowen123 commented 2 years ago

In my data, the rgb and depth are aligned, so their relative transform is identity. When you run the tracking example, you should see a window popping up showing the visualization, or are you looking for something else?

cynthia-you commented 2 years ago

> In my data, the rgb and depth are aligned, so their relative transform is identity. When you run the tracking example, you should see a window popping up showing the visualization, or are you looking for something else?

Thanks for your reply. I followed your BundleTrack configuration and ran bleach0 and my own data with BundleTrack/scripts/run_ycbineoat.py. The predicted poses are generated under /temp/../poses, but no window pops up. Is there a script like run_video.py that visualizes video tracking as shown on your repo home page?

wenbowen123 commented 2 years ago

Are you asking for BundleTrack or se(3)-tracknet? This repo is for se(3)-tracknet

cynthia-you commented 2 years ago

> Are you asking for BundleTrack or se(3)-tracknet? This repo is for se(3)-tracknet

Sorry, I am asking about the results visualization of BundleTrack. Because my self-made data and the YCBInEOAT structure are similar, I opened the question here under se(3)-TrackNet.

wenbowen123 commented 2 years ago

BundleTrack will not pop up a window. There is a LOG setting that you can set to a nonzero value to save the color_viz. I'm closing this issue since the question regarding the dataset has been resolved. Feel free to open new issues under the BundleTrack repo if you have questions about the BundleTrack code.