anl13 / MAMMAL_datasets

Pig pose datasets used in MAMMAL.
MIT License

Request for extrinsic calibration details #1

Open MahejabeenNidhi opened 6 months ago

MahejabeenNidhi commented 6 months ago

Hello,

Your work deeply inspires me!

I would be grateful to see the code used to obtain the extrinsic parameters from the marker IDs for extrinsic PnP.

Best!

anl13 commented 6 months ago

Thank you for your attention! I simply use the solvePnP function in OpenCV to solve for the extrinsic parameters given matched 3D and 2D markers. Please refer to https://docs.opencv.org/4.x/d5/d1f/calib3d_solvePnP.html.
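
For reference, here is a minimal sketch of that approach (not the exact code from this repository): load matched 3D marker positions and their 2D detections, then call `cv2.solvePnP`. The file names, intrinsics, and array layout below are assumptions for illustration only.

```python
# Minimal sketch: recover camera extrinsics with OpenCV's solvePnP from
# matched 3D marker positions and their 2D image projections.
import numpy as np
import cv2

# Assumed formats: one "x y z" row per marker, and one "u v" row per marker
object_points = np.loadtxt("marker_3dpositions.txt", dtype=np.float64)  # assumed file layout
image_points = np.loadtxt("0.txt", dtype=np.float64)                    # assumed per-camera file

# Intrinsics are assumed known from a prior intrinsic calibration
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for this sketch

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation matrix
print("R =", R, "\nt =", tvec.ravel())
```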

MahejabeenNidhi commented 6 months ago

Thank you for your insight!

May I know how you generated marker_3dpositions.txt? From my understanding, the {camid}.txt files are generated by running code on marker_3dpositions.txt. Are these points real-world 3D coordinates, possibly measured with a measuring tape or laser measure?

Appreciate you taking the time to explain this to me! Thank you again for your amazing work!

MahejabeenNidhi commented 5 months ago

Hello,

Following up to see if it would be possible to learn more about how marker_3dpositions.txt was generated.

Thank you for your time!

anl13 commented 5 months ago

Sorry for the late reply; I have been busy defending my thesis these days.

You could definitely measure the 3D coordinates. In fact, the values in marker_3dpositions.txt are not that accurate; the best way is to measure them with a measuring tape, a laser, or any other tool. When I captured the images of my first sequence, I did not record any extrinsic parameters or chessboard videos for extrinsic calibration. When I later tried to calibrate the cameras, I performed the following steps:

1. Obtained the overall size of the scene (the width and length of the cage, measured by the cage keeper with a measuring tape). This step gave me the true scale.
2. Labeled corresponding points on each image (42 points on the floor, plus some other points on the railings or the trough). Initially, I assigned the 42 floor points predefined locations (e.g. "x, y, z" values like "0, 0, 0", "0, 0.1, 0", "0, 0.2, 0", ..., "0.1, 0, 0") because these 42 points lie in a grid on the floor. Note that these initial coordinates did not match the true scale of the scene; they are only used for initialization.
3. Jointly optimized the camera extrinsic parameters (3-DoF rotation and 3-DoF translation per camera) from the initial 42 points (the process is similar to PnP). Call the parameters at this stage P0. I then used P0 to triangulate the other 3D points.
4. Computed the scene size from the 3D coordinates of those other points. If the computed scene size was smaller than the true one, I enlarged the scale of the initial 42 points in step 2 and repeated steps 3 and 4, until the computed scene size was reasonable. A sketch of this loop is given below.
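
To make steps 2 to 4 concrete, here is a hedged sketch, not the original C++ code: the floor grid is initialized at a guessed scale, per-camera extrinsics are solved with PnP, the extra points are triangulated, and the grid scale is rescaled until the triangulated scene size matches the measured cage size. All file names, the two-camera setup, the intrinsics, and the stopping tolerance are assumptions.

```python
# Sketch of the scale-search loop described in steps (2)-(4).
import numpy as np
import cv2

true_cage_width = 3.0  # metres, measured with a tape (example value)

def solve_extrinsic(grid_3d, grid_2d, K):
    """PnP for one camera; returns the 3x4 projection matrix K [R | t]."""
    _, rvec, tvec = cv2.solvePnP(grid_3d, grid_2d, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return K @ np.hstack([R, tvec])

def triangulate(P0, P1, pts0, pts1):
    """Linear triangulation of matched 2D points seen by two cameras."""
    X = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)
    return (X[:3] / X[3]).T

# Assumed inputs: 42 floor-grid detections plus extra railing/trough points,
# for two of the cameras (hypothetical file names)
K = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
grid_2d_cam0 = np.loadtxt("grid_cam0.txt")
grid_2d_cam1 = np.loadtxt("grid_cam1.txt")
extra_2d_cam0 = np.loadtxt("extra_cam0.txt")
extra_2d_cam1 = np.loadtxt("extra_cam1.txt")

# Step (2): unit-spaced 6x7 floor grid, e.g. (0,0,0), (0,0.1,0), ...
base_grid = np.array([[0.1 * i, 0.1 * j, 0.0] for i in range(6) for j in range(7)])

scale = 1.0
for _ in range(20):
    grid_3d = base_grid * scale                          # step (2): rescaled initial grid
    P0 = solve_extrinsic(grid_3d, grid_2d_cam0, K)       # step (3): extrinsics per camera
    P1 = solve_extrinsic(grid_3d, grid_2d_cam1, K)
    extra_3d = triangulate(P0, P1, extra_2d_cam0, extra_2d_cam1)
    width = extra_3d[:, 0].max() - extra_3d[:, 0].min()  # step (4): computed scene extent
    if abs(width - true_cage_width) < 0.01:
        break
    scale *= true_cage_width / width                     # rescale the grid and repeat
```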

The above steps (1)-(4) are not a best practice for solving extrinsic parameters; I did it this way because I could not access more ground-truth information. In common practice, I recommend two approaches: (1) measure the true coordinates of scene points and perform PnP, or (2) use a chessboard to calibrate the extrinsic parameters (for typical code, refer to https://github.com/zhangyux15/calibrator). A sketch of the chessboard option follows below.
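
As a rough illustration of the second option, and not a substitute for the linked calibrator repository, the snippet below detects chessboard corners and solves the board-to-camera extrinsics with PnP. The board size, square size, image path, and intrinsics are assumptions.

```python
# Sketch: chessboard-based extrinsic calibration for one camera.
import numpy as np
import cv2

board_cols, board_rows = 9, 6   # inner-corner counts (assumed board)
square_size = 0.05              # metres per square (assumed)

# World coordinates of the board corners (board plane z = 0)
obj_pts = np.zeros((board_rows * board_cols, 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2) * square_size

img = cv2.imread("chessboard_cam0.png")   # hypothetical frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # Intrinsics K and distortion assumed known from intrinsic calibration
    K = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
    dist = np.zeros(5)
    _, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    print("extrinsics (board frame -> camera):", rvec.ravel(), tvec.ravel())
```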

BTW, the point order I used for calibration is shown in https://docs.google.com/presentation/d/1-E0Gr_Sfjxcuvmu5MDVpK5HxD_Llwxgu/edit?usp=drive_link&ouid=116799476792899806562&rtpof=true&sd=true. The Python code I now use for calibration is at https://drive.google.com/drive/folders/1yqX5wkfUF_Dlh0-TYEinjAoXBwwwf3Eg?usp=drive_link (note that I did not clean the code, so it looks messy; for the same scene with a different camera arrangement, I reused the same final 3D points for PnP for simplicity). The original code for steps (1)-(4), which generates the initial marker_3dpositions.txt (or points3d.txt in some cases), is in C++; see https://drive.google.com/drive/folders/1xHjPpdYD8HkCPhGPJQBY9JYHkDjgFNKo?usp=drive_link (you may find these C++ pieces difficult to run because I did not maintain them well). I removed them from the released code because they look very messy, and I truly feel this is not the best way to do calibration.

I hope these comments help you. If you have any questions, feel free to open an issue; I will reply as soon as I see it.