Setting the principal point to the values I received, offset by half the image width and height, resulted in the correct camera matrix. The deviations I saw in the 2D-to-3D projection were not caused by the principal point, but by not handling the radial distortion correctly.
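For anyone hitting the same thing, here is a minimal sketch of what handling the radial distortion during projection can look like. It assumes a three-coefficient radial model (k1, k2, k3); the coefficients are placeholders, the real values are stored with the intrinsic in the sfm.json:

import numpy as np

def project_with_radial(X_cam, K, k1, k2, k3):
    # X_cam: 3D point already expressed in the camera coordinate frame
    x = X_cam[0] / X_cam[2]          # normalized image coordinates
    y = X_cam[1] / X_cam[2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3   # radial scaling factor
    u = K[0, 0] * x * radial + K[0, 2]   # back to pixels via focal length + principal point
    v = K[1, 1] * y * radial + K[1, 2]
    return np.array([u, v])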
@jsammer-ode I'm facing the same issue as you. Could you please tell me how you computed the matrix?
That's what I did:
import numpy as np

sensor_width = 6.1699999999999999
sensor_height = 4.6275000000000004
f = 3.5988558298758133
px = 13.498251531673015
py = -26.248889380993322
K = np.array(  # Camera intrinsics (3x3)
    [[f / sensor_width, 0, px],
     [0, f / sensor_height, py],
     [0, 0, 1]],
    dtype="double",
)
where all the parameter values are from the cameras.sfm file. However, I suspect there is something wrong with this. How did you do it?
Hey @AndreaMaestri18,
I believe it was an issue of units (mm vs. pixels). Try replacing sensor_height with sensor_height / image_height when computing the camera matrix K, where image_height is the number of pixels.
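For illustration, a minimal sketch of that conversion, using the values that appear elsewhere in this thread (sensor size in mm divided by the pixel count gives the pixel pitch, so the focal length in mm divided by the pitch gives the focal length in pixels):

f = 3.5988558298758133                        # focal length in mm
sensor_width, sensor_height = 6.17, 4.6275    # sensor size in mm
image_width, image_height = 4000, 3000        # image size in pixels

fx = f / (sensor_width / image_width)         # focal length in pixels (x)
fy = f / (sensor_height / image_height)       # focal length in pixels (y)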
Thanks a lot! I just fixed it. Also, for future readers: the principal point is given with respect to the center of the image, so, as you also said before, you have to add width/2 and height/2 to those coordinates.
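For example, plugging in the numbers from earlier in this thread (just as an illustration):

cx = 4000 / 2 + 13.498251531673015    # absolute principal point x, ~2013.5 px
cy = 3000 / 2 - 26.248889380993322    # absolute principal point y, ~1473.8 px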
So in the end, where was the error? And what formula did you use to recover the focal length?
Here is a snippet of the code used to construct the matrix from the JSON:
import numpy as np

# info from the json
f = 3.5988558298758133      # focal length in mm
mx = 6.1699999999999999     # sensor width in mm
my = 4.6275000000000004     # sensor height in mm
px = 13.498251531673015     # principal point x, offset from image center, in pixels
py = -26.248889380993322    # principal point y, offset from image center, in pixels
width = 4000                # image width in pixels
height = 3000               # image height in pixels

pxFocalLength = (f / mx) * width    # focal length in pixels along x
pyFocalLength = (f / my) * height   # focal length in pixels along y

K = np.array([
    [pxFocalLength, 0, px + width / 2],
    [0, pyFocalLength, py + height / 2],
    [0, 0, 1],
])
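As a quick sanity check, here is a minimal usage sketch (the 3D point is a made-up example and is assumed to already be in the camera coordinate frame; lens distortion is ignored):

X_cam = np.array([0.5, -0.2, 10.0])   # hypothetical 3D point in camera coordinates
uv_h = K @ X_cam                      # homogeneous pixel coordinates
u, v = uv_h[:2] / uv_h[2]             # perspective divide -> pixel coordinates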
thanks again :)
Thank you very much!
Just FYI, a Python module has recently been added to the aliceVision library: https://github.com/alicevision/AliceVision/pull/1647 . For now it provides APIs to read the JSON through the original C++ AliceVision API, so all these conversion troubles can be spared. It will probably come along with the next release of Meshroom.
In any case, your snippet is still useful while waiting for the release, and for people who need a quick jumpstart.
I am trying to get the 2D-3D correspondences between a point cloud and the pixels of the images that were used to create it. For that I calculated the projection matrix as described in this issue. However, I can't figure out how to interpret my principal point.
Here are the intrinsics from the sfm.json. This is the output from my Meshing node, after piping the densified point cloud through a ConvertSfmFormat node to extract the views, extrinsics and intrinsics.

In order for the reprojection from 2D to 3D to be correct, I would expect the principal point to be something like
principalPoint: [1900, 1520]
I got this value from trial and error, and have no idea how I would compute it from the given intrinsics. I am also not sure how to interpret the principalPoint I received from Meshroom (e.g. is it the offset from [image_width/2, image_height/2], or the absolute coordinates? Are the units pixels or mm?). I have previously used Pix4D to solve this exact problem, and there the principal point was 2010, 1494.

The images I am using were taken with a drone, and I did not run a dedicated camera calibration. Could anyone help me compute the camera matrix K given these intrinsics? Thanks, any help appreciated!
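If the principalPoint really is an offset from [image_width/2, image_height/2] in pixels, as the snippet earlier in this thread suggests, then presumably something like this sketch would apply (the values below are placeholders standing in for the intrinsics in my sfm.json):

width, height = 4000, 3000     # placeholder image size in pixels
px, py = 13.5, -26.2           # placeholder principalPoint values from the sfm.json
cx = width / 2 + px            # absolute principal point x, in pixels
cy = height / 2 + py           # absolute principal point y, in pixels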