Open · Janudis opened this issue 3 weeks ago

Hello, I'm attempting to project the 3D points from a mesh generated by Meshroom's Texturing node onto each of the original images I captured. However, the projected points are not aligning correctly with the images. Here is the script I use:

Unfortunately, the results are incorrect. I'm unsure whether the issue lies with the extrinsic and intrinsic parameters or with the point cloud from the mesh. I've tried various transformations on the point cloud, but the projected points remain inaccurate. This is one of the closest results I've managed to achieve:

Meshroom Version: 2023.3.0

Thank you in advance for your time!
Maybe width and height get inverted because the image is vertical. "Sensor width" is the physical width of the sensor, in other words its largest side. If the camera is rotated, the image becomes vertical (portrait), and the largest side of the sensor then corresponds to the image height instead of the width.
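In case it helps, here is a minimal sketch of that check (the helper is just for illustration, not something Meshroom provides): if the long side of the image and the long side of the sensor disagree, swap the sensor dimensions before computing fx/fy.

```python
# Illustrative helper (not part of Meshroom): make the long sensor side
# follow the long image side before computing fx/fy.
def oriented_sensor_size(width_px, height_px, sensor_width_mm, sensor_height_mm):
    image_is_portrait = height_px > width_px
    sensor_is_landscape = sensor_width_mm > sensor_height_mm
    if image_is_portrait == sensor_is_landscape:
        # Long image side and long sensor side disagree -> swap the sensor sides
        return sensor_height_mm, sensor_width_mm
    return sensor_width_mm, sensor_height_mm
```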
Thank you for your reply! You are correct that the photos were taken vertically. To investigate further, I printed the following values for the sensor and image dimensions:
```python
width, height = float(intrinsic_data['width']), float(intrinsic_data['height'])  # In pixels
sensor_width = float(intrinsic_data['sensorWidth'])    # In mm
sensor_height = float(intrinsic_data['sensorHeight'])  # In mm
print(f"width {width}")
print(f"height {height}")
print(f"sensor_width {sensor_width}")
print(f"sensor_height {sensor_height}")
```
The output was:

```
width 4640.0
height 3472.0
sensor_width 7.524229526519775
sensor_height 5.630199432373047
```
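As a quick sanity check on those numbers, the image and sensor aspect ratios can be compared directly (a small sketch using the values printed above):

```python
# If the two ratios disagree noticeably, one of the width/height pairs is
# likely rotated or swapped relative to the other.
image_aspect = width / height                  # 4640.0 / 3472.0
sensor_aspect = sensor_width / sensor_height   # 7.5242... / 5.6302...
print(f"image aspect  {image_aspect:.3f}")
print(f"sensor aspect {sensor_aspect:.3f}")
```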
If I understood correctly, the sensor width seems correct, but could the image width and height have been swapped? I also tried manually swapping them:
```python
import numpy as np

# Swap width and height on purpose to test the portrait-orientation hypothesis
height, width = float(intrinsic_data['width']), float(intrinsic_data['height'])  # In pixels
sensor_width = float(intrinsic_data['sensorWidth'])    # In mm
sensor_height = float(intrinsic_data['sensorHeight'])  # In mm
print(f"width {width}")
print(f"height {height}")
print(f"sensor_width {sensor_width}")
print(f"sensor_height {sensor_height}")

# Compute fx and fy in pixels (focal_length in mm is read from the same intrinsics earlier in the script)
fx = (focal_length / sensor_width) * width
fy = (focal_length / sensor_height) * height

# Principal point: treated here as a pixel offset from the image center
# (no mm -> px conversion is applied)
cx = principal_point[0] + width / 2
cy = principal_point[1] + height / 2

# Radial distortion coefficients, mapped to OpenCV's (k1, k2, p1, p2, k3) layout
distortion_params = intrinsic_data['distortionParams']
k1 = float(distortion_params[0])
k2 = float(distortion_params[1])
k3 = float(distortion_params[2])
dist_coeffs = np.array([k1, k2, 0, 0, k3])  # tangential p1, p2 set to 0

# Construct intrinsic matrix (K)
K = np.array([
    [fx, 0, cx],
    [0, fy, cy],
    [0, 0, 1]
])
```
However, the projected points are still not aligned correctly. Could there be another aspect of the extrinsic or intrinsic parameters that might be causing this issue?
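For reference, a minimal sketch of how the projection step could look with OpenCV (my own sketch, not the exact script above). It assumes the Meshroom pose provides a rotation R and camera center c such that x_cam = R @ (X_world - c); if the stored rotation is actually camera-to-world, R would need to be transposed:

```python
import cv2
import numpy as np

def project_mesh_points(points_world, R, c, K, dist_coeffs):
    """Project Nx3 world-space mesh vertices into the image plane.

    Assumes x_cam = R @ (X_world - c); transpose R if the stored rotation
    is camera-to-world instead of world-to-camera.
    """
    rvec, _ = cv2.Rodrigues(R)  # 3x3 rotation -> Rodrigues vector
    tvec = -R @ np.asarray(c, dtype=np.float64).reshape(3, 1)  # so that x_cam = R @ X + tvec
    image_points, _ = cv2.projectPoints(
        np.asarray(points_world, dtype=np.float64), rvec, tvec, K, dist_coeffs)
    return image_points.reshape(-1, 2)
```

If the projected points come out mirrored or land far off the image, the first things I would double-check are that rotation/center convention and whether the mesh is still expressed in the same world frame as the camera poses.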