You can apply padding to the image, update the center of distortion, then unwarp it as follows:

import numpy as np
import discorpy.post.postprocessing as post

pad = 200
img_pad = np.pad(img, pad, mode="constant")
img_corrected = post.unwarp_image_backward(img_pad, xcenter + pad, ycenter + pad,
                                           list_bfact, mode="constant")
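As a side note, the reason the center must be shifted by the pad width can be checked with a quick, self-contained numpy sketch (the array size and marker pixel below are made up for illustration):

```python
import numpy as np

# A small "image" with a single bright pixel standing in for the
# distortion center at (ycenter, xcenter).
img = np.zeros((8, 10))
ycenter, xcenter = 3, 4
img[ycenter, xcenter] = 1.0

pad = 2
img_pad = np.pad(img, pad, mode="constant")

# After symmetric padding every pixel moves down/right by `pad`, so the
# center in the padded image is (ycenter + pad, xcenter + pad).
yc_new, xc_new = np.unravel_index(np.argmax(img_pad), img_pad.shape)
print(yc_new, xc_new)  # prints: 5 6
```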
An example using the image in demo_8:
pad = width // 2
img_corrected = []
for i in range(img.shape[-1]):
    mat_pad = np.pad(img[:, :, i], pad, mode="constant")
    img_corrected.append(
        post.unwarp_image_backward(mat_pad, xcenter + pad, ycenter + pad,
                                   list_bfact, mode="constant"))
img_corrected = np.moveaxis(np.asarray(img_corrected), 0, 2)
io.save_image(output_base + "/F_R_hazcam_unwarped_bigger_FOV.png", img_corrected)
Alternatively, to keep the corrected image the same size as the input:
import scipy.ndimage as ndi

zoom = 2.0
list_bfact1 = zoom * list_bfact
xcenter1 = xcenter * zoom
ycenter1 = ycenter * zoom
img_corrected = []
for i in range(img.shape[-1]):
    img_tmp = ndi.zoom(img[:, :, i], zoom)
    img_tmp = post.unwarp_image_backward(img_tmp, xcenter1, ycenter1, list_bfact1)
    img_corrected.append(ndi.zoom(img_tmp, 1 / zoom))
img_corrected = np.moveaxis(np.asarray(img_corrected), 0, 2)
io.save_image(output_base + "/F_R_hazcam_unwarped_bigger_FOV_same_size.png", img_corrected)
Thanks a lot for the quick help. I have been applying padding to the initial input image and updating the center based on the new image size. Is there a recommended rule of thumb for the padding, since different images will need different padding sizes?
To get the right padding for each side, we can map the top-left and bottom-right points of the distorted image to their corresponding points in the undistorted space. Implementation is as follows:
# Suppose we already have the coefficients of the backward model
# (xcenter, ycenter, list_bfact). We need to find the forward
# transformation from the given backward model.
(height, width) = img.shape
ref_points = [[i - ycenter, j - xcenter] for i in np.linspace(0, height, 40)
              for j in np.linspace(0, width, 40)]
list_ffact = proc.transform_coef_backward_and_forward(list_bfact, ref_points=ref_points)
# Define a function that maps a point in the distorted space to the
# corresponding point in the undistorted space.
def find_point_to_point(points, xcenter, ycenter, list_fact):
    """
    points : (row_index, column_index) of a point in a 2D array.
    """
    xi, yi = points[1] - xcenter, points[0] - ycenter
    ri = np.sqrt(xi * xi + yi * yi)
    factor = np.float64(np.sum(list_fact * np.power(ri, np.arange(len(list_fact)))))
    xo = xcenter + factor * xi
    yo = ycenter + factor * yi
    return xo, yo
# Find top-left point in the undistorted space given top-left point in the distorted space.
xu_top_left, yu_top_left = find_point_to_point((0, 0), xcenter, ycenter, list_ffact)
# Find bottom-right point in the undistorted space given bottom-right point in the distorted space.
xu_bot_right, yu_bot_right = find_point_to_point((height - 1, width - 1), xcenter, ycenter, list_ffact)
# Calculate the padding width for each side (clipped at zero in case
# a side needs no padding).
pad_top = int(np.abs(yu_top_left))
pad_bot = max(0, int(yu_bot_right - height))
pad_left = int(np.abs(xu_top_left))
pad_right = max(0, int(xu_bot_right - width))
print(pad_top, pad_bot, pad_left, pad_right)
img_pad = np.pad(img, ((pad_top, pad_bot), (pad_left, pad_right)), mode="constant")
img_corrected = post.unwarp_image_backward(img_pad, xcenter + pad_left, ycenter + pad_top, list_bfact, mode="constant")
io.save_image(output_base + "/F_R_hazcam_unwarped_bigger_FOV.jpg", img_corrected)
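As a sanity check on the radial mapping in find_point_to_point above, with a single coefficient of 1.0 the polynomial factor is always 1, so every point should map to itself. A self-contained sketch (the test point and centers are made up):

```python
import numpy as np

def find_point_to_point(points, xcenter, ycenter, list_fact):
    # Map (row, column) in the distorted space to (x, y) in the
    # undistorted space using a radial polynomial model.
    xi, yi = points[1] - xcenter, points[0] - ycenter
    ri = np.sqrt(xi * xi + yi * yi)
    factor = np.float64(np.sum(list_fact * np.power(ri, np.arange(len(list_fact)))))
    return xcenter + factor * xi, ycenter + factor * yi

# Identity model: factor = 1.0 for any radius, so the point is unchanged.
xo, yo = find_point_to_point((10, 20), xcenter=50.0, ycenter=40.0,
                             list_fact=np.asarray([1.0]))
print(xo, yo)  # prints: 20.0 10.0
```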
I've added the answers to documentation: https://discorpy.readthedocs.io/en/latest/usage/tips.html
This does not work exactly, as the unwarping parameters are error-prone closer to the edges of the image. I fixed the issue by adding extra padding and then cropping as post-processing.
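For reference, that pad-extra-then-crop workaround can be sketched as below. The margin value and the identity `unwarp` placeholder are assumptions for illustration; in practice you would call post.unwarp_image_backward and choose the margin empirically:

```python
import numpy as np

def unwarp(mat, xcenter, ycenter, list_bfact):
    # Placeholder for post.unwarp_image_backward; identity here so the
    # sketch is self-contained and runnable.
    return mat

img = np.zeros((100, 120))
xcenter, ycenter, list_bfact = 60.0, 50.0, [1.0]

pad, margin = 40, 20          # extra margin beyond the calculated pad
total = pad + margin
img_pad = np.pad(img, total, mode="constant")
img_corr = unwarp(img_pad, xcenter + total, ycenter + total, list_bfact)

# Crop the unreliable outer margin back off after unwarping, keeping
# the extra field of view gained from `pad`.
img_crop = img_corr[margin:-margin, margin:-margin]
print(img_crop.shape)  # prints: (180, 200)
```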
Thanks a lot.
Yep, strong distortion at the edges causes numerical errors.
The information at the edges of the source image is lost after the final unwarping. What is the best way to keep it without losing the undistortion results? TIA.