ngolosov opened this issue 1 year ago
Sorry for the late reply.
That is a somewhat difficult case, mainly because your images contain quite large shifts (up to ~25 pixels) and the dimensions of the input images are rather small. There is also a slight tilt between the images.
However, it seems like the best solution is to first run a global co-registration that corrects for the coarse shift at the center of the image overlap:
from arosics import COREG
p_base = '/path/base.img'
p_warp = '/path/warp.img'
CR = COREG(p_base, p_warp,
           r_b4match=111, s_b4match=111,
           max_shift=30,
           path_out='/path/output/warp_globally_coregistered.bsq')
CR.calculate_spatial_shifts()
CR.correct_shifts()
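For intuition, the coarse shift that the global step corrects can be illustrated with plain phase correlation, the frequency-domain matching principle AROSICS builds on. The following is a self-contained numpy sketch; the function name and synthetic data are my own, not part of the AROSICS API:

```python
import numpy as np

def estimate_global_shift(ref, tgt):
    """Estimate the (row, col) shift between two equally sized single-band
    arrays via phase correlation. Returns integer pixel offsets d such that
    ref equals tgt circularly shifted by d."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # offsets beyond half the image size wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# synthetic example: shift a random image by (5, -3) and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(estimate_global_shift(shifted, img))  # -> (5, -3)
```

AROSICS does this matching inside a window around the center of the image overlap, at sub-pixel precision and with additional validity checks, but the recovered shift plays the same role as the one above.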
Then run the local co-registration to correct for the remaining local shifts:
from arosics import COREG_LOCAL
p_base = '/path/base.img'
p_warp = '/path/warp_globally_coregistered.bsq'
CRL = COREG_LOCAL(p_base, p_warp,
                  grid_res=8,
                  r_b4match=111, s_b4match=111,
                  max_shift=15, min_reliability=75,
                  path_out='/path/output/warp_locally_coregistered.bsq')
CRL.calculate_spatial_shifts()
CRL.correct_shifts()
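The idea behind the local step can be sketched in the same spirit: estimate one shift per window of a regular grid, analogous to the tie-point grid that COREG_LOCAL computes (grid_res controls its spacing). A minimal numpy illustration, not AROSICS code:

```python
import numpy as np

def phase_corr_shift(ref, tgt):
    # normalized cross-power spectrum; the peak location gives the pixel
    # shift d such that ref equals tgt circularly shifted by d
    cps = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    cps /= np.abs(cps) + 1e-12
    corr = np.fft.ifft2(cps).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def local_shift_grid(ref, tgt, tile=32):
    """Estimate one shift per tile of a regular grid, one tie point per
    tile center, analogous to COREG_LOCAL's tie-point grid."""
    rows, cols = ref.shape
    grid = {}
    for r0 in range(0, rows - tile + 1, tile):
        for c0 in range(0, cols - tile + 1, tile):
            win = (slice(r0, r0 + tile), slice(c0, c0 + tile))
            grid[(r0, c0)] = phase_corr_shift(ref[win], tgt[win])
    return grid

rng = np.random.default_rng(1)
img = rng.random((64, 64))
tgt = np.roll(img, shift=(2, 1), axis=(0, 1))
print(local_shift_grid(img, tgt))
```

In the uniform-shift example above all tiles agree; in your data the per-tile shifts differ, and that spatial variation is exactly what the local correction warps away.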
Here it might be necessary to experiment a bit with the thresholds used for filtering false positives, especially the min_reliability value. Visualizing the detected shifts as vectors might also be helpful:
CRL.view_CoRegPoints(shapes2plot='vectors', vector_scale=10, hide_filtered=True)
In general, it does not make sense to compute tie points for each pixel (grid_res=1); it is more important to have a few reliable tie points and to get rid of all false positives, which would otherwise introduce distortions into the warped output.
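The filtering itself can be pictured as a two-stage test on a tie-point table: drop points below a reliability threshold, then reject shifts far from the consensus of the remaining points. A hypothetical numpy sketch; the column layout and the 3 px tolerance are assumptions, loosely mimicking the role of COREG_LOCAL's tie-point table:

```python
import numpy as np

# hypothetical tie points: one (x_shift, y_shift) pair per point plus a
# reliability score in percent
shifts = np.array([[0.8, -1.1], [1.0, -0.9], [0.9, -1.0],
                   [7.5,  4.2],   # false positive: far from the others
                   [1.1, -1.2]])
reliability = np.array([88.0, 92.0, 75.0, 40.0, 81.0])

min_reliability = 60            # same role as AROSICS' min_reliability
keep = reliability >= min_reliability

# additionally reject shifts far from the median of the surviving points
med = np.median(shifts[keep], axis=0)
dist = np.linalg.norm(shifts - med, axis=1)
keep &= dist < 3.0              # 3 px tolerance, an assumed threshold

print(shifts[keep])             # the false positive is gone
```

Raising min_reliability tightens the first stage; AROSICS additionally applies SSIM- and RANSAC-based checks, but the net effect is the same: only consistent tie points feed the warping.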
If you are able to pass larger input images to AROSICS, the algorithm will be able to compute more reliable shifts, especially around the former image edges (there, the algorithm automatically reduces the matching window size, which leads to less reliable tie points).
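Why edge tie points are weaker can be seen from a small calculation: a matching window centered near an image border must shrink to stay inside the image, so it carries less image content to match on. A sketch, where effective_window is my own helper and not an AROSICS function:

```python
def effective_window(center, win_size, img_size):
    """Largest odd-sized window around `center` that still fits inside the
    image. Near the borders the usable window shrinks, which is why tie
    points there are less reliable."""
    row, col = center
    half = win_size // 2
    rows, cols = img_size
    h = min(half, row, rows - 1 - row)   # usable half-height
    w = min(half, col, cols - 1 - col)   # usable half-width
    return (2 * h + 1, 2 * w + 1)

# a nominal 64 px window, 5 px from the top border of a 512x512 image
print(effective_window((5, 100), 64, (512, 512)))    # -> (11, 65)
# the same window in the image interior keeps its full size
print(effective_window((256, 256), 64, (512, 512)))  # -> (65, 65)
```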
Description
I'm attempting to co-register longwave infrared hyperspectral images. These images were captured by a push-broom sensor on an aircraft flying a circular path at various sensor inclination angles. The images overlap significantly (around 80-90%). I'm using band 111 of both images because it has relatively low noise and striping.
However, the algorithm seems unable to identify enough control points across the entire image and instead uses points only in certain areas. As a result, the output images are noticeably distorted.
What I Did
I’m running the image registration in the following way:
I'm getting the following output:
Tie points with grid_res = 8
Tie points with grid_res = 1
Here's the reference image:
Here's the target image:
Here's the referenced output image:
Here are the images to reproduce the issue:
Reference image
Target image