HouZhe1 opened 3 months ago
If I remember correctly, I just estimated coarse overlap rates from the homography estimated with the pre-trained UDIS model.
Thank you very much for your reply. Currently, I am calculating the overlap ratio by comparing the number of matched feature points between the two images with the total number of detected feature points. However, the results I am getting seem to be a bit problematic. Do you apply the homography matrix to transform the target image into the reference image, and then calculate the overlap rate at the pixel level? Could you please upload your detailed code for calculating the overlap ratio and parallax?
And here is my code:
```python
import cv2
import numpy as np
import os
from glob import glob
from collections import defaultdict

# ORB features + brute-force Hamming matching with cross-check
orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def compute_overlap_ratio(matches, kp1, kp2, img1_shape, img2_shape):
    # Overlap ratio = matched keypoints / larger keypoint count of the two images
    return len(matches) / float(max(len(kp1), len(kp2)))

folder_path = 'XXXX'
folder_overlap_ratios = defaultdict(list)

# Sort so that '-1'/'-2' folder pairs are adjacent
sub_dirs = sorted(next(os.walk(folder_path))[1])

for i in range(0, len(sub_dirs), 2):
    dir1_name = sub_dirs[i]
    dir2_name = sub_dirs[i + 1] if i + 1 < len(sub_dirs) else None
    if dir2_name is not None and dir1_name.endswith('-1') and dir2_name.endswith('-2'):
        dir1_path = os.path.join(folder_path, dir1_name)
        dir2_path = os.path.join(folder_path, dir2_name)
        # Sort the file lists so corresponding images are paired up by zip
        img_files_dir1 = sorted(glob(os.path.join(dir1_path, '*.jpg')))
        img_files_dir2 = sorted(glob(os.path.join(dir2_path, '*.jpg')))
        assert len(img_files_dir1) == len(img_files_dir2), "Image count mismatch"
        total_overlap_ratio = 0
        num_pairs = 0
        for img1_file, img2_file in zip(img_files_dir1, img_files_dir2):
            base_name1 = os.path.basename(img1_file)
            base_name2 = os.path.basename(img2_file)
            # assert base_name1 == base_name2, "Image names do not match"
            img1 = cv2.imread(img1_file, 0)
            img2 = cv2.imread(img2_file, 0)
            kp1, des1 = orb.detectAndCompute(img1, None)
            kp2, des2 = orb.detectAndCompute(img2, None)
            matches = bf.match(des1, des2)
            overlap_ratio = compute_overlap_ratio(matches, kp1, kp2, img1.shape, img2.shape)
            total_overlap_ratio += overlap_ratio
            num_pairs += 1
        if num_pairs > 0:
            average_overlap_ratio = total_overlap_ratio / num_pairs
            folder_overlap_ratios[f"{dir1_name} & {dir2_name} sum {num_pairs} "].append(average_overlap_ratio)
            print(f"{dir1_name} & {dir2_name} have {num_pairs} image pairs: {average_overlap_ratio}")
```
As you described, I applied the homography matrix to warp the target image into the reference image, and then calculated the overlap rate at the pixel level at the reference image resolution. Just make sure the image is correctly warped by the estimated homography.
I cannot find the corresponding code. It has been a long time since UDIS was published.
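In case it helps, here is a minimal sketch of the pixel-level overlap computation described above. It is not the original UDIS code: it assumes the global homography comes from SIFT matches with a RANSAC fit (the pre-trained UDIS model could supply H instead), and it measures overlap as the fraction of reference pixels covered by the warped target mask.

```python
# Minimal sketch, not the original UDIS code.
# Overlap rate = fraction of reference-image pixels covered by the target
# image after warping it into the reference frame with a global homography.
import cv2
import numpy as np

def pixel_overlap_ratio(ref_gray, tgt_gray, ratio=0.75):
    # Estimate the global homography from SIFT correspondences + RANSAC.
    # (Any estimator, e.g. the pre-trained UDIS model, could supply H instead.)
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_tgt, des_tgt = sift.detectAndCompute(tgt_gray, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_tgt, des_ref, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]  # Lowe's ratio test
    pts_tgt = np.float32([kp_tgt[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(pts_tgt, pts_ref, cv2.RANSAC, 5.0)

    # Warp an all-ones mask of the target image into the reference resolution
    # and count the covered pixels.
    h, w = ref_gray.shape[:2]
    tgt_mask = np.full(tgt_gray.shape[:2], 255, dtype=np.uint8)
    warped_mask = cv2.warpPerspective(tgt_mask, H, (w, h))
    return np.count_nonzero(warped_mask) / float(h * w)
```

Computed this way, the ratio does not depend on how many feature points happen to be detected, which may be why the feature-count ratio in the script above looks problematic.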
In your paper "Unsupervised Deep Image Stitching: Reconstructing Stitched Features to Images", Section V. EXPERIMENTS, A. Dataset and Implement Details, it says:
To quantitatively describe the distribution of different overlap rates and varying degrees of parallax in our dataset, we divide the overlap rates into 3 levels and define a high overlap rate as greater than 90%, a middle overlap rate as ranging from 60%-90%, and a low overlap rate as lower than 60%. This classification criterion is formulated according to [37], [38], [42], where [38] is the representative work in high overlap rate. The average overlap rate of the proposed dataset is greater than 90%. And [37], [42] are the representative works in middle overlap rate, for the average overlap rate of the Warped COCO (disturbance < 32) dataset [42] is about 75%. Besides, to describe parallax accurately, we align the target image with the reference image using a global homography and then calculate the maximum misalignment error of corresponding feature points in the coarse aligned images to show the magnitude of parallax. In this way, we divide the parallax into 2 levels: small parallax with error smaller than 30 pixels and large parallax with error greater than 30 pixels. Fig. 9 (c) demonstrates the difference of different parallax intuitively.
The amount of overlap in the synthetic images is controlled by the point perturbation parameter ρ. I want to know how you obtained the overlap rate for the real UDIS dataset, since there is no point perturbation parameter ρ there.
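For the parallax measure described in the paper excerpt above, here is a rough sketch, not the authors' actual script: it assumes SIFT correspondences and a RANSAC-fitted global homography, coarsely aligns the target image to the reference, and takes the maximum residual misalignment of the matched feature points in pixels.

```python
# Rough sketch, not the authors' script: parallax measured as the maximum
# misalignment error of corresponding feature points after coarse alignment
# with a global homography, as described in the paper excerpt above.
import cv2
import numpy as np

def max_misalignment_error(ref_gray, tgt_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(ref_gray, None)
    kp_tgt, des_tgt = sift.detectAndCompute(tgt_gray, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_tgt, des_ref, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_tgt = np.float32([kp_tgt[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Coarsely align with a global homography, then measure the residual
    # distance between each projected target point and its matched reference point.
    H, _ = cv2.findHomography(pts_tgt, pts_ref, cv2.RANSAC, 5.0)
    projected = cv2.perspectiveTransform(pts_tgt, H)
    errors = np.linalg.norm(projected - pts_ref, axis=2).ravel()
    # Spurious matches can inflate the maximum; a high percentile (e.g. the
    # 95th) may be a more stable surrogate than the raw max.
    return float(errors.max()) if errors.size else 0.0
```

Under the criterion quoted above, an error below 30 pixels would then count as small parallax and an error above 30 pixels as large parallax.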