https://github.com/ducha-aiki/affnet/blob/master/examples/hesaffnet/WBS%20demo.ipynb does the same, except for the last part of your code, which you should add yourself:
# Find homography
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
# Use homography
height, width, channels = im2.shape
im1Reg = cv2.warpPerspective(im1, h, (width, height))
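If you also want to keep only the geometrically consistent matches, the mask returned by cv2.findHomography flags the RANSAC inliers; a minimal sketch, assuming matches is the list of cv2.DMatch objects used to build points1 and points2:
# Keep only the RANSAC inliers (mask is an N x 1 array of 0/1 flags)
inlier_matches = [m for m, keep in zip(matches, mask.ravel()) if keep]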
Thanks for showing me the direction. But I need some help converting LAFs into cv2.KeyPoint vectors or points to find the homography to map the image. In LAF.py I found:
def LAF2pts(LAF, n_pts = 50):
    # Sample the centre plus n_pts points on the unit circle and map them
    # through the 2x3 affine frame to get the ellipse outline in image coordinates
    a = np.linspace(0, 2*np.pi, n_pts)
    x = [0]
    x.extend(list(np.sin(a)))
    x = np.array(x).reshape(1, -1)
    y = [0]
    y.extend(list(np.cos(a)))
    y = np.array(y).reshape(1, -1)
    # Promote the LAF to a 3x3 homogeneous transform and apply it to the points
    HLAF = np.concatenate([LAF, np.array([0, 0, 1]).reshape(1, 3)])
    H_pts = np.concatenate([x, y, np.ones(x.shape)])
    H_pts_out = np.transpose(np.matmul(HLAF, H_pts))
    H_pts_out[:, 0] = H_pts_out[:, 0] / H_pts_out[:, 2]
    H_pts_out[:, 1] = H_pts_out[:, 1] / H_pts_out[:, 2]
    return H_pts_out[:, 0:2]
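LAF2pts only samples the ellipse outline for drawing; for cv2.findHomography you just need the keypoint centres, which are the last column of each 2x3 LAF (the same column the conversion below uses). A minimal sketch, assuming LAFs1 and LAFs2 are the N x 2 x 3 tensors returned by the detector:
# The translation column of each local affine frame is the keypoint centre (x, y)
points1 = LAFs1.numpy()[:, :, 2].astype(np.float32)
points2 = LAFs2.numpy()[:, :, 2].astype(np.float32)
These can then be indexed with the match indices and passed straight to cv2.findHomography.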
I used the following method to get cv2.KeyPoints from the LAFs:
keypoints1 = list(map(lambda x:cv2.KeyPoint(x=x[0],y=x[1], _size = 2),LAFs1.numpy()[:,:,2]))
keypoints2 = list(map(lambda x:cv2.KeyPoint(x=x[0],y=x[1], _size = 2),LAFs2.numpy()[:,:,2]))
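As a side note, the _size keyword only exists in older OpenCV Python bindings; newer builds expect size instead, so passing the size positionally is the portable variant (a sketch of the same conversion, not specific to this repo):
# Positional arguments work across OpenCV versions: KeyPoint(x, y, size)
keypoints1 = [cv2.KeyPoint(float(x), float(y), 2.0) for x, y in LAFs1.numpy()[:, :, 2]]
keypoints2 = [cv2.KeyPoint(float(x), float(y), 2.0) for x, y in LAFs2.numpy()[:, :, 2]]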
# Draw top matches
imMatches = cv2.drawMatches(im1,keypoints1, im2,keypoints2 , matches, None)
cv2.imwrite("matches.jpg", imMatches)
#cv2_imshow(imMatches)
# Extract location of good matches
points1 = np.zeros((len(matches), 2), dtype=np.float32)
points2 = np.zeros((len(matches), 2), dtype=np.float32)
for i, match in enumerate(matches):
points1[i, :] = keypoints1[match.queryIdx].pt
points2[i, :] = keypoints2[match.trainIdx].pt
# Find homography
h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)
print(im2.shape)
# Use homography
height, width, channels = im2.shape
im1Reg = cv2.warpPerspective(im2, h, (width, height))
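Two quick sanity checks on the snippet above (an observation, not a confirmed fix): since h is estimated from points1 to points2, it is im1 that should be warped into im2's frame, as in the earlier snippet, and the RANSAC inlier count tells you how many matches actually support the homography:
# How many of the matches are RANSAC inliers?
print("RANSAC inliers:", int(mask.ravel().sum()), "of", len(matches))
# h maps points1 -> points2, so warp im1 into im2's frame
im1Reg = cv2.warpPerspective(im1, h, (width, height))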
Although the matches look good, it doesn't warp the image as expected. I am getting better warping using ORB/SIFT from OpenCV itself.
Is there something wrong in this implementation?
Note that the viewpoint difference is less than 10 degrees.
Well, for a viewpoint difference of less than 10 degrees you don't need HardNet or AffNet; it would be overkill. How many matches and detections do you get with OpenCV SIFT and with this repo? It might be an issue of the detector threshold. Could you also post the warped images from all methods?
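For the OpenCV side of that comparison, a minimal sketch of counting detections and ratio-test matches with SIFT (assuming an OpenCV build where cv2.SIFT_create is available):
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.cvtColor(im1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = sift.detectAndCompute(cv2.cvtColor(im2, cv2.COLOR_BGR2GRAY), None)
print("detections:", len(kp1), len(kp2))
# Lowe's ratio test on brute-force L2 matches
good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print("ratio-test matches:", len(good))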
After playing around with the thresholds and the number of matches, I am getting the following results. Input images: 1st image - 2nd image
AffNet matches
Result after warping the output using the AffNet model
After warping the output using the SIFT model
After warping the output using the ORB model
I don't see any huge difference, actually. What is your final goal?
How can I use AffNet and HardNet++ to align two images,
similar to this?
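Putting the pieces from this thread together, the alignment step itself is detector-agnostic; the sketch below uses a hypothetical detect_and_describe placeholder for whatever detector/descriptor pair is chosen (e.g. the HessAff + AffNet + HardNet++ pipeline from the WBS demo notebook), since the exact calls depend on the repo version:
def align(im1, im2, detect_and_describe):
    # detect_and_describe is a hypothetical stand-in: it should return
    # N x 2 keypoint centres and the corresponding descriptors for an image
    pts1, des1 = detect_and_describe(im1)
    pts2, des2 = detect_and_describe(im2)
    # Cross-checked brute-force matching on L2-normalised descriptors
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([pts1[m.queryIdx] for m in matches])
    dst = np.float32([pts2[m.trainIdx] for m in matches])
    # Robust homography from im1 to im2, then warp im1 into im2's frame
    h, mask = cv2.findHomography(src, dst, cv2.RANSAC)
    height, width = im2.shape[:2]
    return cv2.warpPerspective(im1, h, (width, height))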