I'm not familiar with the Microsoft publication, but from what I understand, you would like to get the width and height of the detected paper in the frame?
If so, it's pretty simple. You would first find the paper contour, then extract its corner points.
Here's an example:
```js
// Read the image into an OpenCV.js matrix and detect the paper contour.
const rawImage = document.querySelector("img");
const parsedImage = cv.imread(rawImage);

const scanner = new jscanify();
const paperContour = scanner.findPaperContour(parsedImage);

// Extract the four corner points of the detected paper.
const {
  topLeftCorner,
  topRightCorner,
  bottomLeftCorner,
  bottomRightCorner,
} = scanner.getCornerPoints(paperContour, parsedImage);
```
Each corner in the returned object has an `x` and `y` value. You can calculate the dimensions from there.
Yea, the code above will probably work if you scan the document from a top-down view. But it will pretty much break and distort the result if you take the photo from an angle. For example, if you photograph an A4 document from a slight angle, the measured height will come out shorter than the actual A4 paper, messing up the aspect ratio.

So to find the real aspect ratio I used the Python code below and made it accessible via an API. I don't know if anyone here would like to take the time to translate the Python code into JavaScript. (I tried but failed because I don't know the math behind it.)
```python
import math

import cv2
import numpy as np
import scipy.spatial.distance

img = cv2.imread('img.png')
(rows, cols, _) = img.shape

# principal point (image center)
u0 = cols / 2.0
v0 = rows / 2.0

# detected corners: top-left, top-right, bottom-left, bottom-right
p = [(67, 74), (270, 64), (10, 344), (343, 331)]

# projected edge lengths of the quadrilateral
w1 = scipy.spatial.distance.euclidean(p[0], p[1])
w2 = scipy.spatial.distance.euclidean(p[2], p[3])
h1 = scipy.spatial.distance.euclidean(p[0], p[2])
h2 = scipy.spatial.distance.euclidean(p[1], p[3])

w = max(w1, w2)
h = max(h1, h2)

# visible (perspective-distorted) aspect ratio
ar_vis = float(w) / float(h)

# corners in homogeneous coordinates
m1 = np.array((p[0][0], p[0][1], 1)).astype('float32')
m2 = np.array((p[1][0], p[1][1], 1)).astype('float32')
m3 = np.array((p[2][0], p[2][1], 1)).astype('float32')
m4 = np.array((p[3][0], p[3][1], 1)).astype('float32')

k2 = np.dot(np.cross(m1, m4), m3) / np.dot(np.cross(m2, m4), m3)
k3 = np.dot(np.cross(m1, m4), m2) / np.dot(np.cross(m3, m4), m2)

n2 = k2 * m2 - m1
n3 = k3 * m3 - m1

n21, n22, n23 = n2
n31, n32, n33 = n3

# estimate the focal length
f = math.sqrt(np.abs((1.0 / (n23 * n33)) *
    ((n21 * n31 - (n21 * n33 + n23 * n31) * u0 + n23 * n33 * u0 * u0) +
     (n22 * n32 - (n22 * n33 + n23 * n32) * v0 + n23 * n33 * v0 * v0))))

# camera intrinsic matrix
A = np.array([[f, 0, u0], [0, f, v0], [0, 0, 1]]).astype('float32')

At = np.transpose(A)
Ati = np.linalg.inv(At)
Ai = np.linalg.inv(A)

# real (undistorted) aspect ratio
ar_real = math.sqrt(np.dot(np.dot(np.dot(n2, Ati), Ai), n2) /
                    np.dot(np.dot(np.dot(n3, Ati), Ai), n3))
```
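For anyone who wants to try, here's a rough line-for-line JavaScript sketch of the same math. It's untested against the library; `cross3`, `dot3`, and `realAspectRatio` are hand-rolled helper names (not from jscanify or OpenCV.js), and the corner object is assumed to be the output of jscanify's `getCornerPoints()`:

```js
// 3-vector cross and dot products (hand-rolled; no library needed).
function cross3(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

function dot3(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// corners: object returned by getCornerPoints(); imageWidth/imageHeight in pixels.
function realAspectRatio(corners, imageWidth, imageHeight) {
  // Principal point (image center).
  const u0 = imageWidth / 2;
  const v0 = imageHeight / 2;

  // Corners in homogeneous coordinates, same order as the Python version:
  // top-left, top-right, bottom-left, bottom-right.
  const m1 = [corners.topLeftCorner.x, corners.topLeftCorner.y, 1];
  const m2 = [corners.topRightCorner.x, corners.topRightCorner.y, 1];
  const m3 = [corners.bottomLeftCorner.x, corners.bottomLeftCorner.y, 1];
  const m4 = [corners.bottomRightCorner.x, corners.bottomRightCorner.y, 1];

  const k2 = dot3(cross3(m1, m4), m3) / dot3(cross3(m2, m4), m3);
  const k3 = dot3(cross3(m1, m4), m2) / dot3(cross3(m3, m4), m2);

  const n2 = [k2 * m2[0] - m1[0], k2 * m2[1] - m1[1], k2 * m2[2] - m1[2]];
  const n3 = [k3 * m3[0] - m1[0], k3 * m3[1] - m1[1], k3 * m3[2] - m1[2]];

  const [n21, n22, n23] = n2;
  const [n31, n32, n33] = n3;

  // Focal length estimate. Degenerate for a perfectly top-down shot,
  // where n23 and n33 are ~0 (the visible ratio is already correct then).
  const f = Math.sqrt(Math.abs(
    (1 / (n23 * n33)) *
    ((n21 * n31 - (n21 * n33 + n23 * n31) * u0 + n23 * n33 * u0 * u0) +
      (n22 * n32 - (n22 * n33 + n23 * n32) * v0 + n23 * n33 * v0 * v0))
  ));

  // n^T A^-T A^-1 n equals |A^-1 n|^2, and with A = [[f,0,u0],[0,f,v0],[0,0,1]]
  // A^-1 [x,y,z] = [(x - u0*z)/f, (y - v0*z)/f, z], so no matrix inverse is needed.
  const applyAinv = (n) => [(n[0] - u0 * n[2]) / f, (n[1] - v0 * n[2]) / f, n[2]];
  const a2 = applyAinv(n2);
  const a3 = applyAinv(n3);

  // Real (undistorted) width/height ratio of the paper.
  return Math.sqrt(dot3(a2, a2) / dot3(a3, a3));
}
```

Note the degenerate case: for a pure top-down shot the opposite edges are parallel in the image, `n23 * n33` goes to zero, and the focal-length estimate blows up. You'd want to detect that and fall back to the visible aspect ratio, which is already correct there.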
I see what you mean. This is interesting; I'll look into it.
I think this is the biggest problem right now. If you use a fixed value, it will only work for that particular aspect ratio. Any plans to implement something similar to the methods outlined in Whiteboard Scanning and Image Enhancement by Microsoft Research?
Appreciate it.