Open mmortazavi opened 5 years ago
Hi, I am also using this library for decoding Data Matrix codes, and here is what I have found about these arguments:

- `timeout` is an int value in milliseconds, which really helps with quick decoding.
- `gap_size` is the number of pixels between two Data Matrix codes, used when you have more than one code in sequence to decode with an equal gap.
- `threshold` lets you give a threshold value between 0-100 directly to this function, without using OpenCV functions.
- `max_count` is the number of Data Matrix codes to decode in one image.
- `shape` is the Data Matrix size index, i.e. for 10x10 it is 0, for 12x12 it is 1, and so on.

By using all of these together we can have quick and effective decoding of Data Matrix codes.
Hi there.
Thanks for commenting on the parameters, to clarify:
What is your recommended value for timeout?
Would you elaborate on gap_size? Also, what do you mean by 'with threshold you can directly give a threshold value between 0-100 to this function without using OpenCV functions'? At present I read images from camera frames, do some processing with OpenCV, and then pass the in-memory image array to libdmtx to read the Data Matrix. Would this parameter be relevant for my process?
Shape is also not clear to me. Would it speed up the reading? I do not follow the 'for 10x10 it is 0, for 12x12 it is 1' convention. Can you elaborate on this too?
I would love to find the optimum parameter values to maximize both reading capability and speed. So far reading is not bad after playing with the shrink parameter as mentioned, but it is slow: on average 1-1.5 seconds to read a Data Matrix!
I am using the Python wrapper of libdmtx: pylibdmtx. I have been struggling to read some of my data matrices which looked pretty decent in terms of quality. I did quite a few enhancements like cv2.fastNlMeansDenoisingColored, cv2.threshold and so on. With these improvements I have had some success, but I am still missing quite a few readings. Funnily, I could read some of my blurred, partly damaged data matrices, but was missing good ones. It was rather random behavior, to be honest.
Until I learned that the decode function takes quite a few parameters.
BUT there is no documentation of what these arguments are. Some can surely be guessed, but for others I could not guess what values to put in. Better documentation would be really appreciated. The reason I emphasize this is that, in my case, I came across a post on Stack Overflow where shrink was one parameter recommended to play with, and I started seeing magic. Out of the blue, I basically screened some values:
shrink = np.arange(1,10)
and I could read the data matrices that I was struggling with. The problem is that there is no fixed shrink value: sometimes a higher value works, sometimes a lower one, so I have to scan this whole range for each reading! This brings me to the significance of knowing more about the arguments of this function!
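The scan-over-shrink-values workaround described here can be sketched as a small helper. `decode_fn` is a hypothetical injection point standing in for pylibdmtx's `decode`, so the loop can be shown (and tested) without the library installed:

```python
# Sketch: try each shrink value in turn until one decodes successfully.
# `decode_fn` stands in for pylibdmtx's decode (hypothetical injection
# point; in real use you would pass decode itself).
def decode_with_shrink_scan(image, decode_fn, shrink_values=range(1, 10)):
    """Return (results, shrink) for the first shrink value that yields
    a decode, or ([], None) if every value fails."""
    for shrink in shrink_values:
        results = decode_fn(image, shrink=shrink)
        if results:
            return results, shrink
    return [], None

# Demo with a fake decoder that only succeeds at shrink=3.
fake_decode = lambda image, shrink: ['decoded'] if shrink == 3 else []
print(decode_with_shrink_scan(None, fake_decode))  # (['decoded'], 3)
```

Scanning low shrink values first trades accuracy for speed: higher shrink subsamples the image more aggressively, so an early hit at a high shrink value would be faster but may miss small or low-contrast symbols.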