Closed: bkimman closed this issue 1 year ago
Q1. The input image is rotated about the point (xcen, ycen), yielding the output image.
Q2. There is always rounding, and iterating small-angle rotations will show progressive deterioration of the image. But the important point is that we loop over the destination, so that we are assured of getting a result for each dest pixel. If we looped over the source, some dest pixels would not be set, due to rounding.
Q3. We're doing the inverse transform (dest --> src), which, because this is orthogonal, is simply the transpose:

    x = x' cos(T) + y' sin(T)
    y = -x' sin(T) + y' cos(T)

We've also defined xdif as xcen - x', and similarly for ydif. These both add another minus sign to each of the primed coords. Removing the 'dif' part of the primed names, and leaving out the shift by xcen and ycen, we then get:

    x = -x' cos(T) - y' sin(T)
    y = x' sin(T) - y' cos(T)
Thanks a lot, it's clear now.
K
You're welcome.
Hello Dan
I am working on an imaging application and have a question about rotation by sampling. In Leptonica, a new image is created; then each pixel in this image is iterated over, and the corresponding source pixel is determined; if that source pixel is within bounds, its value is set in the target image.
Note: I am working with 1 bpp images only, and so am focusing on that part of the code only.
The relevant code from pixRotateBySampling:

```c
if (d == 1) {
    for (i = 0; i < h; i++) {  /* scan over pixd */
        lined = datad + i * wpld;
        ydif = ycen - i;
        for (j = 0; j < w; j++) {
            xdif = xcen - j;
            x = xcen + (l_int32)(-xdif * cosa - ydif * sina);
            if (x < 0 || x > wm1) continue;
            y = ycen + (l_int32)(-ydif * cosa + xdif * sina);
            if (y < 0 || y > hm1) continue;
            if (incolor == L_BRING_IN_WHITE) {
                if (GET_DATA_BIT(lines[y], x))
                    SET_DATA_BIT(lined, j);
            } else {
                if (!GET_DATA_BIT(lines[y], x))
                    CLEAR_DATA_BIT(lined, j);
            }
        }
    }
    LEPT_FREE(lines);
    return pixd;
}
```
Question: the center of the source image is different from that of the target image. Does this play no role in the computation of the source pixel?
Q2: Iterating over the rotated image's pixels is to ensure rounding does not impact the calculation. For 1 bpp images, if we instead looped over the source and, for each black pixel, computed its position in the target image and set it, would we still suffer from rounding problems?
Finally, I could not understand the transformation being applied. For rotation from source to destination we use:

    cos(theta), -sin(theta)
    sin(theta),  cos(theta)

I would appreciate your help in understanding the transformation that is being applied above.
Thanks
K