To my understanding, the training samples will be scaled to sizes 80x80, 40x40 and 20x20 (maybe wrong). Currently, I scale the offset rather than the image; the idea is borrowed from FaceDetect/jointCascade_py, but we can change it later.
Yes, the scale is wrong in the code; I will change it. Thanks a lot for pointing out this error.
@GreenKing I find it is wrong to scale the offset, since the offset is a relative value in the range [0, radius]. No matter how we scale the image, the offset shouldn't change. I will use three scaled images to calculate the feature.
I saw it. You mean the offset is in the range [0, radius]. If you scale the offset directly, that just means you are scaling the radius. But that would be a little different from scaling the images, wouldn't it?
No, they are different. Since the offset is relative to the size of the image, and the radius is relative too, scaling the image means we get different views of the same point, while scaling the offset means we get different points under the same view. I think they are different, so I will reimplement the way the feature value is calculated. The scaled image sizes, I think, will be 80x80, 56x56 and 40x40. Am I right?
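To illustrate the distinction, here is a minimal sketch (names like `SampleAt` are hypothetical, not from the JDA code; it assumes grayscale 80x80 patches and the 80/56/40 sizes mentioned above). Keeping the sampling point in normalized coordinates means scaling the image gives three views of the same point, while scaling the offset would move the point itself:

```cpp
#include <algorithm>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// (nx, ny) are normalized to [0, 1], so the same pair addresses the same
// physical point on any scaled copy of the face patch (assumed CV_8UC1).
int SampleAt(const cv::Mat& img, double nx, double ny) {
  int x = std::max(0, std::min(img.cols - 1, static_cast<int>(nx * img.cols)));
  int y = std::max(0, std::min(img.rows - 1, static_cast<int>(ny * img.rows)));
  return img.at<uchar>(y, x);
}

void Demo(const cv::Mat& face80) {  // 80x80 training patch
  cv::Mat face56, face40;
  cv::resize(face80, face56, cv::Size(56, 56));  // coarser views of
  cv::resize(face80, face40, cv::Size(40, 40));  // the same patch

  double nx = 0.3, ny = 0.4;  // landmark + offset, normalized
  // Same point, three different views -> three feature values.
  int f1 = SampleAt(face80, nx, ny);
  int f2 = SampleAt(face56, nx, ny);
  int f3 = SampleAt(face40, nx, ny);
  // Scaling the offset instead would change (nx, ny) itself, i.e. sample
  // a DIFFERENT point on face80 -- not the same thing.
  (void)f1; (void)f2; (void)f3;
}
```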
See JDA/src/jda/data.cpp, lines 34 to 43.
For example, if you want to pick up a pixel at half scale of the original image, you must multiply by a scale of 2 (or sqrt(2)?), not 0.5, so the code should be like this:
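(The original snippet isn't quoted in this thread. The following is only a hedged sketch of the correction being described, with hypothetical names rather than the actual data.cpp code: reading an offset "at half scale" covers a twice-larger neighborhood in the original image, so the offset is multiplied by 2, not 0.5.)

```cpp
#include <algorithm>
#include <opencv2/core/core.hpp>

// Sample the shape-indexed pixel for a feature defined at a coarser scale.
// scale is 1 for the original view, 2 for the half-scale view, etc.
// (or sqrt(2) per pyramid level, per the question above) -- never 0.5.
inline uchar PixelAtScale(const cv::Mat& img,    // original grayscale patch
                          double lx, double ly,  // landmark position
                          double dx, double dy,  // relative offset
                          double scale) {
  int x = static_cast<int>(lx + dx * scale);
  int y = static_cast<int>(ly + dy * scale);
  x = std::max(0, std::min(x, img.cols - 1));  // clamp to the border
  y = std::max(0, std::min(y, img.rows - 1));
  return img.at<uchar>(y, x);
}
```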