HuangYG123 / CurricularFace

CurricularFace (CVPR 2020)
MIT License

Cannot reproduce the results on IJBB and IJBC #24

Closed johnnysclai closed 3 years ago

johnnysclai commented 3 years ago

I used the pretrained IR101 model and the IJB evaluation code from insightface. Following are the results I got:

IJB-B TAR@FAR: 1e-6 -> 41.76% 1e-5 -> 69.81% 1e-4 -> 87.14% 1e-3 -> 93.27%

IJB-C TAR@FAR: 1e-6 -> 62.64% 1e-5 -> 75.46% 1e-4 -> 87.53% 1e-3 -> 94.01%

There is no problem if I use the pretrained models from face.evoLVe.PyTorch. Would you please share the evaluation code on IJB-B and IJB-C?
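(As an aside, TAR@FAR figures like the ones above can be recomputed from the raw genuine/impostor similarity scores. A minimal sketch; `tar_at_far` is a hypothetical helper name, not part of the insightface code:)

```python
import numpy as np

def tar_at_far(genuine, impostor, far):
    """True Accept Rate at a given False Accept Rate: pick the score
    threshold that lets the top `far` fraction of impostor pairs pass,
    then report the fraction of genuine pairs scoring above it."""
    impostor = np.sort(np.asarray(impostor))[::-1]  # descending scores
    k = max(int(far * len(impostor)), 1)            # impostors allowed through
    threshold = impostor[k - 1]
    return float(np.mean(np.asarray(genuine) > threshold))
```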

johnnysclai commented 3 years ago

I found the problem is due to the face alignment. Once I changed to the same face alignment code as insightface, I could reproduce the results.

import cv2
import numpy as np
from skimage import transform as trans

def alignment(img, landmark):
    """Align a face crop to the 112x112 insightface template using a
    similarity transform estimated from 5 facial landmarks."""
    image_size = (112, 112)
    # ArcFace reference landmarks for a 96x112 crop; shift x by 8 px
    # to center them in a 112x112 crop.
    src = np.array([
        [30.2946, 51.6963],   # left eye
        [65.5318, 51.5014],   # right eye
        [48.0252, 71.7366],   # nose tip
        [33.5493, 92.3655],   # left mouth corner
        [62.7299, 92.2041]],  # right mouth corner
        dtype=np.float32)
    src[:, 0] += 8.0
    assert landmark.shape[0] == 68 or landmark.shape[0] == 5
    assert landmark.shape[1] == 2
    if landmark.shape[0] == 68:
        # Reduce 68-point landmarks to the 5-point layout above.
        landmark5 = np.zeros((5, 2), dtype=np.float32)
        landmark5[0] = (landmark[36] + landmark[39]) / 2  # left eye center
        landmark5[1] = (landmark[42] + landmark[45]) / 2  # right eye center
        landmark5[2] = landmark[30]  # nose tip
        landmark5[3] = landmark[48]  # left mouth corner
        landmark5[4] = landmark[54]  # right mouth corner
    else:
        landmark5 = landmark
    tform = trans.SimilarityTransform()
    tform.estimate(landmark5, src)
    M = tform.params[0:2, :]  # 2x3 affine matrix for warpAffine
    img = cv2.warpAffine(img, M, (image_size[1], image_size[0]), borderValue=0.0)
    return img
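For context, `trans.SimilarityTransform().estimate` solves a least-squares similarity alignment (the Umeyama method). An equivalent pure-NumPy sketch, with the hypothetical helper name `estimate_similarity`, returning the same 2x3 matrix `cv2.warpAffine` expects:

```python
import numpy as np

def estimate_similarity(src_pts, dst_pts):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src_pts onto dst_pts, via the Umeyama method."""
    src_pts = np.asarray(src_pts, dtype=np.float64)
    dst_pts = np.asarray(dst_pts, dtype=np.float64)
    mu_s, mu_d = src_pts.mean(0), dst_pts.mean(0)
    src_c, dst_c = src_pts - mu_s, dst_pts - mu_d
    cov = dst_c.T @ src_c / len(src_pts)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                        # optimal rotation
    var_s = (src_c ** 2).sum() / len(src_pts)
    scale = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - scale * R @ mu_s
    return np.hstack([scale * R, t[:, None]])  # 2x3 matrix
```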
wangs311 commented 3 years ago

Hi, this is my first time doing an experiment on face recognition. Could you share the IJB evaluation code you used? It would be very helpful. Thanks a lot!

CloudWalking0 commented 3 years ago

> I found the problem is due to the face alignment. Once I changed to the same face alignment code as insightface, I could reproduce the results. [alignment code quoted above]

Hello, I was wondering how to test our PyTorch model on the IJB datasets. The evaluation code in insightface seems to use an MXNet model. Thank you very much!

johnnysclai commented 3 years ago

@wangs311 @CloudWalking0 I downloaded the notebook and data from here: https://github.com/deepinsight/insightface/tree/8d971de36370320c286f8bf92829f2995234b386/recognition/_evaluation_/ijb , and then replaced the MXNet model with a PyTorch model.
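(For anyone doing the same swap: the PyTorch model only needs to reproduce what the notebook's MXNet extractor returns, i.e. one L2-normalized embedding per aligned crop. A minimal batching/normalization sketch; `extract_features` and `embed_fn` are hypothetical names, with `embed_fn` standing in for your wrapped PyTorch forward pass that takes and returns NumPy arrays:)

```python
import numpy as np

def extract_features(images, embed_fn, batch_size=64):
    """Run aligned crops through `embed_fn` in batches, then
    L2-normalize each embedding as the IJB evaluation expects."""
    feats = []
    for i in range(0, len(images), batch_size):
        feats.append(embed_fn(images[i:i + batch_size]))
    feats = np.concatenate(feats, axis=0)
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.clip(norms, 1e-12, None)  # avoid divide-by-zero
```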

CloudWalking0 commented 3 years ago

> @wangs311 @CloudWalking0 I downloaded the notebook and data from here: https://github.com/deepinsight/insightface/tree/8d971de36370320c286f8bf92829f2995234b386/recognition/_evaluation_/ijb , and then replaced the MXNet model with a PyTorch model.

Thanks for your reply!