Hi, thank you for providing the PyTorch version. But I found that at the beginning of training, the accuracy on the three validation datasets was very low. (I use MobileFaceNet with ArcFace loss and train on CASIA-WebFace, and the validation accuracy on LFW in the first epoch was only about 70%.)

Then I found that the line `img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)` in `data_pipe.py` converts RGB to BGR. But PIL loads images in RGB mode, and `transforms.ToTensor()` can process a PIL image directly, so I think this conversion is unnecessary.

I then added one line of code (`carray = carray[:, ::-1, :, :].copy()`) in the `evaluate` function to change BGR back to RGB, and the validation accuracy at the beginning of training is higher.

So I think the RGB-to-BGR conversion should probably not be done.
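For reference, here is a small sketch of what that one-line fix does (the array shape and values are illustrative, not taken from the repo):

```python
import numpy as np

# Illustrative NCHW batch of BGR images; the real `carray` in
# evaluate() holds the validation images.
carray = np.random.rand(4, 3, 112, 112).astype(np.float32)

# Reverse the channel axis (axis 1) to convert BGR -> RGB.
# .copy() is needed because torch.from_numpy() rejects arrays
# with negative strides, which the ::-1 slice produces.
rgb = carray[:, ::-1, :, :].copy()

# The blue channel of the BGR array is now the last channel of the RGB array.
assert np.array_equal(carray[:, 0], rgb[:, 2])
assert np.array_equal(carray[:, 1], rgb[:, 1])
```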