Closed: small-seven closed this issue 3 years ago
I found that the images in the train, eval, and test sets are 128x128. However, the images in the real masked face dataset have diverse sizes. How can I use the real masked face dataset to test a model trained on 128x128 images?
Use MTCNN to do face detection and alignment, and then use the resize transform in torchvision.transforms (or torch.nn.functional.interpolate) to interpolate the crops to 128x128.
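A minimal sketch of the resize step, assuming the MTCNN detector (e.g. from the facenet-pytorch package, not shown here) has already produced an aligned face crop as a (C, H, W) float tensor of arbitrary size:

```python
import torch
import torch.nn.functional as F

def to_model_size(face: torch.Tensor, size: int = 128) -> torch.Tensor:
    """Resize an aligned face crop of any size to size x size.

    `face` is a (C, H, W) float tensor, e.g. a crop returned by an
    MTCNN detect-and-align step (that step is assumed, not shown).
    """
    # F.interpolate expects a batch dimension: (N, C, H, W)
    resized = F.interpolate(
        face.unsqueeze(0),
        size=(size, size),
        mode="bilinear",
        align_corners=False,
    )
    return resized.squeeze(0)

# Example: a crop of arbitrary size from the real masked face dataset
crop = torch.rand(3, 250, 197)
x = to_model_size(crop)
print(tuple(x.shape))  # (3, 128, 128)
```

The resized tensor can then be normalized the same way as the 128x128 training images before being fed to the model.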
Got it. Thank you for your reply.