VIS-VAR / LGSC-for-FAS

Learning Generalized Spoof Cues for Face Anti-spoofing

Data Preprocessing #31

Closed adamhtoolwin closed 3 years ago

adamhtoolwin commented 3 years ago

Hi.

I was wondering if you could clarify whether you applied any augmentation or preprocessing before input. None is mentioned in the paper, but I see quite a few in the implementation. I'm trying to reimplement this in PyTorch.

Thanks.

giangnd1808 commented 3 years ago

Hi AdamHtooLwin, sorry to bother you. I hope you can help me, thank you =( For the train, val, and test txt files, which images are used: the full images or cropped face images? The FaceForensics dataset only provides videos, so how do you get the images (by extracting frames?). And if you use cropped faces, how do you obtain the crops? Thank you so much.

adamhtoolwin commented 3 years ago

Sorry for the late reply @giangnd1808.

Yes, I used cropped face images, produced with the pretrained MTCNN face detector here. Converting video to frames is well documented, I think; I used OpenCV to decode the videos and extract frames. Also, FaceForensics has a different purpose from face liveness detection, so I recommend using the other benchmark datasets like SiW or OULU-NPU.

Good luck!
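For reference, the decode-and-sample step described above can be sketched with OpenCV roughly like this. This is not the repo's actual preprocessing code; `extract_frames`, `sample_indices`, and the `every_n` parameter are illustrative names, and the OpenCV calls assume `opencv-python` is installed:

```python
def sample_indices(total_frames, every_n=1):
    """Indices of the frames to keep when subsampling a video."""
    return list(range(0, total_frames, every_n))

def extract_frames(video_path, every_n=1):
    """Decode a video with OpenCV and yield every Nth frame as an RGB array."""
    import cv2  # deferred import so sample_indices works without OpenCV

    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame_bgr = cap.read()
        if not ok:  # end of stream or decode error
            break
        if idx % every_n == 0:
            # OpenCV decodes to BGR; convert to RGB for PyTorch pipelines
            yield cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        idx += 1
    cap.release()
```

The cropped faces would then be produced by running a face detector such as MTCNN on each yielded frame.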

SHarkbingowo commented 1 year ago

@adamhtoolwin @giangnd1808 @VIS-VAR hello, sorry to bother you

I have a small question about converting videos to images.

Should I keep all the frames extracted from a video, or only keep one frame out of every five? I found that the frames within the same video are very similar, so I'm worried that keeping all of them might have a bad influence on training.

Thank you, if you have some time to answer^_^
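One way to handle the redundancy described in the question above, beyond keeping a fixed one-in-N frames, is to keep a frame only when it differs noticeably from the last kept frame. This is a sketch, not an answer from the maintainers; `keep_distinct` and the threshold value are illustrative assumptions, and frames are assumed to be NumPy arrays of equal shape:

```python
import numpy as np

def keep_distinct(frames, threshold=10.0):
    """Drop near-duplicate frames: keep a frame only if its mean absolute
    pixel difference from the last kept frame exceeds the threshold."""
    kept = []
    for frame in frames:
        if not kept:
            kept.append(frame)
            continue
        diff = np.abs(frame.astype(np.float32) - kept[-1].astype(np.float32)).mean()
        if diff > threshold:
            kept.append(frame)
    return kept
```

With a threshold of 0 this keeps every frame that changes at all; raising it keeps only frames with visible motion, which also shrinks the extracted dataset.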