BruceDai003 opened this issue 4 years ago
@BruceDai003 Yes, they are cropping the face from the frame before passing it to the model, which is not required, since the model accepts frames of shape (None, None, 3), i.e., a frame of any size in RGB. What I observed is that passing only the cropped face to the model gave unsatisfactory predictions, so it's better to use the full frame from the video or photo you have. Just pass any frame to the facePAD_API() method and you get a score. A rough sketch of that call is below.
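A minimal sketch of what I mean, assuming facePAD_API takes a single RGB frame of shape (H, W, 3) and returns a score; the import path and exact signature here are assumptions, so check the repo's test script for the real ones:

```python
# Sketch only: import path and signature of facePAD_API are assumed,
# not taken from the repo.
import cv2
from facepad_test import facePAD_API  # hypothetical import path

frame_bgr = cv2.imread("sample.jpg")                     # full frame, no face crop
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)   # model expects RGB
score = facePAD_API(frame_rgb)                           # one score for the whole frame
print(score)
```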
@rathodx, yep, I tested the evaluate_image function, but the results are not promising. I have no idea why. Maybe the pre-trained model is inaccurate?
@BruceDai003 Yeah, judging by the model size, it looks like it was trained on a small dataset.
Dr. Liu, I tried to run your facepad-test.py script, but an
OSError: .\videos\vid.txt not found.
error occurred. I looked into it, and it seems that in the evaluate_video
function you first need to load bounding boxes from a txt file? I don't quite get it. Doesn't this script use some face detection model, e.g., facebox, dlib, etc., to detect faces while reading frames from the video? Or do I need to first process the video, get the bounding boxes for each frame, and save that information to a txt file?
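In case pre-computing the boxes is the intended workflow, here is a minimal sketch of how one might generate such a txt file with dlib before running evaluate_video. It is not the repo's actual pipeline: the per-line format "x1 y1 x2 y2" (one line per frame) and the video filename are assumptions.

```python
# Sketch: pre-compute one bounding box per frame with dlib and write them
# to a txt file. The "x1 y1 x2 y2" per-line format is an assumption.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

def write_bounding_boxes(video_path, txt_path):
    cap = cv2.VideoCapture(video_path)
    with open(txt_path, "w") as f:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # dlib expects RGB; OpenCV decodes frames as BGR.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            faces = detector(rgb, 1)
            if faces:
                box = faces[0]  # take the first detection
                f.write(f"{box.left()} {box.top()} {box.right()} {box.bottom()}\n")
            else:
                # No face found: write a sentinel line so frame indices stay aligned.
                f.write("0 0 0 0\n")
    cap.release()

# Hypothetical video name; the error only tells us the txt path.
write_bounding_boxes(r".\videos\vid.avi", r".\videos\vid.txt")
```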