sidsingla opened this issue 1 year ago
It's a feedforward neural net architecture. If you have a good GPU with enough memory, you can easily use a batch size of 30+ images. The forward pass over the batch itself is fast, so in theory you could reach 30+ FPS. This of course does not account for face detection; it assumes already cropped and aligned images.
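To sanity-check that kind of number on your own hardware, a minimal throughput sketch is enough. The model below is a hypothetical stand-in for the actual network (any feedforward image model behaves the same for this measurement), and 224x224 crops and a batch of 32 are assumed, not taken from the repo:

```python
import time
import torch

# Hypothetical stand-in for the real coarse network: a tiny feedforward model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 59),  # assumed output size, for illustration only
).eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.rand(32, 3, 224, 224, device=device)  # 32 pre-cropped faces

with torch.no_grad():
    model(batch)  # warm-up pass (lazy init, kernel caches)
    if device == "cuda":
        torch.cuda.synchronize()  # CUDA launches are async; sync before timing
    t0 = time.perf_counter()
    model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    dt = time.perf_counter() - t0

print(f"{batch.shape[0] / dt:.1f} images/s for one forward pass")
```

On a real GPU the synchronize calls matter: without them you time the kernel launch, not the computation.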
I've found a couple of speed improvements:
An updated detect.py that improves SFD speed is in this PR: https://github.com/1adrianb/face-alignment/pull/347
The file to patch is C:\anaconda3\envs\work38\lib\site-packages\face_alignment\detection\sfd\detect.py. v1 of the code gives roughly a 10% speedup; I couldn't get v2 of the fix to work.
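If you want to verify the ~10% figure on your own machine, a tiny timing helper is all you need. `benchmark` here is a hypothetical helper, not part of face-alignment:

```python
import time

def benchmark(fn, n=20):
    """Average seconds per call of fn over n runs, after one warm-up call."""
    fn()  # warm-up (model loading, caches, lazy init)
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n

# Usage sketch: wrap the original and patched detectors in no-arg callables
# (e.g. lambdas closing over the same test image), then compare:
#   speedup = benchmark(old_detect) / benchmark(new_detect) - 1.0
```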
If you switch face-alignment from SFD to blazeface, it goes even faster. In emoca/gdl/utils/FaceDetector.py:
```python
class FAN(FaceDetector):
    def __init__(self, device='cuda', threshold=0.5):
        import face_alignment
        # switch the detector backend from the default 'sfd' to 'blazeface'
        self.face_detector = 'blazeface'
```
And then you update C:\anaconda3\envs\work38\lib\site-packages\face_alignment\api.py at line 78. You have to get rid of the face_detector_kwargs argument:

```python
# self.face_detector = face_detector_module.FaceDetector(device=device, verbose=verbose, **face_detector_kwargs)
self.face_detector = face_detector_module.FaceDetector(device=device, verbose=verbose)
```
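For context on what dropping the argument actually does: a kwargs dict is only forwarded as keyword arguments when unpacked with `**`, and removing it simply falls back to the detector's defaults. A minimal, self-contained sketch (the `FaceDetector` class below is a hypothetical stand-in, not the library's):

```python
class FaceDetector:
    """Hypothetical stand-in for face_alignment's detector class."""
    def __init__(self, device, verbose=False, min_score=0.5):
        self.device = device
        self.verbose = verbose
        self.min_score = min_score

extra = {"min_score": 0.75}

# Unpacking with ** forwards the options as keyword arguments:
d1 = FaceDetector(device="cuda", verbose=False, **extra)
print(d1.min_score)  # 0.75

# The workaround above simply drops the extra options and uses the defaults:
d2 = FaceDetector(device="cuda", verbose=False)
print(d2.min_score)  # 0.5
```

So the edit trades configurability for a call that always works, which is fine here since no detector kwargs are being passed anyway.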
Hi, I'm not sure if inference speed is mentioned anywhere; I couldn't find it in the paper or the supplementary material. Could you share it? Thanks!