1adrianb / face-alignment

:fire: 2D and 3D Face alignment library built using PyTorch
https://www.adrianbulat.com
BSD 3-Clause "New" or "Revised" License
6.99k stars 1.34k forks

About speed of sfd and fan Inference #117

Open GuohongLi opened 5 years ago

GuohongLi commented 5 years ago

I tested the code on my NVIDIA K40 GPU, but both SFD and FAN inference are too slow: around 12s for SFD and 3s for FAN. May I know what speeds you measured? Thx.

1adrianb commented 5 years ago

@GuohongLi I have tested it on a 1080Ti, and with a few optimisations I can get 20fps on a 640x480px video input (FAN + SFD).
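For anyone comparing numbers across GPUs, the differences here may partly come down to how FPS is measured (e.g. including CUDA warm-up). A minimal, library-agnostic timing sketch — `process_frame` is a hypothetical stand-in for the SFD-detection + FAN-landmarks pipeline on one frame, not an API from this repo:

```python
import time

def measure_fps(process_frame, frames, warmup=3):
    """Average end-to-end FPS of a per-frame pipeline.

    A few warm-up calls are run first so one-time costs
    (CUDA context init, kernel autotuning) don't skew the result.
    """
    for f in frames[:warmup]:
        process_frame(f)
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

Measuring this way over a few hundred frames gives a steadier number than timing a single call.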

GuohongLi commented 5 years ago

@1adrianb How about the net size? TINY, SMALL, MEDIUM, or LARGE ?

1adrianb commented 5 years ago

@GuohongLi the smallest one is 4x faster than the large one. The currently uploaded model is "LARGE" (i.e. it uses 4 stacks).

SergeiSamuilov commented 5 years ago

Adrian, are models with fewer stacks available at the moment? The large model is great, but it would be really nice to speed things up a bit.

kwea123 commented 4 years ago

Hi, I ported the FAN PyTorch model to ONNX for faster inference, and I use numba to accelerate the post-processing step. You can see my repo. The only concern is that you need to install onnxruntime.

As for speed, on my 1080Ti, SFD+FAN in PyTorch gives 25FPS, and SFD+FAN with ONNX+numba gives 33FPS (a ~10ms per-frame gain!). You may also use TensorRT to boost performance further.

Aside from the model itself, you can also run detection only on every other frame and linearly interpolate the landmarks on the frames without detection. That is also easy and elegant.
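The every-other-frame scheme could be sketched like this (assuming landmarks come back as N×2 arrays; note the interpolated frame needs the *next* detection too, so it adds one frame of latency):

```python
import numpy as np

def interpolate_landmarks(prev, nxt, t=0.5):
    """Linearly interpolate two landmark sets (N x 2 arrays) at fraction t."""
    return (1.0 - t) * prev + t * nxt

# Detections ran on frames 0 and 2; frame 1 is interpolated at t=0.5.
lm0 = np.array([[10.0, 20.0], [30.0, 40.0]])
lm2 = np.array([[12.0, 22.0], [34.0, 44.0]])
lm1 = interpolate_landmarks(lm0, lm2)
# → [[11., 21.], [32., 42.]]
```

Since only half the frames pay the full detection cost, this roughly doubles throughput at the price of slightly smoothed motion.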

Vampire-Vx commented 4 years ago

@1adrianb which kind of optimisation? I am using a 2080Ti at 640x480 resolution and still only get about 4fps..

Vampire-Vx commented 4 years ago

> @GuohongLi the smallest one is 4x faster than the large one. The currently uploaded model is "LARGE" (i.e. it uses 4 stacks).

@1adrianb so were you testing the smallest one?

199906peng commented 3 years ago

@1adrianb So, how can we get the other models? The large model is great, but I want to try a faster one.

debasishaimonk commented 8 months ago

> Hi, I ported the FAN pytorch model into onnx for faster inference. Also I use numba to accelerate the post processing step.

Was this with the large model?