HamadYA / GhostFaceNets

This repository contains the official implementation of GhostFaceNets, state-of-the-art lightweight face recognition models.
https://ieeexplore.ieee.org/document/10098610
MIT License

Slow inference #17

Closed Shubhambindal2017 closed 6 months ago

Shubhambindal2017 commented 1 year ago

I tested inference on a single image on a GPU instance, and the model forward pass takes around 130-140 ms. Isn't this too much? Even ArcFace takes only around 10 ms per forward pass on a GPU. I tried the "GhostFaceNetV1-1.3-1 (A)" model.

HamadYA commented 1 year ago

> I tested inference on a single image on a GPU instance, and the model forward pass takes around 130-140 ms. Isn't this too much? Even ArcFace takes only around 10 ms per forward pass on a GPU. I tried the "GhostFaceNetV1-1.3-1 (A)" model.

Hi,

Could you show me how you tested the inference of ArcFace and our model?

Shubhambindal2017 commented 1 year ago

Hi, here is a sample notebook showing how I ran inference with the GhostFaceNet model: https://colab.research.google.com/drive/1aDKv8VoqDoOguQaXauH0jItSV0c4WBEK?usp=sharing

For ArcFace, I used the repo below (with some modifications to make it compatible with the latest TF version) and the ArcFace model with a ResNet50 backbone; a single forward pass took around 10 ms: https://github.com/luckycallor/InsightFace-tensorflow

Shubhambindal2017 commented 1 year ago

For GhostFaceNet, 130-140 ms is what I measure when I run inference with model(input), while model.predict(input) takes around 50-60 ms (why this difference?), but both are still slower than the ArcFace ResNet50 (10 ms) and ResNet100 (15 ms) backbones. Can you please check on this?
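For context on the `model(input)` vs `model.predict(input)` gap: in Keras, `model(x)` runs eagerly per call, while `predict` dispatches through a compiled `tf.function`, so part of the difference is eager-mode overhead rather than the network itself. A rough benchmark sketch (the small `Sequential` model here is a hypothetical stand-in, since the actual GhostFaceNet weights are not loaded) that times the eager call against a `tf.function`-wrapped call:

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model; substitute the loaded GhostFaceNet
# (e.g. a model restored with tf.keras.models.load_model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(112, 112, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(512),
])

x = np.random.rand(1, 112, 112, 3).astype("float32")

# Wrapping the forward pass in tf.function compiles it into a graph,
# removing the per-call eager overhead of a plain model(x) call.
infer = tf.function(model)

def timed(fn, n=20):
    fn(x)  # warm-up: the first call triggers tracing/compilation
    t0 = time.perf_counter()
    for _ in range(n):
        fn(x)
    return (time.perf_counter() - t0) / n * 1000  # mean ms per call

print(f"eager model(x):      {timed(model):.2f} ms")
print(f"tf.function wrapped: {timed(infer):.2f} ms")
```

Note the warm-up call before timing: the first invocation of a `tf.function` (and the first `predict`) pays a one-time tracing cost that should be excluded from the measurement.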

HamadYA commented 1 year ago

> For GhostFaceNet, 130-140 ms is what I measure when I run inference with model(input), while model.predict(input) takes around 50-60 ms (why this difference?), but both are still slower than the ArcFace ResNet50 (10 ms) and ResNet100 (15 ms) backbones. Can you please check on this?

Hi, I am currently experimenting with GhostFaceNets in new research and will share all the details and inference code soon.

HamadYA commented 6 months ago

> For GhostFaceNet, 130-140 ms is what I measure when I run inference with model(input), while model.predict(input) takes around 50-60 ms (why this difference?), but both are still slower than the ArcFace ResNet50 (10 ms) and ResNet100 (15 ms) backbones. Can you please check on this?

Sorry for the delay. I am still unable to work on this, as my school recently changed my PhD research dissertation topic and I am busy with it.

mamadinho commented 5 months ago

> For GhostFaceNet, 130-140 ms is what I measure when I run inference with model(input), while model.predict(input) takes around 50-60 ms (why this difference?), but both are still slower than the ArcFace ResNet50 (10 ms) and ResNet100 (15 ms) backbones. Can you please check on this?

I experienced the same problem: on an RTX 3050 (laptop version) it took 100-110 ms using model(x) and around 40 ms using model.predict(x). I still haven't found a solution; any help would be appreciated :D
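One workaround worth trying (not from the repo authors, just the standard TensorFlow technique): wrap the forward pass in a `tf.function` with a fixed `input_signature`, so every call after the first runs the compiled graph and avoids both eager overhead and retracing. A minimal sketch, again using a hypothetical stand-in model:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the loaded GhostFaceNet model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(112, 112, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(512),
])

# Fixing the input signature prevents tf.function from retracing when
# input shapes vary, so only the first call pays the compilation cost.
@tf.function(input_signature=[tf.TensorSpec([None, 112, 112, 3], tf.float32)])
def embed(images):
    return model(images, training=False)

x = tf.constant(np.random.rand(1, 112, 112, 3).astype("float32"))
emb = embed(x)  # first call compiles; subsequent calls run the cached graph
print(emb.shape)  # (1, 512)
```

With this wrapper, repeated single-image calls should land much closer to the `model.predict` timing than to the eager `model(x)` timing.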