Closed Shubhambindal2017 closed 6 months ago
I tested inference on an image on a GPU instance, and it seems to take around 130-140 ms per model pass. Isn't this too much? Even ArcFace takes only around 10 ms per model pass on GPU. I tried the "GhostFaceNetV1-1.3-1 (A)" model.
Hi,
Could you show me how you tested the inference of ArcFace and our model?
Hi, here is a sample notebook showing how I ran inference for the GhostFaceNet model: https://colab.research.google.com/drive/1aDKv8VoqDoOguQaXauH0jItSV0c4WBEK?usp=sharing For ArcFace I used the repo below, with some modifications to make it compatible with the latest TF version. Using the ArcFace model with a ResNet50 backbone, a single model pass took around 10 ms. https://github.com/luckycallor/InsightFace-tensorflow
For GhostFaceNet, the 130-140 ms figure is when I run inference with model(input); with model.predict(input) it's around 50-60 ms (why this difference?), but that is still much larger than ArcFace with a ResNet50 (10 ms) or ResNet100 (15 ms) backbone. Can you please look into this?
Hi, I am currently experimenting GhostFaceNets in a new research, will share all the details and codes for inference soon.
Sorry for the delay. I am still unable to work on this, as my school recently changed my PhD research dissertation and I have been busy with it.
I experienced the same problem: on an RTX 3050 (laptop version), it took 100-110 ms using model(x) and about 40 ms using model.predict(x). I still haven't found a solution. Anyone who can provide one will be appreciated :D
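One likely explanation for the model(x) vs. model.predict(x) gap is that calling a Keras model directly runs eagerly, paying Python dispatch overhead on every call, while predict executes a traced tf.function graph. A sketch of how to test this, using a small stand-in network (not GhostFaceNet itself; layer sizes and timing helpers here are illustrative assumptions):

```python
import time
import numpy as np
import tensorflow as tf

# Stand-in model with a face-recognition-like input/output shape;
# swap in the actual GhostFaceNet model to reproduce the issue.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(112, 112, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(512),
])

x = np.random.rand(1, 112, 112, 3).astype("float32")

# Wrap the forward pass in tf.function so it runs as a compiled graph,
# similar to what model.predict does internally.
infer = tf.function(model)
_ = infer(tf.constant(x))  # warm-up: the first call traces the graph
_ = model(x)               # warm-up for the eager path as well

def bench_ms(fn, n=20):
    # Average wall-clock time per call, in milliseconds.
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - t0) / n * 1000

eager_ms = bench_ms(lambda: model(x))
graph_ms = bench_ms(lambda: infer(tf.constant(x)))
print(f"eager model(x): {eager_ms:.2f} ms, tf.function: {graph_ms:.2f} ms")
```

Note that on GPU a single warm-up call matters for both paths (kernel autotuning and graph tracing happen on the first call), so timing only the first inference will overstate both numbers.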