SthPhoenix / InsightFace-REST

InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
Apache License 2.0

Thank you for excellent work. How about TRT batch inference? #123

Open tungdq212 opened 10 months ago

tungdq212 commented 10 months ago

Thank you for the excellent work.

Detection models can now be exported to TRT engines with batch size > 1. The inference code doesn't support this yet, though such engines can now be used in Triton Inference Server without issues.

Is there any plan for this? Or how can I implement batch inference myself?
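
For context, here is a minimal sketch of what I imagine batched inference against a dynamic-batch TRT engine would look like, using plain TensorRT + PyCUDA rather than this repo's code. The engine path, binding layout, and input shape are my assumptions:

```python
# Hypothetical sketch (not this repo's inference code): run a dynamic-batch
# TRT engine on an NCHW float32 batch. Assumes binding 0 is the image input.
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 - creates a CUDA context
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer_batch(engine, batch):
    # batch: np.float32 array of shape (N, 3, H, W), N <= the engine's max batch
    context = engine.create_execution_context()
    context.set_binding_shape(0, batch.shape)  # resolve the dynamic batch axis

    bindings, host_outputs = [], []
    for i in range(engine.num_bindings):
        shape = context.get_binding_shape(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        dev_mem = cuda.mem_alloc(trt.volume(shape) * np.dtype(dtype).itemsize)
        bindings.append(int(dev_mem))
        if engine.binding_is_input(i):
            cuda.memcpy_htod(dev_mem, np.ascontiguousarray(batch.astype(dtype)))
        else:
            out = np.empty(shape, dtype=dtype)
            host_outputs.append((out, dev_mem))

    context.execute_v2(bindings)
    for out, dev_mem in host_outputs:
        cuda.memcpy_dtoh(out, dev_mem)
    return [out for out, _ in host_outputs]
```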

SthPhoenix commented 10 months ago

Hi! Batch inference is already supported for all recognition models and for SCRFD and YOLOv5 family detection models.
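
If you are going through the REST API rather than the raw engines, a request along these lines should let the server process several images in one call. The endpoint, port, and payload field names below are assumptions based on my reading of the README, so adjust them to the actual API if they differ:

```python
# Hypothetical usage sketch: send several images in a single /extract request.
# Endpoint, port, and field names are assumptions, not confirmed by this thread.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "images": {"data": [b64("img1.jpg"), b64("img2.jpg"), b64("img3.jpg")]},
    "extract_embedding": True,
}

resp = requests.post("http://localhost:18081/extract", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```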