MAhaitao999 / mtcnn-align-facenet-deployment

This project uses an MTCNN network and a facenet network to implement a simple face recognition pipeline. The overall flow is roughly: first, MTCNN performs face detection and extracts five facial landmarks; next, the landmarks are used to align the face (an affine transformation); the aligned face image is then fed into facenet to extract a 128-dimensional face embedding; finally, the extracted embedding is compared for similarity against the embeddings in the gallery (feature matching) to complete face recognition.
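To make the final matching step concrete, here is a minimal Python sketch of the embedding comparison (the function names and the 0.5 threshold are illustrative assumptions, not code from this repo): it L2-normalizes two 128-dimensional facenet embeddings, scores them with cosine similarity, and returns the best gallery match above the threshold.

import numpy as np

def cosine_similarity(a, b):
    # Normalize both 128-d embeddings, then take their dot product.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

def identify(query, gallery, threshold=0.5):
    # gallery: dict mapping identity name -> stored 128-d facenet embedding.
    # The 0.5 threshold is illustrative only; tune it on real data.
    best_name, best_score = None, threshold
    for name, emb in gallery.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score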

Downloading the model always shows "Network Error" #3

Open · edwardchang0112 opened this issue 2 years ago

edwardchang0112 commented 2 years ago

As in the title, downloading the model always shows "Network Error". Any idea how to fix this? Thanks!

zzk0 commented 2 years ago

Link: https://pan.baidu.com/s/1A9jCJa_sQ4D3ejelgXX2RQ Extraction code: tkhg

Is that where you're downloading from? I can download it from there, and I've already set everything up following the author's instructions.

edwardchang0112 commented 2 years ago

Yes, it's that URL, but it keeps showing this error (screenshot attached).

Could you share it with me directly by email or some other way? Thanks!

zzk0 commented 2 years ago

What is your email?

edwardchang0112 commented 2 years ago

OK, thank you!

zzk0 commented 2 years ago

OK. For privacy's sake, remember to delete the comment.

zzk0 commented 2 years ago

https://github.com/zzk0/mtcnn-align-facenet-deployment/releases/download/model/facenet_keras.h5

@edwardchang0112

edwardchang0112 commented 2 years ago

Thank you, I've downloaded it. I'll try following the steps you wrote! I'll come back with questions if I run into any problems!

edwardchang0112 commented 2 years ago

Could I ask how a setup like yours performs with TensorRT + Triton Server, especially in terms of FPS? Can TensorRT + Triton Server be used to improve FPS? Thanks!

zzk0 commented 2 years ago

Below is the performance report from running perf_client, with four sets of results. Optimistically it reaches 100 infer/sec. The GPU is a V100, and GPU utilization while running facenet is below 30%, so there is still a lot of headroom for performance tuning.

root@ubuntu:/workspace# perf_client -m pnet --shape input_1:480,480,3
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 5000 msec
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client: 
    Request count: 537
    Throughput: 107.4 infer/sec
    Avg latency: 9303 usec (standard deviation 7246 usec)
    p50 latency: 7138 usec
    p90 latency: 8699 usec
    p95 latency: 32396 usec
    p99 latency: 37157 usec
    Avg HTTP time: 9265 usec (send/recv 2261 usec + response wait 7004 usec)
  Server: 
    Inference count: 647
    Execution count: 647
    Successful request count: 647
    Avg request latency: 3977 usec (overhead 3 usec + queue 44 usec + compute input 1175 usec + compute infer 2253 usec + compute output 502 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 107.4 infer/sec, latency 9303 usec
root@ubuntu:/workspace# perf_client -m rnet --shape input_1:24,24,3
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 5000 msec
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client: 
    Request count: 4593
    Throughput: 918.6 infer/sec
    Avg latency: 1087 usec (standard deviation 2724 usec)
    p50 latency: 997 usec
    p90 latency: 1148 usec
    p95 latency: 1243 usec
    p99 latency: 1406 usec
    Avg HTTP time: 1067 usec (send/recv 61 usec + response wait 1006 usec)
  Server: 
    Inference count: 5582
    Execution count: 5582
    Successful request count: 5582
    Avg request latency: 628 usec (overhead 2 usec + queue 38 usec + compute input 531 usec + compute infer 10 usec + compute output 47 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 918.6 infer/sec, latency 1087 usec
root@ubuntu:/workspace# perf_client -m onet --shape input_1:48,48,3
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 5000 msec
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client: 
    Request count: 3778
    Throughput: 755.6 infer/sec
    Avg latency: 1322 usec (standard deviation 128 usec)
    p50 latency: 1313 usec
    p90 latency: 1457 usec
    p95 latency: 1501 usec
    p99 latency: 1655 usec
    Avg HTTP time: 1314 usec (send/recv 70 usec + response wait 1244 usec)
  Server: 
    Inference count: 4536
    Execution count: 4536
    Successful request count: 4536
    Avg request latency: 878 usec (overhead 3 usec + queue 40 usec + compute input 770 usec + compute infer 9 usec + compute output 56 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 755.6 infer/sec, latency 1322 usec

facenet:

root@ubuntu:/workspace# perf_client -m facenet --shape input_1:160,160,3
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 5000 msec
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client: 
    Request count: 635
    Throughput: 127 infer/sec
    Avg latency: 7870 usec (standard deviation 7232 usec)
    p50 latency: 6181 usec
    p90 latency: 6406 usec
    p95 latency: 19532 usec
    p99 latency: 36119 usec
    Avg HTTP time: 7844 usec (send/recv 102 usec + response wait 7742 usec)
  Server: 
    Inference count: 764
    Execution count: 764
    Successful request count: 764
    Avg request latency: 6926 usec (overhead 3 usec + queue 41 usec + compute input 2456 usec + compute infer 4396 usec + compute output 30 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 127 infer/sec, latency 7870 usec
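For reference, a request like the ones perf_client sends above can also be issued from Python with the tritonclient package. This is a minimal sketch, not the repo's client code: it assumes Triton's HTTP endpoint on localhost:8000, a model config whose first dimension is the batch, and an output tensor named Bottleneck_BatchNorm (a guess based on the common Keras facenet export; check the model's config.pbtxt for the real name).

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One aligned 160x160 RGB face crop; random data stands in for a real image.
face = np.random.rand(1, 160, 160, 3).astype(np.float32)  # leading batch dim

inputs = [httpclient.InferInput("input_1", list(face.shape), "FP32")]
inputs[0].set_data_from_numpy(face)

# Assumed output tensor name; verify against the model's config.pbtxt.
outputs = [httpclient.InferRequestedOutput("Bottleneck_BatchNorm")]

result = client.infer(model_name="facenet", inputs=inputs, outputs=outputs)
embedding = result.as_numpy("Bottleneck_BatchNorm")  # shape (1, 128)

Since the report shows the V100 under 30% utilization on facenet, batching several face crops into a single request is usually the first and cheapest way to raise effective FPS.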