NVIDIA-AI-IOT / face-mask-detection

Face Mask Detection using NVIDIA Transfer Learning Toolkit (TLT) and DeepStream for COVID-19
MIT License

------------------------------------------------------

This sample application is no longer maintained

------------------------------------------------------

face_mask_detection

NVIDIA Developer Blog

This project provides a tutorial for NVIDIA's Transfer Learning Toolkit (TLT) and the DeepStream (DS) SDK, i.e., the training and inference flow for detecting faces with and without masks on the Jetson platform.

By the end of this project, you will be able to build a DeepStream app on a Jetson platform that detects faces with and without masks.


What this project includes

What this project does not provide

Preferred Datasets

Note: We do not use all the images from MAFA and WiderFace. Combined, we use about 6,000 faces each with and without masks.
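Annotations from both datasets are converted to KITTI-format labels before TLT training (step 2 of the training flow below). As a rough illustration only, here is a minimal sketch of what one KITTI label file looks like; the helper names and the class strings `mask` / `no-mask` are assumptions for illustration, not the repository's own conversion script:

```python
# Hypothetical sketch of writing a KITTI-format label file for TLT DetectNet_v2.
# KITTI label line (15 fields): class truncation occlusion alpha x1 y1 x2 y2 h w l x y z rotation_y
# For 2D detection only the class name and bounding box matter; the 3D fields stay zero.

def to_kitti_line(cls_name, x1, y1, x2, y2):
    return (f"{cls_name} 0.00 0 0.00 "
            f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
            "0.00 0.00 0.00 0.00 0.00 0.00 0.00")

def write_kitti_label(label_path, boxes):
    """boxes: list of (class_name, x1, y1, x2, y2) in pixel coordinates."""
    with open(label_path, "w") as f:
        for box in boxes:
            f.write(to_kitti_line(*box) + "\n")

# Example: one masked and one unmasked face in the same image
write_kitti_label("000001.txt",
                  [("mask", 34, 50, 120, 160),
                   ("no-mask", 200, 48, 290, 170)])
```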

Steps to Perform Face Mask Detection:

Note:

Evaluation Results on NVIDIA Jetson Platform

| Pruned | mAP Mask/No-Mask (%) | Nano GPU (FPS) | Xavier NX GPU (FPS) | Xavier NX DLA (FPS) | Xavier GPU (FPS) | Xavier DLA (FPS) |
|--------|----------------------|----------------|---------------------|---------------------|------------------|------------------|
| No          | 86.12 (87.59, 84.65) | 6.5   | 125.36 | 30.31 | 269.04 | 61.96 |
| Yes (12%**) | 85.50 (86.72, 84.27) | 21.25 | 279    | 116.2 | 508.32 | 155.5 |
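Read from the table, pruning (the 12% row) costs about 0.6 mAP (86.12 → 85.50) while substantially raising throughput. A quick calculation of the GPU speedups, using only the FPS values reported above:

```python
# Speedup from pruning, computed from the GPU FPS numbers in the table above.
fps = {
    "Nano GPU":      (6.5, 21.25),     # (unpruned FPS, pruned FPS)
    "Xavier NX GPU": (125.36, 279.0),
    "Xavier GPU":    (269.04, 508.32),
}
for platform, (unpruned, pruned) in fps.items():
    print(f"{platform}: {pruned / unpruned:.1f}x faster after pruning")
# -> Nano GPU: 3.3x, Xavier NX GPU: 2.2x, Xavier GPU: 1.9x
```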

NVIDIA Transfer Learning Toolkit (TLT) Training Flow

  1. Download a pre-trained model (for the mask detection application, we experimented with DetectNet_v2 with a ResNet18 backbone)
  2. Convert dataset to KITTI format
  3. Train Model (tlt-train)
  4. Evaluate on validation data or infer on test images (tlt-evaluate, tlt-infer)
  5. Prune trained model (tlt-prune)
    Pruning the model reduces the parameter count, thus improving FPS performance
  6. Retrain pruned model (tlt-train)
  7. Evaluate re-trained model on validation data (tlt-evaluate)
  8. If accuracy has not fallen below the satisfactory range in step (7), you can prune further: repeat steps (5), (6), and (7); otherwise go to step (9)
  9. Export the retrained model from step (6) (tlt-export)
    Choose INT8 or FP16 based on your platform's needs; for example, Jetson Xavier and Jetson Xavier NX have INT8 DLA support. See the command sketch after this list.
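As a rough end-to-end sketch of steps 3 through 9, assuming the TLT 2.x DetectNet_v2 command-line interface (flag names follow NVIDIA's published DetectNet_v2 notebooks; the spec-file paths, output directories, and pruning threshold below are placeholders, not taken from this repository):

```python
# Hypothetical driver chaining the TLT DetectNet_v2 commands named in the steps above.
# Verify flag names and spec files against the TLT version you have installed.
import subprocess

KEY = "YOUR_NGC_KEY"                                            # model encryption key
TRAIN_SPEC = "specs/detectnet_v2_train_resnet18_kitti.txt"      # assumed training spec
RETRAIN_SPEC = "specs/detectnet_v2_retrain_resnet18_kitti.txt"  # points at the pruned .tlt

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 3. Train on the KITTI-format dataset
run(["tlt-train", "detectnet_v2", "-e", TRAIN_SPEC, "-r", "experiment_unpruned", "-k", KEY])

# 4. Evaluate on validation data
run(["tlt-evaluate", "detectnet_v2", "-e", TRAIN_SPEC,
     "-m", "experiment_unpruned/weights/resnet18_detector.tlt", "-k", KEY])

# 5. Prune; -pth controls how aggressively channels are removed
run(["tlt-prune", "-m", "experiment_unpruned/weights/resnet18_detector.tlt",
     "-o", "experiment_pruned/resnet18_detector_pruned.tlt",
     "-eq", "union", "-pth", "0.005", "-k", KEY])

# 6.-7. Retrain the pruned model and re-evaluate (repeat 5-7 while mAP stays acceptable)
run(["tlt-train", "detectnet_v2", "-e", RETRAIN_SPEC, "-r", "experiment_retrain", "-k", KEY])
run(["tlt-evaluate", "detectnet_v2", "-e", RETRAIN_SPEC,
     "-m", "experiment_retrain/weights/resnet18_detector_pruned.tlt", "-k", KEY])

# 9. Export for DeepStream; add INT8/FP16 options as supported by your platform
run(["tlt-export", "detectnet_v2",
     "-m", "experiment_retrain/weights/resnet18_detector_pruned.tlt",
     "-o", "resnet18_detector.etlt", "-k", KEY])
```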

Interesting Resources

References