# rPPG_edge_implementation


## Intro

This work has been accepted for publication in SmartComp 2022.

Abstract:

The primary contribution of this paper is designing and prototyping a *real-time edge computing system*, RhythmEdge, that is capable of detecting changes in blood volume from facial videos (Remote Photoplethysmography; rPPG), enabling instant cardiovascular health assessment. The benefits of RhythmEdge include non-invasive measurement of cardiovascular activity, real-time operation, and inexpensive sensing and computing components. RhythmEdge captures a short video of the skin with a camera and extracts rPPG features to estimate the Photoplethysmography (PPG) signal using a multi-task learning framework while offloading the computation to the edge. In addition, we intelligently apply a transfer learning approach to the multi-task learning framework to mitigate sensor heterogeneity, scaling the RhythmEdge prototype to work with a range of commercially available sensing and computing devices. To further adapt the software stack for resource-constrained devices, we propose novel pruning and quantization techniques (Quantization: FP32, FP16; Pruned-Quantized: FP32, FP16) that efficiently optimize deep feature learning while minimizing runtime, latency, memory, and power usage. We benchmark the RhythmEdge prototype with three different cameras and edge computing platforms, evaluating it on three publicly available datasets and an in-house dataset collected under challenging environmental conditions. Our analysis indicates that RhythmEdge performs on par with existing contactless heart-rate monitoring systems while utilizing only half of its available resources. Furthermore, we perform an ablation study with and without pruning and quantization, reporting an 87% reduction in model size and a 70% reduction in inference time. We attested the efficacy of the RhythmEdge prototype with a maximum power draw of 8 W and memory usage of 290 MB, with a minimal latency of 0.0625 seconds and a runtime of 0.64 seconds per 30 frames.

## Research Contributions

## System Overview

Figure: Overview diagram of RhythmEdge development.

Figure: Overview of demo rPPG system.

## System Development Instructions

We are planning to upload a demo paper (accepted at SmartComp) providing the details on how to develop an rPPG system using off-the-shelf edge devices.

### Set up for the Coral Dev Board

Instructions: follow the official Coral Dev Board setup guide. If the serial console exits with a `[screen is terminating]` message (commonly a permissions issue), reconnect with:

```bash
sudo screen /dev/ttyUSB0 115200
```

After that, continue following the setup instructions.

### Camera setup

Check under `/dev` whether the camera is detected. The Coral camera usually appears as `/dev/video0`; other cameras will show up as `/dev/video1` or `/dev/video2`.
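
As a quick check, a short OpenCV probe can confirm which index actually delivers frames (a minimal sketch; the range of indices is an assumption):

```python
# Minimal sketch: probe /dev/video0../dev/video2 with OpenCV to see
# which camera index opens and delivers frames.
import cv2

for index in range(3):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        ok, _ = cap.read()
        print(f"/dev/video{index}: opened, frame read: {ok}")
    else:
        print(f"/dev/video{index}: not available")
    cap.release()
```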

### Taking a picture
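
One way to grab a still frame is with OpenCV (a minimal sketch; the device index and output filename are assumptions):

```python
# Minimal sketch: capture a single frame and save it as a JPEG.
import cv2

cap = cv2.VideoCapture(0)  # assumes the camera is /dev/video0
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read a frame from the camera")
cv2.imwrite("picture.jpg", frame)  # hypothetical output filename
```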

### Video setup

Record a short clip with `ffmpeg` (here, 6 seconds at 30 fps, 1080p MJPEG):

```bash
ffmpeg -t 6 -f v4l2 -framerate 30 -video_size 1920x1080 -c:v mjpeg -i /dev/video0 output.mov
```


- Alternative (6 seconds at 90 fps, 720p, forcing the MJPEG input format):

```bash
ffmpeg -t 6 -f v4l2 -framerate 90 -video_size 1280x720 -input_format mjpeg -i /dev/video1 mjpeg.mkv
```
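
Recording can also be done directly from Python with OpenCV, which is convenient when feeding the preprocessing pipeline below (a sketch; the device index, codec, and output filename are assumptions):

```python
# Minimal sketch: record ~6 seconds of 30 fps video with OpenCV.
import cv2

FPS, SECONDS, SIZE = 30, 6, (1920, 1080)
cap = cv2.VideoCapture(0)                   # assumes /dev/video0
cap.set(cv2.CAP_PROP_FRAME_WIDTH, SIZE[0])
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, SIZE[1])

fourcc = cv2.VideoWriter_fourcc(*"MJPG")    # MJPEG, as in the ffmpeg commands
out = cv2.VideoWriter("output.avi", fourcc, FPS, SIZE)

for _ in range(FPS * SECONDS):
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```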


### OpenCV install on the Coral
[Setup link](https://krakensystems.co/blog/2020/doing-machine-vision-on-google-coral)
- Video preprocessing using OpenCV:
  1. Save the video in a supported format at 30 fps.
  2. Import the `video_reader` function from `vid_read`.
  3. Check the fps value `f` and the data shape (e.g., `f = 30` and 100 × 100 frames).
  4. Crop the data to the required shape.
  5. Run `file_main.py` (see the sketch below).
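
A sketch of these steps is below. `video_reader`/`vid_read` are modules from this repository; the return signature, input filename, and crop size shown here are assumptions:

```python
# Minimal sketch of the preprocessing steps above. Assumes vid_read.video_reader
# returns (fps, data) with data shaped (frames, height, width, channels);
# check the repo's actual signature.
from vid_read import video_reader  # repo-local module

f, data = video_reader("output.avi")  # hypothetical 30 fps input video
print("fps:", f, "shape:", data.shape)

# Crop each frame to the region of interest (crop size is an assumption).
data = data[:, :100, :100, :]
# file_main.py then consumes the cropped array.
```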

### Real-time implementation
[Sample code](https://www.pyimagesearch.com/2019/05/13/object-detection-and-image-classification-with-google-coral-usb-accelerator/)
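
At a high level, the real-time path buffers camera frames into fixed windows and runs the rPPG model on each window. The sketch below uses the standard `tflite_runtime` API; the model file, input resolution, and 30-frame window are assumptions, not the repo's actual interface:

```python
# Minimal sketch of a real-time loop: buffer 30-frame windows from the camera
# and run a TFLite model on each window.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="rppg_model.tflite")  # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

cap = cv2.VideoCapture(0)
window = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    window.append(cv2.resize(frame, (100, 100)))    # assumed input resolution
    if len(window) == 30:                           # 30 frames ~ 1 s at 30 fps
        batch = np.asarray(window, dtype=np.float32)[None] / 255.0
        interpreter.set_tensor(inp["index"], batch)
        interpreter.invoke()
        ppg = interpreter.get_tensor(out["index"])  # estimated PPG window
        window.clear()
cap.release()
```

With the Coral USB Accelerator, the same loop would load the Edge TPU delegate (`tflite_runtime.interpreter.load_delegate("libedgetpu.so.1")`) and an Edge TPU-compiled model.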

### Set up for the Jetson Nano
[Package Installation Instructions](https://medium.com/@coachweichun/jeston-nano-install-opencv-python-numpy-scipy-matplotlib-pandas-kit-fa6bde651eac)

### Numpy install technique
[Follow this link](https://yanwei-liu.medium.com/tflite-on-jetson-nano-c480fdf9ac2)

### Memory Measurement
- Jetson Nano:

```bash
sudo jtop   # select the MEM tab
```
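
Memory can also be read programmatically through the `jetson-stats` package that provides `jtop` (a minimal sketch; assumes `jetson-stats` is installed):

```python
# Minimal sketch: read Jetson Nano memory stats via the jetson-stats API.
# Assumes: sudo pip3 install jetson-stats
from jtop import jtop

with jtop() as jetson:
    if jetson.ok():
        print(jetson.memory)  # RAM/swap usage as reported by jtop
```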