ubicomplab / rPPG-Toolbox

rPPG-Toolbox: Deep Remote PPG Toolbox (NeurIPS 2023)
https://arxiv.org/abs/2210.00716

rPPG-Toolbox on video #253

Closed PalmDomenico closed 6 months ago

PalmDomenico commented 7 months ago

I would like to take an .mp4 video and give it as input to one of the models to extract the rPPG signal. How can I do this?

girishvn commented 7 months ago

Hi @PalmDomenico,

  1. You will first need to write a dataloader for your video file (if it is not a video from a supported dataset). This dataloader should read in the video file, and preprocess it (face cropping, etc.).
  2. You can then do a few things. Either load a pretrained PPG model (I suggest ./final_model_release/PURE_DeepPhys.pth) or use an unsupervised method (for example CHROM) and feed your preprocessed video frames through it. This will produce a predicted rPPG signal (or a difference signal for many deep methods like DeepPhys).
  3. You can then use functionality similar to `calculate_metric_per_video` in /evaluation/post_process.py (take a look at how the predicted signals are processed there) to calculate the HR.
  4. This may all seem rather confusing at first. In my opinion, the best way to get accustomed to this toolbox is to run through the exercises "Example of Using Pre-trained Models" and "Examples of Neural Network Training" in the README. Following the data flow (with breakpoints or added print statements) is the best way to understand how preprocessing works and how the data is fed during training, inference, and evaluation.
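To make steps 2 and 3 concrete, here is a minimal, self-contained sketch of the unsupervised route: a NumPy implementation of the CHROM chrominance projection (de Haan & Jeanne, 2013) applied to per-frame mean RGB values, followed by an FFT-based HR estimate similar in spirit to what `calculate_metric_per_video` does. This is an illustrative standalone sketch, not the toolbox's own code; the function names and the synthetic input below are my own.

```python
import numpy as np

def chrom_rppg(rgb_means):
    """Recover an rPPG signal from per-frame mean RGB values of the face
    region using the CHROM projection. rgb_means: (N, 3) array."""
    rgb = np.asarray(rgb_means, dtype=float)
    # Normalize each channel by its temporal mean to reduce skin-tone/illumination bias.
    norm = rgb / rgb.mean(axis=0)
    r, g, b = norm[:, 0], norm[:, 1], norm[:, 2]
    # CHROM chrominance signals and alpha-weighted combination.
    x = 3.0 * r - 2.0 * g
    y = 1.5 * r + g - 1.5 * b
    alpha = np.std(x) / np.std(y)
    s = x - alpha * y
    return s - s.mean()

def estimate_hr_fft(signal, fs=30.0, lo=0.7, hi=2.5):
    """Estimate heart rate (bpm) as the dominant FFT frequency in the
    physiological band [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic example: a 1.5 Hz (90 bpm) pulse weakly modulating the channels.
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
pulse = 0.02 * np.sin(2 * np.pi * 1.5 * t)
rgb = np.stack([0.6 + 0.3 * pulse, 0.5 + 1.0 * pulse, 0.4 + 0.2 * pulse], axis=1)
hr = estimate_hr_fft(chrom_rppg(rgb), fs=fs)  # → ~90 bpm
```

In practice you would replace the synthetic `rgb` array with the mean RGB of your cropped face region per frame (step 1), and set `fs` to your video's frame rate.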

I suggest downloading the UBFC-rPPG dataset, and running inference using one of the pre-trained models. Since this has become a common thread, I'll go ahead and make a notebook in the next few weeks that outlines how this can be done.

Hope this helps!


girishvn commented 6 months ago

Closing for inactivity - feel free to open if something comes up.