wang-tf closed this issue 1 year ago
Hello,
thank you for your interest in our work. Can you give me a little bit more information on what you are trying to do? The dataset published on Zenodo already contains the rPPG signals so there should be no need to use the POS algorithm.
Thank you for your work! It's very interesting.
The data on Zenodo is useful. I want to test the trained model on my own face video. My question is how to convert the video data into the model input. Thanks!
The rPPG data in rPPG-BP-UKL_rppg_7s.h5 has a length of 875 samples (7 s × 125 Hz). Does that mean the video data must be 125 fps? My video is only 30 fps.
Hi, in case you want to use your own camera data with the neural network, you do not necessarily need 125 fps. You can also upsample your signal to 125 Hz using standard interpolation methods, e.g. those provided by the SciPy or NumPy packages. However, it is important that your input signal has a length of 7 seconds.
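A minimal sketch of the upsampling step described above, assuming a 7 s rPPG signal captured at 30 fps that should be interpolated onto the 125 Hz grid the model expects (875 samples). The variable names and the synthetic test signal are illustrative, not part of the repository's code:

```python
import numpy as np

fs_target = 125        # target rate: 875 samples / 7 s
fs_source = 30         # assumed camera frame rate
duration = 7.0         # required window length in seconds

# Illustrative 7 s signal at 30 fps (210 samples); replace with your
# own rPPG/BVP trace extracted from the video.
t_source = np.arange(int(duration * fs_source)) / fs_source
rppg_30fps = np.sin(2 * np.pi * 1.2 * t_source)

# Linear interpolation onto the 125 Hz time grid
t_target = np.arange(int(duration * fs_target)) / fs_target
rppg_125hz = np.interp(t_target, t_source, rppg_30fps)

print(rppg_125hz.shape)  # (875,)
```

`scipy.signal.resample` (FFT-based) would be an alternative to `np.interp` if you prefer band-limited resampling over linear interpolation.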
OK, I understand it. I will try it later. Thanks so much.
The output BVP signal from POS is 230 samples long in my case. How do I convert it to the model input? Is there an upsampling method that should be used? Thanks!