Closed rhaizer closed 3 years ago
As far as I understand the package, you have to wrap your video input into the Video class and apply a remote PPG method afterward. This code snippet is working for me:
from pyVHR.signals.video import Video
# -- Video object
videoFilename = './VideoStreamOutput.avi'
video = Video(videoFilename)
# -- extract faces
video.getCroppedFaces(detector='mtcnn', extractor='opencv')
video.printVideoInfo()
# -- apply remote PPG method
from pyVHR.methods.pos import POS
from pyVHR.methods.ssr import SSR
from pyVHR.methods.pbv import PBV
params = {"video": video, "verb":0, "ROImask":"skin_adapt", "skinAdapt":0.2}
pos = POS(**params)
ssr = SSR(**params)
pbv = PBV(**params)
# -- get BPM values
bpmES_pos, timesES_pos = pos.runOffline(**params)
bpmES_ssr, timesES_ssr = ssr.runOffline(**params)
bpmES_pbv, timesES_pbv = pbv.runOffline(**params)
But I agree that a simple API taking a video as input and returning a BPM time series would be a nice feature, in addition to the great testing framework.
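The wrapper wished for above could be sketched roughly as follows. This is a hypothetical convenience function (the name `estimate_bpm` is made up); it only repackages the pyVHR calls from the snippet above, so the API names are taken from that snippet rather than verified against the library:

```python
def estimate_bpm(video_path, method_name="pos"):
    """Hypothetical convenience wrapper: video file in, BPM time series out.

    Repackages the pyVHR calls from the snippet above; the class and
    method names are assumed from that snippet, not verified.
    """
    from pyVHR.signals.video import Video
    from pyVHR.methods.pos import POS
    from pyVHR.methods.ssr import SSR
    from pyVHR.methods.pbv import PBV

    methods = {"pos": POS, "ssr": SSR, "pbv": PBV}

    # Wrap the input video and extract the face crops
    video = Video(video_path)
    video.getCroppedFaces(detector='mtcnn', extractor='opencv')

    # Same parameters as in the snippet above
    params = {"video": video, "verb": 0,
              "ROImask": "skin_adapt", "skinAdapt": 0.2}

    method = methods[method_name](**params)
    bpmES, timesES = method.runOffline(**params)
    return bpmES, timesES
```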

Thank you so much for your fast and detailed reply! It works like a charm. However, the results are not as good as I expected. On a video with a still face and stable lighting (a webcam recording), the estimates vary between 50 and 140 BPM, which is not accurate, and the results from the different methods are not even close to each other! Is there a minimum video length required (I used a 55-second input) before the results improve, as some other repos require?
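As a generic post-processing step (not part of pyVHR), wildly varying window estimates like the 50-140 BPM swings described above can be damped by a running median over the per-window BPM values, or by taking the per-window median across the three methods. A minimal sketch in plain Python, with made-up numbers:

```python
import statistics

def median_smooth(bpm_series, window=5):
    """Running median over a BPM time series to damp outlier windows."""
    half = window // 2
    smoothed = []
    for i in range(len(bpm_series)):
        lo = max(0, i - half)
        hi = min(len(bpm_series), i + half + 1)
        smoothed.append(statistics.median(bpm_series[lo:hi]))
    return smoothed

def fuse_methods(*series):
    """Per-window median across several methods' BPM estimates."""
    return [statistics.median(vals) for vals in zip(*series)]

# Illustrative numbers only: a single wild outlier gets pulled back
# toward its neighbours by the running median.
noisy = [72, 71, 140, 73, 72, 70, 71]
print(median_smooth(noisy))
```

This does not fix a bad underlying signal, but it makes a single spurious window far less visible in the output.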
It continues to process the old video when I run this again with a different video, even though no variables are cached in my session. It looks like something is stored locally on the first run, and on the second run it keeps using the data extracted from the first video.
How can we run this code snippet with different ROIs? For example, I want to run it first with 'forehead', then with 'left cheek', and so on. I tried, but it seems many things need to change, and I wasn't able to get it working.
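One way to compare ROI configurations, assuming the ROI is fixed at construction time as in the snippet above, is to rebuild the Video and method objects from scratch for each configuration. This sketch reuses only the parameter names that appear in the snippet ("ROImask": "skin_adapt", "skinAdapt"); any other ROI settings you pass in are hypothetical and must match whatever the installed pyVHR version actually accepts:

```python
def run_with_roi(video_path, roi_params):
    """Re-run POS from scratch for one ROI configuration.

    `roi_params` is merged into the parameter dict from the snippet
    above; values other than the "skin_adapt" mask shown there are
    assumptions, not verified pyVHR options.
    """
    from pyVHR.signals.video import Video
    from pyVHR.methods.pos import POS

    video = Video(video_path)
    video.getCroppedFaces(detector='mtcnn', extractor='opencv')

    params = {"video": video, "verb": 0, **roi_params}
    pos = POS(**params)
    return pos.runOffline(**params)

# e.g. run_with_roi('./VideoStreamOutput.avi',
#                   {"ROImask": "skin_adapt", "skinAdapt": 0.2})
```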
Use this:
Video.loadCropFaces = Video.saveCropFaces = False
Add this where?
Right after the import statements.
Closing this issue since it relates to a previous version of pyVHR. Please refer to the README for basic usage instructions.
Thank you so much for creating this framework! I have a question about usage.
How can I get the BPM values of a video for which no ground-truth (GT) values are available? Is there any documentation anyone can link? The example notebooks only show how to visualize BPM in comparison with GT values. It would be really great if anyone could give me an example of getting BPM output from an arbitrary video input without ground-truth values.