Open leviethung2103 opened 4 years ago
Hi Hung, I think the ground-truth depth image for spoof images is set to zero (following the paper). I wonder about creating the rPPG ground-truth signal. How do we create it? I don't see this step in the code. Can you explain it to me? Thank you
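For reference, the auxiliary supervision described in the paper (live faces get a real depth map and pulse signal; spoof faces get all-zero targets) can be sketched like this. This is an illustrative sketch only: the function name, shapes, and arguments are my own, not the repo's actual API.

```python
# Sketch of the auxiliary supervision targets: live samples keep their
# estimated depth map and rPPG signal, spoof samples are supervised with
# all-zero targets. Names and sizes here are assumptions for illustration.
def make_targets(is_live, depth_map, rppg_signal, map_size=32, seq_len=100):
    """Return (depth_target, rppg_target) for one training sample."""
    if is_live:
        return depth_map, rppg_signal
    # Spoof: a flat zero depth map and a zero (pulse-free) rPPG signal.
    zero_map = [[0.0] * map_size for _ in range(map_size)]
    zero_sig = [0.0] * seq_len
    return zero_map, zero_sig
```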
I'm struggling with these depth maps and rPPG signals for the training input too. Just wondering, do we also need to generate a depth map for every image while running this model, like in the demo here?
@Phan
Thank you for your reply.
Since I am not the author of the repo, please ask the author to get the correct answer.
In my understanding, the author provides the code for rPPG in data_process2.py; the keyword is Anchors.
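A common recipe for building an rPPG ground-truth signal (this is a generic sketch of the usual approach, not the code in data_process2.py) is to average the green channel over the face region in each frame and then remove the slow trend, leaving only the pulse-related variation. The simple moving-average detrend below stands in for the bandpass filtering typically applied.

```python
# frames: a list of frames, each an HxW grid of (r, g, b) pixel tuples.
# These helper names and the window size are illustrative assumptions.
def green_mean(frame):
    """Mean green-channel intensity over one frame."""
    total, count = 0.0, 0
    for row in frame:
        for (_, g, _) in row:
            total += g
            count += 1
    return total / count

def extract_rppg(frames, win=5):
    """Per-frame green means with a moving-average trend removed."""
    raw = [green_mean(f) for f in frames]
    sig = []
    for i, v in enumerate(raw):
        lo, hi = max(0, i - win), min(len(raw), i + win + 1)
        sig.append(v - sum(raw[lo:hi]) / (hi - lo))
    return sig
```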
In the inference stage, as we can see from the model architecture, the depth map and feature map are fed into the non-rigid registration layer (an intermediate layer) and then forwarded into the RNN part.
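That flow can be summarized as a function-composition sketch. All layer names here are placeholders standing in for the real CNN, registration, and RNN modules of the model:

```python
# Sketch of the inference pipeline: per-frame CNN features and depth
# maps go through non-rigid registration, then the aligned sequence is
# consumed by the RNN. Module names are illustrative placeholders.
def infer(frames, cnn, nonrigid_reg, rnn):
    feats  = [cnn.features(f) for f in frames]   # per-frame feature maps
    depths = [cnn.depth(f) for f in frames]      # per-frame depth estimates
    # Non-rigid registration aligns each feature map using its depth map.
    aligned = [nonrigid_reg(f, d) for f, d in zip(feats, depths)]
    return rnn(aligned)                          # sequence-level prediction
```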
After reading the code carefully, I now understand how the model works. The thing is, the training did not work out as expected :( I don't know why the loss keeps growing.
Did you run into that problem? Is it because of my customized dataset, or something else?
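When the loss diverges like that, a usual first check is the effective step size: lower the learning rate and/or clip the gradient norm. Below is a framework-agnostic sketch of global-norm clipping as a debugging aid (in PyTorch you would use `torch.nn.utils.clip_grad_norm_` instead); it is a generic suggestion, not a fix confirmed for this repo.

```python
import math

# Rescale a flat list of gradient values so their global L2 norm does
# not exceed max_norm; gradients under the threshold pass through.
def clip_by_global_norm(grads, max_norm):
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= max_norm:
        return grads
    scale = max_norm / norm
    return [g * scale for g in grads]
```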
Hi Long, did you resolve this problem? If you did, can you tell me what caused it? Thank you
Not yet, I gave up on this :)
Oh :( I have a project about face anti-spoofing. I am interested in using a face depth map, like this repo does, for the project. Can you recommend another repo, or another solution for this project? I am confused. Thank you
Check out this post: https://zhuanlan.zhihu.com/p/114313640 The writer mentions some face anti-spoofing papers from CVPR 2020, with source code included. I'm not sure if they provide the datasets or not, but it's worth trying.
@PhanMinhKhue You can download some public datasets that I have already collected here:
FRAUD - Face Replay Attack UQ Datasets: http://staff.itee.uq.edu.au/lovell/FRAUD/
LCC FASD: https://drive.google.com/file/d/1NeyTFAwdJSjxA9ZtdviwdUjdptEVjM_i/view
NUAA: https://drive.google.com/file/d/1fe80Vo366h4uKylFwsSN3apvLXZZm02L/view
CASIA-SURF: https://drive.google.com/open?id=1BIIGhL02FlNNRYPGDCBTMgfC5PuLCf9U
CASIA-FASD: https://drive.google.com/open?id=1n0zRgxzllNOzlrTrUSPs7BiWiuv4TEd5
Hi longnguyen2, could you tell me how to create the rPPG signal for a sequence of consecutive frames as a label for training? If you could, I would appreciate it if you showed me the code for this function. I still want to explore this repo. Thank you
Thanks for sharing the datasets. I downloaded the CASIA-SURF dataset, but the validation set is not labeled. Can you help me?
@nghiabka The CASIA-FASD link is not working, kindly reshare.
Hello,
I would like to ask you about the depth images.
At first glance, I've tested the depth images generated from this repo: https://github.com/cleardusk/3DDFA.
It's obvious that the model can generate a depth map for every case, including both genuine and spoof images.
Do we need to apply further post-processing to obtain 3D-shape images for real faces and non-3D-shape images for spoof?
Thank you in advance.