ZitongYu / CDCN

Central Difference Convolutional Networks (CVPR'20)

Is there overfitting? #21

Closed ysm022 closed 2 years ago

ysm022 commented 4 years ago

Hello, I trained the Track2 single-modal model on my own dataset, which includes 5000+ real and 5000+ fake pictures. I trained for 60 epochs, and the training log is as follows:

Oulu-NPU, P1: train from scratch!
epoch:1, Train: Absolute_Depth_loss= 0.2492, Contrastive_Depth_loss= 0.0112
epoch:2, Train: Absolute_Depth_loss= 0.1995, Contrastive_Depth_loss= 0.0083
epoch:3, Train: Absolute_Depth_loss= 0.1734, Contrastive_Depth_loss= 0.0087
epoch:4, Train: Absolute_Depth_loss= 0.1561, Contrastive_Depth_loss= 0.0088
epoch:5, Train: Absolute_Depth_loss= 0.1435, Contrastive_Depth_loss= 0.0089
epoch:6, Train: Absolute_Depth_loss= 0.1334, Contrastive_Depth_loss= 0.0089
epoch:7, Train: Absolute_Depth_loss= 0.1266, Contrastive_Depth_loss= 0.0090
epoch:8, Train: Absolute_Depth_loss= 0.1210, Contrastive_Depth_loss= 0.0091
epoch:9, Train: Absolute_Depth_loss= 0.1133, Contrastive_Depth_loss= 0.0090
epoch:10, Train: Absolute_Depth_loss= 0.1085, Contrastive_Depth_loss= 0.0091
epoch:11, Train: Absolute_Depth_loss= 0.1039, Contrastive_Depth_loss= 0.0092
epoch:12, Train: Absolute_Depth_loss= 0.0988, Contrastive_Depth_loss= 0.0092
epoch:13, Train: Absolute_Depth_loss= 0.0939, Contrastive_Depth_loss= 0.0092
epoch:14, Train: Absolute_Depth_loss= 0.0907, Contrastive_Depth_loss= 0.0091
epoch:15, Train: Absolute_Depth_loss= 0.0875, Contrastive_Depth_loss= 0.0092
epoch:16, Train: Absolute_Depth_loss= 0.0833, Contrastive_Depth_loss= 0.0091
epoch:17, Train: Absolute_Depth_loss= 0.0810, Contrastive_Depth_loss= 0.0091
epoch:18, Train: Absolute_Depth_loss= 0.0791, Contrastive_Depth_loss= 0.0090
epoch:19, Train: Absolute_Depth_loss= 0.0768, Contrastive_Depth_loss= 0.0090
epoch:20, Train: Absolute_Depth_loss= 0.0622, Contrastive_Depth_loss= 0.0084
epoch:21, Train: Absolute_Depth_loss= 0.0591, Contrastive_Depth_loss= 0.0085
epoch:22, Train: Absolute_Depth_loss= 0.0562, Contrastive_Depth_loss= 0.0084
epoch:23, Train: Absolute_Depth_loss= 0.0543, Contrastive_Depth_loss= 0.0085
epoch:24, Train: Absolute_Depth_loss= 0.0532, Contrastive_Depth_loss= 0.0084
epoch:25, Train: Absolute_Depth_loss= 0.0511, Contrastive_Depth_loss= 0.0083
epoch:26, Train: Absolute_Depth_loss= 0.0497, Contrastive_Depth_loss= 0.0083
epoch:27, Train: Absolute_Depth_loss= 0.0486, Contrastive_Depth_loss= 0.0083
epoch:28, Train: Absolute_Depth_loss= 0.0465, Contrastive_Depth_loss= 0.0082
epoch:29, Train: Absolute_Depth_loss= 0.0455, Contrastive_Depth_loss= 0.0082
epoch:30, Train: Absolute_Depth_loss= 0.0445, Contrastive_Depth_loss= 0.0081
epoch:31, Train: Absolute_Depth_loss= 0.0441, Contrastive_Depth_loss= 0.0082
epoch:32, Train: Absolute_Depth_loss= 0.0424, Contrastive_Depth_loss= 0.0081
epoch:33, Train: Absolute_Depth_loss= 0.0413, Contrastive_Depth_loss= 0.0080
epoch:34, Train: Absolute_Depth_loss= 0.0416, Contrastive_Depth_loss= 0.0080
epoch:35, Train: Absolute_Depth_loss= 0.0401, Contrastive_Depth_loss= 0.0079
epoch:36, Train: Absolute_Depth_loss= 0.0396, Contrastive_Depth_loss= 0.0080
epoch:37, Train: Absolute_Depth_loss= 0.0391, Contrastive_Depth_loss= 0.0079
epoch:38, Train: Absolute_Depth_loss= 0.0389, Contrastive_Depth_loss= 0.0079
epoch:39, Train: Absolute_Depth_loss= 0.0355, Contrastive_Depth_loss= 0.0077
epoch:40, Train: Absolute_Depth_loss= 0.0307, Contrastive_Depth_loss= 0.0073
epoch:41, Train: Absolute_Depth_loss= 0.0298, Contrastive_Depth_loss= 0.0072
epoch:42, Train: Absolute_Depth_loss= 0.0289, Contrastive_Depth_loss= 0.0072
epoch:43, Train: Absolute_Depth_loss= 0.0286, Contrastive_Depth_loss= 0.0071
epoch:44, Train: Absolute_Depth_loss= 0.0278, Contrastive_Depth_loss= 0.0071
epoch:45, Train: Absolute_Depth_loss= 0.0266, Contrastive_Depth_loss= 0.0070
epoch:46, Train: Absolute_Depth_loss= 0.0269, Contrastive_Depth_loss= 0.0070
epoch:47, Train: Absolute_Depth_loss= 0.0264, Contrastive_Depth_loss= 0.0070
epoch:48, Train: Absolute_Depth_loss= 0.0266, Contrastive_Depth_loss= 0.0070
epoch:49, Train: Absolute_Depth_loss= 0.0250, Contrastive_Depth_loss= 0.0069
epoch:50, Train: Absolute_Depth_loss= 0.0258, Contrastive_Depth_loss= 0.0070
epoch:51, Train: Absolute_Depth_loss= 0.0245, Contrastive_Depth_loss= 0.0068
epoch:52, Train: Absolute_Depth_loss= 0.0237, Contrastive_Depth_loss= 0.0067
epoch:53, Train: Absolute_Depth_loss= 0.0241, Contrastive_Depth_loss= 0.0069
epoch:54, Train: Absolute_Depth_loss= 0.0233, Contrastive_Depth_loss= 0.0068
epoch:55, Train: Absolute_Depth_loss= 0.0236, Contrastive_Depth_loss= 0.0067
epoch:56, Train: Absolute_Depth_loss= 0.0228, Contrastive_Depth_loss= 0.0067
epoch:57, Train: Absolute_Depth_loss= 0.0226, Contrastive_Depth_loss= 0.0067
epoch:58, Train: Absolute_Depth_loss= 0.0218, Contrastive_Depth_loss= 0.0066
epoch:59, Train: Absolute_Depth_loss= 0.0216, Contrastive_Depth_loss= 0.0066
epoch:60, Train: Absolute_Depth_loss= 0.0193, Contrastive_Depth_loss= 0.0063

When I use the saved model file for testing, I find that some images perform poorly: fake images are recognized as real, and the accuracy is only about 24%. Is this overfitting, or is there another reason?

Another question: are the pictures used for training raw images or cropped faces? Is there any difference between them?

Thank you very much.

xuhangxuhang commented 4 years ago

I trained the whole model with the Keras API and had no overfitting problem; I only found that the model the authors propose in their paper is slow. In any case, you should probably do more data augmentation, such as gamma correction and saturation correction.
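For illustration, a minimal sketch of the gamma and saturation augmentations suggested above, using OpenCV and NumPy; the perturbation ranges are arbitrary choices, not values from this thread:

```python
import cv2
import numpy as np

def augment(img_bgr, rng=np.random):
    """Randomly perturb gamma and saturation of a BGR uint8 image."""
    # Gamma correction: I' = 255 * (I / 255) ** gamma
    gamma = rng.uniform(0.7, 1.5)                    # range is an arbitrary choice
    img = np.power(img_bgr.astype(np.float32) / 255.0, gamma)

    # Saturation correction: scale the S channel in HSV space
    hsv = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * rng.uniform(0.7, 1.3), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```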

ysm022 commented 4 years ago

@xuhangxuhang Thank you.

Could you tell me which dataset you used for training? And after training, how does your model perform on data outside your training set?

xuhangxuhang commented 4 years ago


I got good results within Oulu, CASIA-FASD, and Replay-Attack, but the cross-test results between CASIA-FASD and Replay-Attack are bad.

ysm022 commented 4 years ago


Thank you very much.

punitha-valli commented 4 years ago


Can you please tell me how you used the Oulu dataset? In the dataset I found only videos, but the source code requires a map_dir, and even the train, test, and dev sets require images.

punitha-valli commented 4 years ago

@ysm022

Can you please tell me about the map_dir? It would be a great help.

Thanks in advance

xuhangxuhang commented 4 years ago

Hi, sorry I reply to this email so late; I sincerely apologize for that.

In the Oulu-NPU dataset, the code provider feeds single images into the network. So what you should do is write a generator (in PyTorch, write a dataset class like the code in the GitHub repo; if you use TF or Keras, a simple function is enough). The generator should yield the face image and the corresponding depth map. In the training stage, get the average output score of each sample; in the evaluation stage, get the prediction scores of the samples from each original video and average them as the final prediction result for that video.
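For illustration, a minimal sketch of such a generator and of per-video score averaging. The directory layout, file naming, 256x256 input, 32x32 map size, and the Keras-style model.predict call are all assumptions, not details from this thread:

```python
import os
import cv2
import numpy as np

def sample_generator(frame_dir, map_dir):
    # Yield (face image, depth map) pairs; paths and naming are hypothetical.
    for name in sorted(os.listdir(frame_dir)):
        img = cv2.resize(cv2.imread(os.path.join(frame_dir, name)), (256, 256))
        depth = cv2.imread(os.path.join(map_dir, name), cv2.IMREAD_GRAYSCALE)
        depth = cv2.resize(depth, (32, 32)).astype(np.float32) / 255.0
        yield img, depth

def video_score(model, frames):
    # Evaluation: score each frame by the mean of its predicted depth map,
    # then average those scores over all frames sampled from one video.
    scores = [float(model.predict(f[None, ...])[0].mean()) for f in frames]
    return sum(scores) / len(scores)
```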

(P.S.: I am happy to answer your questions, but my English is not that good; if I confuse you, please ask again and I will try my best to answer.)

Best wishes.


punitha-valli commented 4 years ago

Thank you for your response.

I have the Oulu dataset, but it consists entirely of videos, so I don't understand the map_dir, or the image paths, since I only have videos.

What should I do for train_image, test_image, dev_image, and map_dir? Can you please help me?

Thank you so much.


xuhangxuhang commented 4 years ago

What you should do is split each video into single frames. For every live frame, generate a depth map; for each spoof frame, just use an all-zero matrix. map_dir is the folder where these label maps are saved. Is this clear?
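For illustration, a minimal sketch of that frame-splitting and zero-map labeling with OpenCV; all paths, the naming scheme, and the 32x32 map size are hypothetical:

```python
import os
import cv2
import numpy as np

def split_video(video_path, frame_dir, map_dir, is_live, map_size=32):
    """Split a video into frames; for spoof frames write an all-zero map.

    Live frames would instead get a real depth map (e.g. from PRNet).
    Paths, naming scheme, and map_size are assumptions, not repo details.
    """
    os.makedirs(frame_dir, exist_ok=True)
    os.makedirs(map_dir, exist_ok=True)
    stem = os.path.splitext(os.path.basename(video_path))[0]
    cap = cv2.VideoCapture(video_path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        name = f"{stem}_{i:04d}.jpg"
        cv2.imwrite(os.path.join(frame_dir, name), frame)
        if not is_live:
            # Spoof label: an all-zero depth map
            cv2.imwrite(os.path.join(map_dir, name),
                        np.zeros((map_size, map_size), dtype=np.uint8))
        i += 1
    cap.release()
```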

punitha-valli commented 4 years ago

Can you please share the code for making the map_dir?


xuhangxuhang commented 4 years ago

You can search for PRNet on GitHub; all the map labels I have were generated with that code. Best wishes.

punitha-valli commented 4 years ago

Thank you so much.

I have tried that, but dlib is not working properly for me; the dlib wheel does not build correctly.


punitha-valli commented 4 years ago

Thank you for your kind response.

How many frames did you use per video when converting the videos into images? And how did you name the saved frames? I have some problems with this.


xuhangxuhang commented 4 years ago

For the Oulu-NPU dataset, I randomly select 20 frames from each video of the dev/test subsets. For training, I balance the data by video name first: for example, in Oulu-NPU Protocol-1 there are 240 real-face videos and 960 spoofed ones, if my memory is correct, so I copy the real samples 4 times, mix the expanded real-video list with the spoofed-video list, and shuffle them. The result is a balanced training set. Then I randomly select one frame from every video in the balanced list and train on those. I set 10,000 samples per epoch with a batch size of 20, so each epoch runs 500 steps. That's it.
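For illustration, a minimal sketch of that balancing scheme (real list copied 4x, mixed with the spoof list, shuffled, one random frame per selected video); the video-record fields are hypothetical:

```python
import random

def balanced_epoch(real_videos, spoof_videos, samples_per_epoch=10000):
    # Oversample the real videos 4x (e.g. 240 real vs. 960 spoof in
    # Oulu-NPU Protocol-1), mix with the spoof list, and shuffle.
    pool = real_videos * 4 + spoof_videos
    random.shuffle(pool)
    samples = []
    for video in (random.choice(pool) for _ in range(samples_per_epoch)):
        # One random frame per selected video; "num_frames" is hypothetical.
        samples.append((video["name"], random.randrange(video["num_frames"])))
    return samples  # 10000 samples / batch size 20 -> 500 steps per epoch
```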

For your second question, just save the balanced training name list as a .txt file, in the same format the Oulu-NPU dataset provides.

Sorry, I cannot show you my code now; I have already graduated and my code has been deleted.

If you have any questions, just ask; I will reply when I have time.

Best wishes.


punitha-valli commented 4 years ago (via email)

Thank you so much. Can I ask you a personal question? Can you share your Line ID, for future reference? My Line ID: spv.krv

Thanks in advance

xuhangxuhang commented 4 years ago

Sorry, I don't have the Line app. I am from China, so you can contact me by email or WeChat.


punitha-valli commented 4 years ago

My WeChat ID: spv_krv

Can you please share your ID?

Thank you.


boom9807 commented 3 years ago


Hello, I would like to ask how the .dat files containing the bbox are generated. Could you share a relevant link? Many thanks.

quangtn266 commented 3 years ago

Hi,

I also have the same problem with the CelebA-Spoof data (screenshot from 2021-09-19 attached).

Is this a normal or abnormal case? The loss decreases very fast.

jamesdongdong commented 2 years ago


Sorry to bother you, but could you tell me how to do the data preparation, especially the bbox .dat file? Thanks!

quangtn266 commented 2 years ago


CelebA-Spoof used RetinaFace for face detection. The dataset provides bounding-box information, so if you want to follow it, you should read its README.md for more details: the bounding-box values need to be converted from x, y, h, w into real_x, real_y, real_h, real_w. In my opinion, the dataset isn't clean, so you should be careful with the data preprocessing (do more cross-checking).
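For illustration, a sketch of that bbox conversion. To my understanding of the CelebA-Spoof README, the *_BB.txt coordinates are annotated on a 224x224 scale and are mapped back to the original resolution as below; treat the exact recipe as an assumption and cross-check it against the dataset's own intro code:

```python
import cv2

def real_bbox(img_path, bb_path):
    # Rescale a CelebA-Spoof bbox (x, y, w, h on a 224x224 scale) to the
    # original image resolution; the 224 factor follows the dataset README.
    real_h, real_w = cv2.imread(img_path).shape[:2]
    x, y, w, h = [int(v) for v in open(bb_path).read().split()[:4]]  # x y w h [score]
    return (int(x * real_w / 224), int(y * real_h / 224),
            int(w * real_w / 224), int(h * real_h / 224))
```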