azeme1 / keras2ncnn

MIT License

Conversion Issue #3

Closed · rose-jinyang closed this issue 3 years ago

rose-jinyang commented 3 years ago

Hello, how are you? Thanks for contributing this project. I met the following issue when converting my Keras model to an NCNN model.

[screenshot of the conversion error]

Could u help me? If u want, I will send the Keras model. Thanks

azeme1 commented 3 years ago

Hi! Could you send the model? You can send it with random weights if you care about the security. Thank you!


rose-jinyang commented 3 years ago

Thanks. I have sent the Keras model: https://we.tl/t-ZdD7JQK1Xf

azeme1 commented 3 years ago

I succeeded in downloading the model.

```
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
model_2 (Model)              (None, 3, 3, 2048)        5347520
_________________________________________________________________
flatten_1 (Flatten)          (None, 18432)             0
_________________________________________________________________
dense_2 (Dense)              (None, 12)                221196
_________________________________________________________________
dense_3 (Dense)              (None, 8)                 104
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 9
=================================================================
Total params: 5,568,829
Trainable params: 5,540,797
Non-trainable params: 28,032
```

There are some unsupported cases:
1) a model nested inside the model ("model in the model")
2) Flatten
3) Dense

:( I need some time to support these.
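
For clarity, a hedged illustration of the structure that hits all three cases: a Sequential whose first element is itself a full Keras Model, followed by a Flatten/Dense head. The backbone and layer sizes below are assumptions made for the example (keras.applications.MobileNet stands in for the real 5.3M-parameter inner model from the summary above), not the actual model.

```python
# Hypothetical example only: it reproduces the shape of the reported model,
# not its exact backbone or weights.
from tensorflow import keras

backbone = keras.applications.MobileNet(input_shape=(224, 224, 3),
                                        include_top=False)  # nested Model ("model in the model")

model = keras.Sequential([
    backbone,
    keras.layers.Flatten(),                      # unsupported case 2
    keras.layers.Dense(12, activation='relu'),   # unsupported case 3
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid'),
])
model.summary()  # the nested backbone shows up as a single "Model" row, as above
```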

rose-jinyang commented 3 years ago

Thanks

rose-jinyang commented 3 years ago

Hi. Not done yet?

azeme1 commented 3 years ago

Yesterday I fixed two of the previous issues: I added support for Reshape and am working on model flattening. The inner model can be converted right now.

rose-jinyang commented 3 years ago

Hi. Did u try to convert the model that I sent? I met the following issue with your latest code.

[screenshot of the error]

azeme1 commented 3 years ago

Hi! Currently the model you are trying to export to ncnn inference is not supported by this version of the script, but it can be adapted. In the dev branch there is support for all required layers as well as the model transformation script (./_step_by_step/unpack_model.ipynb). I succeeded in getting the following conversion (with errors). You can download the converted model from https://www.dropbox.com/sh/8anok3k3jxjj81i/AADWMLad_V0MKs4ySN2mgPPda?dl=0 (model_zoo/variouse/issue_00003/model_2/model_2.param, model_zoo/variouse/issue_00003/model_2/model_2.bin).

====================By Layer Comparison ==================== Layer - input_1 :: 0.0 < 1e-05 True Layer - conv1_pad :: 0.0 < 1e-05 True Layer - conv1 :: 3.7258100604731226e-08 < 1e-05 True Layer - conv1_bn :: 2.2844560021439975e-07 < 1e-05 True Layer - conv1_relu_dwc_4 :: 1.5921442297894828e-07 < 1e-05 True Layer - conv1_relu :: 8.533345408068271e-08 < 1e-05 True Layer - conv_dw_1 :: 1.8246443005409674e-06 < 1e-05 True Layer - conv_dw_1_bn :: 7.280946761056839e-07 < 1e-05 True Layer - conv_dw_1_relu_dwc_8 :: 4.5385996827462805e-07 < 1e-05 True Layer - conv_dw_1_relu :: 2.0729468985791755e-07 < 1e-05 True Layer - conv_pw_1 :: 4.940412736686994e-07 < 1e-05 True Layer - conv_pw_1_bn :: 1.119398348237155e-06 < 1e-05 True Layer - conv_pw_1_relu_dwc_12 :: 6.509073955385247e-07 < 1e-05 True Layer - conv_pw_1_relu :: 4.126076760258002e-07 < 1e-05 True Layer - conv_pad_2 :: 4.053372606449557e-07 < 1e-05 True Layer - conv_dw_2 :: 3.564406370060169e-06 < 1e-05 True Layer - conv_dw_2_bn :: 7.151179488573689e-07 < 1e-05 True Layer - conv_dw_2_relu_dwc_17 :: 4.555199382139108e-07 < 1e-05 True Layer - conv_dw_2_relu :: 3.9110190641622467e-07 < 1e-05 True Layer - conv_pw_2 :: 5.29204726262833e-07 < 1e-05 True Layer - conv_pw_2_bn :: 8.80701065852918e-07 < 1e-05 True Layer - conv_pw_2_relu_dwc_21 :: 4.3631226276374946e-07 < 1e-05 True Layer - conv_pw_2_relu :: 4.019139510091918e-07 < 1e-05 True Layer - conv_dw_3 :: 2.9757591164525365e-06 < 1e-05 True Layer - conv_dw_3_bn :: 1.0630579936332651e-06 < 1e-05 True Layer - conv_dw_3_relu_dwc_25 :: 6.782701689189707e-07 < 1e-05 True Layer - conv_dw_3_relu :: 5.83311987156776e-07 < 1e-05 True Layer - conv_pw_3 :: 1.0395164053988992e-06 < 1e-05 True Layer - conv_pw_3_bn :: 1.5218593034660444e-06 < 1e-05 True Layer - conv_pw_3_relu_dwc_29 :: 7.284947969310451e-07 < 1e-05 True Layer - conv_pw_3_relu :: 6.419008968805429e-07 < 1e-05 True Layer - conv_pad_4 :: 6.195755304361228e-07 < 1e-05 True Layer - conv_dw_4 :: 3.4399233754811576e-06 < 1e-05 True Layer - conv_dw_4_bn :: 7.606022336403839e-07 < 1e-05 True Layer - conv_dw_4_relu_dwc_34 :: 5.629441375276656e-07 < 1e-05 True Layer - conv_dw_4_relu :: 5.530654902941023e-07 < 1e-05 True Layer - conv_pw_4 :: 6.605893076994107e-07 < 1e-05 True Layer - conv_pw_4_bn :: 8.946276466303971e-07 < 1e-05 True Layer - conv_pw_4_relu_dwc_38 :: 6.792343469896878e-07 < 1e-05 True Layer - conv_pw_4_relu :: 6.625513151448104e-07 < 1e-05 True Layer - conv_dw_5 :: 2.8502643090178026e-06 < 1e-05 True Layer - conv_dw_5_bn :: 1.0309681783837732e-06 < 1e-05 True Layer - conv_dw_5_relu_dwc_42 :: 5.252525738796976e-07 < 1e-05 True Layer - conv_dw_5_relu :: 5.237130267232715e-07 < 1e-05 True Layer - conv_pw_5 :: 8.30673116070102e-07 < 1e-05 True Layer - conv_pw_5_bn :: 1.1655911293928511e-06 < 1e-05 True Layer - conv_pw_5_relu_dwc_46 :: 5.153517577127786e-07 < 1e-05 True Layer - conv_pw_5_relu :: 5.130875706527149e-07 < 1e-05 True Layer - conv_pad_6 :: 4.783123017659818e-07 < 1e-05 True Layer - conv_dw_6 :: 2.0972772745153634e-06 < 1e-05 True Layer - conv_dw_6_bn :: 6.773416885152983e-07 < 1e-05 True Layer - conv_dw_6_relu_dwc_51 :: 5.093107802167651e-07 < 1e-05 True Layer - conv_dw_6_relu :: 5.069159101367404e-07 < 1e-05 True Layer - conv_pw_6 :: 6.805627776884648e-07 < 1e-05 True Layer - conv_pw_6_bn :: 8.918196954255109e-07 < 1e-05 True Layer - conv_pw_6_relu_dwc_55 :: 5.69438441289094e-07 < 1e-05 True Layer - conv_pw_6_relu :: 5.674094722962764e-07 < 1e-05 True Layer - conv_dw_7 :: 1.9915755729016382e-06 < 1e-05 True Layer - conv_dw_7_bn :: 
9.861482794804033e-07 < 1e-05 True Layer - conv_dw_7_relu_dwc_59 :: 4.528903616574098e-07 < 1e-05 True Layer - conv_dw_7_relu :: 4.5172143359195616e-07 < 1e-05 True Layer - conv_pw_7 :: 8.19523052086879e-07 < 1e-05 True Layer - conv_pw_7_bn :: 8.800943191999977e-07 < 1e-05 True Layer - conv_pw_7_relu_dwc_63 :: 5.596235723714926e-07 < 1e-05 True Layer - conv_pw_7_relu :: 5.588918270404974e-07 < 1e-05 True Layer - conv_dw_8 :: 1.674008103691449e-06 < 1e-05 True Layer - conv_dw_8_bn :: 8.048725703702075e-07 < 1e-05 True Layer - conv_dw_8_relu_dwc_67 :: 4.1949058982027054e-07 < 1e-05 True Layer - conv_dw_8_relu :: 4.1932904082386813e-07 < 1e-05 True Layer - conv_pw_8 :: 7.439441560563864e-07 < 1e-05 True Layer - conv_pw_8_bn :: 9.948827255357173e-07 < 1e-05 True Layer - conv_pw_8_relu_dwc_71 :: 5.422878075478366e-07 < 1e-05 True Layer - conv_pw_8_relu :: 5.421832725005515e-07 < 1e-05 True Layer - conv_dw_9 :: 1.6282177739412873e-06 < 1e-05 True Layer - conv_dw_9_bn :: 7.491075280086079e-07 < 1e-05 True Layer - conv_dw_9_relu_dwc_75 :: 4.229526666676975e-07 < 1e-05 True Layer - conv_dw_9_relu :: 4.2287663859497115e-07 < 1e-05 True Layer - conv_pw_9 :: 7.848150289646583e-07 < 1e-05 True Layer - conv_pw_9_bn :: 1.134727313001349e-06 < 1e-05 True Layer - conv_pw_9_relu_dwc_79 :: 5.724123752770538e-07 < 1e-05 True Layer - conv_pw_9_relu :: 5.72217516037199e-07 < 1e-05 True Layer - conv_dw_10 :: 1.7440712554162019e-06 < 1e-05 True Layer - conv_dw_10_bn :: 7.070102014949953e-07 < 1e-05 True Layer - conv_dw_10_relu_dwc_83 :: 3.967492432366271e-07 < 1e-05 True Layer - conv_dw_10_relu :: 3.9602699075658165e-07 < 1e-05 True Layer - conv_pw_10 :: 8.350396001333138e-07 < 1e-05 True Layer - conv_pw_10_bn :: 1.2172243941677152e-06 < 1e-05 True Layer - conv_pw_10_relu_dwc_87 :: 5.80161326979578e-07 < 1e-05 True Layer - conv_pw_10_relu :: 5.794913704448845e-07 < 1e-05 True Layer - conv_dw_11 :: 1.6367295074815047e-06 < 1e-05 True Layer - conv_dw_11_bn :: 7.116474307622411e-07 < 1e-05 True Layer - conv_dw_11_relu_dwc_91 :: 3.93962892530908e-07 < 1e-05 True Layer - conv_dw_11_relu :: 3.9290802078539855e-07 < 1e-05 True Layer - conv_pw_11 :: 8.766119776737469e-07 < 1e-05 True Layer - conv_pw_11_bn :: 1.2863979463872965e-06 < 1e-05 True Layer - conv_pw_11_relu_dwc_95 :: 4.781298912348575e-07 < 1e-05 True Layer - conv_pw_11_relu :: 4.707648599833192e-07 < 1e-05 True Layer - conv_pad_12 :: 4.100884893887269e-07 < 1e-05 True Layer - conv_dw_12 :: 1.5101520602911478e-06 < 1e-05 True Layer - conv_dw_12_bn :: 5.352550260795397e-07 < 1e-05 True Layer - conv_dw_12_relu_dwc_100 :: 4.168588816355623e-07 < 1e-05 True Layer - conv_dw_12_relu :: 4.148061805153702e-07 < 1e-05 True Layer - conv_pw_12 :: 7.672036304029461e-07 < 1e-05 True Layer - conv_pw_12_bn :: 1.765855131452554e-06 < 1e-05 True Layer - conv_pw_12_relu_dwc_104 :: 4.339024712862738e-07 < 1e-05 True Layer - conv_pw_12_relu :: 4.33455824122575e-07 < 1e-05 True Layer - conv_dw_13 :: 9.752612868396682e-07 < 1e-05 True Layer - conv_dw_13_bn :: 1.0222001947113313e-06 < 1e-05 True Layer - conv_dw_13_relu_dwc_108 :: 4.679841083543579e-07 < 1e-05 True Layer - conv_dw_13_relu :: 4.6782253093624604e-07 < 1e-05 True Layer - conv_pw_13 :: 1.3246053640614264e-06 < 1e-05 True Layer - conv_pw_13_bn :: 1.2528376828413457e-05 < 1e-05 False Layer - conv_pw_13_relu_dwc_112 :: 1.0314912515241303e-06 < 1e-05 True Layer - conv_pw_13_relu :: 8.512613476341357e-07 < 1e-05 True Layer - conv_pad_56 :: 6.517470865219366e-07 < 1e-05 True Layer - conv_dw_56 :: 5.803903135870314e-08 < 1e-05 
True Layer - conv_dw_56_bn :: 7.061748874548357e-07 < 1e-05 True Layer - conv_dw_56_relu_dwc_117 :: 3.6092387745156884e-07 < 1e-05 True Layer - conv_dw_56_relu :: 3.6092387745156884e-07 < 1e-05 True Layer - conv_pw_56 :: 1.1012721188308205e-06 < 1e-05 True Layer - conv_pw_56_bn :: 2.4319078875123523e-06 < 1e-05 True Layer - conv_pw_56_relu_dwc_121 :: 1.2107248039683327e-06 < 1e-05 True Layer - conv_pw_56_relu :: 1.2107248039683327e-06 < 1e-05 True Layer - activation_1 :: 1.2107248039683327e-06 < 1e-05 True Layer - flatten_1 :: 1.2107248039683327e-06 < 1e-05 True Layer - dense_2 :: 2.066294428004767e-06 < 1e-05 True Layer - dense_3 :: 1.2069940567016602e-06 < 1e-05 True Layer - dense_4 :: 1.1324882507324219e-06 < 1e-05 True
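
For reference, a minimal sketch of what each line of such a by-layer check amounts to. The project's own comparison lives in run_test.py, so the function name, the squeeze of the batch dimension, and the CHW-to-HWC transpose below are assumptions, not the actual implementation.

```python
import numpy as np

def compare_layer(name, keras_out, ncnn_mat, tol=1e-5):
    # Compare one Keras activation with the matching NCNN blob.
    ncnn_arr = np.array(ncnn_mat)                     # NCNN feature maps are C,H,W
    if ncnn_arr.ndim == 3:
        ncnn_arr = np.transpose(ncnn_arr, (1, 2, 0))  # reorder to H,W,C like Keras
    diff = float(np.max(np.abs(np.squeeze(keras_out) - ncnn_arr)))
    print(f'Layer - {name} :: {diff} < {tol} {diff < tol}')
    return diff < tol
```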

rose-jinyang commented 3 years ago

Hello, how are you? I met the following issue when converting my Keras model.

[screenshot of the error]

I have sent the Keras model, inference scripts, and test images: https://we.tl/t-nTv97o2zpD

azeme1 commented 3 years ago

Hello, I will check it tomorrow.


rose-jinyang commented 3 years ago

thanks

azeme1 commented 3 years ago

There is the same problem as before. Could you take a look at the adaptation script https://github.com/azeme1/keras2ncnn/blob/main/_step_by_step/unpack_model.ipynb ? It performs a conversion of the mixed model config to a Functional one. With this script you can convert the model to a Functional one; after that you can convert the model to NCNN.
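
A minimal sketch of the idea behind that notebook, assuming the outer model is a Sequential whose first element is the nested Model; this is not the notebook's actual code, and the file names in the usage comments are placeholders.

```python
from tensorflow import keras

def flatten_nested_sequential(seq_model):
    # Re-wire the outer Sequential's trailing layers onto the nested Model's
    # graph so the result is one flat functional Model.
    inner = seq_model.layers[0]           # the nested Model ("model in the model")
    x = inner.output
    for layer in seq_model.layers[1:]:    # Flatten, Dense, ... from the outer model
        x = layer(x)
    return keras.Model(inputs=inner.input, outputs=x)

# flat = flatten_nested_sequential(keras.models.load_model('model.h5'))
# flat.save('model_flat.h5')              # convert this flat .h5 with keras2ncnn
```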


rose-jinyang commented 3 years ago

Hi, I flattened the original Keras model, and the flattened model works as well as the original model. I converted the flattened model to an NCNN model successfully. I have sent the newly flattened Keras model, the converted NCNN model, and test images: https://we.tl/t-qMnuQsiZns This model's input is a full image rather than a cropped face image. The results from the Keras model are the following.

Testing for a real face image ... Score: 0.9982519 Score: 0.9731081 Score: 0.84656346 Score: 0.9993873 Score: 0.9993707

Testing for a fake face image ... Score: 1.8225162e-05 Score: 0.00020772219 Score: 0.00014454126 Score: 6.423831e-06 Score: 3.918015e-06 Score: 0.0070014 Score: 0.00067165494 Score: 9.122233e-05

This Keras model works well: a real image scores > 0.5 and a fake image scores < 0.5.

But the results from "dense_4lsigmoid_3_0" of the converted NCNN model are the following: Testing for a real image ... score: 0.013702393 score: 0.77490234 score: 0.056793213 score: 0.04916382 score: 0.25073242

Testing for a fake image ... score: 0.027145386 score: 0.63378906 score: 0.546875 score: 0.44726562 score: 0.11468506 score: 0.85839844 score: 0.2734375 score: 0.06060791

So I can NOT separate the real images from the fake images. Could u fix this issue asap? Thanks

azeme1 commented 3 years ago

According to my test the model is converted correctly. ====================By Layer Comparison ==================== Layer - input_1 :: 0.0 < 1e-05 True Layer - conv1_pad :: 0.0 < 1e-05 True Layer - conv1 :: 3.7230854843528505e-08 < 1e-05 True Layer - conv1_bn :: 2.443424023113039e-07 < 1e-05 True Layer - conv1_relu_dwc_4 :: 1.597124139607331e-07 < 1e-05 True Layer - conv1_relu :: 8.634822279418586e-08 < 1e-05 True Layer - conv_dw_1 :: 1.8754068378257216e-06 < 1e-05 True Layer - conv_dw_1_bn :: 7.07651452103164e-07 < 1e-05 True Layer - conv_dw_1_relu_dwc_8 :: 4.4686271394311916e-07 < 1e-05 True Layer - conv_dw_1_relu :: 2.0393959232478664e-07 < 1e-05 True Layer - conv_pw_1 :: 4.802334956366394e-07 < 1e-05 True Layer - conv_pw_1_bn :: 1.0853639196284348e-06 < 1e-05 True Layer - conv_pw_1_relu_dwc_12 :: 6.162075010252011e-07 < 1e-05 True Layer - conv_pw_1_relu :: 3.9000002516331733e-07 < 1e-05 True Layer - conv_pad_2 :: 3.831279684618494e-07 < 1e-05 True Layer - conv_dw_2 :: 3.36914490617346e-06 < 1e-05 True Layer - conv_dw_2_bn :: 7.091483666954446e-07 < 1e-05 True Layer - conv_dw_2_relu_dwc_17 :: 4.5735501430499426e-07 < 1e-05 True Layer - conv_dw_2_relu :: 3.894040787599806e-07 < 1e-05 True Layer - conv_pw_2 :: 5.299741587805329e-07 < 1e-05 True Layer - conv_pw_2_bn :: 8.900323109628516e-07 < 1e-05 True Layer - conv_pw_2_relu_dwc_21 :: 4.4668630039268464e-07 < 1e-05 True Layer - conv_pw_2_relu :: 4.045095067795046e-07 < 1e-05 True Layer - conv_dw_3 :: 2.8693002604995854e-06 < 1e-05 True Layer - conv_dw_3_bn :: 1.0446132137076347e-06 < 1e-05 True Layer - conv_dw_3_relu_dwc_25 :: 6.579389264516067e-07 < 1e-05 True Layer - conv_dw_3_relu :: 5.537245897357934e-07 < 1e-05 True Layer - conv_pw_3 :: 1.002739281830145e-06 < 1e-05 True Layer - conv_pw_3_bn :: 1.4379198773895041e-06 < 1e-05 True Layer - conv_pw_3_relu_dwc_29 :: 7.001092967584555e-07 < 1e-05 True Layer - conv_pw_3_relu :: 6.137004220363451e-07 < 1e-05 True Layer - conv_pad_4 :: 5.92356002471206e-07 < 1e-05 True Layer - conv_dw_4 :: 3.202916559530422e-06 < 1e-05 True Layer - conv_dw_4_bn :: 6.890845156704017e-07 < 1e-05 True Layer - conv_dw_4_relu_dwc_34 :: 5.0095860615329e-07 < 1e-05 True Layer - conv_dw_4_relu :: 4.929805754727568e-07 < 1e-05 True Layer - conv_pw_4 :: 6.106693604124303e-07 < 1e-05 True Layer - conv_pw_4_bn :: 8.247879463851859e-07 < 1e-05 True Layer - conv_pw_4_relu_dwc_38 :: 6.322048307083605e-07 < 1e-05 True Layer - conv_pw_4_relu :: 6.193706667545484e-07 < 1e-05 True Layer - conv_dw_5 :: 2.7187311388843227e-06 < 1e-05 True Layer - conv_dw_5_bn :: 9.734371815284248e-07 < 1e-05 True Layer - conv_dw_5_relu_dwc_42 :: 4.881798076894484e-07 < 1e-05 True Layer - conv_dw_5_relu :: 4.867257530349889e-07 < 1e-05 True Layer - conv_pw_5 :: 7.771254217914247e-07 < 1e-05 True Layer - conv_pw_5_bn :: 1.0885389656323241e-06 < 1e-05 True Layer - conv_pw_5_relu_dwc_46 :: 4.826293888982036e-07 < 1e-05 True Layer - conv_pw_5_relu :: 4.808808284906263e-07 < 1e-05 True Layer - conv_pad_6 :: 4.4828837530985766e-07 < 1e-05 True Layer - conv_dw_6 :: 1.862143676589767e-06 < 1e-05 True Layer - conv_dw_6_bn :: 6.068391371627513e-07 < 1e-05 True Layer - conv_dw_6_relu_dwc_51 :: 4.58802020375515e-07 < 1e-05 True Layer - conv_dw_6_relu :: 4.5604608089888643e-07 < 1e-05 True Layer - conv_pw_6 :: 6.152034188744437e-07 < 1e-05 True Layer - conv_pw_6_bn :: 8.100576565084339e-07 < 1e-05 True Layer - conv_pw_6_relu_dwc_55 :: 5.050249569649168e-07 < 1e-05 True Layer - conv_pw_6_relu :: 5.01418469411874e-07 < 1e-05 True Layer - conv_dw_7 :: 
1.8488552768758382e-06 < 1e-05 True Layer - conv_dw_7_bn :: 9.034908430294308e-07 < 1e-05 True Layer - conv_dw_7_relu_dwc_59 :: 4.1137911921396153e-07 < 1e-05 True Layer - conv_dw_7_relu :: 4.1027672637028445e-07 < 1e-05 True Layer - conv_pw_7 :: 7.594150019940571e-07 < 1e-05 True Layer - conv_pw_7_bn :: 8.168773319994216e-07 < 1e-05 True Layer - conv_pw_7_relu_dwc_63 :: 5.214026259636739e-07 < 1e-05 True Layer - conv_pw_7_relu :: 5.206233595345111e-07 < 1e-05 True Layer - conv_dw_8 :: 1.5459565929631935e-06 < 1e-05 True Layer - conv_dw_8_bn :: 7.450656198670913e-07 < 1e-05 True Layer - conv_dw_8_relu_dwc_67 :: 3.973259481426794e-07 < 1e-05 True Layer - conv_dw_8_relu :: 3.9680801933172916e-07 < 1e-05 True Layer - conv_pw_8 :: 7.383512752312527e-07 < 1e-05 True Layer - conv_pw_8_bn :: 9.912671430356568e-07 < 1e-05 True Layer - conv_pw_8_relu_dwc_71 :: 5.399718929766095e-07 < 1e-05 True Layer - conv_pw_8_relu :: 5.395442030931008e-07 < 1e-05 True Layer - conv_dw_9 :: 1.6389185475418344e-06 < 1e-05 True Layer - conv_dw_9_bn :: 7.608250598423183e-07 < 1e-05 True Layer - conv_dw_9_relu_dwc_75 :: 4.44163589463642e-07 < 1e-05 True Layer - conv_dw_9_relu :: 4.4316573166724993e-07 < 1e-05 True Layer - conv_pw_9 :: 8.272878062598465e-07 < 1e-05 True Layer - conv_pw_9_bn :: 1.2009347756247735e-06 < 1e-05 True Layer - conv_pw_9_relu_dwc_79 :: 6.041718165761267e-07 < 1e-05 True Layer - conv_pw_9_relu :: 6.039484787834226e-07 < 1e-05 True Layer - conv_dw_10 :: 1.8583956489237607e-06 < 1e-05 True Layer - conv_dw_10_bn :: 7.628405001014471e-07 < 1e-05 True Layer - conv_dw_10_relu_dwc_83 :: 4.5203196918919275e-07 < 1e-05 True Layer - conv_dw_10_relu :: 4.503023660618055e-07 < 1e-05 True Layer - conv_pw_10 :: 9.233917808160186e-07 < 1e-05 True Layer - conv_pw_10_bn :: 1.3518978221327416e-06 < 1e-05 True Layer - conv_pw_10_relu_dwc_87 :: 6.728466814820422e-07 < 1e-05 True Layer - conv_pw_10_relu :: 6.718773875036277e-07 < 1e-05 True Layer - conv_dw_11 :: 1.9539895674824947e-06 < 1e-05 True Layer - conv_dw_11_bn :: 8.644354352327355e-07 < 1e-05 True Layer - conv_dw_11_relu_dwc_91 :: 5.31340958787041e-07 < 1e-05 True Layer - conv_dw_11_relu :: 5.273733449939755e-07 < 1e-05 True Layer - conv_pw_11 :: 1.121909122048237e-06 < 1e-05 True Layer - conv_pw_11_bn :: 1.6393379382861895e-06 < 1e-05 True Layer - conv_pw_11_relu_dwc_95 :: 6.404790156011586e-07 < 1e-05 True Layer - conv_pw_11_relu :: 6.299398478404328e-07 < 1e-05 True Layer - conv_pad_12 :: 5.487475505105976e-07 < 1e-05 True Layer - conv_dw_12 :: 2.138320496669621e-06 < 1e-05 True Layer - conv_dw_12_bn :: 7.729831850156188e-07 < 1e-05 True Layer - conv_dw_12_relu_dwc_100 :: 6.154533593871747e-07 < 1e-05 True Layer - conv_dw_12_relu :: 6.082878485358378e-07 < 1e-05 True Layer - conv_pw_12 :: 9.803339935388067e-07 < 1e-05 True Layer - conv_pw_12_bn :: 2.233570285170572e-06 < 1e-05 True Layer - conv_pw_12_relu_dwc_104 :: 4.963793571732822e-07 < 1e-05 True Layer - conv_pw_12_relu :: 4.953055281475827e-07 < 1e-05 True Layer - conv_dw_13 :: 1.1610383126026136e-06 < 1e-05 True Layer - conv_dw_13_bn :: 1.2523712484835414e-06 < 1e-05 True Layer - conv_dw_13_relu_dwc_108 :: 5.511599852070503e-07 < 1e-05 True Layer - conv_dw_13_relu :: 5.49801029592345e-07 < 1e-05 True Layer - conv_pw_13 :: 1.4038182598596904e-06 < 1e-05 True Layer - conv_pw_13_bn :: 1.3129026228853036e-05 < 1e-05 False Layer - conv_pw_13_relu_dwc_112 :: 1.398661765961151e-06 < 1e-05 True Layer - conv_pw_13_relu :: 1.1579910506043234e-06 < 1e-05 True Layer - conv_pad_56 :: 8.865869176588603e-07 < 
1e-05 True Layer - conv_dw_56 :: 8.279125296439815e-08 < 1e-05 True Layer - conv_dw_56_bn :: 9.61469481808308e-07 < 1e-05 True Layer - conv_dw_56_relu_dwc_117 :: 4.771437147610413e-07 < 1e-05 True Layer - conv_dw_56_relu :: 4.771437147610413e-07 < 1e-05 True Layer - conv_pw_56 :: 1.3121513120495365e-06 < 1e-05 True Layer - conv_pw_56_bn :: 2.8301169550104532e-06 < 1e-05 True Layer - conv_pw_56_relu_dwc_121 :: 1.4058588249099557e-06 < 1e-05 True Layer - conv_pw_56_relu :: 1.4058588249099557e-06 < 1e-05 True Layer - activation_1 :: 1.4058588249099557e-06 < 1e-05 True Layer - flatten_1 :: 1.4058588249099557e-06 < 1e-05 True Layer - dense_2 :: 4.291534423828125e-06 < 1e-05 True Layer - dense_3 :: 2.2193416953086853e-06 < 1e-05 True Layer - dense_4 :: 5.960464477539063e-08 < 1e-05 True

azeme1 commented 3 years ago

My output from keras Testing for a real face image ... WARNING:tensorflow:No training configuration found in the save file, so the model was not compiled. Compile it manually. 2020-12-19 21:32:09.229901: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:127] None of the MLIR optimization passes are enabled (registered 2) WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function..predict_function at 0x000002110FCC8D38> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: 'arguments' object has no attribute 'posonlyargs' To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert Score: 0.9982519 Score: 0.9731081 Score: 0.84656346 Score: 0.9993873 Score: 0.9993707

Testing for a fake face image ... Score: 1.8225162e-05 Score: 0.00020772219 Score: 0.00014454126 Score: 6.4238316e-06 Score: 3.918015e-06 Score: 0.0070014 Score: 0.00067165494 Score: 9.122233e-05

Process finished with exit code 0

My output from NCNN Testing for a real face image ... Score: 0.9982519 Score: 0.97310805 Score: 0.84656477 Score: 0.9993874 Score: 0.99937063

Testing for a fake face image ... Score: 1.8224779e-05 Score: 0.00020767072 Score: 0.00014449321 Score: 6.423796e-06 Score: 3.918037e-06 Score: 0.007001325 Score: 0.000671657 Score: 9.1214446e-05

azeme1 commented 3 years ago

The issue is in your inference Python script. Here is the correct version:

import ncnn
import numpy as np
import os

class LivenessDetector:
    def __init__(self, num_thread=1):
        self.in_w = 224
        self.in_h = 224
        self.num_thread = num_thread
        self.mean_vals = [0, 0, 0]
        self.norm_vals = [1.0 / 255, 1.0 / 255, 1.0 / 255]

        self.net = ncnn.Net()
        dirname = os.path.dirname(__file__)
        param_path = os.path.join(dirname, 'model_2.param')
        bin_path = os.path.join(dirname, 'model_2.bin')
        ret = self.net.load_param(param_path)
        if ret < 0:
            print('Failed in loading the liveness detection model')
        ret = self.net.load_model(bin_path)
        if ret < 0:
            print('Failed in loading the liveness detection model')

    def is_live(self, bgr_img, threshold=0.5):
        bgr_img = np.ascontiguousarray(bgr_img[...,::-1])
        input_data = ncnn.Mat.from_pixels_resize(bgr_img, ncnn.Mat.PixelType.PIXEL_BGR,
                                                 bgr_img.shape[1], bgr_img.shape[0],
                                                 self.in_w, self.in_h)
        input_data.substract_mean_normalize(self.mean_vals, self.norm_vals)
        ex = self.net.create_extractor()
        ex.set_num_threads(self.num_thread)
        ex.input("input_1", input_data)
        scores = ncnn.Mat()
        ex.extract("dense_4lsigmoid_0", scores)
        # print(scores)
        score = np.array(scores)[0]
        print('Score: ', score)
        return True if score > threshold else False

if __name__ == '__main__':
    import time
    import cv2
    import os
    import glob

    liveness_detector = LivenessDetector()

    print('Testing for a real face image ...')
    dirname = os.path.dirname(__file__)
    img_dir = os.path.join(dirname, 'real')
    img_paths = glob.glob(os.path.join(img_dir, '*.*'))
    for img_path in sorted(img_paths):
        # print(img_path)
        face_img = cv2.imread(img_path)
        is_live = liveness_detector.is_live(face_img)

    print('\n\nTesting for a fake face image ...')
    dirname = os.path.dirname(__file__)
    img_dir = os.path.join(dirname, 'fake')
    img_paths = glob.glob(os.path.join(img_dir, '*.*'))
    for img_path in sorted(img_paths):
        # print(img_path)
        face_img = cv2.imread(img_path)
        is_live = liveness_detector.is_live(face_img)

rose-jinyang commented 3 years ago

Thanks for your help. I followed your instructions: I flattened the original Keras model using your unpack script and checked that the flattened model works like the original. I converted the flattened model with your latest conversion script and tested the test images. The results from the flattened Keras model are the following; of course, these are the same as before.

Testing for a real face image ... Score: 0.9982519 Score: 0.9731081 Score: 0.84656346 Score: 0.9993873 Score: 0.9993707

Testing for a fake face image ... Score: 1.8225162e-05 Score: 0.00020772219 Score: 0.00014454126 Score: 6.423831e-06 Score: 3.918015e-06 Score: 0.0070014 Score: 0.00067165494 Score: 9.122233e-05

I installed NCNN (2020.12.02 version) and PyNCNN as described in your README. The results from NCNN are the following.

Testing for a real image ... score: 0.9970703 score: 0.9169922 score: 0.99902344 score: 0.9975586 score: 0.93359375

Testing for a fake image ... score: 0.0010938644 score: 1.9133091e-05 score: 0.00021469593 score: 1.0251999e-05 score: 9.596348e-06 score: 0.00015616417 score: 0.0042762756 score: 0.00025200844

Of course, the real faces and fake images can be separated by a threshold of 0.5. But these results from the NCNN model are too different from the results from the Keras model. I tested several times but can NOT find the reason. Could u help me asap? I have sent the original Keras model, the flattened model, the converted NCNN model, and scripts: https://we.tl/t-8G3NQBXh9E Thanks.

azeme1 commented 3 years ago

Hello, I downloaded your code and ran it with ncnn on Windows (see the screenshot as well):

envs\keras2ncnn_tf2x\python.exe keras2ncnn/model_zoo/variouse/issue_00003.a/liveness-2020.12.21/liveness_detector_ncnn.py

Testing for a real image ... score: 0.9982519 score: 0.97310805 score: 0.84656477 score: 0.9993874 score: 0.99937063

Testing for a fake image ... score: 1.8224779e-05 score: 0.00020767072 score: 0.00014449321 score: 6.423796e-06 score: 3.918037e-06 score: 0.007001325 score: 0.000671657 score: 9.1214446e-05

azeme1 commented 3 years ago

[Screenshot 2020-12-21 110129]

rose-jinyang commented 3 years ago

Okay. This means that there is no issue in the models and scripts. There may be an issue in my NCNN & PyNCNN environment on the aarch64 platform. Thanks

azeme1 commented 3 years ago

After running the provided scripts, a fix is still required. It looks like you executed the wrong script (you cannot get any results from your original script):

Traceback (most recent call last): File "keras2ncnn/model_zoo/variouse/issue_00003.a/liveness-2020.12.21/liveness_detector_ncnn.py", line 46, in liveness_detector = LivenessDetector(num_threads=1) TypeError: __init__() got an unexpected keyword argument 'num_threads'

Could you run this version:

import ncnn
import numpy as np
import os

class LivenessDetector:
    def __init__(self, num_threads=2):
        self.in_w = 224
        self.in_h = 224
        self.num_thread = num_threads
        self.mean_vals = [0, 0, 0]
        self.norm_vals = [1.0/255, 1.0/255, 1.0/255]

        self.net = ncnn.Net()
        dirname = os.path.dirname(__file__)
        param_path = os.path.join(dirname, 'New_puru_fiop_mhere_prod_flat.param')
        bin_path = os.path.join(dirname, 'New_puru_fiop_mhere_prod_flat.bin')
        ret = self.net.load_param(param_path)
        if ret < 0:
            print('Failed in loading the liveness detection model')
        ret = self.net.load_model(bin_path)
        if ret < 0:
            print('Failed in loading the liveness detection model')

    def is_live(self, bgr_img, threshold=0.5):
        bgr_img = np.ascontiguousarray(bgr_img[..., ::-1])
        input_data = ncnn.Mat.from_pixels_resize(bgr_img, ncnn.Mat.PixelType.PIXEL_BGR,
                                                 bgr_img.shape[1], bgr_img.shape[0],
                                                 self.in_w, self.in_h)
        input_data.substract_mean_normalize(self.mean_vals, self.norm_vals)
        ex = self.net.create_extractor()
        ex.set_num_threads(self.num_thread)
        ex.input("input_1_3_0", input_data)
        scores = ncnn.Mat()
        ex.extract("dense_4lsigmoid_3_0", scores)
        # print(scores)
        score = np.array(scores)[0]
        print('score: ', score)
        return True if score > threshold else False

if __name__ == '__main__':
    import time
    import cv2
    import os
    import glob

    liveness_detector = LivenessDetector(num_threads=1)

    print('Testing for a real image ...')
    dirname = os.path.dirname(__file__)
    img_dir = os.path.join(dirname, 'new/real')
    img_paths = glob.glob(os.path.join(img_dir, '*.*'))
    for img_path in img_paths:
        face_img = cv2.imread(img_path)
        st = time.time()
        is_live = liveness_detector.is_live(face_img)
        elapsed = (time.time() - st) * 1000
        # print('Elapsed: {} ms'.format(elapsed))
        # print('live: ', is_live)

    print('\nTesting for a fake image ...')
    dirname = os.path.dirname(__file__)
    img_dir = os.path.join(dirname, 'new/fake')
    img_paths = glob.glob(os.path.join(img_dir, '*.*'))
    for img_path in img_paths:
        face_img = cv2.imread(img_path)
        st = time.time()
        is_live = liveness_detector.is_live(face_img)
        elapsed = (time.time() - st) * 1000
        # print('Elapsed: {} ms'.format(elapsed))
        # print('live: ', is_live)

azeme1 commented 3 years ago

It looks like the problem is in the Python script.


azeme1 commented 3 years ago

Here is the link to reproduce the result from the optimized Keras model. The results are the same. Could you be so kind as to reproduce them in your local environment? NOTE: the model is converted from the dev branch. https://we.tl/t-qp3cABYRXY

rose-jinyang commented 3 years ago

Hi, I tested the model "model_2" that u just sent me. The results from the model on the aarch64 platform are the following.

Testing for a real image ... score: 0.99937063 score: 0.99825186 score: 0.9993874 score: 0.97310877 score: 0.8465702

Testing for a fake image ... score: 1.8225004e-05 score: 0.00014450065 score: 0.0070015634 score: 3.9179736e-06 score: 6.4238207e-06 score: 0.0006716534 score: 9.121627e-05 score: 0.00020766635

These results are almost equal to the results from the original Keras model. Of course, the output order is different because of the OS. This means that the model "model_2" was converted correctly from the original Keras model. That is, your main branch and dev branch are different.

azeme1 commented 3 years ago

I think that everything is correct in your local environment; it looks like a typo somewhere in your Python script. I suggest you also run the conversion from run_test.py to see the by-layer comparison of the converted model. The dev branch contains the fully optimized profile version; in 99% of cases you should not run ncnnoptimize after it. I will push the changes to main soon... PS: Thanks for testing my project :)


rose-jinyang commented 3 years ago

I hope that the main branch will output the same model as the dev branch. Thanks

azeme1 commented 3 years ago

At least I can confirm the correctness of both branches.
