Closed TCBocean closed 4 years ago
It seems there is a shape inference error for the eltwise operator named tf_op_layer_add_8/add_8: broadcasting the first input is not supported for the eltwise operator in MNN, which leads to an empty-size output. We will fix the problem soon; as a workaround, you could insert a broadcast op between the Relu op named re_lu_23_1/Relu and the AddV2 op named tf_op_layer_add_8/add_8.
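To illustrate why an unsupported broadcast can produce an "empty size output", here is a minimal, self-contained sketch of NumPy-style elementwise broadcast shape inference. This is a generic illustration, not MNN's actual implementation; the function name `broadcastShape` is hypothetical, and returning an empty shape stands in for the failed inference described above.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: compute the broadcast output shape of two tensor
// shapes, aligning from the trailing dimension (NumPy-style rules).
// Returns an empty vector when the shapes are incompatible -- analogous
// to the empty-size output produced when shape inference fails.
std::vector<int> broadcastShape(const std::vector<int>& a,
                                const std::vector<int>& b) {
    size_t n = std::max(a.size(), b.size());
    std::vector<int> out(n);
    for (size_t i = 0; i < n; ++i) {
        // Missing leading dimensions are treated as 1.
        int da = i < a.size() ? a[a.size() - 1 - i] : 1;
        int db = i < b.size() ? b[b.size() - 1 - i] : 1;
        if (da != db && da != 1 && db != 1) {
            return {};  // incompatible: no valid broadcast exists
        }
        out[n - 1 - i] = std::max(da, db);
    }
    return out;
}
```

For example, shapes {1, 64, 1} and {1, 1, 256} broadcast to {1, 64, 256}, while {3} and {4} are incompatible. A backend that refuses to broadcast one particular input would hit the failure path even for shapes that are compatible under these generic rules.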
Closing this issue since the problem has been resolved. Thank you very much.
The model runs on an Android phone; the phone's SoC is a Snapdragon 660. I tried both the 1.0.1 release and commit 5b3ed465c554ce1ab7023bb6126bf8d837e4ee60 from GitHub. The compilation platform is Ubuntu 18.04, and the compilation method follows the tutorial on Yuque; no errors were reported during compilation.
My model is converted from a TF 1.15 pb model. The model structure is Bisenetv2 with some modifications. My model is shared on Baidu Cloud: pb model: https://pan.baidu.com/s/1964AbWUYyF71-VM7YqY6Zw (extraction code: br8a); MNN model: https://pan.baidu.com/s/1tiZhSC38ldwpdOkc7brdAg (extraction code: clut). The main code of the model call is as follows:
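(The poster's actual code is missing from the scraped thread. As a stand-in, here is a minimal sketch of a typical MNN C++ inference call sequence; the model path is a placeholder and details such as input filling are elided.)

```cpp
// Sketch only -- NOT the poster's actual code, which was not preserved.
// Typical MNN C++ inference flow: create interpreter, create session,
// run, then copy the output tensor to a host tensor.
#include <MNN/Interpreter.hpp>
#include <memory>

int main() {
    std::shared_ptr<MNN::Interpreter> interpreter(
        MNN::Interpreter::createFromFile("model.mnn"));  // placeholder path
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;
    auto session = interpreter->createSession(config);

    // ... fill getSessionInput(session, nullptr) with image data here ...

    interpreter->runSession(session);

    auto output = interpreter->getSessionOutput(session, nullptr);
    std::unique_ptr<MNN::Tensor> copy_output1(
        new MNN::Tensor(output, output->getDimensionType()));
    output->copyToHostTensor(copy_output1.get());
    // copy_output1->elementSize() should be checked before reading host data.
    return 0;
}
```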
The above code can run normally; interpreter->runSession(session) takes about 420 ms, so I guess the network is running, but it crashes when getting the output, e.g.:
float test = copy_output1->host<float>()[0];
The crash information is as follows. I have printed copy_output1->elementSize() and the value obtained is 0. Can anyone help me? I'd be grateful.
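The crash is consistent with the maintainer's diagnosis: when shape inference fails, the output tensor has zero elements, so `host<float>()[0]` dereferences an empty buffer. A minimal, self-contained sketch of the defensive check (using a `std::vector<float>` as a stand-in for the MNN host buffer; `tryReadFirst` is a hypothetical helper, not an MNN API):

```cpp
#include <vector>

// Hypothetical guard: read the first element of an output buffer only if
// it is non-empty. An empty buffer models copy_output1->elementSize() == 0,
// where indexing element 0 is out of bounds and can crash.
bool tryReadFirst(const std::vector<float>& host, float* out) {
    if (host.empty()) {   // analogous to checking elementSize() == 0 in MNN
        return false;     // shape inference failed; don't touch the buffer
    }
    *out = host[0];
    return true;
}
```

In the real code, checking `copy_output1->elementSize() > 0` before calling `host<float>()` turns the crash into a detectable error, though the underlying fix is still the broadcast issue in the converted model.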