Open ChristopheZhao opened 1 year ago
I found that JMF1108 mentioned this in issue https://github.com/Tencent/ncnn/issues/3789. I think I'm hitting the same problem. Has this been fixed already?
For the various problems with onnx model conversion, it is recommended to use the latest pnnx tool to convert your model to ncnn:
```shell
pip install pnnx
pnnx model.onnx inputshape=[1,3,224,224]
```
Detailed reference documentation: https://github.com/pnnx/pnnx and https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx#how-to-use-pnnx
how to reproduce:
1. The code using SqueezeNet:

```cpp
ncnn::Net squeezenet;
ncnn::Mat input_data = ncnn::Mat(227, 227, 3);
```

Time cost: 70 ms.
2. The code using the customized model:

```cpp
std::string voxel_res_file = "../input_sample_0_reshape.npy";
```

Average time cost per inference: 500 ms.
And the simple model I use is really small; the param file is as below:

```
7767517
5 5
Input       data     0 1 data -23330=4,3,1000,200,64 0=1000 1=200 2=64
Convolution Conv_119 1 1 data 253 -23330=4,3,500,100,64 0=64 1=3 3=2 4=1 5=1 6=36864 9=1
Convolution Conv_121 1 1 253 256 -23330=4,3,500,100,64 0=64 1=3 4=1 5=1 6=36864 9=1
Convolution Conv_123 1 1 256 259 -23330=4,3,500,100,64 0=64 1=3 4=1 5=1 6=36864 9=1
Convolution Conv_125 1 1 259 262 -23330=4,3,500,100,64 0=64 1=3 4=1 5=1 6=36864 9=1
```
Can you give me some advice to help me find what causes the slow inference in such a simple model?