mit-han-lab / temporal-shift-module

[ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding
https://arxiv.org/abs/1811.08383
MIT License

strided_slice get empty slice at axis 1 #138

Open faheng opened 4 years ago

faheng commented 4 years ago

I meet this error. Can you give some advice? My environment is:
onnx=1.7.0
onnx-simplifier=0.2.9
torch=1.4.0
torchvision=0.5.0
tvm=0.6.0
opencv-python=4.4.0.44

Open camera... <VideoCapture 0x7f67419cd610>
GLib-GIO-Message: 19:00:58.906: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.
Build transformer...
/home/sunxu/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py:219: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  warnings.warn("The use of the transforms.Scale transform is deprecated, " +
Build Executor...
/home/sunxu/下载/未命名文件夹/temporal-shift-module-master/online_demo/mobilenet_v2_tsm.py:95: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  x1, x2 = x[:, : c // 8], x[:, c // 8:]
[19:01:01] /home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/ir/doc.h:50: text node: ' an internal invariant was violated while typechecking your program
[19:01:01] /home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/op/tensor/transform.cc:1919: Check failed: begin_v < end_v (3 vs. -9223372036854775784) : strided_slice get empty slice at axis 1 ...
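For context, the TracerWarning above points at the channel split that the online TSM module performs before concatenating the cached residual features. A minimal, torch-free sketch of that split (plain Python lists stand in for tensors here; the 24-channel size is taken from the failing layer in the Relay dump, where begin=[0, 3] corresponds to 24 // 8 = 3):

```python
# Sketch of the split at mobilenet_v2_tsm.py:95 that triggers the TracerWarning:
#     x1, x2 = x[:, : c // 8], x[:, c // 8:]
# `c` is a Python int taken from the tensor shape, so torch.jit.trace bakes it
# in as a constant; plain lists model the channel axis.

def shift_split(x_channels, c):
    """Split the channel axis the way the online TSM demo does."""
    fold = c // 8                        # channels that carry the temporal shift
    return x_channels[:fold], x_channels[fold:]

channels = list(range(24))               # 24 channels, as in the failing layer
shifted, residual = shift_split(channels, len(channels))
assert len(shifted) == 3 and len(residual) == 21
```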

Traceback (most recent call last):
  File "main.py", line 447, in <module>
    main()
  File "main.py", line 341, in main
    executor, ctx = get_executor()
  File "main.py", line 155, in get_executor
    return torch2executor(torch_module, torch_inputs, target)
  File "main.py", line 111, in torch2executor
    graph, tvm_module, params = torch2tvm_module(torch_module, torch_inputs, target)
  File "main.py", line 58, in torch2tvm_module
    relay_module, params = tvm.relay.frontend.from_onnx(onnx_model, shape=input_shapes)
  File "/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1497, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1344, in from_onnx
    return _module.Module.from_expr(func), self._params
  File "/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/relay/module.py", line 233, in from_expr
    return _module.Module_FromExpr(expr, funcs, defs)
  File "tvm/_ffi/_cython/./function.pxi", line 304, in tvm._ffi._cy3.core.FunctionBase.call
  File "tvm/_ffi/_cython/./function.pxi", line 239, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 228, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL

....
File "/home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/ir/error.cc", line 132
TVMError: Error(s) have occurred. The program has been annotated with them:

In main: v0.0.4 fn (%i0: Tensor[(1, 3, 224, 224), float32], %v788: Tensor[(32, 3, 3, 3), float32], %v790: Tensor[(32), float32], %v792: Tensor[(32, 1, 3, 3), float32], %v794: Tensor[(32), float32], %v796: Tensor[(16, 32, 1, 1), float32], %v798: Tensor[(16), float32], %v800: Tensor[(96, 16, 1, 1), float32], %v802: Tensor[(96), float32], %v804: Tensor[(96, 1, 3, 3), float32], %v806: Tensor[(96),

tonylins commented 4 years ago

Hi, please git pull to get the recent update to the code and give it a try!

faheng commented 4 years ago

Yes, I cloned it today and tried uninstalling and reinstalling, but I still meet this error. My OS is Ubuntu 16.04. The following is the full output:

Open camera... <VideoCapture 0x7f1caa2348b0>
GLib-GIO-Message: 11:04:41.192: Using the 'memory' GSettings backend. Your settings will not be saved or shared with other applications.
Build transformer...
/home/sunxu/anaconda3/lib/python3.7/site-packages/torchvision/transforms/transforms.py:219: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  warnings.warn("The use of the transforms.Scale transform is deprecated, " +
Build Executor...
Downloading PyTorch checkpoint...
/home/sunxu/下载/temporal-shift-module-master/online_demo/mobilenet_v2_tsm.py:95: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  x1, x2 = x[:, : c // 8], x[:, c // 8:]
[11:04:49] /home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/ir/doc.h:50: text node: ' an internal invariant was violated while typechecking your program
[11:04:49] /home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/op/tensor/transform.cc:1919: Check failed: begin_v < end_v (3 vs. 
-9223372036854775784) : strided_slice get empty slice at axis 1 Stack trace: [bt] (0) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f1d024e2622] [bt] (1) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::StridedSliceRel(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)+0xe41) [0x7f1d029a2bc1] [bt] (2) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue), tvm::runtime::TypedPackedFunc<bool (tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>::AssignTypedLambda<bool ()(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>(bool ()(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xd4) [0x7f1d028a11e4] [bt] (3) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeSolver::Solve()+0x3b0) [0x7f1d02a84c50] [bt] (4) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x55) [0x7f1d02a50465] [bt] (5) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1d7) [0x7f1d02a50c17] [bt] (6) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, 
tvm::relay::Function const&, bool)+0x28c) [0x7f1d02bad76c] [bt] (7) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::FromExpr(tvm::relay::Expr const&, tvm::Map<tvm::relay::GlobalVar, tvm::relay::Function, void, void> const&, tvm::Map<tvm::relay::GlobalTypeVar, tvm::relay::TypeData, void, void> const&)+0x1d5) [0x7f1d02bafbc5] [bt] (8) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(+0xa5fc51) [0x7f1d02bb0c51]

; ' should not has tab or newline.
Traceback (most recent call last):

  File "main.py", line 389, in <module>
    main()
  File "main.py", line 285, in main
    executor, ctx = get_executor()
  File "main.py", line 99, in get_executor
    return torch2executor(torch_module, torch_inputs, target)
  File "main.py", line 55, in torch2executor
    graph, tvm_module, params = torch2tvm_module(torch_module, torch_inputs, target)
  File "main.py", line 38, in torch2tvm_module
    relay_module, params = tvm.relay.frontend.from_onnx(onnx_model, shape=input_shapes)
  File "/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1497, in from_onnx
    mod, params = g.from_onnx(graph, opset)
  File "/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1344, in from_onnx
    return _module.Module.from_expr(func), self._params
  File "/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/relay/module.py", line 233, in from_expr
    return _module.Module_FromExpr(expr, funcs, defs)
  File "tvm/_ffi/_cython/./function.pxi", line 304, in tvm._ffi._cy3.core.FunctionBase.call
  File "tvm/_ffi/_cython/./function.pxi", line 239, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 228, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL

tvm._ffi.base.TVMError: Traceback (most recent call last): [bt] (7) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x61) [0x7f1d02ced951] [bt] (6) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(+0xa5fc51) [0x7f1d02bb0c51] [bt] (5) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::FromExpr(tvm::relay::Expr const&, tvm::Map<tvm::relay::GlobalVar, tvm::relay::Function, void, void> const&, tvm::Map<tvm::relay::GlobalTypeVar, tvm::relay::TypeData, void, void> const&)+0x1d5) [0x7f1d02bafbc5] [bt] (4) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, tvm::relay::Function const&, bool)+0x28c) [0x7f1d02bad76c] [bt] (3) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1d7) [0x7f1d02a50c17] [bt] (2) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x86) [0x7f1d02a50496] [bt] (1) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ErrorReporter::RenderErrors(tvm::relay::Module const&, bool)+0x230c) [0x7f1d02ba671c] [bt] (0) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f1d024e2622] [bt] (8) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(+0xa5fc51) [0x7f1d02bb0c51] [bt] (7) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::FromExpr(tvm::relay::Expr const&, 
tvm::Map<tvm::relay::GlobalVar, tvm::relay::Function, void, void> const&, tvm::Map<tvm::relay::GlobalTypeVar, tvm::relay::TypeData, void, void> const&)+0x1d5) [0x7f1d02bafbc5] [bt] (6) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, tvm::relay::Function const&, bool)+0x28c) [0x7f1d02bad76c] [bt] (5) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1d7) [0x7f1d02a50c17] [bt] (4) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x55) [0x7f1d02a50465] [bt] (3) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeSolver::Solve()+0x3b0) [0x7f1d02a84c50] [bt] (2) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue), tvm::runtime::TypedPackedFunc<bool (tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>::AssignTypedLambda<bool ()(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>(bool ()(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xd4) [0x7f1d028a11e4] [bt] (1) /home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::StridedSliceRel(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)+0xe41) [0x7f1d029a2bc1] [bt] (0) 
/home/sunxu/anaconda3/lib/python3.7/site-packages/tvm-0.6.0-py3.7-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7f1d024e2622] File "/home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/ir/error.cc", line 132 TVMError: Error(s) have occurred. The program has been annotated with them:

In main: v0.0.4 fn (%i0: Tensor[(1, 3, 224, 224), float32], %v788: Tensor[(32, 3, 3, 3), float32], %v790: Tensor[(32), float32], %v792: Tensor[(32, 1, 3, 3), float32], %v794: Tensor[(32), float32], %v796: Tensor[(16, 32, 1, 1), float32], %v798: Tensor[(16), float32], %v800: Tensor[(96, 16, 1, 1), float32], %v802: Tensor[(96), float32], %v804: Tensor[(96, 1, 3, 3), float32], %v806: Tensor[(96), float32], %v808: Tensor[(24, 96, 1, 1), float32], %v810: Tensor[(24), float32], %i1: Tensor[(1, 3, 56, 56), float32], %v812: Tensor[(144, 24, 1, 1), float32], %v814: Tensor[(144), float32], %v816: Tensor[(144, 1, 3, 3), float32], %v818: Tensor[(144), float32], %v820: Tensor[(24, 144, 1, 1), float32], %v822: Tensor[(24), float32], %v824: Tensor[(144, 24, 1, 1), float32], %v826: Tensor[(144), float32], %v828: Tensor[(144, 1, 3, 3), float32], %v830: Tensor[(144), float32], %v832: Tensor[(32, 144, 1, 1), float32], %v834: Tensor[(32), float32], %i2: Tensor[(1, 4, 28, 28), float32], %v836: Tensor[(192, 32, 1, 1), float32], %v838: Tensor[(192), float32], %v840: Tensor[(192, 1, 3, 3), float32], %v842: Tensor[(192), float32], %v844: Tensor[(32, 192, 1, 1), float32], %v846: Tensor[(32), float32], %i3: Tensor[(1, 4, 28, 28), float32], %v848: Tensor[(192, 32, 1, 1), float32], %v850: Tensor[(192), float32], %v852: Tensor[(192, 1, 3, 3), float32], %v854: Tensor[(192), float32], %v856: Tensor[(32, 192, 1, 1), float32], %v858: Tensor[(32), float32], %v860: Tensor[(192, 32, 1, 1), float32], %v862: Tensor[(192), float32], %v864: Tensor[(192, 1, 3, 3), float32], %v866: Tensor[(192), float32], %v868: Tensor[(64, 192, 1, 1), float32], %v870: Tensor[(64), float32], %i4: Tensor[(1, 8, 14, 14), float32], %v872: Tensor[(384, 64, 1, 1), float32], %v874: Tensor[(384), float32], %v876: Tensor[(384, 1, 3, 3), float32], %v878: Tensor[(384), float32], %v880: Tensor[(64, 384, 1, 1), float32], %v882: Tensor[(64), float32], %i5: Tensor[(1, 8, 14, 14), float32], %v884: Tensor[(384, 64, 1, 1), float32], %v886: 
Tensor[(384), float32], %v888: Tensor[(384, 1, 3, 3), float32], %v890: Tensor[(384), float32], %v892: Tensor[(64, 384, 1, 1), float32], %v894: Tensor[(64), float32], %i6: Tensor[(1, 8, 14, 14), float32], %v896: Tensor[(384, 64, 1, 1), float32], %v898: Tensor[(384), float32], %v900: Tensor[(384, 1, 3, 3), float32], %v902: Tensor[(384), float32], %v904: Tensor[(64, 384, 1, 1), float32], %v906: Tensor[(64), float32], %v908: Tensor[(384, 64, 1, 1), float32], %v910: Tensor[(384), float32], %v912: Tensor[(384, 1, 3, 3), float32], %v914: Tensor[(384), float32], %v916: Tensor[(96, 384, 1, 1), float32], %v918: Tensor[(96), float32], %i7: Tensor[(1, 12, 14, 14), float32], %v920: Tensor[(576, 96, 1, 1), float32], %v922: Tensor[(576), float32], %v924: Tensor[(576, 1, 3, 3), float32], %v926: Tensor[(576), float32], %v928: Tensor[(96, 576, 1, 1), float32], %v930: Tensor[(96), float32], %i8: Tensor[(1, 12, 14, 14), float32], %v932: Tensor[(576, 96, 1, 1), float32], %v934: Tensor[(576), float32], %v936: Tensor[(576, 1, 3, 3), float32], %v938: Tensor[(576), float32], %v940: Tensor[(96, 576, 1, 1), float32], %v942: Tensor[(96), float32], %v944: Tensor[(576, 96, 1, 1), float32], %v946: Tensor[(576), float32], %v948: Tensor[(576, 1, 3, 3), float32], %v950: Tensor[(576), float32], %v952: Tensor[(160, 576, 1, 1), float32], %v954: Tensor[(160), float32], %i9: Tensor[(1, 20, 7, 7), float32], %v956: Tensor[(960, 160, 1, 1), float32], %v958: Tensor[(960), float32], %v960: Tensor[(960, 1, 3, 3), float32], %v962: Tensor[(960), float32], %v964: Tensor[(160, 960, 1, 1), float32], %v966: Tensor[(160), float32], %i10: Tensor[(1, 20, 7, 7), float32], %v968: Tensor[(960, 160, 1, 1), float32], %v970: Tensor[(960), float32], %v972: Tensor[(960, 1, 3, 3), float32], %v974: Tensor[(960), float32], %v976: Tensor[(160, 960, 1, 1), float32], %v978: Tensor[(160), float32], %v980: Tensor[(960, 160, 1, 1), float32], %v982: Tensor[(960), float32], %v984: Tensor[(960, 1, 3, 3), float32], %v986: Tensor[(960), 
float32], %v988: Tensor[(320, 960, 1, 1), float32], %v990: Tensor[(320), float32], %v992: Tensor[(1280, 320, 1, 1), float32], %v994: Tensor[(1280), float32], %classifier.weight: Tensor[(27, 1280), float32], %classifier.bias: Tensor[(27), float32]) { %0 = nn.conv2d(%i0, %v788, strides=[2, 2], padding=[1, 1], kernel_size=[3, 3]); %1 = nn.bias_add(%0, %v790); %2 = clip(%1, a_min=0f, a_max=6f); %3 = nn.conv2d(%2, %v792, padding=[1, 1], groups=32, kernel_size=[3, 3]); %4 = nn.bias_add(%3, %v794); %5 = clip(%4, a_min=0f, a_max=6f); %6 = nn.conv2d(%5, %v796, kernel_size=[1, 1]); %7 = nn.bias_add(%6, %v798); %8 = nn.conv2d(%7, %v800, kernel_size=[1, 1]); %9 = nn.bias_add(%8, %v802); %10 = clip(%9, a_min=0f, a_max=6f); %11 = nn.conv2d(%10, %v804, strides=[2, 2], padding=[1, 1], groups=96, kernel_size=[3, 3]); %12 = nn.bias_add(%11, %v806); %13 = clip(%12, a_min=0f, a_max=6f); %14 = nn.conv2d(%13, %v808, kernel_size=[1, 1]); %15 = nn.bias_add(%14, %v810); %16 = strided_slice(%15, begin=[0, 3], end=[2147483647, -9223372036854775808]) an internal invariant was violated while typechecking your program [11:04:49] /home/sunxu/下载/未命名文件夹/incubator-tvm/src/relay/op/tensor/transform.cc:1919: Check failed: begin_v < end_v (3 vs. 
-9223372036854775784) : strided_slice get empty slice at axis 1 ; ; %17 = (%i1, %16); %18 = concatenate(%17, axis=1); %19 = nn.conv2d(%18, %v812, kernel_size=[1, 1]); %20 = nn.bias_add(%19, %v814); %21 = clip(%20, a_min=0f, a_max=6f); %22 = nn.conv2d(%21, %v816, padding=[1, 1], groups=144, kernel_size=[3, 3]); %23 = nn.bias_add(%22, %v818); %24 = clip(%23, a_min=0f, a_max=6f); %25 = nn.conv2d(%24, %v820, kernel_size=[1, 1]); %26 = nn.bias_add(%25, %v822); %27 = add(%15, %26); %28 = nn.conv2d(%27, %v824, kernel_size=[1, 1]); %29 = nn.bias_add(%28, %v826); %30 = clip(%29, a_min=0f, a_max=6f); %31 = nn.conv2d(%30, %v828, strides=[2, 2], padding=[1, 1], groups=144, kernel_size=[3, 3]); %32 = nn.bias_add(%31, %v830); %33 = clip(%32, a_min=0f, a_max=6f); %34 = nn.conv2d(%33, %v832, kernel_size=[1, 1]); %35 = nn.bias_add(%34, %v834); %36 = strided_slice(%35, begin=[0, 4], end=[2147483647, -9223372036854775808]); %37 = (%i2, %36); %38 = concatenate(%37, axis=1); %39 = nn.conv2d(%38, %v836, kernel_size=[1, 1]); %40 = nn.bias_add(%39, %v838); %41 = clip(%40, a_min=0f, a_max=6f); %42 = nn.conv2d(%41, %v840, padding=[1, 1], groups=192, kernel_size=[3, 3]); %43 = nn.bias_add(%42, %v842); %44 = clip(%43, a_min=0f, a_max=6f); %45 = nn.conv2d(%44, %v844, kernel_size=[1, 1]); %46 = nn.bias_add(%45, %v846); %47 = add(%35, %46); %48 = strided_slice(%47, begin=[0, 4], end=[2147483647, -9223372036854775808]); %49 = (%i3, %48); %50 = concatenate(%49, axis=1); %51 = nn.conv2d(%50, %v848, kernel_size=[1, 1]); %52 = nn.bias_add(%51, %v850); %53 = clip(%52, a_min=0f, a_max=6f); %54 = nn.conv2d(%53, %v852, padding=[1, 1], groups=192, kernel_size=[3, 3]); %55 = nn.bias_add(%54, %v854); %56 = clip(%55, a_min=0f, a_max=6f); %57 = nn.conv2d(%56, %v856, kernel_size=[1, 1]); %58 = nn.bias_add(%57, %v858); %59 = add(%47, %58); %60 = nn.conv2d(%59, %v860, kernel_size=[1, 1]); %61 = nn.bias_add(%60, %v862); %62 = clip(%61, a_min=0f, a_max=6f); %63 = nn.conv2d(%62, %v864, strides=[2, 2], padding=[1, 
1], groups=192, kernel_size=[3, 3]); %64 = nn.bias_add(%63, %v866); %65 = clip(%64, a_min=0f, a_max=6f); %66 = nn.conv2d(%65, %v868, kernel_size=[1, 1]); %67 = nn.bias_add(%66, %v870); %68 = strided_slice(%67, begin=[0, 8], end=[2147483647, -9223372036854775808]); %69 = (%i4, %68); %70 = concatenate(%69, axis=1); %71 = nn.conv2d(%70, %v872, kernel_size=[1, 1]); %72 = nn.bias_add(%71, %v874); %73 = clip(%72, a_min=0f, a_max=6f); %74 = nn.conv2d(%73, %v876, padding=[1, 1], groups=384, kernel_size=[3, 3]); %75 = nn.bias_add(%74, %v878); %76 = clip(%75, a_min=0f, a_max=6f); %77 = nn.conv2d(%76, %v880, kernel_size=[1, 1]); %78 = nn.bias_add(%77, %v882); %79 = add(%67, %78); %80 = strided_slice(%79, begin=[0, 8], end=[2147483647, -9223372036854775808]); %81 = (%i5, %80); %82 = concatenate(%81, axis=1); %83 = nn.conv2d(%82, %v884, kernel_size=[1, 1]); %84 = nn.bias_add(%83, %v886); %85 = clip(%84, a_min=0f, a_max=6f); %86 = nn.conv2d(%85, %v888, padding=[1, 1], groups=384, kernel_size=[3, 3]); %87 = nn.bias_add(%86, %v890); %88 = clip(%87, a_min=0f, a_max=6f); %89 = nn.conv2d(%88, %v892, kernel_size=[1, 1]); %90 = nn.bias_add(%89, %v894); %91 = add(%79, %90); %92 = strided_slice(%91, begin=[0, 8], end=[2147483647, -9223372036854775808]); %93 = (%i6, %92); %94 = concatenate(%93, axis=1); %95 = nn.conv2d(%94, %v896, kernel_size=[1, 1]); %96 = nn.bias_add(%95, %v898); %97 = clip(%96, a_min=0f, a_max=6f); %98 = nn.conv2d(%97, %v900, padding=[1, 1], groups=384, kernel_size=[3, 3]); %99 = nn.bias_add(%98, %v902); %100 = clip(%99, a_min=0f, a_max=6f); %101 = nn.conv2d(%100, %v904, kernel_size=[1, 1]); %102 = nn.bias_add(%101, %v906); %103 = add(%91, %102); %104 = nn.conv2d(%103, %v908, kernel_size=[1, 1]); %105 = nn.bias_add(%104, %v910); %106 = clip(%105, a_min=0f, a_max=6f); %107 = nn.conv2d(%106, %v912, padding=[1, 1], groups=384, kernel_size=[3, 3]); %108 = nn.bias_add(%107, %v914); %109 = clip(%108, a_min=0f, a_max=6f); %110 = nn.conv2d(%109, %v916, kernel_size=[1, 1]); 
%111 = nn.bias_add(%110, %v918); %112 = strided_slice(%111, begin=[0, 12], end=[2147483647, -9223372036854775808]); %113 = (%i7, %112); %114 = concatenate(%113, axis=1); %115 = nn.conv2d(%114, %v920, kernel_size=[1, 1]); %116 = nn.bias_add(%115, %v922); %117 = clip(%116, a_min=0f, a_max=6f); %118 = nn.conv2d(%117, %v924, padding=[1, 1], groups=576, kernel_size=[3, 3]); %119 = nn.bias_add(%118, %v926); %120 = clip(%119, a_min=0f, a_max=6f); %121 = nn.conv2d(%120, %v928, kernel_size=[1, 1]); %122 = nn.bias_add(%121, %v930); %123 = add(%111, %122); %124 = strided_slice(%123, begin=[0, 12], end=[2147483647, -9223372036854775808]); %125 = (%i8, %124); %126 = concatenate(%125, axis=1); %127 = nn.conv2d(%126, %v932, kernel_size=[1, 1]); %128 = nn.bias_add(%127, %v934); %129 = clip(%128, a_min=0f, a_max=6f); %130 = nn.conv2d(%129, %v936, padding=[1, 1], groups=576, kernel_size=[3, 3]); %131 = nn.bias_add(%130, %v938); %132 = clip(%131, a_min=0f, a_max=6f); %133 = nn.conv2d(%132, %v940, kernel_size=[1, 1]); %134 = nn.bias_add(%133, %v942); %135 = add(%123, %134); %136 = nn.conv2d(%135, %v944, kernel_size=[1, 1]); %137 = nn.bias_add(%136, %v946); %138 = clip(%137, a_min=0f, a_max=6f); %139 = nn.conv2d(%138, %v948, strides=[2, 2], padding=[1, 1], groups=576, kernel_size=[3, 3]); %140 = nn.bias_add(%139, %v950); %141 = clip(%140, a_min=0f, a_max=6f); %142 = nn.conv2d(%141, %v952, kernel_size=[1, 1]); %143 = nn.bias_add(%142, %v954); %144 = strided_slice(%143, begin=[0, 20], end=[2147483647, -9223372036854775808]); %145 = (%i9, %144); %146 = concatenate(%145, axis=1); %147 = nn.conv2d(%146, %v956, kernel_size=[1, 1]); %148 = nn.bias_add(%147, %v958); %149 = clip(%148, a_min=0f, a_max=6f); %150 = nn.conv2d(%149, %v960, padding=[1, 1], groups=960, kernel_size=[3, 3]); %151 = nn.bias_add(%150, %v962); %152 = clip(%151, a_min=0f, a_max=6f); %153 = nn.conv2d(%152, %v964, kernel_size=[1, 1]); %154 = nn.bias_add(%153, %v966); %155 = add(%143, %154); %156 = strided_slice(%155, 
begin=[0, 20], end=[2147483647, -9223372036854775808]); %157 = (%i10, %156); %158 = concatenate(%157, axis=1); %159 = nn.conv2d(%158, %v968, kernel_size=[1, 1]); %160 = nn.bias_add(%159, %v970); %161 = clip(%160, a_min=0f, a_max=6f); %162 = nn.conv2d(%161, %v972, padding=[1, 1], groups=960, kernel_size=[3, 3]); %163 = nn.bias_add(%162, %v974); %164 = clip(%163, a_min=0f, a_max=6f); %165 = nn.conv2d(%164, %v976, kernel_size=[1, 1]); %166 = nn.bias_add(%165, %v978); %167 = add(%155, %166); %168 = nn.conv2d(%167, %v980, kernel_size=[1, 1]); %169 = nn.bias_add(%168, %v982); %170 = clip(%169, a_min=0f, a_max=6f); %171 = nn.conv2d(%170, %v984, padding=[1, 1], groups=960, kernel_size=[3, 3]); %172 = nn.bias_add(%171, %v986); %173 = clip(%172, a_min=0f, a_max=6f); %174 = nn.conv2d(%173, %v988, kernel_size=[1, 1]); %175 = nn.bias_add(%174, %v990); %176 = nn.conv2d(%175, %v992, kernel_size=[1, 1]); %177 = nn.bias_add(%176, %v994); %178 = clip(%177, a_min=0f, a_max=6f); %179 = mean(%178, axis=[3]); %180 = mean(%179, axis=[2]); %181 = nn.batch_flatten(%180); %182 = multiply(1f, %181); %183 = nn.dense(%182, %classifier.weight, units=27); %184 = multiply(1f, %classifier.bias); %185 = nn.bias_add(%183, %184); %186 = strided_slice(%15, begin=[0, 0], end=[2147483647, 3]); %187 = strided_slice(%35, begin=[0, 0], end=[2147483647, 4]); %188 = strided_slice(%47, begin=[0, 0], end=[2147483647, 4]); %189 = strided_slice(%67, begin=[0, 0], end=[2147483647, 8]); %190 = strided_slice(%79, begin=[0, 0], end=[2147483647, 8]); %191 = strided_slice(%91, begin=[0, 0], end=[2147483647, 8]); %192 = strided_slice(%111, begin=[0, 0], end=[2147483647, 12]); %193 = strided_slice(%123, begin=[0, 0], end=[2147483647, 12]); %194 = strided_slice(%143, begin=[0, 0], end=[2147483647, 20]); %195 = strided_slice(%155, begin=[0, 0], end=[2147483647, 20]); (%185, %186, %187, %188, %189, %190, %191, %192, %193, %194, %195) }
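A note on the suspicious end value in the dump: -9223372036854775808 is INT64_MIN, the sentinel ONNX Slice emits for an open-ended slice. One plausible reading of the failed check (an inference from the numbers in the log, not a confirmed diagnosis of TVM 0.6 internals) is that a normalization step adds the 24-channel axis size to a negative end, which wraps the sentinel instead of clamping it:

```python
# Sketch of the arithmetic behind
#   Check failed: begin_v < end_v (3 vs. -9223372036854775784)
# Assumption: the importer normalizes a negative `end` by adding the axis
# size, mangling the INT64_MIN sentinel ONNX uses for "slice to the end".
INT64_MIN = -2**63             # the -9223372036854775808 seen in the Relay dump
channels = 24                  # axis-1 size of %15 (output of the 24-filter conv)
begin, end = 3, INT64_MIN      # attrs of the failing strided_slice at axis 1

normalized_end = end + channels          # -9223372036854775784, as in the error
assert not (begin < normalized_end)      # so the begin_v < end_v check fails
```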

faheng commented 4 years ago

I can run your offline test; it is just this online demo that fails when I run python main.py ...
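For what it's worth, one workaround direction (a sketch under the assumption that the exporter only emits the INT64 sentinel for open-ended slices; this is not the repo's confirmed fix) is to give the split explicit sizes, e.g. torch.split(x, [c // 8, c - c // 8], dim=1) instead of x[:, c // 8:]. Modeled here with plain lists:

```python
# Hypothetical workaround sketch: splitting with explicit, concrete sizes
# means every slice boundary is a plain integer, so no open-ended slice
# (and no INT64_MIN sentinel) needs to be exported.

def split_explicit(x_channels, c):
    fold = c // 8
    sizes = [fold, c - fold]             # both bounds are concrete integers
    out, start = [], 0
    for s in sizes:
        out.append(x_channels[start:start + s])
        start += s
    return out

x = list(range(24))
x1, x2 = split_explicit(x, 24)
assert x1 == x[:3] and x2 == x[3:]       # same result as the original slicing
```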

NessAupera commented 2 years ago

I'm facing a similar issue. My environment versions are: onnx=1.11.0, torch=1.3.0, torchvision=0.4.2, tvm=0.6.0, cv2=4.5.5.