apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

UnboundLocalError: local variable 'convert_func' referenced before assignment #20260

Closed ajoshi-cimpress closed 3 years ago

ajoshi-cimpress commented 3 years ago

Description

ONNX export fails with UnboundLocalError: local variable 'convert_func' referenced before assignment for mxnet-cu112==1.9.0b20210510.

Error Message

Traceback (most recent call last):
  File "mx_export_onnx.py", line 68, in <module>
    onnx_mxnet.export_model("./edges/models/edges-color-reduced-symbol.json", "./edges/models/edges-color-reduced-0000.params", [(1, 4, 536, 536)], 'float32', "edges-color-reduced-32.onnx")
  File "/home/ajoshi/.local/lib/python3.7/site-packages/mxnet/onnx/mx2onnx/_export_model.py", line 125, in export_model
    dynamic=dynamic, dynamic_input_shapes=dynamic_input_shapes)
  File "/home/ajoshi/.local/lib/python3.7/site-packages/mxnet/onnx/mx2onnx/_export_onnx.py", line 351, in create_onnx_graph_proto
    opset_version=opset_version
  File "/home/ajoshi/.local/lib/python3.7/site-packages/mxnet/onnx/mx2onnx/_export_onnx.py", line 103, in convert_layer
    ret = convert_func(node, **kwargs)
UnboundLocalError: local variable 'convert_func' referenced before assignment

To Reproduce

I think it is reproducible for any model that contains a slice operator; see the sketch below.
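For reference, a minimal sketch of the failing call, assuming the mxnet 1.9 mx.onnx export API; the symbol/params paths and input shape are taken from the traceback above.

```python
# Minimal repro sketch (assumptions: an mxnet-cu112 1.9 nightly that ships the
# new mx.onnx module; file paths and input shape copied from the traceback).
from mxnet import onnx as onnx_mxnet

onnx_mxnet.export_model(
    "./edges/models/edges-color-reduced-symbol.json",  # symbol file
    "./edges/models/edges-color-reduced-0000.params",  # params file
    [(1, 4, 536, 536)],                                 # input shape(s)
    "float32",                                          # input dtype(s)
    "edges-color-reduced-32.onnx",                      # output ONNX file
)
```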

Environment

We recommend using our script for collecting the diagnostic information with the following command: curl --retry 10 -s https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py | python3

Environment Information

```
# ----------Python Info----------
Version      : 3.7.9
Compiler     : GCC 4.8.5 20150623 (Red Hat 4.8.5-44)
Build        : ('default', 'Mar 8 2021 12:13:58')
Arch         : ('64bit', 'ELF')
------------Pip Info-----------
Version      : 21.1.1
Directory    : /home/ajoshi/.local/lib/python3.7/site-packages/pip
----------MXNet Info-----------
Version      : 1.9.0
Directory    : /home/ajoshi/.local/lib/python3.7/site-packages/mxnet
Commit hash file "/home/ajoshi/.local/lib/python3.7/site-packages/mxnet/COMMIT_HASH" not found. Not installed from pre-built package or built from source.
Library      : ['/home/ajoshi/.local/lib/python3.7/site-packages/mxnet/libmxnet.so']
Build features:
✔ CUDA  ✔ CUDNN  ✔ NCCL  ✔ CUDA_RTC  ✖ TENSORRT
✔ CPU_SSE  ✔ CPU_SSE2  ✔ CPU_SSE3  ✖ CPU_SSE4_1  ✖ CPU_SSE4_2  ✖ CPU_SSE4A
✖ CPU_AVX  ✖ CPU_AVX2  ✔ OPENMP  ✖ SSE  ✖ F16C  ✖ JEMALLOC  ✔ BLAS_OPEN
✖ BLAS_ATLAS  ✖ BLAS_MKL  ✖ BLAS_APPLE  ✔ LAPACK  ✔ MKLDNN  ✔ OPENCV  ✖ CAFFE
✖ PROFILER  ✔ DIST_KVSTORE  ✖ CXX14  ✖ INT64_TENSOR_SIZE  ✔ SIGNAL_HANDLER  ✖ DEBUG  ✖ TVM_OP
----------System Info----------
Platform     : Linux-3.10.0-1062.4.1.el7.x86_64-x86_64-with-centos-7.7.1908-Core
system       : Linux
node         : cimpress03.localdomain
release      : 3.10.0-1062.4.1.el7.x86_64
version      : #1 SMP Fri Oct 18 17:15:30 UTC 2019
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                40
On-line CPU(s) list:   0-39
Thread(s) per core:    2
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2687W v3 @ 3.10GHz
Stepping:              2
CPU MHz:               3096.598
BogoMIPS:              6193.19
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12800K
NUMA node0 CPU(s):     0-4,20-24
NUMA node1 CPU(s):     5-9,25-29
NUMA node2 CPU(s):     10-14,30-34
NUMA node3 CPU(s):     15-19,35-39
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d
```
github-actions[bot] commented 3 years ago

Welcome to Apache MXNet (incubating)! We are on a mission to democratize AI, and we are glad that you are contributing to it by opening this issue. Please make sure to include all the relevant context, and one of the @apache/mxnet-committers will be here shortly. If you are interested in contributing to our project, let us know! Also, be sure to check out our guide on contributing to MXNet and our development guides wiki.

samskalicky commented 3 years ago

Looks like the issue is in here: https://github.com/apache/incubator-mxnet/blob/73274fd365214ffbc54d3d4af2434cdf8f936251/python/mxnet/onnx/mx2onnx/_export_onnx.py#L100-L103 where convert_func might never get set in the loop.
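For readers hitting this, here is a simplified sketch of the failure mode and of one defensive alternative. It is illustrative only, not the exact MXNet source and not the contents of the fix PR.

```python
# Illustrative only: convert_func is bound only when the registry lookup
# succeeds, so an op/opset combination with no registered converter makes
# the final call raise UnboundLocalError instead of a readable error.
def convert_layer(node, registry, opset_version, **kwargs):
    op = str(node["op"])
    if opset_version in registry and op in registry[opset_version]:
        convert_func = registry[opset_version][op]
    return convert_func(node, **kwargs)  # UnboundLocalError if the lookup never ran

# One defensive variant: look the converter up with a default and fail loudly.
def convert_layer_safe(node, registry, opset_version, **kwargs):
    op = str(node["op"])
    convert_func = registry.get(opset_version, {}).get(op)
    if convert_func is None:
        raise AttributeError(
            f"No conversion function registered for op type {op} "
            f"with opset version {opset_version}")
    return convert_func(node, **kwargs)
```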

Zha0q1 commented 3 years ago

Thanks for creating the issue, I created a PR to fix this. @ajoshi-cimpress this most likely means that you are using an older onnx version than what we support (>= 1.7.0). Could you try updating onnx? Also, what is your use case?

ajoshi-cimpress commented 3 years ago

> Thanks for creating the issue, I created a PR to fix this. @ajoshi-cimpress this most likely means that you are using an older onnx version than what we support (>= 1.7.0). Could you try updating onnx? Also, what is your use case?

I already tried that before. For your information, this is what I get when I try to upgrade onnx:

Requirement already satisfied: onnx in /home/ajoshi/.local/lib/python3.7/site-packages (1.9.0)
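A quick sanity check (not from the thread) is to confirm which onnx version the interpreter running the export actually imports:

```python
# Print the onnx version visible to the same interpreter that runs the export;
# per the comment above it should be >= 1.7.0.
import onnx
print(onnx.__version__)
```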

The use case is that I have a model which I want to export to ONNX and run using onnxruntime.
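For completeness, a hedged sketch of that downstream step with onnxruntime. The file name comes from the export call above, the input shape is the one used for export, and the input name is read from the session rather than assumed.

```python
# Load the exported model in onnxruntime and run a dummy input through it.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("edges-color-reduced-32.onnx")
input_name = sess.get_inputs()[0].name                      # read from the graph
dummy = np.random.rand(1, 4, 536, 536).astype(np.float32)   # shape from the export call
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```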

ajoshi-cimpress commented 3 years ago

FYI, I tried your fix and it's better now; I now get the following error: AttributeError: No conversion function registered for op type _not_equal yet.
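For context, ONNX has no single NotEqual operator, so a converter for _not_equal has to emit Equal followed by Not. A rough sketch of those nodes built with onnx.helper is below; node names are illustrative, and this bypasses MXNet's converter registration machinery entirely.

```python
# Sketch of the ONNX nodes a _not_equal conversion needs to produce:
# Equal followed by Not, since ONNX has no NotEqual op.
from onnx import helper

def make_not_equal_nodes(name, lhs, rhs):
    equal = helper.make_node("Equal", [lhs, rhs], [name + "_eq"], name=name + "_eq")
    negate = helper.make_node("Not", [name + "_eq"], [name], name=name)
    return [equal, negate]
```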

ajoshi-cimpress commented 3 years ago

For "AttributeError: No conversion function registered for op type _not_equal yet." I was able to use the recent commit related to broadcast_operator_not_equal and was able to export the model. I am sure future releases will fix this issue as well so closing the case.