onnx / optimizer

Actively maintained ONNX Optimizer
Apache License 2.0

GraphProto loses its sparse tensor initializer going through the optimizer. #5

Open jxchenus opened 4 years ago

jxchenus commented 4 years ago

Bug Report

Is the issue related to model conversion?

No.

Describe the bug

Start with an ONNX model that has a sparse initializer. Run the model optimizer with the "eliminate_unused_initializer" pass. The returned model no longer has the original sparse initializer.

System information

Reproduction instructions
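
The original reproduction is the attached rank2-identity.txt (see Notes below). As a stand-in, here is a minimal sketch of the steps in the description, assuming the standalone onnxoptimizer Python package; the tensor names ("X", "W", "Y") and shapes are made up, not taken from the attachment. The sparse initializer "W" is consumed by a node, so a correct eliminate_unused_initializer pass should leave it in place:

```python
import onnx
import onnxoptimizer  # Python package from the standalone onnx/optimizer repo
from onnx import TensorProto, helper

# A 3x3 sparse initializer "W" with two non-zero values, in flattened COO form.
values = helper.make_tensor("W", TensorProto.FLOAT, dims=[2], vals=[1.0, 2.0])
indices = helper.make_tensor("W_indices", TensorProto.INT64, dims=[2], vals=[0, 4])
sparse_w = helper.make_sparse_tensor(values, indices, dims=[3, 3])

x = helper.make_tensor_value_info("X", TensorProto.FLOAT, [3, 3])
y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [3, 3])

# "W" is used by the Add node, so an unused-initializer pass should not remove it.
add = helper.make_node("Add", ["X", "W"], ["Y"])
graph = helper.make_graph([add], "sparse_repro", [x], [y], sparse_initializer=[sparse_w])
model = helper.make_model(graph)

print("before:", len(model.graph.sparse_initializer))   # 1

optimized = onnxoptimizer.optimize(model, ["eliminate_unused_initializer"])
print("after: ", len(optimized.graph.sparse_initializer))  # reported behavior: 0
```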

Expected behavior

The sparse initializer should persist in the optimized model.

Notes

When the optimizer converts GraphProto to onnx::Graph in the following stack, the sparse initializer is not carried over:

#0 onnx::graphProtoToGraph (gp=..., nested=nested@entry=false)
   at .../onnx/common/ir_pb_converter.cc:188
#1 0x0000000000568d3f in onnx::ImportModelProto (mp=...)
   at .../onnx/common/ir_pb_converter.cc:338
#2 0x000000000054e39a in onnx::optimization::Optimizer::optimize (this=this@entry=0x7fffffffcf00, mp_in=...)
   at .../onnx/optimizer/optimize.h:26
#3 0x0000000000534d3c in onnx::optimization::Optimize (mp_in=..., names=...)
   at .../onnx/optimizer/optimize.cc:32

rank2-identity.txt
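
Since the drop happens in the GraphProto to onnx::Graph conversion rather than in any particular pass, it should be observable even with a pass that does not touch initializers. A hedged sketch: the file path below is hypothetical, and "eliminate_nop_transpose" is assumed to leave a Transpose-free graph unchanged apart from the proto/IR round trip:

```python
import onnx
import onnxoptimizer

# Hypothetical path; any model whose graph carries a sparse_initializer
# (e.g. the attached rank2-identity model) would do.
model = onnx.load("model_with_sparse_initializer.onnx")
print("before:", len(model.graph.sparse_initializer))

# This pass has nothing to do on a Transpose-free graph, so any loss here
# comes from the GraphProto <-> onnx::Graph round trip, not from the pass.
optimized = onnxoptimizer.optimize(model, ["eliminate_nop_transpose"])
print("after: ", len(optimized.graph.sparse_initializer))
```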

askhade commented 4 years ago

The ONNX optimizers are being moved to a standalone repo under the onnx organization. Please open this issue there: https://github.com/onnx/optimizer

jxchenus commented 4 years ago

Thanks for the update!

I'm not sure whether this is an ONNX-specific issue, since the fix involves adding sparse tensor support to onnx::Graph and to the encoding and decoding logic in ir_pb_converter.cc.