onnx / onnx-tensorflow

Tensorflow Backend for ONNX

'Tensor' object has no attribute 'sparse_read' #332

Open markb21 opened 5 years ago

markb21 commented 5 years ago

I am trying to load an ONNX model with these commands:

```python
from onnx_tf.backend import prepare
dnn_model_tf = prepare(dnn_model_onnx, device='CPU')
```

When doing so I get this error:

```
AttributeError                            Traceback (most recent call last)
~/anaconda3/envs/export-production/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py in gather(***failed resolving arguments***)
   2672     # introducing a circular dependency.
-> 2673     return params.sparse_read(indices, name=name)
   2674   except AttributeError:

AttributeError: 'Tensor' object has no attribute 'sparse_read'
```

Have you encountered this issue? How can I resolve it?
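For context, the traceback points into a try/except fallback inside TensorFlow's `gather`: it first assumes `params` is a resource variable and calls `sparse_read`, and for a plain `Tensor` that raises `AttributeError` and is normally caught. A minimal stdlib sketch of that pattern (class and method names here are illustrative, not TensorFlow's):

```python
class PlainTensor:
    """Stands in for a plain tf.Tensor, which has no sparse_read method."""
    def gather_fallback(self, indices):
        # Generic gather path used when sparse_read is unavailable.
        return [f"value[{i}]" for i in indices]

def gather(params, indices):
    # Try the resource-variable fast path first; fall back for
    # plain tensors, mirroring the try/except in the traceback.
    try:
        return params.sparse_read(indices)
    except AttributeError:
        return params.gather_fallback(indices)

print(gather(PlainTensor(), [0, 2]))
```

The error surfacing to the user suggests the fallback path itself failed here, so the `AttributeError` escaped instead of being swallowed.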

fumihwh commented 5 years ago

@markb21 From the error message, this looks like a TensorFlow error rather than an onnx-tf error. Could you give more details?

markb21 commented 5 years ago

@fumihwh Sure. It is an RNN classifier created in PyTorch. Here is the whole network:

```
SequentialRNN(
  (0): MultiBatchRNN(
    (encoder): Embedding(26662, 400, padding_idx=1)
    (encoder_with_dropout): EmbeddingDropout(
      (embed): Embedding(26662, 400, padding_idx=1)
    )
    (rnns): ModuleList(
      (0): WeightDrop( (module): LSTM(400, 1150) )
      (1): WeightDrop( (module): LSTM(1150, 1150) )
      (2): WeightDrop( (module): LSTM(1150, 400) )
    )
    (dropouti): LockedDropout()
    (dropouths): ModuleList(
      (0): LockedDropout()
      (1): LockedDropout()
      (2): LockedDropout()
    )
  )
  (1): PoolingLinearClassifier(
    (layers): ModuleList(
      (0): LinearBlock(
        (lin): Linear(in_features=1200, out_features=50, bias=True)
        (drop): Dropout(p=0.2)
        (bn): BatchNorm1d(1200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): LinearBlock(
        (lin): Linear(in_features=50, out_features=6, bias=True)
        (drop): Dropout(p=0.1)
        (bn): BatchNorm1d(50, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
  )
)
```

I then exported it to ONNX format. Since the ONNX model throws errors when I serve it with graphpipe for ONNX, I want to convert it to a TensorFlow object so that I can serve it with graphpipe instead. Is that enough detail, or do you need to know more?

fumihwh commented 5 years ago

@markb21 Your onnx model file would be helpful.