Open seanshpark opened 2 years ago
LSTM.onnx
Seems converter.experimental_enable_resource_variables = True will convert to tflite without error.
tflite2circle fails with tflite::BuiltinOperator_CALL_ONCE.
tfldump result of LSTM.tflite
Operator Codes: [order] OpCodeName (OpCode Enum)
[0] CALL_ONCE (code: 129, dep_code: 127, version: 1)
[1] CUSTOM(VarHandleOp) (code: 32, dep_code: 32, version: 1)
[2] STRIDED_SLICE (code: 45, dep_code: 45, version: 1)
[3] WHILE (code: 119, dep_code: 119, version: 1)
[4] RESHAPE (code: 22, dep_code: 22, version: 1)
[5] CUSTOM(AssignVariableOp) (code: 32, dep_code: 32, version: 1)
[6] LESS (code: 58, dep_code: 58, version: 1)
[7] LOGICAL_AND (code: 86, dep_code: 86, version: 1)
[8] ADD (code: 0, dep_code: 0, version: 1)
[9] GATHER (code: 36, dep_code: 36, version: 1)
[10] CONCATENATION (code: 2, dep_code: 2, version: 1)
[11] CUSTOM(ReadVariableOp) (code: 32, dep_code: 32, version: 1)
[12] TRANSPOSE (code: 39, dep_code: 39, version: 1)
[13] FULLY_CONNECTED (code: 9, dep_code: 9, version: 1)
[14] SPLIT (code: 49, dep_code: 49, version: 1)
[15] LOGISTIC (code: 14, dep_code: 14, version: 1)
[16] TANH (code: 28, dep_code: 28, version: 1)
[17] MUL (code: 18, dep_code: 18, version: 1)
[18] SLICE (code: 65, dep_code: 65, version: 1)
These need attention. @lemmaa, do you have any comments on these?
[0] CALL_ONCE (code: 129, dep_code: 127, version: 1)
[1] CUSTOM(VarHandleOp) (code: 32, dep_code: 32, version: 1)
[3] WHILE (code: 119, dep_code: 119, version: 1)
[5] CUSTOM(AssignVariableOp) (code: 32, dep_code: 32, version: 1)
[11] CUSTOM(ReadVariableOp) (code: 32, dep_code: 32, version: 1)
For CALL_ONCE, we need to upgrade the circle schema to follow TF 2.6.0 or higher. Or maybe try removing mutable variables in onnx-tf 😄
Current LSTM.tflite
main subgraph
T(0:0) FLOAT32 (1, 3, 20) B(1) serving_default_2:0
T(0:1) FLOAT32 (1, 3, 20) B(2) serving_default_1:0
T(0:2) FLOAT32 (5, 3, 10) B(3) serving_default_input:0
T(0:16) FLOAT32 (5, 3, 20) B(17) StatefulPartitionedCall:2
T(0:22) FLOAT32 (1, 3, 20) B(23) StatefulPartitionedCall:1
T(0:23) FLOAT32 (1, 3, 20) B(24) StatefulPartitionedCall:0
O(0:0) CALL_ONCE
O(0:1) CUSTOM(VarHandleOp)
container() dtype(0) shared_name(lstm_bias_lstm_0)
O T(0:10) lstm_bias_lstm_0
O(0:2) CUSTOM(VarHandleOp)
container() dtype(0) shared_name(lstm_kernel_lstm_0)
O T(0:11) lstm_kernel_lstm_0
O(0:5) WHILE
cond_subgraph_index(2) body_subgraph_index(3)
I T(0:4) ExpandDims/dim
I T(0:4) ExpandDims/dim
I T(0:3) LSTM_eec48014/rnn/TensorArrayV2
I T(0:13) LSTM_eec48014/strided_slice2
I T(0:12) LSTM_eec48014/strided_slice_1
I T(0:11) lstm_kernel_lstm_0
I T(0:10) lstm_bias_lstm_0
I T(0:2) serving_default_input:0
O T(0:14) LSTM_eec48014/rnn/while
O T(0:15) LSTM_eec48014/rnn/while1
O T(0:16) StatefulPartitionedCall:2
O T(0:17) LSTM_eec48014/rnn/while2
O T(0:18) LSTM_eec48014/rnn/while3
O T(0:19) LSTM_eec48014/rnn/while4
O T(0:20) LSTM_eec48014/rnn/while5
O T(0:21) LSTM_eec48014/rnn/while6
Inputs/Outputs: I(input)/O(output) T(tensor index) OperandName
I T(0:0) serving_default_2:0
I T(0:1) serving_default_1:0
I T(0:2) serving_default_input:0
O T(0:16) StatefulPartitionedCall:2
O T(0:22) StatefulPartitionedCall:1
O T(0:23) StatefulPartitionedCall:0
Export to TF (2.6) model warnings:

tf.nn.rnn_cell.LSTMCell is deprecated and will be removed in a future version. This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
tf.nn.rnn_cell.MultiRNNCell is deprecated. This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0. Please use keras.layers.RNN(cell), which is equivalent to this API.
layer.add_variable is deprecated and will be removed in a future version. Please use layer.add_weight method instead.

--> #8216
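The deprecation warnings point at the Keras replacements. A minimal sketch of the suggested migration (the layer size is illustrative, and the TensorFlow import is guarded since TF may not be installed):

```python
# Sketch of the migration the warnings suggest: keras.layers.RNN over
# StackedRNNCells of keras LSTMCell, replacing the deprecated
# tf.nn.rnn_cell.LSTMCell / MultiRNNCell. Units/layer count are examples.
try:
    import tensorflow as tf
except ImportError:
    tf = None  # TensorFlow unavailable; layer construction is skipped

def build_stacked_lstm(units=20, num_layers=1):
    """Build the Keras equivalent named in the deprecation warnings."""
    cells = [tf.keras.layers.LSTMCell(units) for _ in range(num_layers)]
    return tf.keras.layers.RNN(
        tf.keras.layers.StackedRNNCells(cells), return_sequences=True)

if tf is not None:
    rnn_layer = build_stacked_lstm()
```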
Using onnx-tensorflow from source

Uninstall the version installed from the normal package:
cd .../bin/venv/lib/python3.6/site-packages
pip3 uninstall onnx_tf

Install from source:
cd (somewhere)/onnx-tensorflow
pip3 install -e .
(-e: editable)

An onnx-tf.egg-link file will show up in the site-packages folder.

To uninstall the -e version:
pip3 uninstall onnx_tf
Calling run with the prepared data seems to remove the Variable nodes... in one-import-onnx:

tf_savedmodel = onnx_tf.backend.prepare(onnx_model)
tf_savedmodel.run(input_data) <-- THIS, input_data with np.zeros()

--> tflite is generated
--> circle2circle fails in the WHILE BODY subgraph, where the Slice Op input is NOT constant but the output of Concat
--> ShapeInference fails, as the current implementation assumes the CircleSlice 2nd and 3rd inputs are CircleConst
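The prepare/run trick above can be sketched as follows. The input names and shapes are taken from the main-subgraph dump earlier in this issue; the onnx-tf calls are left as comments since they need the actual LSTM.onnx on disk, so this only shows how the all-zeros input_data can be built.

```python
import numpy as np

def make_dummy_inputs(input_shapes, dtype=np.float32):
    """Build an all-zeros feed, one array per model input, for the
    one-shot run() that removes the Variable nodes."""
    return {name: np.zeros(shape, dtype=dtype)
            for name, shape in input_shapes.items()}

# Names/shapes from the tfldump of the main subgraph above.
dummy = make_dummy_inputs({
    "serving_default_input:0": (5, 3, 10),
    "serving_default_1:0": (1, 3, 20),
    "serving_default_2:0": (1, 3, 20),
})

# With onnx-tf (requires LSTM.onnx):
#   tf_savedmodel = onnx_tf.backend.prepare(onnx_model)
#   tf_savedmodel.run(dummy)   # <-- the call that drops the Variable nodes
```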
@seanshpark Why is the solution from onnx_legalize.py not applicable? It simply unrolls the LSTM and there are no complex operations left.
Why is the solution from onnx_legalize.py not applicable?

I cannot answer this as I do not have deep knowledge on this. @lemmaa, can you please check this with the application part?
Why is the solution from onnx_legalize.py not applicable?

As of the current implementation, it ALWAYS converts without a switch. And as far as I know, in general,
CC @jyoungyun
About tf_savedmodel.run(input_data):
Q) how to prepare input_data?
The next Concat input is strange... and the output also:
concat((1,1,1), (1,1,1), (1,3,20)) ==> (5,3,20) ???
Slice ...
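As a sanity check on that line (one reading of the dump, assuming a plain axis-0 concatenation of tensors with those shapes), numpy confirms the shapes are incompatible, which supports the observation that the dumped Concat cannot be an ordinary concatenation:

```python
import numpy as np

# The three input shapes from the dump above.
a = np.zeros((1, 1, 1))
b = np.zeros((1, 1, 1))
c = np.zeros((1, 3, 20))

# Along axis 0, every non-concat dimension must match, so this raises
# ValueError instead of producing (5, 3, 20).
try:
    np.concatenate([a, b, c], axis=0)
    plain_concat_ok = True
except ValueError:
    plain_concat_ok = False
```

So if the dump is accurate, the op must be doing something other than a straight concatenation of those tensors (or the dumped shapes are wrong).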
@seanshpark Can you share the last tflite file?
Can you share the last tflite file?
Here is LSTM.tflite in LSTM.zip
@llFreetimell, FYI; from LSTM.onnx to LSTM.tflite, the draft is #8219.
tf_prep.run(module._dummy_) is called to remove CUSTOM(VarHandleOp) (https://github.com/Samsung/ONE/issues/8217#issuecomment-1000680271)
With the current versions of other packages, importing the LSTM op of an ONNX model (from pytorch) fails. Our pytorch example also fails, and we can use this example for tracing.