PINTO0309 / tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Support for building environments with Docker. It is possible to directly access the host PC GUI and the camera to verify the operation. NVIDIA GPU (dGPU) support. Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8 quantization model.
https://qiita.com/PINTO
MIT License

ValueError: Dimension size, given by scalar input 1 must be in range [-1, 1) #8

Closed. SaneBow closed this issue 3 years ago.

SaneBow commented 3 years ago

I am trying to convert these tflite models: https://github.com/breizhn/DTLN-aec/tree/main/pretrained_models

The command I ran was:

tflite2tensorflow --model_path dtln_aec_128_2.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb

but I ran into this error:

ValueError: Dimension size, given by scalar input 1 must be in range [-1, 1) for '{{node split_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/split}} = Split[T=DT_FLOAT, num_split=4](split_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/split/split_dim, BiasAdd_functional_5/lstm_6/lstm_cell_6/StatefulPartitionedCall/BiasAdd)' with input shapes: [], [512] and with computed input tensors: input[0] = <1>.

I tried both the pip installation and the Docker version. Is this a bug, or is it something that is not supported?
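
For context, the error means that the tensor feeding the Split op ended up rank-1 (shape [512]), so a split axis of 1 falls outside the valid range [-1, 1). A minimal TensorFlow illustration of the same constraint (not the tool's own code, just a reproduction of the shape problem):

import tensorflow as tf

x = tf.zeros([512])              # rank-1 tensor, like the BiasAdd output in the error above
# tf.split(x, 4, axis=1)         # fails: axis 1 is out of range for a rank-1 tensor
parts = tf.split(x, 4, axis=0)   # splitting along axis 0 works: four [128] tensors
print([p.shape for p in parts])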

PINTO0309 commented 3 years ago

Fixed a behavior bug with keep_num_dims in FullyConnected. v1.7.5 047b86b53c4897b1f1a383c1e74460d01511f6e7

SaneBow commented 3 years ago

Thanks for the quick fix. It can now run and convert to .pb. But when I load the weight-quantized model, it fails:

Traceback (most recent call last):
  File "run_aec.py", line 224, in <module>
    process_folder(args.model, args.in_folder, args.out_folder)
  File "run_aec.py", line 210, in process_folder
    os.path.join(new_directories[idx], file_names[idx]),
  File "run_aec.py", line 115, in process_file
    interpreter_1.set_tensor(input_details_1[2]["index"], lpb_mag)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 423, in set_tensor
    self._interpreter.SetTensor(tensor_index, value)
ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 4 for input 2.

I see this during quantization:

WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.
WARNING:tensorflow:Issue encountered when serializing trainable_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.

This may be unrelated to this issue though.

Edit: I confirmed that the error is not related to this issue by verifying that --output_no_quant_float32_tflite works. Thanks for the fix. We can close this issue.

PINTO0309 commented 3 years ago

@SaneBow There are three different inputs, but only one of them is 4-dimensional. I think the tensor_index numbering is simply swapped between the float32 and INT8 models.

[Screenshot 2021-05-15 10:54:04]
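
One way to check which tensor_index corresponds to which input is to print the interpreter's input details and match by shape rather than by position. A minimal sketch (the model filename is a placeholder for whichever converted .tflite you load):

import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="converted_weight_quant.tflite")  # placeholder path
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    # index, name, shape, and dtype of every input
    print(detail["index"], detail["name"], detail["shape"], detail["dtype"])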

SaneBow commented 3 years ago

@PINTO0309 Thanks for the reply and for your amazing project. I am not familiar with TensorFlow. What does this mean? Is it that the model cannot be quantized, or is it some other bug in the tool?

PINTO0309 commented 3 years ago

@SaneBow It is less a bug in the tool than its current specification, which assumes image processing. In other words, the tool basically expects an input shape of [N, H, W, C]. I also know in advance that an error will occur if the inputs have multiple different shapes.

However, having thought about quantizing these particular models, I also know that they can be quantized easily by modifying just one part of the tool.

In short, technically any model can be quantized, but I have neglected to customize the tool. :crying_cat_face:

SaneBow commented 3 years ago

I see, so the tool expects image-processing models, and I tried to use it on audio-processing models, which is why it failed. Right now I only have the tflite files, and the first step of converting back to a saved_model seems to produce an incorrect model with a dimension mismatch. If I can get a correct saved_model, then weight quantization should be easy. Which part do I need to modify to make it work for a different input shape? Is it an easy task, or will it involve a lot of changes?

PINTO0309 commented 3 years ago

@SaneBow I took your issue seriously and understood that there is demand for this, so I upgraded the tool to support quantization of models with multiple inputs and multiple input shapes. The current version is v1.8.1.

SaneBow commented 3 years ago

That's fast, you are amazing. I upgraded to v1.8.1 and tried converting those models again, but I got this error when loading one of the converted models:

Traceback (most recent call last):
  File "run_aec.py", line 138, in stage2
    interpreter_2.set_tensor(input_details_2[0]["index"], estimated_block)
  File "/Users/xb/.pyenv/versions/DTLN/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 423, in set_tensor
    self._interpreter.SetTensor(tensor_index, value)
ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 4 for input 0.

What I am trying to do is quantize the models in https://github.com/breizhn/DTLN-aec. I used the following commands:

tflite2tensorflow --model_path dtln_aec_128_2.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb
tflite2tensorflow --model_path dtln_aec_128_2.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_weight_quant_tflite

Here, the input estimated_block for dtln_aec_128_2.tflite is an output from the model dtln_aec_128_1.tflite. Please let me know if you need any help with testing.

PINTO0309 commented 3 years ago

I tried to implement the same workflow, but no error occurred.

tflite2tensorflow \
--model_path dtln_aec_128_2.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_pb

tflite2tensorflow \
--model_path dtln_aec_128_2.tflite \
--flatc_path ../flatc \
--schema_path ../schema.fbs \
--output_weight_quant_tflite

[Screenshot 2021-05-16 22:20:35]

Once the .json file is generated, it is not regenerated, so try deleting dtln_aec_128_2.json by hand and then running the conversion again.
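
If you prefer to do this from Python rather than by hand, deleting the cached dump is enough to force regeneration on the next run:

import os

# remove the cached flatc dump so tflite2tensorflow regenerates it next time
if os.path.exists("dtln_aec_128_2.json"):
    os.remove("dtln_aec_128_2.json")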

PINTO0309 commented 3 years ago

Ahh, you mean that you get an error at runtime.

PINTO0309 commented 3 years ago

In the process of transforming the model, TensorFlow swaps the order of the input layers by itself.

From (run_aec.py):

        interpreter_2.set_tensor(input_details_2[1]["index"], states_2) # float32[1,2,128,2]
        interpreter_2.set_tensor(input_details_2[0]["index"], estimated_block) # float32[1,1,257]
        interpreter_2.set_tensor(input_details_2[2]["index"], in_lpb) # float32[1,1,257]

To (run_aec.py):

        interpreter_2.set_tensor(input_details_2[0]["index"], states_2)
        interpreter_2.set_tensor(input_details_2[1]["index"], estimated_block)
        interpreter_2.set_tensor(input_details_2[2]["index"], in_lpb)

Or

        interpreter_2.set_tensor(input_details_2[1]["index"], states_2)
        interpreter_2.set_tensor(input_details_2[2]["index"], estimated_block)
        interpreter_2.set_tensor(input_details_2[0]["index"], in_lpb)

Or

        interpreter_2.set_tensor(input_details_2[2]["index"], states_2)
        interpreter_2.set_tensor(input_details_2[1]["index"], estimated_block)
        interpreter_2.set_tensor(input_details_2[0]["index"], in_lpb)

SaneBow commented 3 years ago

That solved my problem. Thanks for helping a TensorFlow noob out. One last question: is there any way to convert the model so that its input order aligns with the original one?

PINTO0309 commented 3 years ago

After executing the first command, you need to rewrite the generated .json file, reordering the output layers, before running the quantization command.
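
For anyone following along, the .json here is the flatc dump of the .tflite generated with schema.fbs, so it follows the TFLite flatbuffer schema. A hedged sketch of reordering it, assuming the usual subgraphs[0] layout; the exact permutation you need depends on the model:

import json

with open("dtln_aec_128_2.json") as f:
    model = json.load(f)

sg = model["subgraphs"][0]
print("inputs:", sg["inputs"], "outputs:", sg["outputs"])  # lists of tensor indices
# Example permutation only; pick the order that matches the original model:
# sg["outputs"] = [sg["outputs"][i] for i in (1, 0)]

with open("dtln_aec_128_2.json", "w") as f:
    json.dump(model, f, indent=2)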

SaneBow commented 3 years ago

That works. Thanks. Your projects are all very cool.