google-coral / edgetpu

Coral issue tracker (and legacy Edge TPU API source)

Run self-defined operations on Edge TPU but output is different from tflite CPU #552

Open Yin-Jiaqi opened 2 years ago

Yin-Jiaqi commented 2 years ago

Description

We are trying to run some self-defined operations on the Edge TPU, but the Edge TPU output does not match the CPU results.

TensorFlow CPU output:

array([[0.01454818, 0.051111  , 0.01396228, 0.05225142, 0.41379207, 0.28845084, 0.04657562, 0.0190449 , 0.08210775, 0.01815593]])

TFLite CPU output:

array([[0.015625  , 0.05078125, 0.015625  , 0.05078125, 0.42578125, 0.27734375, 0.046875  , 0.01953125, 0.0859375 , 0.01953125]])

Edge TPU output:

array([[0.09375   , 0.09375   , 0.09375   , 0.109375  , 0.1171875 , 0.09765625, 0.1171875 , 0.08984375, 0.08984375, 0.08984375]])

The TFLite CPU output stays close to the TensorFlow result, while the Edge TPU output is clearly wrong.

The full code is attached here: https://colab.research.google.com/drive/1xz623x4F3Un0z4MdNFstY-sfVTEnEAHu?usp=sharing

I appreciate your help.

- **Convert the model to TFLite and quantize:**

```python
input_shape = [[1, 2, 200, 200]]
datas = data(input_shape)

converter = tf.lite.TFLiteConverter.from_keras_model(full_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = datas.data_set
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
converter.target_spec.supported_types = [tf.int8, tf.float32]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)
```
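For reference, a minimal sketch of how the quantized model can be checked with the plain TFLite CPU interpreter before compiling; it assumes the `model_quant.tflite` file written above and the same float `input_data` array used for the Edge TPU run below, and is the kind of run that produced the "tflite CPU version output" shown earlier:

```python
import numpy as np
import tensorflow as tf

# Run the quantized model on the CPU TFLite interpreter as a reference.
interpreter = tf.lite.Interpreter(model_path='model_quant.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# The model expects uint8 input, so quantize the float data first.
scale, zero_point = input_details['quantization']
interpreter.set_tensor(input_details['index'],
                       np.uint8(input_data / scale + zero_point))

interpreter.invoke()

# Dequantize the uint8 output back to float for comparison.
out_scale, out_zero_point = output_details['quantization']
output = interpreter.get_tensor(output_details['index'])
print(out_scale * (output.astype(np.float32) - out_zero_point))
```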

- **Compile for the Edge TPU:**

```shell
! edgetpu_compiler model_quant.tflite
```


- **Run inference:**

```python
import os

import numpy as np
from pycoral.adapters import common
from pycoral.utils import edgetpu


def evaluate_edgetpu_tflite_model(edgetpu_path, data):
    # Initialize the TFLite interpreter with the Edge TPU delegate.
    model_file = os.path.join(edgetpu_path)
    interpreter = edgetpu.make_interpreter(model_file)
    interpreter.allocate_tensors()

    # Quantize the float input using the input tensor's quantization parameters.
    scale, zero_point = interpreter.get_input_details()[0]['quantization']
    normalize_data = np.uint8(data / scale + zero_point)
    common.set_input(interpreter, normalize_data)

    # Run inference, then dequantize the output back to float.
    interpreter.invoke()
    scale, zero_point = interpreter.get_output_details()[0]['quantization']
    output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
    output = scale * (output - zero_point)
    return output


evaluate_edgetpu_tflite_model('model_quant_edgetpu.tflite', input_data)
```
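To quantify the mismatch, a small sketch that only compares the three outputs reported above:

```python
import numpy as np

# Outputs reported above: TensorFlow (float), TFLite CPU (quantized), Edge TPU.
tf_cpu = np.array([[0.01454818, 0.051111, 0.01396228, 0.05225142, 0.41379207,
                    0.28845084, 0.04657562, 0.0190449, 0.08210775, 0.01815593]])
tflite_cpu = np.array([[0.015625, 0.05078125, 0.015625, 0.05078125, 0.42578125,
                        0.27734375, 0.046875, 0.01953125, 0.0859375, 0.01953125]])
edgetpu_out = np.array([[0.09375, 0.09375, 0.09375, 0.109375, 0.1171875,
                         0.09765625, 0.1171875, 0.08984375, 0.08984375, 0.08984375]])

# The TFLite CPU result stays within ordinary int8 quantization error,
# while the Edge TPU result is far outside it.
print(np.abs(tflite_cpu - tf_cpu).max())   # ~0.012
print(np.abs(edgetpu_out - tf_cpu).max())  # ~0.30
```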



 ### Issue Type

Support

### Operating System

Linux

### Coral Device

USB Accelerator

### Other Devices

_No response_

### Programming Language

Python 3.8

### Relevant Log Output

```shell
- Tensorflow CPU output: 

array([[0.01454818, 0.051111  , 0.01396228, 0.05225142, 0.41379207,
        0.28845084, 0.04657562, 0.0190449 , 0.08210775, 0.01815593]])

- tflite CPU version output: 

array([[0.015625  , 0.05078125, 0.015625  , 0.05078125, 0.42578125,
        0.27734375, 0.046875  , 0.01953125, 0.0859375 , 0.01953125]])

- edgetpu output:

array([[0.09375   , 0.09375   , 0.09375   , 0.109375  , 0.1171875 ,
        0.09765625, 0.1171875 , 0.08984375, 0.08984375, 0.08984375]])
```

hjonnala commented 2 years ago

@jiaqiyin1995 the issue is with the BatchMatMul operation. It is a compiler bug: the compiler maps BatchMatMul to the Edge TPU even though the Edge TPU does not support it.
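One way to confirm this kind of mapping problem is the compiler's operation log, which lists each operator and whether it was mapped to the Edge TPU or left on the CPU. A minimal sketch, assuming `edgetpu_compiler` is on the PATH and reusing the `model_quant.tflite` file from above (`-s` / `--show_operations` prints the mapping log):

```python
import subprocess

# Recompile with the operation summary enabled; the printed log lists each
# operator and whether it was mapped to the Edge TPU or left on the CPU.
# A BatchMatMul reported as mapped to the Edge TPU is the symptom described above.
subprocess.run(['edgetpu_compiler', '-s', 'model_quant.tflite'], check=True)
```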