PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

How To Convert Mediapipe BlazePose For Coral TPU #418

Closed ofekkazes closed 1 month ago

ofekkazes commented 1 month ago

Issue Type

Others

OS

Ubuntu

OS architecture

x86_64

Programming Language

Other

Framework

TensorFlow

Model name and Weights/Checkpoints URL

https://storage.googleapis.com/mediapipe-assets/pose_detection.tflite

Description

Hi,

Firstly, this is a really cool project: it is great for quantizing models that are not already quantized, and a convenient place to get models easily.

I am trying to quantize and convert the Mediapipe BlazePose 3D pose estimation model for use with the Coral USB Accelerator (Edge TPU). Maybe I am mistaken, since according to the chart it is already quantized to INT8, but I may not have understood it correctly.

I followed the steps in the 053_BlazePose folder in the following order:

  1. executed download_fullkey.sh
  2. from convert_script.txt I used the docker command with a small modification (because docker could not find the remote repository):
    docker run --gpus all -it --rm \
    -v `pwd`:/workspace/resources \
    -e LOCAL_UID=$(id -u $USER) \
    -e LOCAL_GID=$(id -g $USER) \
    ghcr.io/pinto0309/tflite2tensorflow:latest bash
  3. Inside the running docker container, I first ran the following command:
    tflite2tensorflow \
    --model_path pose_detection.tflite \
    --flatc_path ../flatc \
    --schema_path ../schema.fbs \
    --model_output_path saved_model_edgetpu \
    --output_pb True \
    --optimizing_hardswish_for_edgetpu True

I got two errors. The first was that pose_detection.tflite was not found, so I downloaded it from the Mediapipe model location linked above.

The second error was that --optimizing_hardswish_for_edgetpu True produced the following message:

tflite2tensorflow: error: unrecognized arguments: True --optimizing_hardswish_for_edgetpu True

So I dropped that flag. The second command also had an issue with the boolean flag, so I removed the boolean value like this:

tflite2tensorflow \
  --model_path pose_detection.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --model_output_path saved_model_edgetpu \
  --string_formulas_for_normalization 'data / 255' \
  --output_edgetpu    # <---- removed True

Lastly, I got a 'Compilation succeeded!' message and I can see the files in the saved_model_edgetpu folder, but I am not sure I followed the steps correctly. Did I follow them correctly, so that I can now use the quantized model with the Coral TPU, or did I make a mistake in one of the steps?
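
For reference, here is the minimal check I intend to run on the device. This is just a sketch, assuming tflite_runtime and the standard libedgetpu delegate are installed, and guessing at the compiled file name:

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# File name is a guess based on the output folder above -- adjust to whatever
# the compiler actually produced.
MODEL = "saved_model_edgetpu/model_full_integer_quant_edgetpu.tflite"

# Load the compiled model with the Edge TPU delegate; ops the compiler could
# not map run on the CPU instead.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])  # dummy frame just to exercise the graph
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)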

BTW, the log for the compilation was the following:

Operator                       Count      Status
DEPTH_TO_SPACE                 3          Operation not supported
PAD                            8          Mapped to Edge TPU
CONCATENATION                  2          Mapped to Edge TPU
ADD                            15         Mapped to Edge TPU
RESIZE_BILINEAR                2          Mapped to Edge TPU
CONV_2D                        45         Mapped to Edge TPU
DEPTHWISE_CONV_2D              28         Mapped to Edge TPU
RESHAPE                        6          Mapped to Edge TPU

What is the DEPTH_TO_SPACE operation?

Thank you in advance, Ofek

Relevant Log Output

No response

URL or source code for simple inference testing code

No response

PINTO0309 commented 1 month ago

You're converting correctly with that.

The EdgeTPU is a dead product that has not been updated in any way since 2022. In other words, DEPTH_TO_SPACE will never become convertible.


ofekkazes commented 1 month ago

Oh wow, thank you for this swift response, well received. Do you know what the DEPTH_TO_SPACE operation is?

PINTO0309 commented 1 month ago

If you want to know, attach the float32 tflite and uint8 tflite here. I mean the tflite files after conversion with tflite2tensorflow.

ofekkazes commented 1 month ago

Sure, you can see all the files that were produced (including the models) at the WeTransfer link: link. Please note that the original file is in the root of the zip, and the converted files are in the saved_model_edgetpu folder.

PINTO0309 commented 1 month ago

Please specify --output_no_quant_float32_tflite to generate a float32 model. It's a pain to convert it myself because I'm concentrating on other tasks.

ofekkazes commented 1 month ago

Sure, there it is.

PINTO0309 commented 1 month ago

The EdgeTPU does not support tensors with more than 5 dimensions. Since the DEPTH_TO_SPACE op used in the pose_detection model internally involves a Reshape and Transpose to 6 dimensions, the EdgeTPU skips that transformation. There is no workaround.

https://www.tensorflow.org/api_docs/python/tf/nn/depth_to_space
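
As a rough illustration, here is a NumPy sketch (my own reference code, not the actual TFLite kernel) of the NHWC reshape/transpose that DEPTH_TO_SPACE performs internally; the 6-D intermediate tensor is what the EdgeTPU cannot handle:

import numpy as np

def depth_to_space_nhwc(x, block):
    # DEPTH_TO_SPACE for NHWC tensors, written as the internal
    # reshape -> transpose -> reshape sequence.
    n, h, w, c = x.shape
    assert c % (block * block) == 0
    c_out = c // (block * block)
    x = x.reshape(n, h, w, block, block, c_out)  # 6-D intermediate (> 5 dims)
    x = x.transpose(0, 1, 3, 2, 4, 5)            # [N, H, b, W, b, C']
    return x.reshape(n, h * block, w * block, c_out)

# Tiny check against the example in the tf.nn.depth_to_space docs:
x = np.array([[[[1, 2, 3, 4]]]])                 # shape [1, 1, 1, 4], block_size 2
print(depth_to_space_nhwc(x, 2)[0, ..., 0])      # [[1 2]
                                                 #  [3 4]]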

python -m tf2onnx.convert \
--opset 11 \
--inputs-as-nchw input_1:0 \
--tflite pose_detection.tflite \
--output pose_detection_tf2onnx.onnx

onnx2tf \
-i pose_detection_tf2onnx.onnx \
-cotof \
-oiqt \
-qt per-tensor \
-coion

edgetpu_compiler pose_detection_tf2onnx_full_integer_quant.tflite

ofekkazes commented 1 month ago

Well noted, thank you for taking the time to explain this fully. I will try out other 3D pose estimation models to see what works with the Coral TPU and let you know. I did not know this was an already unsupported device (last updated in 2022); I got it for a small test project.