PINTO0309 / tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Supports building environments with Docker, with direct access to the host PC GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8-quantized models.
https://qiita.com/PINTO
MIT License

Densify? #11

Closed geaxgx closed 2 years ago

geaxgx commented 3 years ago

1. Ubuntu 18.04

2. OS Architecture x86_64

3. OpenVINO e.g. 2021.4.582

9. Download URL for .tflite IR model https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_detection/pose_detection.tflite

Hi @PINTO0309! The new mediapipe version 0.8.6 comes with new models for Blazepose (that's a never-ending story :-)). The size of the pose detection model (link above) has been significantly reduced (from ~7.5 MB to ~3 MB), but unfortunately the model uses a layer named Densify that is not implemented in tflite2tensorflow. I guess it is a relatively new layer. When I try to visualize its data in Netron, I get an "Invalid tensor data size" message (screenshot attached).

Do you think Densify can be easily implemented in your tools? Note that it is not something I am eagerly waiting for, since I can do without it by using the previous version of the pose detection model.

PINTO0309 commented 3 years ago

After analyzing the TensorFlow implementation, it seems I need to implement the following additional steps, in this order. It will take some time.

  1. https://www.tensorflow.org/api_docs/python/tf/sparse/SparseTensor
  2. https://www.tensorflow.org/api_docs/python/tf/sparse/to_dense
  3. https://github.com/tensorflow/tensorflow/blob/5d442828288614d57062f77d8af5d5b090b21469/tensorflow/lite/kernels/internal/reference/densify.h#L29-L43
  4. https://github.com/tensorflow/tensorflow/blob/a6b6df2de94421ecdb1baa97aca3ffca74ee04ad/tensorflow/lite/tools/optimize/sparsity/format_converter.cc#L215-L256

    template <typename T>
    inline void Densify(const TfLiteSparsity* sparsity,
                        const RuntimeShape& input_shape, const T* input_data,
                        const RuntimeShape& output_shape, T* output_data,
                        TfLiteContext* context) {
      const int dims_count = output_shape.DimensionsCount();
      std::vector<int> vector_shape(dims_count);
      for (int i = 0; i < dims_count; i++) {
        vector_shape[i] = output_shape.Dims(i);
      }

      tflite::optimize::sparsity::FormatConverter<T> converter(vector_shape,
                                                               *sparsity);
      converter.SparseToDense(input_data, output_shape.FlatSize(), output_data,
                              context);
    }
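For readers following along, here is a toy Python sketch of what the sparse-to-dense step in the list above boils down to. This is not tflite2tensorflow's actual code, and it ignores the TFLite block-sparse format details handled by `FormatConverter`; it assumes the sparse weights have already been decoded into COO-style indices and values:

```python
import numpy as np

# Toy equivalent of Densify: scatter the stored nonzero values back into
# a zero-initialized dense array, analogous to tf.sparse.to_dense /
# FormatConverter::SparseToDense in the TFLite reference kernel.
def densify(indices, values, dense_shape):
    dense = np.zeros(dense_shape, dtype=np.float32)
    for idx, val in zip(indices, values):
        dense[tuple(idx)] = val
    return dense

dense = densify(indices=[(0, 1), (2, 0)],
                values=[3.0, 5.0],
                dense_shape=(3, 2))
# dense == [[0, 3], [0, 0], [5, 0]]
```

In TensorFlow itself, the same expansion is what `tf.sparse.SparseTensor` plus `tf.sparse.to_dense` (links 1 and 2 above) perform.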
geaxgx commented 3 years ago

Thanks @PINTO0309 ! As I said before, take your time. I can play with the previous version.

I guess Densify allows a smaller size of the model on disk, but probably not in memory :-)
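A rough back-of-the-envelope illustration of that tradeoff (the shape and sparsity figures here are made up, not taken from the actual model): a mostly-zero weight tensor is cheap to store sparsely on disk, but once densified for inference it occupies the full dense footprint in memory.

```python
import numpy as np

# Hypothetical numbers: a 1024x1024 float32 weight tensor with ~1% nonzeros.
dense_shape = (1024, 1024)
nnz = 10_000

dense_bytes = int(np.prod(dense_shape)) * 4  # full dense float32 tensor
sparse_bytes = nnz * (4 + 2 * 4)             # per nonzero: value + 2 int32 indices

print(dense_bytes, sparse_bytes)  # dense is ~35x larger here
```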

MrNeither commented 2 years ago

@geaxgx I get the same problem. Could you please give a link to the previous version of the model, which can be converted with tflite2tensorflow?

MrNeither commented 2 years ago

@geaxgx Sorry for the trouble, it turns out it's just in another commit :)

PINTO0309 commented 2 years ago

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/053_BlazePose

PINTO0309 commented 2 years ago

Fixes: f032b3128b3bf16191e27916eded639a5db3d782

This is an experimental implementation at the moment, so it is not well tested.

tflite2tensorflow v1.11.7 https://github.com/PINTO0309/tflite2tensorflow/releases/tag/v1.11.7

PINTO0309 commented 2 years ago

Committed. TFLite Float32/Float16, EdgeTPU, ONNX, OpenVINO IR, Myriad Blob, TF-TRT, TFJS, CoreML. https://github.com/PINTO0309/PINTO_model_zoo/tree/main/053_BlazePose/20_densify_pose_detection

geaxgx commented 2 years ago

Thanks @PINTO0309 !