google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://mediapipe.dev
Apache License 2.0

Customized object detection model compiled for EdgeTPU does not run #4717

Open robodhhb opened 1 year ago

robodhhb commented 1 year ago

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

Yes

OS Platform and Distribution

PiOS Buster

Python Version

Python 3.7.3

MediaPipe Model Maker version

The version you currently get on Colab

Task name (e.g. Image classification, Gesture recognition etc.)

Object detection

Describe the actual behavior

I adapted your example for customizing an object detection model with MediaPipe Model Maker on Colab, using my own training data. The training was successful, as was the compilation of the model with the EdgeTPU compiler.
When I start my app on the Raspberry Pi, the model loads without error, but I get a runtime error when the first picture is run through inference.
The failing line is:

```python
self.objs = get_objects(self.interpreter, self.default_threshold)[:self.default_top_k]
```

The error is:

```
  File "/usr/lib/python3/dist-packages/pycoral/adapters/detect.py", line 214, in get_objects
    elif common.output_tensor(interpreter, 3).size == 1:
  File "/usr/lib/python3/dist-packages/pycoral/adapters/common.py", line 29, in output_tensor
    return interpreter.tensor(interpreter.get_output_details()[i]['index'])()
IndexError: list index out of range
```
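The traceback shows that `get_objects` reads output tensor index 3, i.e. it assumes a detection model with at least four output tensors (the standard TFLite SSD postprocess layout of boxes, classes, scores, count), while the exported model apparently exposes fewer. A minimal sketch of a pre-flight check, using a plain list as a stand-in for `interpreter.get_output_details()` (the helper name and the mock data are illustrative, not pycoral API):

```python
# Illustrative pre-flight check: pycoral's get_objects() indexes output
# tensor 3, so the model must expose at least four output tensors.
# 'output_details' stands in for interpreter.get_output_details().

def looks_like_ssd_postprocess(output_details):
    """True if the model exposes the 4-tensor detection output layout."""
    return len(output_details) >= 4

# A model with a single combined output tensor (which would trigger the
# IndexError seen above) vs. the expected 4-tensor layout:
single_output = [{'index': 0}]
four_outputs = [{'index': i} for i in range(4)]

print(looks_like_ssd_postprocess(single_output))  # False
print(looks_like_ssd_postprocess(four_outputs))   # True
```

Running such a check right after creating the interpreter would turn the opaque `IndexError` into a clear message about the model's output signature.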

Describe the expected behaviour

The call should return the detected objects.

Standalone code/steps you may have used to try to get what you need

```python
# Imports for object detection on the Coral USB Accelerator (EdgeTPU)
from pycoral.adapters.common import input_size
from pycoral.adapters.detect import get_objects
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
from pycoral.utils.edgetpu import run_inference
from pycoral.utils.edgetpu import get_runtime_version

    # Get and process the results of inference. Update image with objects found.
    def process_inference_results(self, image, showDetectionRegions=False):
        self.objs = get_objects(self.interpreter, self.default_threshold)[:self.default_top_k]
        return self.append_objs_to_img(image, self.objs, self.labels, showDetectionRegions)
```

Other info / Complete Logs

No response

kuaashish commented 1 year ago

@robodhhb,

From the description, we can see that you are using Python version 3.7.3. Please ensure that you are using the recommended version of Python and that you are running the package on Raspberry Pi OS 64-bit, as described here.

Further, if the error still persists, please let us know the following so we can dig deeper into the issue:


- **Complete EdgeDevice information** 

- **Steps following from the documentation**

- **Complete error logs**

Thank you

robodhhb commented 1 year ago

For the Coral USB Accelerator (EdgeTPU) the requirements are as follows:

Requirements:

- A computer with one of the following operating systems:
  - Linux Debian 10, or a derivative thereof (such as Ubuntu 18.04), and a system architecture of either x86-64, Armv7 (32-bit), or Armv8 (64-bit) (includes support for Raspberry Pi 3 Model B+, Raspberry Pi 4, and Raspberry Pi Zero 2)
  - macOS 10.15 (Catalina) or 11 (Big Sur), with either [MacPorts](https://www.macports.org/) or [Homebrew](https://brew.sh/) installed
  - Windows 10
- One available USB port (for the best performance, use a USB 3.0 port)
- Python 3.6 - 3.9
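Both the Python version and whether the running OS is 64-bit can be checked directly on the Pi with the standard library (a generic sketch, not specific to pycoral or MediaPipe):

```python
import platform
import sys

# Interpreter version: pycoral wants Python 3.6 - 3.9 per the list above.
print(sys.version_info[:3])

# Machine architecture: 'aarch64' means a 64-bit OS is running,
# 'armv7l' means a 32-bit PiOS such as Buster 32-bit.
print(platform.machine())
```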

Do you know if there is a 64-bit PiOS Buster? The Raspberry Pi Imager does not list one!

PaulTR commented 1 year ago

Hey, can you post the links for everything you've followed related to this so I can try to replicate? Which tutorial/docs did you use for creating the model? What did you change to run it with the EdgeTPU (I haven't actually tried this yet and wasn't sure if it'd work, so want to verify)?

Thanks!

robodhhb commented 1 year ago

@PaulTR Hi Paul, for training the model I followed the MediaPipe tutorial: https://developers.google.com/mediapipe/solutions/customization/object_detector You can find the whole code in the attached Jupyter notebook for Colab: trainSMRC_MediaPipe.zip

At the end of the training I added the installation of the EdgeTPU compiler and then compiled the model for the EdgeTPU (USB Accelerator).

The model is used in the following code, which you can look at on GitHub: https://github.com/robodhhb/Smart-Modelrailway-Cam/blob/main/10_SMRC_Application/SMRC_Contr.py See lines 19-26, 47-66, 108-111, and 114-116. The code is executed on a Raspberry Pi 4B with PiOS Buster 32-bit.

PaulTR commented 11 months ago

Yeah this is beyond my scope of knowledge for MediaPipe :) I also don't think I'll have the time to really dig into it, unfortunately. Maybe someone else on the team with experience in this realm can jump in.

dorive commented 6 months ago

Hi @robodhhb Did you manage to get the MediaPipe model to work on the Coral USB Accelerator? Thanks.

robodhhb commented 6 months ago

Hi @dorive, the operating system requirements for the Coral USB Accelerator and for MediaPipe are not the same. See https://coral.ai/docs/accelerator/get-started/ and https://developers.google.com/mediapipe/solutions/setup_python

As far as I know, it is not possible to run a MediaPipe model on the Coral USB Accelerator.