google-coral / pycoral

Python API for ML inferencing and transfer-learning on Coral devices
https://coral.ai
Apache License 2.0

error: external/org_tensorflow/tensorflow/lite/core/subgraph.cc:1044 required_bytes != bytes (602112 != 150528) #49

Closed · staebchen0 closed this issue 2 years ago

staebchen0 commented 2 years ago

Description

Since I use Windows 10, I unfortunately cannot use the current runtime and have to use the older version edgetpu_runtime_20210119.zip instead.

In order for this to work, I have to compile my model for runtime version 13 with the Edge TPU compiler: edgetpu_compiler -s -m 13 model_unquant_8020.tflite, which produces the new model example_edetpu13.tflite.
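For reference, the two byte counts in the error match a 224x224x3 input tensor: 150528 bytes as uint8 and 602112 bytes as float32. Here is a small sketch to check what the compiled model actually expects (assuming tflite_runtime is installed; edgetpu.dll is the Windows name of the Edge TPU runtime library, and the model path follows the compile step above):

# Sketch: print the model's expected input shape, dtype and byte size.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path='./model/example_edetpu13.tflite',
    experimental_delegates=[load_delegate('edgetpu.dll')])
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    # A quantized 224x224x3 uint8 input needs 150528 bytes;
    # the same tensor as float32 needs 4x that, i.e. 602112 bytes.
    size = int(np.prod(detail['shape'])) * np.dtype(detail['dtype']).itemsize
    print(detail['shape'], detail['dtype'], size, 'bytes')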

$ pip freeze
edgetpu @ https://dl.google.com/coral/edgetpu_api/edgetpu-2.14.0-cp37-cp37m-win_amd64.whl
install==1.3.4
numpy==1.21.2
opencv-contrib-python==4.5.3.56
opencv-python==4.5.3.56
Pillow==8.3.2
pycoral @ https://github.com/google-coral/pycoral/releases/download/v2.0.0/pycoral-2.0.0-cp37-cp37m-win_amd64.whl
tflite-runtime @ https://github.com/google-coral/pycoral/releases/download/v2.0.0/tflite_runtime-2.5.0.post1-cp37-cp37m-win_amd64.whl

My model is still not running

Test Code:

from edgetpu.classification.engine import ClassificationEngine
from PIL import Image
import cv2
import re
import os

#from edgetpu.classification.engine import ClassificationEngine

# The TFLite model compiled for the Edge TPU
modelPath = './model/example_edetpu13.tflite'

# The path to labels.txt that was downloaded with your model
labelPath = './model/labels.txt'

# This function parses the labels.txt and puts it in a python dictionary
def loadLabels(labelPath):
    p = re.compile(r'\s*(\d+)(.+)')
    with open(labelPath, 'r', encoding='utf-8') as labelFile:
        lines = (p.match(line).groups() for line in labelFile.readlines())
        return {int(num): text.strip() for num, text in lines}

# This function takes in a PIL Image and the ClassificationEngine
def classifyImage(image, engine):
    # Classify and output inference
    #classifications = engine.ClassifyWithImage(image)
    classifications = engine.classify_with_image(image)

    return classifications

def main():
    # Load your model onto your Coral Edgetpu
    engine = ClassificationEngine(modelPath)
    labels = loadLabels(labelPath)

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Format the image into a PIL Image so it's compatible with the Edge TPU
        cv2_im = frame
        #cv2_im = cv2.resize(cv2_im, (224, 224))
        pil_im = Image.fromarray(cv2_im)

        # Resize and flip the image so it's a square and matches training
        pil_im.resize((224, 224))
        pil_im.transpose(Image.FLIP_LEFT_RIGHT)

        # Classify and display image
        results = classifyImage(pil_im, engine)
        #cv2.imshow('frame', cv2_im)
        print(results)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Failure:

"__main__" geladen
"runpy" geladen
external/org_tensorflow/tensorflow/lite/core/subgraph.cc:1044 required_bytes != bytes (602112 != 150528)
Call stack:
 >  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 26, in classifyImage
 >    classifications = engine.classify_with_image(image)
 >  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 50, in main
 >    results = classifyImage(pil_im, engine)
 >  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 60, in <module> (Current frame)
 >    main()
"edgetpu.swig.edgetpu_cpp_wrapper" geladen
"edgetpu.basic.basic_engine" geladen
"edgetpu.classification.engine" geladen
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\debugpy\__main__.py", line 45, in <module>
    cli.main()
  File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 430, in main
    run()
  File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 267, in run_file
    runpy.run_path(options.target, run_name=compat.force_str("__main__"))
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 60, in <module>
The thread 'MainThread' (0x1) has exited with code 0 (0x0).
    main()
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 50, in main
    results = classifyImage(pil_im, engine)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 26, in classifyImage
    classifications = engine.classify_with_image(image)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\classification\engine.py", line 99, in classify_with_image
    return self.classify_with_input_tensor(input_tensor, threshold, top_k)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\classification\engine.py", line 123, in classify_with_input_tensor
    input_tensor)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\basic\basic_engine.py", line 136, in run_inference
    result = self._engine.RunInference(input)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\swig\edgetpu_cpp_wrapper.py", line 111, in RunInference
    return _edgetpu_cpp_wrapper.BasicEnginePythonWrapper_RunInference(self, input)
RuntimeError: external/org_tensorflow/tensorflow/lite/core/subgraph.cc:1044 required_bytes != bytes (602112 != 150528)
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-_xlv4eex\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
The program "python.exe" has exited with code 1 (0x1).

What is the problem now?

Issue Type: Build/Install
Operating System: Windows 10
Coral Device: USB Accelerator
Other Devices: No response
Programming Language: Python 3.7
Relevant Log Output: No response
hjonnala commented 2 years ago

Can you please share your CPU tflite model? The edgetpu API is deprecated; please try to rewrite the code using the pycoral API.
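For illustration, here is a minimal sketch of single-image classification with the pycoral API (the model, label and image paths are placeholders, and the model is assumed to be a quantized classifier compiled for the Edge TPU):

# Minimal sketch: classify one image with the pycoral API.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('model_edgetpu.tflite')  # placeholder path
interpreter.allocate_tensors()
labels = read_label_file('labels.txt')                  # placeholder path

# Resize the image to the model's input size and copy it into the input tensor.
image = Image.open('test.jpg').convert('RGB').resize(common.input_size(interpreter))
common.set_input(interpreter, image)

interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=3, score_threshold=0.0):
    print(labels.get(c.id, c.id), c.score)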

staebchen0 commented 2 years ago

I know the edgetpu API is out of date, but with the pycoral API I get the error "list index out of range"

with this Test Script:

# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A demo that runs object detection on camera frames using OpenCV.
TEST_DATA=../all_models
Run face detection model:
python3 detect.py \
  --model ${TEST_DATA}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
Run coco model:
python3 detect.py \
  --model ${TEST_DATA}/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
  --labels ${TEST_DATA}/coco_labels.txt
"""
import argparse
import cv2
import os

from pycoral.adapters.common import input_size
from pycoral.adapters.detect import get_objects
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
from pycoral.utils.edgetpu import run_inference

def main():
    default_model_dir = './model/'
    default_model = 'example_edetpu13.tflite'
    default_labels = 'labels.txt'
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', help='.tflite model path',
                        default=os.path.join(default_model_dir,default_model))
    parser.add_argument('--labels', help='label file path',
                        default=os.path.join(default_model_dir, default_labels))
    parser.add_argument('--top_k', type=int, default=1,
                        help='number of categories with highest score to display')
    parser.add_argument('--camera_idx', type=int, help='Index of which video source to use. ', default = 0)
    parser.add_argument('--threshold', type=float, default=0.1,
                        help='classifier score threshold')
    args = parser.parse_args()

    print('Loading {} with {} labels.'.format(args.model, args.labels))
    interpreter = make_interpreter(args.model)
    interpreter.allocate_tensors()
    labels = read_label_file(args.labels)
    inference_size = input_size(interpreter)

    cap = cv2.VideoCapture(args.camera_idx)

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cv2_im = frame

        cv2_im_rgb = cv2.cvtColor(cv2_im, cv2.COLOR_BGR2RGB)
        cv2_im_rgb = cv2.resize(cv2_im_rgb, inference_size)
        run_inference(interpreter, cv2_im_rgb.tobytes())
        objs = get_objects(interpreter, args.threshold)[:args.top_k]
        cv2_im = append_objs_to_img(cv2_im, inference_size, objs, labels)

        cv2.imshow('frame', cv2_im)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

def append_objs_to_img(cv2_im, inference_size, objs, labels):
    height, width, channels = cv2_im.shape
    scale_x, scale_y = width / inference_size[0], height / inference_size[1]
    for obj in objs:
        bbox = obj.bbox.scale(scale_x, scale_y)
        x0, y0 = int(bbox.xmin), int(bbox.ymin)
        x1, y1 = int(bbox.xmax), int(bbox.ymax)

        percent = int(100 * obj.score)
        label = '{}% {}'.format(percent, labels.get(obj.id, obj.id))

        cv2_im = cv2.rectangle(cv2_im, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2_im = cv2.putText(cv2_im, label, (x0, y0+30),
                             cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 0, 0), 2)
    return cv2_im

if __name__ == '__main__':
    main()

The error occurs at this code line: objs = get_objects(interpreter, args.threshold)[:args.top_k]

That's why I tried the edgetpu API.

Attached is my model: example_edetpu13.zip

hjonnala commented 2 years ago

Hi, get_objects is for object detection models. We can't use it for classification models.

Could you please first quantize the model and try classify_image.py?
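In case it helps, a sketch of full-integer post-training quantization with the TensorFlow Lite converter (the SavedModel path and the representative dataset below are placeholders; the resulting uint8 model can then be compiled with edgetpu_compiler -s -m 13 as above):

# Sketch: full-integer post-training quantization for the Edge TPU.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few hundred samples preprocessed the same way as during training.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')  # placeholder
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())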

staebchen0 commented 2 years ago

Okay, I'm one step further now. I have created my model as mymodel_edgetpu.tflite

and then compiled it with: edgetpu_compiler -s -m 13 $TFLITE_FILE

classify_image.py is running now :-)

Which example can I now use to test the classification with a camera?

hjonnala commented 2 years ago

Please try with classes = classify.get_classes(interpreter, args.top_k, args.threshold) to get the outputs and add labels to the image.

This classify_capture.py might be helpful.
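As an illustration, a small sketch of a camera loop that overlays the top classes on each frame (the model and label paths are placeholders for your quantized, Edge TPU-compiled classifier):

# Sketch: classify camera frames and overlay the top classes.
import cv2
from pycoral.adapters.classify import get_classes
from pycoral.adapters.common import input_size
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter, run_inference

interpreter = make_interpreter('./model/mymodel_edgetpu.tflite')  # placeholder path
interpreter.allocate_tensors()
labels = read_label_file('./model/labels.txt')                    # placeholder path
inference_size = input_size(interpreter)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Convert BGR -> RGB and resize to the model's input size.
    rgb = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), inference_size)
    run_inference(interpreter, rgb.tobytes())

    # get_classes returns (id, score) pairs instead of bounding boxes.
    for i, c in enumerate(get_classes(interpreter, top_k=3, score_threshold=0.1)):
        text = '{:.0%} {}'.format(c.score, labels.get(c.id, c.id))
        cv2.putText(frame, text, (10, 30 + i * 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()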

staebchen0 commented 2 years ago

Thanks a million! It finally works under Win10 :-)

I hope the bug in the newest edgetpu_runtime_20210726.zip is found quickly so that I can use it with Windows 10 as well.

That would have made things a lot easier.


staebchen0 commented 2 years ago

While testing, I just found out that the classes are no longer recognized correctly!

Could that have happened when compiling for runtime version 13?

hjonnala commented 2 years ago

Please try with some test images and compare the results of the CPU tflite model vs. the edgetpu tflite model.
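For example, a rough sketch of such a comparison (file names are placeholders; both models are assumed to be quantized with uint8 inputs, and only the Edge TPU model uses the delegate):

# Sketch: compare the top result of the CPU model and the Edge TPU model.
from PIL import Image
from tflite_runtime.interpreter import Interpreter
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

def top1(interpreter, image_path):
    interpreter.allocate_tensors()
    image = Image.open(image_path).convert('RGB').resize(common.input_size(interpreter))
    common.set_input(interpreter, image)
    interpreter.invoke()
    return classify.get_classes(interpreter, top_k=1)[0]

cpu = Interpreter(model_path='model_quant.tflite')      # placeholder: CPU tflite model
tpu = make_interpreter('model_quant_edgetpu.tflite')    # placeholder: Edge TPU model

for path in ['test1.jpg', 'test2.jpg']:                 # placeholder test images
    print(path, 'cpu:', top1(cpu, path), 'edgetpu:', top1(tpu, path))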

staebchen0 commented 2 years ago

I have already done that; the CPU tflite model gives the right result 95% of the time.

With the same pictures, the edgetpu model always returns the wrong class.

Aside from that, I only get two classes even though I set top_k to 4: parser.add_argument('-k', '--top_k', type=int, default=4)

hjonnala commented 2 years ago

Okay, please create a new performance issue with the details below:

Thanks

staebchen0 commented 2 years ago

It was created: https://github.com/google-coral/pycoral/issues/50