DeGirum / PySDKExamples

DeGirum PySDK Usage Examples
https://cs.degirum.com
MIT License

Issues running model on raspberrypi5 + edgetpu #50

Open han-so1omon opened 4 months ago

han-so1omon commented 4 months ago

I have some issues running the DeGirum models on my Raspberry Pi 5 + Edge TPU environment with Raspberry Pi OS 12 (Bookworm). Moving over from this issue: https://github.com/ultralytics/ultralytics/issues/1185. @shashichilappagari Can you provide some assistance?

First step

# Download the models
degirum download-zoo --path /home/errc/v --device EDGETPU --runtime TFLITE --precision QUANT --token dg_4JRLnVvtfNdKLzj4oL816wNtL9gQBT5dfqmi3 --url https://cs.degirum.com/degirum/edgetpu

Try to run with degirum pysdk

import cv2
import degirum as dg

image = cv2.imread("./test-posenet.jpg")

zoo = dg.connect(dg.LOCAL, "/home/errc/v/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1.json")
model = zoo.load_model("yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1")
print(model)

result = model.predict(image)
result_image = result.image_overlay
cv2.imwrite("./test-posenet-degirum.jpg", result_image)
# Result
> python pose-tracking-debug-degirum.py 
<degirum.model._ClientModel object at 0x7f73d62ed0>
terminate called without an active exception
terminate called without an active exception
Aborted

Try to run with the base Ultralytics YOLO library

from ultralytics import YOLO

# Load model
model = YOLO('/home/errc/v/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1.tflite')

# Track with the model
results = model.track(source="/home/errc/e/ai/test-infrared.mp4", save=True)
# Result
>  python pose-tracking-debug-yolo.py 
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Loading /home/errc/v/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1.tflite for TensorFlow Lite inference...
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Traceback (most recent call last):
  File "/home/errc/e/ai/pose-tracking-debug-yolo.py", line 8, in <module>
    results = model.track(source="/home/errc/e/ai/test-infrared.mp4", save=True)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/ultralytics/engine/model.py", line 492, in track
    return self.predict(source=source, stream=stream, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/ultralytics/engine/model.py", line 445, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/ultralytics/engine/predictor.py", line 297, in setup_model
    self.model = AutoBackend(
                 ^^^^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/ultralytics/nn/autobackend.py", line 341, in __init__
    interpreter.allocate_tensors()  # allocate
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/tflite_runtime/interpreter.py", line 531, in allocate_tensors
    return self._interpreter.AllocateTensors()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Encountered unresolved custom op: edgetpu-custom-op.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom Node number 0 (edgetpu-custom-op) failed to prepare.Encountered unresolved custom op: edgetpu-custom-op.
See instructions: https://www.tensorflow.org/lite/guide/ops_custom Node number 0 (edgetpu-custom-op) failed to prepare.

I can see that the Edge TPU is connected, although I am not sure that it is being used:

> lspci
0000:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0000:01:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
0001:00:00.0 PCI bridge: Broadcom Inc. and subsidiaries Device 2712 (rev 21)
0001:01:00.0 Ethernet controller: Device 1de4:0001
> ls /dev/apex_0 
/dev/apex_0
shashichilappagari commented 4 months ago

@kteodorovich or @boristeo Can you please help @han-so1omon? I suspect the PCIe driver is not installed properly. @han-so1omon, can you please provide us the output of the following command:

degirum sys-info
han-so1omon commented 4 months ago

Sure, here's the output @shashichilappagari @boristeo @kteodorovich

$ degirum sys-info
Devices:
  N2X/CPU:
  - '@Index': 0
  - '@Index': 1
  TFLITE/CPU:
  - '@Index': 0
  - '@Index': 1
  TFLITE/EDGETPU:
  - '@Index': 0
Software Version: 0.12.1
shashichilappagari commented 4 months ago

@han-so1omon So, it appears that PySDK is able to recognize the Edge TPU. To ensure that the driver is properly installed and working, we made a small test script that does not depend on PySDK (this will let us diagnose whether the problem is with PySDK or with the basic setup). Please see if you can run the following code without errors:

import tflite_runtime.interpreter as tflite
from PIL import Image
import numpy as np
import os

print('Downloading test model and test image')
os.system('wget -nc https://raw.githubusercontent.com/google-coral/test_data/master/mobilenet_v1_1.0_224_quant_edgetpu.tflite')
os.system('wget -nc https://github.com/DeGirum/PySDKExamples/blob/main/images/Cat.jpg?raw=true -O Cat.jpg')

print('Running...')
m = tflite.Interpreter('mobilenet_v1_1.0_224_quant_edgetpu.tflite', experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
img = Image.open('Cat.jpg')

m.allocate_tensors()
n, h, w, c = m.get_input_details()[0]['shape']
m.set_tensor(m.get_input_details()[0]['index'], np.array(img.resize((h, w)))[np.newaxis,...])
m.invoke()
out = m.get_tensor(m.get_output_details()[0]['index']).flatten()

assert np.argmax(out) == 288, 'Wrong output result'
assert out[np.argmax(out)] == 83, 'Wrong output probability'
print('OK')
han-so1omon commented 4 months ago

Here is the output:

$ python edge-tpu-debug-degirum.py 
Downloading test model and test image
--2024-05-16 10:45:04--  https://raw.githubusercontent.com/google-coral/test_data/master/mobilenet_v1_1.0_224_quant_edgetpu.tflite
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8000::154, 2606:50c0:8001::154, 2606:50c0:8002::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8000::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4749998 (4.5M) [application/octet-stream]
Saving to: ‘mobilenet_v1_1.0_224_quant_edgetpu.tflite’

mobilenet_v1_1.0_224_quant_edgetpu 100%[==============================================================>]   4.53M  13.5MB/s    in 0.3s    

2024-05-16 10:45:05 (13.5 MB/s) - ‘mobilenet_v1_1.0_224_quant_edgetpu.tflite’ saved [4749998/4749998]

--2024-05-16 10:45:05--  https://github.com/DeGirum/PySDKExamples/blob/main/images/Cat.jpg?raw=true
Resolving github.com (github.com)... 140.82.114.4
Connecting to github.com (github.com)|140.82.114.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/DeGirum/PySDKExamples/raw/main/images/Cat.jpg [following]
--2024-05-16 10:45:05--  https://github.com/DeGirum/PySDKExamples/raw/main/images/Cat.jpg
Reusing existing connection to github.com:443.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Cat.jpg [following]
--2024-05-16 10:45:06--  https://raw.githubusercontent.com/DeGirum/PySDKExamples/main/images/Cat.jpg
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8000::154, 2606:50c0:8001::154, 2606:50c0:8002::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8000::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 334467 (327K) [image/jpeg]
Saving to: ‘Cat.jpg’

Cat.jpg                            100%[==============================================================>] 326.63K  --.-KB/s    in 0.1s    

2024-05-16 10:45:06 (3.15 MB/s) - ‘Cat.jpg’ saved [334467/334467]

Running...
OK
shashichilappagari commented 4 months ago

@han-so1omon Thanks for checking this. So it does appear that the Edge TPU is functioning properly. Can you please try other models in the zoo, to check whether the problem is specific to this model?

han-so1omon commented 4 months ago

@shashichilappagari I have tried another model and it works fine. Using the test image from Google's PoseNet project, the yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1 model is able to detect objects in the image:

import cv2
import degirum as dg

image = cv2.imread("./test-posenet.jpg")

#zoo = dg.connect(dg.LOCAL, "/home/errc/v/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1.json")
zoo = dg.connect(dg.LOCAL, "/home/errc/v/yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1/yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1.json")
model = zoo.load_model("yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1")
print(model)

result = model.predict(image)
result_image = result.image_overlay
cv2.imwrite("./test-posenet-degirum.jpg", result_image)

shashichilappagari commented 4 months ago

@han-so1omon Is there a typo in the above code? Did you load the pose model or the coco detection model? Assuming you loaded the detection model and that it is working, this is partially good news, as it shows that PySDK works with the local Edge TPU. The problem now seems to be specific to the pose model. We will upload models compiled at lower resolution to see if they resolve the issue. Thanks for your patience and quick responses.

han-so1omon commented 4 months ago

Some more context: I'm using feranick's Edge TPU runtime and Python libraries, as recommended in the Ultralytics setup instructions, since those runtimes are kept up to date after Google abandoned the Coral project. I'm also using the default Python 3.11 on Raspberry Pi OS Bookworm. Should I perhaps try running from the Docker container? If so, do you have examples of that?

han-so1omon commented 4 months ago

Thank you, I am trying to get this issue resolved this week, and I appreciate your responsiveness quite a lot

han-so1omon commented 4 months ago

Yes, I corrected the typo. I loaded the coco detection model, and it seems to work fine. I commented out loading the coco pose model, as it was throwing the error

shashichilappagari commented 4 months ago

@han-so1omon Since other models are working, the issue seems to be specific to the pose model. As you can see from our cloud platform, all models in the edgetpu model zoo are working properly on our cloud farm machines, which have the Google Edge TPU PCIe module. As I mentioned before, we will compile pose models at lower resolution and see if the problem goes away. Another option is to use Google's MobileNet PoseNet model.

han-so1omon commented 4 months ago

@shashichilappagari Do you have instructions on how you've setup your google edge tpu pcie modules? Additionally, do you have the mobilenet posenet in your model zoos for edgetpu?

shashichilappagari commented 4 months ago

@han-so1omon We will add it and let you know. Please give us a couple of hours to get lower resolution pose models. We will also share our setup guide with you.

han-so1omon commented 4 months ago

Ok, thank you

shashichilappagari commented 4 months ago

@han-so1omon From the error message you are seeing, there could be some race condition in the code. We are unable to replicate it on our side, but we have some ideas to test. Before I explain them, I want to mention that you do not have to download the models to run them locally. You can connect to the cloud zoo, and PySDK will automatically download the models; this will make your code simpler. Once you have finished debugging, you can of course switch to a local zoo in case you want offline deployment. Your code should look like this:

import cv2
import degirum as dg

image = cv2.imread("./test-posenet.jpg")

zoo = dg.connect(dg.LOCAL, "https://cs.degirum.com/degirum/edgetpu", <your token>)
model = zoo.load_model("yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1")
print(model)

result = model.predict(image)
result_image = result.image_overlay
cv2.imwrite("./test-posenet-degirum.jpg", result_image)

With the above code you can just change model name every time you want to experiment with a different model.

Now, to rule out the race condition that could be killing your Python process, we can try the following. PySDK supports three types of inference: cloud, ai_server, and local. We can try ai_server. In a terminal window, activate the Python environment in which you installed PySDK, then type:

degirum server

You will see a message saying that degirum server started.

Then run the following code:

import cv2
import degirum as dg

image = cv2.imread("./test-posenet.jpg")

zoo = dg.connect('localhost', "https://cs.degirum.com/degirum/edgetpu", <your token>)
model = zoo.load_model("yolov8n_relu6_coco--640x640_quant_tflite_edgetpu_1")
print(model)

result = model.predict(image)
result_image = result.image_overlay
cv2.imwrite("./test-posenet-degirum.jpg", result_image)

Note that we changed dg.LOCAL to 'localhost' to switch to the AI server.

Please try this code and see if it works. We also added a 512x512 pose model and a 320x320 pose model; you can try those models as well. We are in the process of adding mobilenet_posenet to the zoo and will let you know once it is added.
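
A quick way to discover the exact names of the new models is to list the zoo contents; a minimal sketch (assuming the same connection parameters as above):

import degirum as dg

zoo = dg.connect('localhost', "https://cs.degirum.com/degirum/edgetpu", "<your token>")

# print zoo models whose names mention "pose" to find the 512x512 and 320x320 variants
for name in zoo.list_models():
    if "pose" in name:
        print(name)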

Hope that this helps.

han-so1omon commented 4 months ago

Ok. What do you think the race condition is, and is there a way to perform a wait to prevent it?

shashichilappagari commented 4 months ago

@han-so1omon At this point we are not sure, as it could be system dependent. That is why we want you to try the localhost option; there will not be any performance impact from using it. If the localhost option works, we will at least know that the problem is localized to the local inference case, and we will investigate further. But if localhost also does not work on your side, we will have to think of other ways to debug.

shashichilappagari commented 4 months ago

@han-so1omon We also added the mobilenet_v1_posenet model to the edgetpu model zoo. Please see if it works on your side.

han-so1omon commented 4 months ago

@shashichilappagari I will try all later in the day. Can you share how you've setup the edgetpu modules?

shashichilappagari commented 4 months ago

@kteodorovich can you share our user guide for edge tpu with @han-so1omon?

kteodorovich commented 4 months ago

@han-so1omon Hello! Our guide for USB Edge TPU is available here. You might be past all these steps already, given that you got the detection model to work.

By the way, the base Ultralytics library will only recognize a model for Edge TPU if the filename ends with _edgetpu.tflite. Also, our export modifies the structure of the model in a way that is incompatible with the Ultralytics built-in postprocessor.
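
For illustration, a copy along these lines would satisfy that filename check (paths taken from earlier in this thread; this only makes Ultralytics load the Edge TPU delegate, and the postprocessor incompatibility still applies):

import shutil

# Ultralytics keys the Edge TPU delegate off the "_edgetpu.tflite" suffix,
# so copy the DeGirum-exported model to a conforming filename
src = ("/home/errc/v/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1/"
       "yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1.tflite")
shutil.copy(src, "/home/errc/v/yolov8n_relu6_coco_pose_edgetpu.tflite")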

han-so1omon commented 4 months ago

@shashichilappagari @kteodorovich Thank you! It looks like it was indeed a race condition with the local DeGirum setup. All of the models appear to work correctly with the localhost-based AI server as recommended. Do you have recommendations on how to set up a pose tracking algorithm on top of the pose prediction algorithm?

shashichilappagari commented 4 months ago

@han-so1omon We are glad to hear that the localhost option is working with the models you need. Just to give you some background information: the pose models have a Python-based postprocessor, and it could be interfering with the Python interpreter running the model. In the localhost case, the interpreter running the inference and the one running the postprocessor are two separate instances and hence do not have this issue. We are currently investigating how to fix the issue for the dg.LOCAL use case and will let you know when we release a PySDK version that fixes it. Until then you can use localhost, as it does not have any real performance impact.

shashichilappagari commented 4 months ago

@han-so1omon Here is an example of how you can add tracking on top of a model: https://github.com/DeGirum/PySDKExamples/blob/main/examples/specialized/multi_object_tracking_video_file.ipynb

shashichilappagari commented 4 months ago

@vlad-nn The Python post-processor indeed seems to have a race condition when using the dg.LOCAL option, as confirmed by @han-so1omon

han-so1omon commented 4 months ago

@shashichilappagari Does your pose algorithm from yolo_pose support landmarks?

shashichilappagari commented 4 months ago

@han-so1omon Do you mean if the tracking algorithm tracks landmarks?

han-so1omon commented 4 months ago

@shashichilappagari Basically, is there a way to present it as a skeleton with each part of the body denoted, like 'right ear', 'right forearm', etc.?

shashichilappagari commented 4 months ago

@han-so1omon so you want the output of prediction to have a label for each keypoint?

han-so1omon commented 4 months ago

Basically, yes. I would like to know what part of the body the keypoint comes from

shashichilappagari commented 4 months ago

@han-so1omon Unfortunately, the checkpoint itself does not contain this information, so this type of label information needs to be added manually. We added the label information to yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1. Can you run inference using this model and see if the results contain what you expect?
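
As a minimal sketch of what to look for (assuming the usual PySDK pose result layout, where result.results is a list of detection dictionaries each carrying a "landmarks" list; exact field names may differ):

result = model.predict(image)
for detection in result.results:
    # each landmark entry should now carry the body-part label we added
    for kp in detection.get("landmarks", []):
        print(kp.get("label"), kp.get("landmark"), kp.get("score"))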

han-so1omon commented 4 months ago

@shashichilappagari I would like to read camera frames from the Raspberry Pi Camera Module v2 or v3. Is it recommended to use a stream from picamera2, or is it better to read directly from /dev/video0 without using picamera2 at all? I am wondering what the best practice is for reading camera frames on the Raspberry Pi 5.

shashichilappagari commented 4 months ago

@boristeo or @kteodorovich Can you please take a look at picamera2 and see if it works directly with opencv? If it works, can you please provide a code snippet to @han-so1omon showing how to work with video streams using pysdk?
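
In the meantime, here is a rough, untested sketch of the idea (it assumes picamera2's capture_array() returns numpy arrays that model.predict_batch() can consume directly):

import degirum as dg
from picamera2 import Picamera2

# configure the Pi camera for a 640x480 three-channel stream
picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration(main={"format": "RGB888", "size": (640, 480)}))
picam2.start()

def camera_frames():
    # capture_array() yields one numpy array per frame
    while True:
        yield picam2.capture_array()

zoo = dg.connect('localhost', "https://cs.degirum.com/degirum/edgetpu", "<your token>")
model = zoo.load_model("yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1")
for res in model.predict_batch(camera_frames()):
    print(res)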

han-so1omon commented 4 months ago

@shashichilappagari I am trying to use the degirum_tools.predict_stream() function with the yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1 model to get the pose of a human in real time

han-so1omon commented 4 months ago

@shashichilappagari I am currently unable to get a local camera to connect. I can verify that the camera works with raspicam tools

shashichilappagari commented 4 months ago

@han-so1omon Any chance you have a USB web camera to try? We are also experimenting with the Pi camera, but we have been unable to get it to work yet.

han-so1omon commented 4 months ago

I can try, but I need to be using the camera module with infrared vision

han-so1omon commented 4 months ago

@shashichilappagari I can get something running with the RTSP server sending video for a bit, but the DeGirum server quickly freezes up. I also do not know what to do with the resulting degirum_tools.inference_support._create_analyzing_postprocessor_class.<locals>.AnalyzingPostprocessor object, as I can't find documentation for degirum_tools

RTSP feed command: rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio --inline --framerate 15 -o - | cvlc - --sout '#rtp{sdp=rtsp://0.0.0.0:8554/stream1}' :demux=ts

Error:

degirum.exceptions.DegirumException: [ERROR]Timeout detected
Timeout 10000 ms waiting for response from AI server '127.0.0.1:8778
dg_client_asio.cpp: 365 [string&)::<lambda]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/errc/e/ai/pose-tracking-debug-degirum3.py", line 28, in <module>
    print(next(res))
          ^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/degirum_tools/inference_support.py", line 207, in predict_stream
    for res in model.predict_batch(video_source(stream, fps=fps)):
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/degirum/model.py", line 287, in predict_batch
    for res in self._predict_impl(source):
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/degirum/model.py", line 1174, in _predict_impl
    raise DegirumException(
degirum.exceptions.DegirumException: Failed to perform model 'degirum/edgetpu/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1' inference: [ERROR]Timeout detected
Timeout 10000 ms waiting for response from AI server '127.0.0.1:8778
dg_client_asio.cpp: 365 [string&)::<lambda]

Note: Let me know if that race condition was fixed in the local version of degirum, as it would be helpful here since the server appears to be causing timeout/freezing issues

han-so1omon commented 4 months ago

@shashichilappagari Just for reference, when I run with the indexed camera, it results in the following error

print('Creating object tracker...')
# create object tracker
tracker = degirum_tools.ObjectTracker(
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER,
)

print('Predicting stream...')
next_res = degirum_tools.predict_stream(
    model, 0, analyzers=[tracker], fps=15,
)

try:
    while True:
        res = next(next_res)

except KeyboardInterrupt:
    print('Stopping prediction stream...')
...
Successfully opened video stream '0'
degirum.exceptions.DegirumException: Fail to capture camera frame. May be camera was opened by another notebook?

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/errc/e/ai/pose-tracking-debug-degirum3.py", line 34, in <module>
    res = next(next_res)
          ^^^^^^^^^^^^^^
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/degirum_tools/inference_support.py", line 207, in predict_stream
    for res in model.predict_batch(video_source(stream, fps=fps)):
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/degirum/model.py", line 287, in predict_batch
    for res in self._predict_impl(source):
  File "/home/errc/e/ai/venv/lib/python3.11/site-packages/degirum/model.py", line 1174, in _predict_impl
    raise DegirumException(
degirum.exceptions.DegirumException: Failed to perform model 'degirum/edgetpu/yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1' inference: Fail to capture camera frame. May be camera was opened by another notebook?
shashichilappagari commented 4 months ago

@han-so1omon What is the FPS of the RTSP stream? If inference is slower than the source FPS, frames can start buffering, ultimately leading to a crash. With USB cameras this does not happen, as throttling is handled by the video stream itself.
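
One way to avoid that buildup on the consumer side is to keep only the most recent frame, so a slow model never falls behind the source. A rough sketch (this helper is illustrative, not part of degirum_tools):

import queue
import threading

import cv2

def latest_frame_source(url):
    # reader thread keeps at most one frame queued, discarding stale ones
    q = queue.Queue(maxsize=1)

    def reader():
        cap = cv2.VideoCapture(url)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            try:
                q.put_nowait(frame)
            except queue.Full:
                try:
                    q.get_nowait()  # drop the stale frame
                except queue.Empty:
                    pass
                q.put_nowait(frame)

    threading.Thread(target=reader, daemon=True).start()
    while True:
        yield q.get()

# e.g.: for res in model.predict_batch(latest_frame_source('rtsp://0.0.0.0:8554/stream1')): ...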

han-so1omon commented 4 months ago

@shashichilappagari FPS is set to 15. Should I set it to something else? I'm not sure off the top of my head how to allow for appropriate buffering. Is there a way to clear the buffer if it starts to get full? It looks like the above processing loop runs at 2.5-5 FPS.

kteodorovich commented 4 months ago

@han-so1omon We were able to get the RTSP stream to work by using 'rtsp://0.0.0.0:8554/stream1' as the video source for degirum_tools.predict_stream().

First, start the RTSP feed using the same command you had before

rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio --inline --framerate 15 -o - | cvlc - --sout '#rtp{sdp=rtsp://0.0.0.0:8554/stream1}' :demux=ts

Then, run this snippet in Python

# load model
zoo = dg.connect('localhost', 'https://cs.degirum.com/degirum/edgetpu', degirum_tools.get_token())
model = zoo.load_model('yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1')

# create object tracker
tracker = degirum_tools.ObjectTracker(
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER
)

# show overlay on stream
video_source = 'rtsp://0.0.0.0:8554/stream1'

with degirum_tools.Display("Video") as display:
    for res in degirum_tools.predict_stream(model, video_source, analyzers=[tracker], fps=15):
        display.show(res.image_overlay)

This will show a preview window of the camera feed with the overlay of bounding boxes and keypoints. We were able to have this run at ~15 FPS, though with a slight delay.

If this doesn't work, you can try getting the video preview on its own, checking whether you're able to connect to the RTSP feed.

video_source = 'rtsp://0.0.0.0:8554/stream1'

with degirum_tools.open_video_stream(video_source) as stream:
    with degirum_tools.Display("Video") as display:
        for frame in degirum_tools.video_source(stream):
            display.show(frame)

The other thing to check is benchmarking the FPS of the model without using your camera stream. This can be done with the model_time_profile function in degirum_tools.

zoo = dg.connect('localhost', 'https://cs.degirum.com/degirum/edgetpu', degirum_tools.get_token())
model = zoo.load_model('yolov8n_relu6_coco_pose--640x640_quant_tflite_edgetpu_1')
iterations = 100
results = degirum_tools.model_time_profile(model, iterations)
print(results.observed_fps)
han-so1omon commented 4 months ago

@kteodorovich I am trying this on a raspberry pi 5 with bookworm os where the rtsp server and the camera and the coral tpu are all connected on the same host machine. Is that the same as your setup?

shashichilappagari commented 4 months ago

@han-so1omon yes, we have the same setup as you described.

vlad-nn commented 4 months ago

BTW, DeGirum offers the ORCA1 AI accelerator M.2 module, which can be used with the RPi5 when equipped with an M.2 adapter.

This setup can potentially give much higher FPS compared to the EdgeTPU. (I have not tried it myself, but with a high-performance host DeGirum states 85 FPS for that model on ORCA1; on the RPi5 the FPS will be lower due to the slower host, but I would expect it to still be higher than the camera FPS.)

han-so1omon commented 4 months ago

@shashichilappagari How can I guarantee that it is being sent to the coral edgetpu?

@vlad-nn For this specific application I need to operate in an environment with uncertain internet access, but yes in other scenarios I think orca is a great option

vlad-nn commented 4 months ago

> For this specific application I need to operate in an environment with uncertain internet access

Please be advised that ORCA operation does not require Internet access; it can be used exactly the same way as the EdgeTPU: with a local zoo and an AI server running on localhost.

han-so1omon commented 4 months ago

@vlad-nn Yes, I see, but I need to order it. In this case I would like to continue using the EdgeTPU

han-so1omon commented 4 months ago

What is the programmatic way to generate an access token? It looks like tokens expire after at most 2 weeks.

shashichilappagari commented 4 months ago

> @shashichilappagari How can I guarantee that it is being sent to the coral edgetpu?

@han-so1omon The model is compiled for edgetpu. It cannot run on CPU. Can you please let us know where you are stuck? @kteodorovich previously provided several code snippets to figure out where the issue could be. Did you get a chance to run those snippets? If so, can you please share what the results are?