Closed · neilyoung closed this 1 year ago
Are you seeing the same system load with the uncompiled tflite model?
It could be due to the ML model running on the Edge TPU. Please check the power consumption section on page 5 of the datasheet for more details: https://coral.ai/static/files/Coral-Accelerator-Module-datasheet.pdf
Could you please give me a hint how I could make use of the uncompiled tflite model?
Strange. New day, new attempt. Now the system load is 1.11, which would be OK.
EDIT: But the inference frame rate is just 15 fps... Makes no sense.
EDIT2: FPS seems to depend on the scene content. I can also reach 30 fps with another view, but then the load is back at 2.5.
While I have you here, may I ask if you are aware of any Coral-based inference solution that delivers 6DOF poses? I mean not only bounding boxes, but also translation and rotation?
> Could you please give me a hint, how I could make use of the uncompiled tflite model?
Please download ssd_mobilenet_v2_coco_quant_postprocess.tflite from the test_data repo to the models directory and update the model name at: https://github.com/google-coral/example-object-tracker/blob/master/gstreamer/detect.py#L147
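For reference, a small sketch of what "use the uncompiled model" amounts to. This is only an illustration: the helper name `pick_model` and the `models` directory constant are my own, not part of the repo. It relies on the google-coral test_data naming convention, where the Edge-TPU-compiled variant of a model carries an `_edgetpu` suffix and the plain filename is the CPU (uncompiled) version:

```python
# Sketch only: filenames follow the google-coral test_data naming convention,
# where the Edge-TPU-compiled variant carries an "_edgetpu" suffix.
# pick_model() is a made-up helper for illustration.
MODELS_DIR = "models"
CPU_MODEL = "ssd_mobilenet_v2_coco_quant_postprocess.tflite"

def pick_model(use_tpu: bool) -> str:
    """Return the model path; choose the _edgetpu variant only for the TPU."""
    name = CPU_MODEL
    if use_tpu:
        # Compiled variant, loaded together with the Edge TPU delegate
        name = name.replace(".tflite", "_edgetpu.tflite")
    return f"{MODELS_DIR}/{name}"

print(pick_model(False))  # models/ssd_mobilenet_v2_coco_quant_postprocess.tflite
print(pick_model(True))   # models/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
```

Swapping in the CPU model this way lets you compare system load with and without the TPU doing the inference.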
> While having you here may I ask you, if you are aware of any Coral based inference solution, which delivers 6DOF poses? I mean not only bounding boxes but also translation and rotation?
I have responded to this one at https://github.com/google-coral/project-posenet/issues/89. Thanks!
Thank you very much for both answers.
OK, the system load is higher with the uncompiled tflite model, between 2.5 and 3, while the original model is at about 2. Inference rate is 6 fps.
> OK, system load is higher with uncompiled tflite model, between 2.5 and 3, while the original model is about 2. 6 fps inference rate.
Then the high system load is not due to the ML model running on the TPU. Please try CPU usage monitoring tools to analyze what is causing this load: https://linuxhint.com/raspberry-pi-cpu-usage-monitoring/ and check with the RPi forum for any additional questions: https://forums.raspberrypi.com/viewtopic.php?t=283330. Thanks!
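As a quick starting point before reaching for external monitoring tools, the load average is also readable from Python's standard library via `os.getloadavg()` (the same numbers `uptime` prints); the helper below is just my own sketch. It puts the load in relation to the core count: a load of 2.5-3 on a 4-core Pi 4 is roughly 60-75% of capacity, so it is worth finding out which processes contribute:

```python
# Minimal sketch: read the 1/5/15-minute load averages (as shown by `uptime`)
# and relate the 1-minute value to the number of cores. Stdlib only; works
# on Linux, including Raspberry Pi OS.
import os

def load_report():
    """Return (1-min load, core count, utilisation fraction)."""
    one_min, five_min, fifteen_min = os.getloadavg()
    cores = os.cpu_count() or 1
    return one_min, cores, one_min / cores

load, cores, frac = load_report()
print(f"load(1m)={load:.2f} on {cores} cores -> {frac:.0%} of capacity")
```

From there, `top`/`htop` (per the links above) show which process is actually burning the CPU, e.g. GStreamer format conversion vs. the Python process itself.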
May I ask one last additional question? It is about the ID. I'm aware that it can't really be a unique ID, since this is detection, not recognition: once you leave the FOV and return, you'll get another ID. But is my assumption correct that the ID is not coming from the inference results (like the bounding boxes), but is something "post-processed"?
The ID is coming from the inference results, and it would be mapped to the bounding boxes.
Oh? Could you please point me to where the ID is derived? I was using the model with an older Coral application, and I'm a bit lost as to where to find the ID.
This is how I'm deriving the bounding boxes, classes and scores:

```python
boxes = self.interpreter.get_tensor(self.output_details[0]["index"])[0]
classes = self.interpreter.get_tensor(self.output_details[1]["index"])[0]
scores = self.interpreter.get_tensor(self.output_details[2]["index"])[0]

for i in range(len(scores)):
    if scores[i] >= self.threshold and scores[i] <= 1.0:
        # Boxes are normalized [ymin, xmin, ymax, xmax]; scale to pixels
        ymin = int(max(1, boxes[i][0] * height))
        ymax = int(min(height, boxes[i][2] * height))
        xmin = int(max(1, boxes[i][1] * width))
        xmax = int(min(width, boxes[i][3] * width))
        object_name = self.labels[int(classes[i])]
        ...
```
Where to find the ID?
OK, then I think we are misunderstanding each other. I ran this model and got myself tagged as "id 34215" or so. I left the view; another person got another ID.
I think you are referring to the ID given in the coco.labels file.
But this is not what I'm after :)
I mean this ID.
EDIT:
It basically comes from `trackID` out of `trdata`. I can't figure out where this `trdata` finally comes from. A GStreamer userCallback... ~~Hmmm...~~
What is this "motTracker"?
EDIT:
That `tracker.update(detections)` call: is it a "Sort" function?
Does that "Sort" object finally provide the ID?
EDIT: OK, yes. `update` is provided by "sort": https://github.com/abewley/sort/blob/master/sort.py
Still not clear what makes that `trackID`...
OK, after a deep dive into the code I can confirm my own statement. The `trackID` is not coming from the inference, like the bbox and score. It results from a post-processing step, "sort", which assigns that "unique" ID for as long as the object stays in view.
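For anyone landing here later, the ID assignment can be illustrated with a toy sketch. This is not the real abewley/sort implementation (which uses Kalman filtering and Hungarian assignment); it is a deliberately minimal IoU-based matcher with made-up names (`ToyTracker`), just to show where a track ID comes from: it is a counter the tracker increments, not an inference output.

```python
# Toy illustration (NOT the real abewley/sort code): match each new
# detection to an existing track by IoU, otherwise open a new track
# with a fresh ID. The ID is pure post-processing state.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class ToyTracker:
    def __init__(self, iou_threshold=0.3):
        self.next_id = 1            # the "unique" ID counter
        self.tracks = {}            # track id -> last seen box
        self.iou_threshold = iou_threshold

    def update(self, detections):
        """Return a list of (box, track_id) pairs for this frame."""
        out, unmatched = [], dict(self.tracks)
        for det in detections:
            # Best IoU match among tracks not yet claimed this frame
            best = max(unmatched.items(),
                       key=lambda kv: iou(kv[1], det),
                       default=None)
            if best and iou(best[1], det) >= self.iou_threshold:
                tid = best[0]       # same object -> keep its ID
                del unmatched[tid]
            else:
                tid = self.next_id  # new object -> new ID
                self.next_id += 1
            self.tracks[tid] = det
            out.append((det, tid))
        return out

tracker = ToyTracker()
frame1 = tracker.update([(10, 10, 50, 50)])      # first object -> ID 1
frame2 = tracker.update([(12, 12, 52, 52)])      # same object moved -> still ID 1
frame3 = tracker.update([(200, 200, 240, 240)])  # new object -> ID 2
```

This also explains the observed behaviour above: leave the view and come back, and the tracker has no match, so it hands out the next counter value as a brand-new ID.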
Today I ran it on 32-bit Raspbian Buster. It runs with a system load around 1 and achieves 30 fps. I need to double-check this, but my attempts above were made with 64-bit Bullseye Lite.
### Description
Hi, I'm wondering what can cause a system load of about 2.5-3 on an RPi. I mean, it is more or less just some GStreamer plumbing and USB 3 transfer; the inference is running on the Coral TPU.
I'm using a USB camera as input (/dev/video1). The Pi is not overclocked, 2 GB RAM.
What could cause this high load, any ideas?
Other than that, the inference is running fine.
### Issue Type
_No response_

### Operating System
Linux

### Coral Device
USB Accelerator

### Other Devices
Raspberry Pi 4

### Programming Language
Python 3.9

### Relevant Log Output
_No response_