Closed HanChangHun closed 2 years ago
Hi, pipelining is possible with detection models. You might have to change the post-processing of the results.
In fact, I can't understand why the shape of the result from ssd_mobilenet_v2 is (320,).
Do I have to re-create the model from scratch to pipeline a detection model? Is it not possible to segment and use the edgetpu-compiled models from this site?
Or, could you tell me how to change the post-processing of the results?
Hi, there is no need to re-create the model from scratch to pipeline a detection model. It is possible to segment and use the edgetpu-compiled model.
Please refer to this detect_image.py for getting the outputs and post-processing the results of detection models.
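For readers without the link handy: the post-processing in question turns the detector's raw output tensors into scored boxes. Below is a minimal sketch of that step, assuming the common TFLite_Detection_PostProcess output layout (normalized boxes as [ymin, xmin, ymax, xmax], class ids, scores, and a count). The function name and the dummy data are illustrative, not from pycoral.

```python
# Sketch of SSD-style detection post-processing, assuming the common
# TFLite_Detection_PostProcess output layout. Not pycoral's actual code.

def decode_detections(boxes, class_ids, scores, count, threshold, img_w, img_h):
    """Convert raw detector outputs into (id, score, pixel bbox) tuples."""
    objs = []
    for i in range(int(count)):
        if scores[i] < threshold:
            continue
        # Boxes are normalized [ymin, xmin, ymax, xmax]; map to pixels.
        ymin, xmin, ymax, xmax = boxes[i]
        objs.append((
            int(class_ids[i]),
            float(scores[i]),
            (int(xmin * img_w), int(ymin * img_h),
             int(xmax * img_w), int(ymax * img_h)),
        ))
    return objs

# Dummy outputs for two detections on a 300x300 image.
boxes = [[0.0, 0.25, 0.5, 0.75], [0.1, 0.1, 0.2, 0.2]]
objs = decode_detections(boxes, [17, 3], [0.96, 0.2], 2,
                         threshold=0.5, img_w=300, img_h=300)
print(objs)  # [(17, 0.96, (75, 0, 225, 150))] - second box is below threshold
```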
Oh, I didn't have to call pop().
I solved the errors as shown below:
import numpy as np
from PIL import Image
from pycoral.adapters import common, detect

size = common.input_size(runner.interpreters()[0])
name = common.input_details(runner.interpreters()[0], 'name')

org_image = Image.open(image_file)
resized_image = org_image.resize(size, Image.ANTIALIAS)
np_image = np.array(resized_image)

_, scale = common.set_resized_input(
    runner.interpreters()[0], org_image.size,
    lambda size: org_image.resize(size, Image.ANTIALIAS))

runner.push({name: np_image})
runner.push({})  # must push an empty dict to signal end of input

objs = detect.get_objects(runner.interpreters()[-1], threshold, scale)
for obj in objs:
    print(labels.get(obj.id, obj.id))
    print('  id:    ', obj.id)
    print('  score: ', obj.score)
    print('  bbox:  ', obj.bbox)

org_image = org_image.convert('RGB')

# result:
#
# dog
#   id:     17
#   score:  0.95703125
#   bbox:   BBox(xmin=1, ymin=0, xmax=19, ymax=19)
But, for the correct bounding-box location, I had to modify detect.get_objects(), because it computes the scale based on the interpreter it receives as a parameter.
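The scaling issue boils down to which segment's input size the boxes are mapped back from: in a pipelined model, the image was resized for the first segment, not the last. Here is a pure-Python sketch of the rescaling step (my paraphrase of what get_objects does, not its actual code); the function name and values are illustrative.

```python
# Paraphrase of the bbox rescaling in detect.get_objects: map a box in
# model-input pixels back to original-image pixels. In a pipeline, the
# image_scale must correspond to the FIRST segment's input size.

def rescale_bbox(bbox, image_scale):
    """bbox: (xmin, ymin, xmax, ymax) in model-input coordinates.
    image_scale: (sx, sy) = input_size / original_size per axis,
    as returned by common.set_resized_input."""
    sx, sy = image_scale
    xmin, ymin, xmax, ymax = bbox
    return (xmin / sx, ymin / sy, xmax / sx, ymax / sy)

# Example: a 640x640 photo resized to a 320x320 model input.
scale = (320 / 640, 320 / 640)  # (0.5, 0.5)
print(rescale_bbox((16, 32, 160, 320), scale))  # (32.0, 64.0, 320.0, 640.0)
```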
In addition, an exception occurs in __del__() in pycoral/pipeline/pipelined_model_runner.py, which is fine to ignore:
Exception ignored in: <function PipelinedModelRunner.__del__ at 0xffff9c745400>
E20210809 05:58:17.208966 15944 pipelined_model_runner.cc:240] Thread: 281473659101200 Pipeline was turned off before.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pycoral/pipeline/pipelined_model_runner.py", line 83, in __del__
self.push({})
File "/usr/lib/python3/dist-packages/pycoral/pipeline/pipelined_model_runner.py", line 152, in push
self._runner.Push(input_tensors)
RuntimeError: Pipeline was turned off before.
I still need to make a few more modifications, but your advice has been very helpful.
Thank you!!!
You can still use the same push and pop if you want to invoke multiple times.
But for now, try passing runner.interpreters()[0] in this line: https://github.com/google-coral/pycoral/blob/master/pycoral/adapters/detect.py#L225
Hi, were you able to run the pipelineModelRunner with detection models?
Yes, but I can run inference on only one input, and I can't use pop(). Instead of pop(), detect.get_objects(runner.interpreters()[-1], threshold, scale) should be used.
These problems might be overcome by extending PipelinedModelRunner for detection.
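One way such an extension could look, sketched without any Edge TPU dependency: a thin wrapper that delegates push() and replaces pop() with detection post-processing. The class and the stub runner below are entirely mine, not part of pycoral; they only illustrate the shape of the idea.

```python
# Hypothetical wrapper: detection-aware pop() on top of a pipelined runner.
# DetectionRunner and StubRunner are illustrative names, not pycoral APIs.

class DetectionRunner:
    def __init__(self, runner, postprocess):
        self._runner = runner          # a real PipelinedModelRunner in practice
        self._postprocess = postprocess

    def push(self, inputs):
        self._runner.push(inputs)

    def pop(self):
        raw = self._runner.pop()       # raw output tensors from the last segment
        return self._postprocess(raw)  # e.g. decode boxes, filter by score

# Stub standing in for the hardware runner, so the flow can be shown here.
class StubRunner:
    def __init__(self):
        self._q = []
    def push(self, inputs):
        self._q.append({'scores': [0.9, 0.1]})  # pretend inference result
    def pop(self):
        return self._q.pop(0)

runner = DetectionRunner(StubRunner(),
                         lambda raw: [s for s in raw['scores'] if s > 0.5])
runner.push({'input': None})
print(runner.pop())  # [0.9]
```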
Unfortunately, though it has been considered, due to other priorities the request won't be possible in the near future.
Thanks.
@hjonnala @HanChangHun I'm having the same issue here. I was trying to segment efficientdet_lite2_448 and delegate it to two TPUs to speed up inference; the goal is to hit 12 fps with multi-threading. Looking at the output tensor from runner.pop(), it seems the output tensor got inflated 4x. I'm not sure if that's caused by the TFLite_Detection_PostProcess stage. This also seems to be a major difference from the classification model used in the pipeline example. I don't know what the technical challenge is in enabling pipelining for object detection models; can you share more details or a general idea for working around this? I think it would be a very useful technique if it worked as expected.
@vincent-jyq can you please try this script and check whether the inference time improves for your model.
@hjonnala, thanks for your quick response, I appreciate it. Yes, that helps improve the inference time even though it's not doing much in the consumer. I also saw your reply in https://github.com/google-coral/tutorials/issues/17, which really helped me get the consumer side working.
Thanks for your help, and I hope it also helps other viewers.
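The producer/consumer structure of that script (push inputs on one thread, pop results on another) can be sketched with plain Python threads and a queue, no pycoral required. The Queue stands in for the runner's internal pipeline; all names here are illustrative.

```python
import queue
import threading

# Producer/consumer sketch of pipelined inference: one thread pushes inputs,
# another pops and post-processes results. The Queue stands in for
# PipelinedModelRunner's internal pipeline.

pipeline = queue.Queue(maxsize=4)
results = []

def producer(frames):
    for f in frames:
        pipeline.put(f)        # runner.push({name: tensor}) in practice
    pipeline.put(None)         # sentinel, like the final runner.push({})

def consumer():
    while True:
        item = pipeline.get()  # runner.pop() in practice
        if item is None:
            break
        results.append(item * 2)   # stand-in for post-processing

t = threading.Thread(target=consumer)
t.start()
producer([1, 2, 3])
t.join()
print(results)  # [2, 4, 6]
```

With a bounded queue, the producer blocks once the pipeline is full, which keeps memory in check while the TPUs work through the backlog.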
@HanChangHun please check this sample code if you are still working on it: https://github.com/google-coral/tutorials/issues/17#issuecomment-972277946.
I divided the detection models (efficientdet, ssd_mobilenet, ...) into 4 segments with edgetpu_compiler v16 and called pop() in the pipeline. However, the following error occurred:
The model above is efficientdet_lite3x, and similar errors occur with the ssd_mobilenet_v2 models (both TF1 and TF2).
These errors did not appear with the classification models.
Is pipelining not possible with detection models?