I'm encountering a "Segmentation fault" while trying to run a segmented YOLOv5s model on my Coral Dev Board together with an external Coral USB Accelerator. I followed the recommended steps to export, compile, and segment the model with edgetpu_compiler.
I followed these steps:
I trained a YOLOv5su model, then exported it as an int8 TFLite model using:
model.export()
Then I used edgetpu_compiler to compile the TFLite model and segment it into two segments:
edgetpu_compiler model.tflite --num_segments=2
This produced 12 files (8 models and 4 logs). The models are:
1- best_float32_segment_0_of_2.tflite
2- best_float32_segment_1_of_2.tflite
3- best_float32_segment_0_of_2_edgetpu.tflite
4- best_float32_segment_1_of_2_edgetpu.tflite
5- best_full_integer_quant_segment_0_of_2.tflite
6- best_full_integer_quant_segment_1_of_2.tflite
7- best_full_integer_quant_segment_0_of_2_edgetpu.tflite
8- best_full_integer_quant_segment_1_of_2_edgetpu.tflite
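Of those eight files, only one matching pair of `_edgetpu` segments is used together, and the script below expects them as a single `%d` path template. A minimal sketch (filenames taken from the list above) of how such a template expands into the per-segment paths:

```python
# Sketch: how a '%d' segment template expands into the two model paths
# that get handed to the pipelined runner.
template = "best_full_integer_quant_segment_%d_of_2_edgetpu.tflite"
num_segments = 2
model_paths = [template % i for i in range(num_segments)]
print(model_paths)
# → ['best_full_integer_quant_segment_0_of_2_edgetpu.tflite',
#    'best_full_integer_quant_segment_1_of_2_edgetpu.tflite']
```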
I transferred "best_full_integer_quant_segment_0_of_2_edgetpu.tflite" and "best_full_integer_quant_segment_1_of_2_edgetpu.tflite" to the Coral board and tried to run them to detect objects in a video. The code:
import argparse
import re
import threading
import time
import numpy as np
import cv2
from pycoral.adapters import classify
from pycoral.adapters import common
import pycoral.pipeline.pipelined_model_runner as pipeline
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import list_edge_tpus
from pycoral.utils.edgetpu import make_interpreter
def _get_devices(num_devices):
  edge_tpus = list_edge_tpus()
  if len(edge_tpus) < num_devices:
    raise RuntimeError(
        'Not enough Edge TPUs detected, expected %d, detected %d.' %
        (num_devices, len(edge_tpus)))
  num_pci_devices = sum(1 for device in edge_tpus if device['type'] == 'pci')
  return ['pci:%d' % i for i in range(min(num_devices, num_pci_devices))] + [
      'usb:%d' % i for i in range(max(0, num_devices - num_pci_devices))
  ]
def _make_runner(model_paths, devices):
  """Constructs PipelinedModelRunner given model paths and devices."""
  print('Using devices: ', devices)
  print('Using models: ', model_paths)
  print(" hey runner 1")
  if len(model_paths) != len(devices):
    raise ValueError('# of devices and # of model_paths should match')
  print(" hey runner 2", model_paths[0], devices[1])
  interpreters = []
  interpreter = make_interpreter(model_paths[0], devices[0])
  print(" hey runner 3")
  interpreter.allocate_tensors()
  print(" hey runner 4")
  interpreters.append(interpreter)
  print(" hey runner 5", model_paths[1], devices[0])
  interpreter2 = make_interpreter(model_paths[1], devices[1])
  print(" hey runner 6")
  interpreter2.allocate_tensors()
  print(" hey runner 7")
  interpreters.append(interpreter2)
  print(" hey runner 8")
  return pipeline.PipelinedModelRunner(interpreters)
def main():
  parser = argparse.ArgumentParser(
      formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  parser.add_argument(
      '-m',
      '--models',
      required=True,
      help=('File path template of .tflite model segments, e.g., '
            'inception_v3_299_quant_segment_%d_of_2_edgetpu.tflite'))
  parser.add_argument(
      '-i', '--input', required=True, help='Video to run detection on.')
  parser.add_argument('-l', '--labels', help='File path of labels file.')
  parser.add_argument(
      '-k',
      '--top_k',
      type=int,
      default=1,
      help='Max number of classification results')
  parser.add_argument(
      '-t',
      '--threshold',
      type=float,
      default=0.0,
      help='Classification score threshold')
  parser.add_argument(
      '-c',
      '--count',
      type=int,
      default=1,
      help='Number of times to run inference')
  args = parser.parse_args()
  labels = {0: "pedestrian", 1: "people", 2: "bicycle", 3: "car", 4: "van",
            5: "truck", 6: "tricycle", 7: "awning-tricycle", 8: "bus",
            9: "motor"}
  print("hey lets begin 1")
  result = re.search(r'^.*_segment_%d_of_(?P<num_segments>[0-9]+)_.*\.tflite',
                     args.models)
  print("hey lets begin 2")
  if not result:
    print("hey lets begin 3")
    raise ValueError(
        '--models should follow *_segment_%d_of_[num_segments]_*.tflite pattern')
  print("hey lets begin 4")
  num_segments = int(result.group('num_segments'))
  model_paths = [args.models % i for i in range(num_segments)]
  print("hey lets begin 5")
  devices = _get_devices(num_segments)
  print("hey lets begin 6")
  runner = _make_runner(model_paths, devices)
  print("hey lets begin 7")
  # Get the input name and size of the first segment.
  size = common.input_size(runner.interpreters()[0])
  print("hey lets begin 8")
  name = common.input_details(runner.interpreters()[0], 'name')
  print("hey lets begin 9")
  cap = cv2.VideoCapture(args.input)
  print("hey lets begin 10")
  if not cap.isOpened():
    print("Error opening video capture device")
    return
  print("hey lets begin 11")
  while True:
    ret, frame = cap.read()
    if not ret:
      break
    # Preprocess the frame: resize to the model's expected input size.
    image = cv2.resize(frame, size)
    # Run inference
    runner.push({name: image})
    output_details = runner.interpreters()[-1].get_output_details()[0]
    scale, zero_point = output_details['quantization']
    result = runner.pop()
    if result:
      # Dequantize the raw output and pick the top-scoring classes.
      values, = result.values()
      scores = scale * (values[0].astype(np.int64) - zero_point)
      classes = classify.get_classes_from_scores(scores, args.top_k,
                                                 args.threshold)
      for klass in classes:
        print('%s: %.5f' % (labels.get(klass.id, klass.id), klass.score))
      print('-------RESULTS--------')
    else:
      print("No results")


if __name__ == '__main__':
  main()
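For reference, the dequantization step inside the loop (`scale * (values[0].astype(np.int64) - zero_point)`) maps the model's raw quantized outputs back to real-valued scores. A minimal standalone sketch, using made-up quantization parameters (a real model's `scale` and `zero_point` come from `output_details['quantization']`):

```python
import numpy as np

# Hypothetical quantization parameters for illustration only.
scale, zero_point = 0.00390625, 128  # scale = 1/256, typical uint8 zero point

quantized = np.array([128, 192, 255], dtype=np.uint8)  # raw model output
scores = scale * (quantized.astype(np.int64) - zero_point)
print(scores)  # → [0.         0.25       0.49609375]
```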
### Description
The steps are the same as described above. I run the script with:
python3 test2tpu.py -m best_full_integer_quant_segment_%d_of_2_edgetpu.tflite -i "/coral/720p/Busy Road.mp4"
The output I get:
hey lets begin 1
hey lets begin 2
hey lets begin 4
hey lets begin 5
hey lets begin 6
Using devices:  ['pci:0', 'usb:0']
Using models:  ['best_full_integer_quant_segment_0_of_2_edgetpu.tflite', 'best_full_integer_quant_segment_1_of_2_edgetpu.tflite']
 hey runner 1
 hey runner 2 best_full_integer_quant_segment_0_of_2_edgetpu.tflite usb:0
 hey runner 3
 hey runner 4
 hey runner 5 best_full_integer_quant_segment_1_of_2_edgetpu.tflite pci:0
 hey runner 6
Segmentation fault
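The device list in that output, `['pci:0', 'usb:0']`, comes from `_get_devices`: PCIe TPUs (the Dev Board's on-board TPU) are assigned before USB accelerators. A minimal standalone sketch of that logic, where the `edge_tpus` argument is a stand-in for the records returned by pycoral's `list_edge_tpus()`:

```python
def resolve_devices(num_devices, edge_tpus):
    # edge_tpus: list of dicts shaped like pycoral's list_edge_tpus() records.
    num_pci = sum(1 for d in edge_tpus if d['type'] == 'pci')
    # PCIe devices are assigned first, then the remainder go to USB.
    return (['pci:%d' % i for i in range(min(num_devices, num_pci))] +
            ['usb:%d' % i for i in range(max(0, num_devices - num_pci))])

# One on-board PCIe TPU plus one USB accelerator:
print(resolve_devices(2, [{'type': 'pci'}, {'type': 'usb'}]))
# → ['pci:0', 'usb:0']
```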
Note: pycoral version: 2.0.0
### Issue Type

support

### Operating System

Linux Mendel 5.3

### Coral Device

Coral Dev Board

### Other Devices

USB ML Accelerator

### Programming Language

Python 3.7

### Relevant Log Output

No response