Hello, training and eval are going well (I'm also using TensorBoard), but I'm wondering how I can use https://github.com/Cartucho/mAP.
Preparing the ground-truth input is easy enough, but I don't know how to export the detection-results input.
An example detection-results file looks like this:
tvmonitor 0.471781 0 13 174 244
cup 0.414941 274 226 301 265
book 0.460851 429 219 528 247
bottle 0.287150 336 231 376 305
chair 0.292345 0 199 88 436
book 0.269833 433 260 506 336
book 0.462608 518 314 603 369
book 0.298196 592 310 634 388
book 0.382881 403 384 517 461
book 0.369369 405 429 519 470
pottedplant 0.297364 259 183 304 239
pottedplant 0.510713 279 178 340 248
pictureframe 0.261096 187 206 237 258
book 0.272826 433 272 499 341
book 0.619459 413 390 515 459
The second column looks like the confidence score, and the third through sixth columns are the predicted box coordinates (left, top, right, bottom).
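If that reading is right, each line follows <class_name> <confidence> <left> <top> <right> <bottom>, with the box in pixel coordinates. A minimal sketch of building one such line (values taken from the example above; the helper name is my own):

def format_detection_line(class_name, score, left, top, right, bottom):
  # Builds "<class_name> <confidence> <left> <top> <right> <bottom>".
  return "%s %.6f %d %d %d %d" % (class_name, score, left, top, right, bottom)

print(format_detection_line("book", 0.460851, 429, 219, 528, 247))
# -> book 0.460851 429 219 528 247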
When I test my .tflite file, I run detection and plot the boxes with the code below.
def detect(interpreter, input_tensor):
  """Run detection on an input image.

  Args:
    interpreter: tf.lite.Interpreter
    input_tensor: A [1, height, width, 3] Tensor of type tf.float32.
      Note that height and width can be anything since the image will be
      immediately resized according to the needs of the model within this
      function.

  Returns:
    A tuple of (boxes, classes, scores) arrays from the TFLite outputs.
  """
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # We use the original model for pre-processing, since the TFLite model
  # doesn't include pre-processing.
  preprocessed_image, shapes = detection_model.preprocess(input_tensor)
  interpreter.set_tensor(input_details[0]['index'], preprocessed_image.numpy())

  interpreter.invoke()

  boxes = interpreter.get_tensor(output_details[0]['index'])
  classes = interpreter.get_tensor(output_details[1]['index'])
  scores = interpreter.get_tensor(output_details[2]['index'])
  return boxes, classes, scores
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="./model.tflite")
interpreter.allocate_tensors()

label_id_offset = 1
count = 0
for i in range(len(test_images_np)):
  print(count)
  count += 1
  input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
  boxes, classes, scores = detect(interpreter, input_tensor)
  plot_detections(
      test_images_np[i][0],
      boxes[0],
      classes[0].astype(np.uint32) + label_id_offset,
      scores[0],
      category_index, figsize=(15, 20),
      image_name="320320foody" + ('%02d' % i) + ".jpg")
def plot_detections(image_np,
                    boxes,
                    classes,
                    scores,
                    category_index,
                    figsize=(12, 16),
                    image_name=None):
  """Visualizes detections on the image and saves (or shows) the result."""
  print("plot_detection come")
  print(image_np)
  print(image_name)
  print(boxes)
  print(classes)
  print(scores)
  print(category_index)
  image_np_with_annotations = image_np.copy()
  viz_utils.visualize_boxes_and_labels_on_image_array(
      image_np_with_annotations,
      boxes,
      classes,
      scores,
      category_index,
      use_normalized_coordinates=True,
      min_score_thresh=0.2)
  if image_name:
    plt.imsave(image_name, image_np_with_annotations)
  else:
    plt.imshow(image_np_with_annotations)
test_image_dir = 'Tensorflow/workspace/images/test_food'
test_images_np = []
for i in range(1, 11):
  image_path = os.path.join(test_image_dir, 'out' + str(i) + '.jpg')
  print(image_path)
  test_images_np.append(np.expand_dims(
      load_image_into_numpy_array(image_path), axis=0))
How can I export my detections in the format Cartucho/mAP expects?
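One possible approach (a sketch, not a verified solution): reuse the boxes/classes/scores returned by detect() and write one text file per image into a detection-results folder, one <class_name> <confidence> <left> <top> <right> <bottom> line per detection. This assumes the TFLite boxes are normalized [ymin, xmin, ymax, xmax] (consistent with use_normalized_coordinates=True above), that category_index[id]['name'] gives the class string, and that the output path and file names (which must match the ground-truth file names) are up to you; adjust as needed.

import os

results_dir = "mAP/input/detection-results"  # hypothetical path inside the cloned Cartucho/mAP repo
os.makedirs(results_dir, exist_ok=True)

min_score_thresh = 0.2  # same threshold used for plotting above

for i in range(len(test_images_np)):
  image = test_images_np[i][0]
  height, width = image.shape[0], image.shape[1]

  input_tensor = tf.convert_to_tensor(test_images_np[i], dtype=tf.float32)
  boxes, classes, scores = detect(interpreter, input_tensor)

  lines = []
  for box, cls, score in zip(boxes[0], classes[0], scores[0]):
    if score < min_score_thresh:
      continue
    # Convert normalized [ymin, xmin, ymax, xmax] to pixel left/top/right/bottom.
    ymin, xmin, ymax, xmax = box
    left, top = int(xmin * width), int(ymin * height)
    right, bottom = int(xmax * width), int(ymax * height)
    class_name = category_index[int(cls) + label_id_offset]['name']
    lines.append("%s %.6f %d %d %d %d" % (class_name, score, left, top, right, bottom))

  # One file per test image, named to match its ground-truth file (out1.txt, out2.txt, ...).
  with open(os.path.join(results_dir, "out%d.txt" % (i + 1)), "w") as f:
    f.write("\n".join(lines))

With ground-truth/ and detection-results/ both populated, running Cartucho's main.py should then compute the mAP.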