Closed AkkiSony closed 3 years ago
Please share the inference accuracy on PC vs. uncompiled tflite vs. edgetpu.tflite with the same example. How many objects are you able to detect with the PC model vs. the uncompiled tflite model vs. the edgetpu model?
Just a small question with respect to inference time: in the detect_image.py code, how is the inference time measured? Does the inference time include loading the image and then running object detection, or just the object detection after the image is loaded?
I need this clarified because I would like to measure the inference time on the PC and compare it with the Coral USB inference time.
@AkkiSony
Does inference time also include loading the image and then the object detection?
Yes, the inference time also includes the image loading latency.
Thanks for the info. Please correct me if I am wrong: so the inference time also depends on the size of the input images?
# Snippet from the TensorFlow Object Detection API tutorial; the helper
# functions (load_image_into_numpy_array, run_inference_for_single_image),
# detection_graph, and category_index are defined earlier in that notebook.
import time
import numpy as np
from PIL import Image
from object_detection.utils import visualization_utils as vis_util

start = time.time()
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
print(image.size)
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
instance_masks=output_dict.get('detection_masks'),
use_normalized_coordinates=True,
line_thickness=2)
print((time.time() - start))
@AkkiSony Yes, image size/resolution plays an important role here: the larger the image, the greater the inference time.
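To see how much of the measured total comes from image loading versus the model itself, the two stages can be timed separately. A minimal sketch, where `load_image` and `run_inference` are hypothetical stand-ins for the real PIL/TensorFlow calls:

```python
import time

def load_image(path):
    # Hypothetical stand-in for Image.open + numpy conversion.
    time.sleep(0.01)
    return [[0, 0, 0]]

def run_inference(image):
    # Hypothetical stand-in for the actual model call.
    time.sleep(0.02)
    return {'detections': []}

t0 = time.perf_counter()
image = load_image('test.jpg')
load_time = time.perf_counter() - t0

t1 = time.perf_counter()
result = run_inference(image)
infer_time = time.perf_counter() - t1

print('load: %.1f ms, inference: %.1f ms' % (load_time * 1000, infer_time * 1000))
```

Timing the stages separately makes the PC vs. Coral comparison fair, since image loading cost is the same on both.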
@manoj7410 Thanks for the clarification. :) I have trained a model using Darknet-YOLOv3. As the Coral TPU only works with the TensorFlow framework, is there a possibility for me to convert the model into TensorFlow format and further into tflite? Does this work well with the Edge TPU? Do you have any idea where I can convert my Darknet model into TensorFlow (.pb) format?
How many objects are you able to detect with the PC model vs. the uncompiled tflite model vs. the edgetpu model?
Just for my understanding and better clarity. @hjonnala
- uncompiled tflite - ? (Is there inference code for this, just like detect_image.py?)
import numpy as np
import tensorflow as tf
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
Just a small question with respect to inference time: in the detect_image.py code, how is the inference time measured? Does the inference time include loading the image and then running object detection, or just the object detection after the image is loaded?
I need this clarified because I would like to measure the inference time on the PC and compare it with the Coral USB inference time.
Please check lines 84 to 86: https://github.com/google-coral/pycoral/blob/master/examples/detect_image.py#L84
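For context, those lines wrap only `interpreter.invoke()` in the timer, so image loading, resizing, and drawing are excluded from the reported number. A self-contained sketch of that pattern, with a stub `invoke()` standing in for the real interpreter call:

```python
import time

# Stub standing in for interpreter.invoke(); the real call runs the model
# on the CPU or Edge TPU.
def invoke():
    time.sleep(0.005)

# detect_image.py-style loop: only invoke() sits inside the timer, so the
# reported time is pure model execution, not I/O or visualization.
times = []
for _ in range(5):
    start = time.perf_counter()
    invoke()
    times.append(time.perf_counter() - start)

print('average inference: %.2f ms' % (sum(times) / len(times) * 1000))
```

This differs from the frozen-graph PC snippet above, which starts the timer before the image is even opened.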
@hjonnala The inference done on frozen_inference_graph.pb (on PC) is as follows: inference time on PC: 22 seconds.
The inference with edgetpu.tflite on the Coral USB is as follows: inference time on Coral USB: 18 ms.
How can I increase the accuracy of the model? I would also like to visualize the training accuracy graph during this process if possible. If not, can I monitor the accuracy of the existing model now?
When I trained the model with YOLOv3, it could detect all the holes in the image. But this was inferred on the PC only. My next task will be to convert the Darknet model to a TensorFlow model. Is there any official documentation from Google? :) Any help highly appreciated! Thanks :)
The inference time on the Edge TPU is much lower than the PC inference time. I see that detect_image.py resizes the input image. However, I could not figure out to what dimensions it is being resized.
image = Image.open(args.input)
_, scale = common.set_resized_input(
    interpreter, image.size,
    lambda size: image.resize(size, Image.ANTIALIAS))
I found the above snippet. Can you tell me to what dimensions the input images are being resized? Thank you :) @hjonnala
Edge devices are not built for training purposes. You can expect the same accuracy as the CPU tflite (uncompiled) model. Please check with the TensorFlow team regarding training. And convert to edgetpu.tflite only after you are satisfied with the CPU tflite model accuracy.
You can write custom code to pass a number of images and compare the number of objects detected (or save the inference images) for the PC model vs. the CPU tflite model vs. the edgetpu tflite model.
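Such a comparison harness could look like the sketch below. The three `run_*` functions are hypothetical stand-ins; in practice each would wrap one backend (frozen graph on PC, CPU tflite interpreter, edgetpu tflite interpreter) and return the list of detections for an image path:

```python
def count_objects(run_model, image_paths):
    """Count detections per image for one backend."""
    return {path: len(run_model(path)) for path in image_paths}

# Dummy backends standing in for the real inference functions.
def run_pc_model(path):
    return ['hole'] * 5

def run_cpu_tflite(path):
    return ['hole'] * 5

def run_edgetpu_tflite(path):
    return ['hole'] * 4

images = ['img1.jpg', 'img2.jpg']
for name, fn in [('PC', run_pc_model),
                 ('CPU tflite', run_cpu_tflite),
                 ('edgetpu tflite', run_edgetpu_tflite)]:
    print(name, count_objects(fn, images))
```

Comparing the per-image counts (or saving the annotated images) makes any accuracy drop between the three stages directly visible.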
Is there also a drop in accuracy after converting from .pb format to the CPU tflite model?
I found the above snippet. Can you tell me to what dimensions the input images are being resized?
Can you please give me clarity on this as well?
My next task will be to convert the Darknet model to a TensorFlow model. Is there any official documentation from Google?
Do you think this will have better accuracy, as the YOLOv3 model could detect all the holes in the image? So, is it possible to convert?
My next task will be to convert the Darknet model to a TensorFlow model. Is there any official documentation from Google?
Do you think this will have better accuracy, as the YOLOv3 model could detect all the holes in the image? So, is it possible to convert?
I haven't worked with the YOLOv3 model. Please refer to https://coral.ai/models/object-detection/
image = Image.open(args.input)
_, scale = common.set_resized_input(
    interpreter, image.size,
    lambda size: image.resize(size, Image.ANTIALIAS))
In this snippet found in detect_image.py, what is the input image being resized to? I couldn't find the value of "size" in the code! :/ Can you please let me know? Thanks again for the suggestion.
You can try this model: https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_ssdlite_mobiledet_qat_tf1.ipynb
image = Image.open(args.input)
_, scale = common.set_resized_input(
    interpreter, image.size,
    lambda size: image.resize(size, Image.ANTIALIAS))
In this snippet found in detect_image.py, what is the input image being resized to? I couldn't find the value of "size" in the code! :/ Can you please let me know?
You can get that info from this function: https://github.com/google-coral/pycoral/blob/9972f8ec6dbb8b2f46321e8c0d2513e0b6b152ce/pycoral/adapters/common.py#L78 You can add some logs to this function in lib/site-packages/pycoral/adapters/common.py to see what it is doing.
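In short, the resize target is the model's own input tensor shape: tflite vision models take a tensor of shape [1, height, width, channels], so a 300x300 SSD model resizes every image toward 300x300 (set_resized_input keeps the aspect ratio and zero-pads the rest). A minimal sketch of reading that size, using a dummy list in place of a real interpreter.get_input_details() result:

```python
import numpy as np

def input_size(input_details):
    # TFLite vision models typically use an input tensor of shape
    # [1, height, width, channels]; the resize target is (width, height).
    _, height, width, _ = input_details[0]['shape']
    return int(width), int(height)

# Dummy stand-in for interpreter.get_input_details() on a 300x300 SSD model.
details = [{'shape': np.array([1, 300, 300, 3])}]
print(input_size(details))
```

This is the same value pycoral's `common.input_size(interpreter)` helper returns.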
Thanks for the info! :) I added print statements into it, yet I could not see any output during execution. :/
Have you added the print statements in this file path: lib/site-packages/pycoral/adapters/common.py?
Sorry, I was in the pycoral directory. I did it on the wrong file. I got it right now! Thank you! :)
@hjonnala I converted my Darknet model to an Edge TPU compatible tflite. But when I try to run the inference, I get the following error. Do I have to make any changes in the detect.py script? Please help me trace the error. Thanks in advance! :)
Can you share your model and the labels? You can start debugging by printing out interpreter.get_output_details() before detect.get_objects().
You can share via Google Drive or a zip file. Please share the model and an example image you are trying to detect.
https://drive.google.com/drive/folders/181npG1SDJnMBBQOm_gXbL0XguSA4gGsy?usp=sharing Please find the attachment.
You can start debugging by printing out interpreter.get_output_details() before detect.get_objects().
Can you please let me know which IDE you installed for debugging? Also, how did you use a virtual environment within that IDE?
Your output tensors are different from the model examples. So, in this case, you can't use detect_image.py to get the inference. Please check for resources on how to run inference on YOLOv3 tflite models.
Here is a third party repo to run inference on .h5 models: https://github.com/kaka-lin/object-detection
Reference to visualize models: https://netron.app/
I am using the following link to convert the model and make it compatible with the Edge TPU: https://github.com/guichristmann/edge-tpu-tiny-yolo
Now during the inference, I am getting the following error. Can you please help me with it?
Please find the below error. I can see that it has a problem with fetching libedgetpu. Please correct me if I am wrong.
@AkkiSony Yes, you will have to install the edgetpu_runtime again in the new Virtual Environment.
Hi @AkkiSony please let us know if you have any questions here.
Closing due to inactivity. Feel free to reopen if you still have any questions.
I trained a custom object detection model using https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_detection_qat_tf1.ipynb . My dataset has 3 classes.
The inference on the PC was better and I could detect more objects in an image compared to inference using the Edge TPU.