TannerGilbert / Tensorflow-Object-Detection-with-Tensorflow-2.0

Use the Tensorflow Object Detection API with Tensorflow 2
https://gilberttanner.com/blog/object-detection-with-tensorflow-2
MIT License

Error when running this code on a Jetson Nano (Ubuntu 18.04) #13

Closed · jae960713 closed this issue 2 years ago

jae960713 commented 2 years ago

Dear Gilbert Tanner, thank you for sharing this code. I'm trying to learn object detection using it, but I've run into a problem. I run the code on both Windows and Linux: the Windows environment is my LG laptop (no GPU), and the Linux environment is a Jetson Nano. On my laptop the code runs without any problem, but on the Jetson Nano it raises an error. The laptop's TensorFlow version is 2.7.0 on Windows 11; the Jetson's is 2.4.1 on Ubuntu 18.04. Below are my code and the error. Please take a look at this problem. Thank you.
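(Since the two machines run different TensorFlow builds, a quick check like the sketch below can confirm what each environment actually reports; the calls are standard TF 2.x APIs, and the script name is only illustrative.)

```python
# env_check.py (illustrative name): print the TensorFlow build details so the
# laptop (reported TF 2.7.0) and the Jetson Nano (reported TF 2.4.1) can be compared.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))
```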

Code:

```python
import numpy as np
import argparse
import tensorflow as tf
import cv2

from object_detection.utils import ops as utils_ops
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

# Patch tf1 into `utils.ops`
utils_ops.tf = tf.compat.v1

# Patch the location of gfile
tf.gfile = tf.io.gfile


def load_model(model_path):
    model = tf.saved_model.load(model_path)
    return model


def run_inference_for_single_image(model, image):
    image = np.asarray(image)
    # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
    input_tensor = tf.convert_to_tensor(image)
    # The model expects a batch of images, so add an axis with `tf.newaxis`.
    input_tensor = input_tensor[tf.newaxis, ...]

    # Run inference
    output_dict = model(input_tensor)

    # All outputs are batch tensors.
    # Convert to numpy arrays, and take index [0] to remove the batch dimension.
    # We're only interested in the first num_detections.
    num_detections = int(output_dict.pop('num_detections'))
    output_dict = {key: value[0, :num_detections].numpy()
                   for key, value in output_dict.items()}
    output_dict['num_detections'] = num_detections

    # detection_classes should be ints.
    output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

    # Handle models with masks:
    if 'detection_masks' in output_dict:
        # Reframe the bbox mask to the image size.
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            output_dict['detection_masks'], output_dict['detection_boxes'],
            image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5, tf.uint8)
        output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()

    return output_dict


def run_inference(model, category_index, cap):
    while True:
        ret, image_np = cap.read()
        # Actual detection.
        output_dict = run_inference_for_single_image(model, image_np)
        # Visualization of the results of a detection.
        vis_util.visualize_boxes_and_labels_on_image_array(
            image_np,
            output_dict['detection_boxes'],
            output_dict['detection_classes'],
            output_dict['detection_scores'],
            category_index,
            instance_masks=output_dict.get('detection_masks_reframed', None),
            use_normalized_coordinates=True,
            line_thickness=8)
        cv2.imshow('object_detection', cv2.resize(image_np, (800, 600)))
        if cv2.waitKey(25) & 0xFF == ord('q'):
            cap.release()
            cv2.destroyAllWindows()
            break


if __name__ == '__main__':
    '''
    parser = argparse.ArgumentParser(description='Detect objects inside webcam videostream')
    parser.add_argument('-m', '--model', type=str, required=True, help='Model Path')
    parser.add_argument('-l', '--labelmap', type=str, required=True, help='Path to Labelmap')
    args = parser.parse_args()

    detection_model = load_model(args.model)
    '''
    detection_model = "./ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8"
    labelmap = "./data/mscoco_label_map.pbtxt"
    category_index = label_map_util.create_category_index_from_labelmap(labelmap, use_display_name=True)

    cap = cv2.VideoCapture(0)
    run_inference(detection_model, category_index, cap)
```

Error:

```
2022-03-02 11:52:37.679280: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2022-03-02 11:52:48.768399: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2022-03-02 11:52:54.598350: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2022-03-02 11:52:54.639131: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:52:54.639313: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3 coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.86GiB deviceMemoryBandwidth: 194.55MiB/s
2022-03-02 11:52:54.639404: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2022-03-02 11:52:54.805647: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2022-03-02 11:52:54.880207: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2022-03-02 11:52:54.988255: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2022-03-02 11:52:55.114407: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2022-03-02 11:52:55.197495: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2022-03-02 11:52:55.201461: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2022-03-02 11:52:55.202007: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:52:55.202471: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:52:55.202639: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2022-03-02 11:52:55.459890: W tensorflow/core/platform/profile_utils/cpu_utils.cc:108] Failed to find bogomips or clock in /proc/cpuinfo; cannot determine CPU frequency
2022-03-02 11:52:55.460513: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x181a1a00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-03-02 11:52:55.460618: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2022-03-02 11:52:55.461394: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:52:55.462019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3 coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.86GiB deviceMemoryBandwidth: 194.55MiB/s
2022-03-02 11:52:55.462265: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2022-03-02 11:52:55.462590: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2022-03-02 11:52:55.462732: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2022-03-02 11:52:55.462843: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2022-03-02 11:52:55.462948: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2022-03-02 11:52:55.463050: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2022-03-02 11:52:55.463137: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2022-03-02 11:52:55.463525: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:52:55.463857: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:52:55.463957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2022-03-02 11:54:09.451073: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2022-03-02 11:54:09.743041: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0
2022-03-02 11:54:09.743204: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N
2022-03-02 11:54:10.300106: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:54:10.843553: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:54:10.859127: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:949] ARM64 does not support NUMA - returning NUMA node zero
2022-03-02 11:54:11.096350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2022-03-02 11:54:11.572725: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x48bb8e90 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2022-03-02 11:54:11.572818: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
Traceback (most recent call last):
  File "/home/ai/Tensorflow/models/research/object_detection/detect_from_webcam_0.py", line 91, in <module>
    run_inference(detection_model, category_index, cap)
  File "/home/ai/Tensorflow/models/research/object_detection/detect_from_webcam_0.py", line 59, in run_inference
    output_dict = run_inference_for_single_image(model, image_np)
  File "/home/ai/Tensorflow/models/research/object_detection/detect_from_webcam_0.py", line 30, in run_inference_for_single_image
    output_dict = model(input_tensor)
TypeError: 'str' object is not callable
```
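The traceback matches the `__main__` block above: with the argparse section commented out, `detection_model` is assigned the path string `"./ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8"` rather than a loaded model, so `run_inference_for_single_image` ends up calling a string at `output_dict = model(input_tensor)`, which raises `TypeError: 'str' object is not callable`. A minimal sketch of the likely fix, assuming the model directory uses the usual TF2 detection-zoo layout with a `saved_model/` subfolder (adjust the path if yours differs):

```python
if __name__ == '__main__':
    # Load the SavedModel first and pass the loaded model, not the path string,
    # into run_inference. The "saved_model" subfolder is an assumption based on
    # the usual layout of exported TF2 detection models.
    model_path = "./ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/saved_model"
    detection_model = load_model(model_path)  # wraps tf.saved_model.load

    labelmap = "./data/mscoco_label_map.pbtxt"
    category_index = label_map_util.create_category_index_from_labelmap(labelmap, use_display_name=True)

    cap = cv2.VideoCapture(0)
    run_inference(detection_model, category_index, cap)
```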