dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

AttributeError: detectNet.Detection object has no attribute 'TrackStatus' #1783

ryan-sime opened this issue 8 months ago (Open)

ryan-sime commented 8 months ago

The code that produces this error is a modified version of detectnet.py, with changes to how the network is initialized:

net = detectNet(args.network, sys.argv, args.threshold)
net.SetTrackingEnabled(True)
net.SetTrackingParams(minFrames=3, dropFrames=15, overlapThreshold=0)

as well as changes to the output statements inside the loop over the detections array:

for detection in detections:
    print(detection)
    try:
        if detection.TrackStatus >= 0:  # actively tracking
            print(f"object {detection.TrackID} at ({detection.Left}, {detection.Top}) has been tracked for {detection.TrackFrames} frames")
        else:  # if tracking was lost, this object will be dropped the next frame
            print(f"object {detection.TrackID} has lost tracking")
    except AttributeError as e:
        print(f"except {e}")

As defined in detectNet.h, TrackFrames and TrackStatus are members of the Detection object as shown here:

struct Detection
{
    // Detection Info
    uint32_t ClassID;   /**< Class index of the detected object. */
    float Confidence;   /**< Confidence value of the detected object. */

    // Tracking Info
    int TrackID;        /**< Unique tracking ID (or -1 if untracked) */
    int TrackStatus;    /**< -1 for dropped, 0 for initializing, 1 for active/valid */ 
    int TrackFrames;    /**< The number of frames the object has been re-identified for */
    int TrackLost;      /**< The number of consecutive frames tracking has been lost for */

    // Bounding Box Coordinates
    float Left;     /**< Left bounding box coordinate (in pixels) */
    float Right;        /**< Right bounding box coordinate (in pixels) */
    float Top;      /**< Top bounding box coordinate (in pixels) */
    float Bottom;       /**< Bottom bounding box coordinate (in pixels) */
};

Yet when trying to access them in the loop over the detections array, the result is: except 'jetson.inference.detectNet.Detection' object has no attribute 'TrackStatus'

The same error also occurs when trying to access TrackFrames and TrackLost. The most confusing part is that both print(detection) and the tracker's own output statement are printing updated values for those attributes.
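In the meantime the accesses are wrapped defensively (the try/except above) so the loop keeps running. An equivalent stopgap using getattr with placeholder defaults would look like the sketch below, though it only hides the missing attribute rather than explaining it:

track_status = getattr(detection, "TrackStatus", -1)  # fallback values are arbitrary placeholders
track_frames = getattr(detection, "TrackFrames", 0)

if track_status >= 0:
    print(f"object {detection.TrackID} has been tracked for {track_frames} frames")
else:
    print(f"object {detection.TrackID} has lost tracking")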

Am I overlooking something simple or is there a larger issue at play?

dusty-nv commented 8 months ago

@ryan-sime hmm it's strange, because the getter/setters for these tracking attributes are indeed defined here in the Python/C++ bindings:

https://github.com/dusty-nv/jetson-inference/blob/fe8b42c8da75c1c353dc59fa1fd079820024b89d/python/bindings/PyDetectNet.cpp#L515

And I just double-checked and confirmed that I'm able to access TrackStatus from Python without issue. Are you sure you don't have an old version of jetson-inference installed in your Python environment or something? While iterating over the detections, what does print(dir(detection)) show for you?

['Area', 'Bottom', 'Center', 'ClassID', 'Confidence', 'Contains', 'Height', 'Instance', 'Left', 'ROI', 'Right', 'Top', 'TrackFrames', 'TrackID', 'TrackLost', 'TrackStatus', 'Width', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
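If a stale copy of the bindings is being picked up, a quick diagnostic from inside the same environment is to print where the module is imported from and whether the Track* members exist on the Detection type itself. This is only a sketch; it assumes the Detection type is reachable as detectNet.Detection, which matches the error message you're seeing:

import jetson_inference

# which copy of the bindings is actually being imported?
print(jetson_inference.__file__)

# does this build expose the tracking members on the Detection type at all?
print([m for m in dir(jetson_inference.detectNet.Detection) if m.startswith("Track")])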
ryan-sime commented 8 months ago

Printing dir(detection) lists all of those attributes except TrackFrames, TrackLost, and TrackStatus. This was the case before, and it is still the case after removing the jetson-inference folder I had, cloning it again, and rebuilding from scratch.

The getters and setters are there and working properly: print(detection) still outputs the correct TrackStatus, TrackID, and TrackFrames when tracking is enabled. But if I try to use detection.TrackStatus or the other Track attributes later in that same loop, the object no longer has those attributes.

I am running this in the Docker container; I'm not sure whether that has any impact. Is it also possible that an error in the build/make process after cloning could cause a problem here?

dusty-nv commented 8 months ago

@ryan-sime I'm not sure what version of JetPack-L4T you are running, but my guess is that the container image is outdated vs the upstream source. To rebuild the jetson-inference container from your current sources, see here:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md#building-the-container

The normal cmake build process will not rebuild the code inside the container. Alternatively, you could just try running your script outside of the container.
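Per that page, rebuilding usually amounts to re-running the container build script from your current checkout, roughly as follows (see the linked doc for the exact steps on your JetPack version):

$ cd jetson-inference
$ docker/build.sh     # rebuild the container image from the current sources
$ docker/run.sh       # then launch the rebuilt image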

ryan-sime commented 8 months ago

Deleting the folder and rerunning with

$ git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh

leaves us in the same scenario as before. I thought this might be a runtime issue, so I tried building with the NVIDIA runtime, using L4T r34.1.1 with PyTorch 2.0, and received this error:

Step 24/26 : RUN mkdir docs &&     touch docs/CMakeLists.txt &&     sed -i 's/nvcaffe_parser/nvparsers/g' CMakeLists.txt &&     cp -r /usr/local/include/gstreamer-1.0/gst/webrtc /usr/include/gstreamer-1.0/gst &&     ln -s /usr/lib/$(uname -m)-linux-gnu/libgstwebrtc-1.0.so.0 /usr/lib/$(uname -m)-linux-gnu/libgstwebrtc-1.0.so &&     mkdir build &&     cd build &&     cmake ../ &&     make -j$(nproc) &&     make install &&     /bin/bash -O extglob -c "cd /jetson-inference/build; rm -rf -v !($(uname -m)|download-models.*)" &&     rm -rf /var/lib/apt/lists/*     && apt-get clean
 ---> Running in 55b4602712ef
mkdir: cannot create directory 'docs': File exists
The command '/bin/sh -c mkdir docs &&     touch docs/CMakeLists.txt &&     sed -i 's/nvcaffe_parser/nvparsers/g' CMakeLists.txt &&     cp -r /usr/local/include/gstreamer-1.0/gst/webrtc /usr/include/gstreamer-1.0/gst &&     ln -s /usr/lib/$(uname -m)-linux-gnu/libgstwebrtc-1.0.so.0 /usr/lib/$(uname -m)-linux-gnu/libgstwebrtc-1.0.so &&     mkdir build &&     cd build &&     cmake ../ &&     make -j$(nproc) &&     make install &&     /bin/bash -O extglob -c "cd /jetson-inference/build; rm -rf -v !($(uname -m)|download-models.*)" &&     rm -rf /var/lib/apt/lists/*     && apt-get clean' returned a non-zero code: 1

Just to be safe and to make sure I'm not making a silly mistake, here's the code we're running inside the main folder in the docker container:

#!/usr/bin/env python3
#
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
import sys
import argparse # command line args
import requests # http requests
import cv2 # video writer
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput, Log
API_ENDPOINT = ""
PERSON = 1
# parse the command line
parser = argparse.ArgumentParser(description="Locate objects in a live camera stream using an object detection DNN.",
                                 formatter_class=argparse.RawTextHelpFormatter,
                                 epilog=detectNet.Usage() + videoSource.Usage() + videoOutput.Usage() + Log.Usage())
parser.add_argument("input", type=str, default="", nargs='?', help="URI of the input stream")
parser.add_argument("output", type=str, default="", nargs='?', help="URI of the output stream")
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
parser.add_argument("--overlay", type=str, default="box,labels,conf", help="detection overlay flags (e.g. --overlay=box,labels,conf)\nvalid combinations are:  'box', 'labels', 'conf', 'none'")
parser.add_argument("--threshold", type=float, default=0.5, help="minimum detection threshold to use")
parser.add_argument("--output_video", type=str, default="", help="Path to save the output video file")
try:
    args = parser.parse_known_args()[0]
except:
    print("")
    parser.print_help()
    sys.exit(0)
def print_green(str):
    print('\033[92m' + str + '\033[0m')
def print_red(str):
    print('\033[91m' + str + '\033[0m')
# create video sources and outputs
input = videoSource(args.input, argv=sys.argv)
output = videoOutput(args.output, argv=sys.argv)
# Video writer might be able to use the videoOutput for the jetson
img = input.Capture()
output_video_path = args.output_video
if output_video_path:
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    output_video = cv2.VideoWriter(output_video_path, fourcc, 20.0, (img.shape[1], img.shape[0]))
else:
    output_video = None
# load the object detection network
net = detectNet(args.network, sys.argv, args.threshold)
net.SetTrackingEnabled(True)
net.SetTrackingParams(minFrames=3, dropFrames=15, overlapThreshold=0)
# note: to hard-code the paths to load a model, the following API can be used:
#
# net = detectNet(model="model/ssd-mobilenet.onnx", labels="model/labels.txt",
#                 input_blob="input_0", output_cvg="scores", output_bbox="boxes",
#                 threshold=args.threshold)
# process frames until EOS or the user exits
tracked_object_ids = set()
frame_id = 0
while True:
    # capture the next image
    img = input.Capture()
    if img is None:  # timeout
        continue
    print(f"processing frame {frame_id}===========================================")
    frame_id += 1
    # detect and track objects in the image (with overlay)
    detections = net.Detect(img, overlay=args.overlay)
    # print the detections
    # print("detected {:d} objects in image".format(len(detections)))
    for detection in detections:
        print(detection)
        print(dir(detection))
        try:
            print(detection.TrackStatus)
            if detection.TrackID < 0: # -1 when untracked
                continue
            tmp = str(detection)
            detection_track_status = int(tmp[tmp.find("TrackStatus")+12:tmp.find("\n", tmp.find("TrackStatus"))])
            detection_track_frames = int(tmp[tmp.find("TrackFrames")+12:tmp.find("\n", tmp.find("TrackFrames"))])
            if detection_track_status >= 0:
                if detection_track_frames == 4:
                    print_green(f"appear {detection.TrackID} {net.GetClassDesc(detection.ClassID)} {detection.Confidence} has been tracked for {detection_track_frames} frames")
            else:
                print_red(f"object {detection.TrackID} has lost tracking")
        except AttributeError as e:
            print(f"except {e}")
    new_object_ids = set()
    """
    for detection in detections:
        #if detection.ClassID != PERSON:
          #  continue
        # print(detection)
        # https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-tracking.md
        object_id = detection.TrackID
        #if object_id == -1: # -1 means untracked?
         #   continue
        # Check if it's a new object
        if object_id not in tracked_object_ids:
            new_object_ids.add(object_id)
            # Generate HTTP request for new object
            print(f"New object detected with ID: {object_id}")
            # requests.post(API_ENDPOINT, json={'object_id': object_id, 'status': 'new'})
    # Check for disappeared objects
    disappeared_object_ids = tracked_object_ids - new_object_ids
    #print(f"disappeared object ids: {disappeared_object_ids}")
    #print(f"tracked object ids: {tracked_object_ids}")
    for object_id in disappeared_object_ids:
        # Generate HTTP request for disappeared object
        print(f"Object with ID {object_id} disappeared")
        # requests.post(API_ENDPOINT, json={'object_id': object_id, 'status': 'disappeared'})
    # Update tracked object IDs
    tracked_object_ids = new_object_ids
    """
    # render the image
    output.Render(img)
    # write frame to video file
    if output_video is not None:
        import numpy as np
        image = np.array(img)
        output_video.write(image)
    # update the title bar
    output.SetStatus("{:s} | Network {:.0f} FPS".format(args.network, net.GetNetworkFPS()))
    # print out performance info
    # net.PrintProfilerTimes()
    # exit on input/output EOS
    if not input.IsStreaming() or not output.IsStreaming():
        break

which gives output like this:

<detectNet.Detection object>
   -- Confidence:  0.556152
   -- ClassID:     28
   -- TrackID:     1
   -- TrackStatus: 1
   -- TrackFrames: 88
   -- TrackLost:   2
   -- Left:    23.0469
   -- Top:     20.0098
   -- Right:   319
   -- Bottom:  138.516
   -- Width:   295.953
   -- Height:  118.506
   -- Area:    35072.2
   -- Center:  (171.023, 79.2627)
['Area', 'Bottom', 'Center', 'ClassID', 'Confidence', 'Contains', 'Height', 'Instance', 'Left', 'ROI', 'Right', 'Top', 'TrackID', 'Width', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
except 'jetson.inference.detectNet.Detection' object has no attribute 'TrackStatus'

It is strange that these attributes are clearly being updated (print(detection) shows them changing) yet are not accessible, even though they are public. I appreciate all of the help you've given so far!

ryan-sime commented 8 months ago

I managed to get it to work by following the building-repo-2 guide more closely. Things went smoothly until I ran the cmake script, when an issue with the CUDA settings popped up, but following the steps taken here to fix the path resolved that problem.
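In case it helps anyone else, that kind of path issue is usually resolved by putting the CUDA toolkit on the environment before re-running cmake; roughly along these lines (a sketch, assuming the default /usr/local/cuda install location):

$ export PATH=/usr/local/cuda/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH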

It now works as I hoped, with the attributes present on the Detection object, but then I realized that I was running it from the main folder and not the Docker container. Previously I could not run the program outside of the Docker container, but after rebuilding it works. I tried running it in the Docker container again and the same issue appeared: the attributes do not exist, and I still do not understand why. Since I do not need to run it in the container, I am satisfied with it working properly in the main folder.

Thanks for your quick responses, I appreciate the help!