RichardHMaxwell opened this issue 3 years ago
Hello @JasonHMaxwell, sorry about the inconvenience. This shouldn't happen, but unfortunately we can't do anything about it: the object tracker logic is proprietary Intel software and ships precompiled, so we can't change or fix anything. There are also other known problems with the current object tracker:
One thing I notice is that tracklets sometimes get removed after 1 "LOST" status and sometimes after 30; I'm not really sure how to explain that. I also believe the threshold isn't (always) working as expected. There is also a random issue where it creates multiple tracklets for the same object, link here (I'm not sure if it has been resolved).
So my suggestion would be to either use MobileNet (possibly better detection accuracy?) or run a custom tracking NN.
Thanks, Erik
Thanks for your reply @Erol444. I've managed to work around this issue by pulling the NN detections back to the host, setting the label of each detection to a constant (e.g. 1), and then sending the modified detections to the on-device tracker. It's possible that throwing away the detection labels will reduce the accuracy of the tracker, but I'm not sure what algorithm is being used. Is the tracking algorithm documented somewhere? If the algorithm doesn't use the image data at all, then I may as well do the tracking on the host.
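For reference, the label-flattening workaround described above can be sketched on the host side roughly like this. This is a minimal illustration, not the depthai API: `Detection` here is a stand-in for the detection objects coming off the device (label, confidence, bounding box), and `flatten_labels` is a hypothetical helper name.

```python
# Sketch of the workaround: rewrite every detection's class label to a
# single constant before forwarding the detections to the on-device
# tracker, so the tracker treats all classes as one.
from dataclasses import dataclass

CONSTANT_LABEL = 1  # arbitrary constant class, as described above

@dataclass
class Detection:
    # Stand-in for a device detection message (illustrative fields).
    label: int
    confidence: float
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def flatten_labels(detections, constant=CONSTANT_LABEL):
    """Return copies of `detections` with every label set to `constant`,
    leaving confidences and bounding boxes untouched."""
    return [
        Detection(constant, d.confidence, d.xmin, d.ymin, d.xmax, d.ymax)
        for d in detections
    ]
```

The modified list would then be packed back into a detections message and sent to the tracker's input queue in place of the original NN output.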
Would you be able to raise a bug with Intel? Having said that, it sounds like Intel's tracker is a bit of a disaster, so maybe it would be faster for you to implement your own.
Describe the bug
For objects of any class other than the first detected class, the ObjectTracker pipeline node will emit a NEW tracklet followed by a REMOVED tracklet in the next frame. This behavior repeats over the entire sequence of frames, i.e. you'll see NEW, REMOVED, NEW, REMOVED, NEW, REMOVED... I have observed this behavior only with the YOLO pipeline node.
To Reproduce
To make this as easy to reproduce as possible, I updated the object_tracker_video example code to use YOLO + the yolov4 model zoo model instead of mobilenet. I then took the darknet sample image and turned it into a 4-second video (i.e. 100 frames of the same stationary image, see attached) so that the tracker's job is as easy as possible. The tracker has been configured to use the UNIQUE_ID TrackerIdAssignmentPolicy, so you should see the IDs for the dog and horse detections go up with each frame. The ID of the person object will remain constant.

Expected behavior
The dog and horse detections should each be assigned a track in the first frame, and these tracks should last the entire video.
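To see why the IDs climb under this configuration, here is a toy model of the two ID-assignment policies: under UNIQUE_ID a removed tracklet's ID is never reused, so a track that keeps getting removed and re-created (the NEW/REMOVED cycle above) gets an ever-increasing ID, whereas under SMALLEST_ID the freed ID would be handed out again. This is a conceptual sketch only; the class and method names are illustrative, not the depthai API or the device-side implementation.

```python
# Toy model of the two TrackerIdAssignmentPolicy modes.
class IdAssigner:
    def __init__(self, policy):
        self.policy = policy      # "UNIQUE_ID" or "SMALLEST_ID"
        self.next_id = 0          # counter for UNIQUE_ID mode
        self.active = set()       # IDs of currently live tracklets

    def new_tracklet(self):
        if self.policy == "UNIQUE_ID":
            # Always hand out a fresh ID; never reuse one.
            tid = self.next_id
            self.next_id += 1
        else:
            # SMALLEST_ID: reuse the smallest ID not currently active.
            tid = 0
            while tid in self.active:
                tid += 1
        self.active.add(tid)
        return tid

    def remove_tracklet(self, tid):
        self.active.discard(tid)

# A track that is removed and re-created every frame climbs under UNIQUE_ID:
assigner = IdAssigner("UNIQUE_ID")
ids = []
for _ in range(3):
    tid = assigner.new_tracklet()
    ids.append(tid)
    assigner.remove_tracklet(tid)
# ids == [0, 1, 2]
```

With "SMALLEST_ID" the same loop would yield [0, 0, 0], which is why UNIQUE_ID makes the dog/horse NEW→REMOVED cycle visible as ever-growing IDs in the video.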
Screenshots
https://user-images.githubusercontent.com/91002185/133919361-52398f3c-ede0-4cef-870d-e218fad70178.mp4