sightmachine / SimpleCV

The Open Source Framework for Machine Vision
http://simplecv.org
BSD 3-Clause "New" or "Revised" License

Image.track fails with blob.boundingBox with CV2 error #362

Closed mjmare closed 11 years ago

mjmare commented 11 years ago

I'm calling image.track for the first time in my program like this:

bb = blob.boundingBox()
ts = img.track(img=img, bb=bb, nframes=20)

I get this error:

ERROR: Traceback (most recent call last):
  File "/Users/mjm/Projects/python/simplecv/blobber/test.py", line 18, in <module>
    ts = img.track(img=img, bb=bb, nframes=20)
  File "/Users/mjm/Projects/python/simplecv/env/lib/python2.7/site-packages/SimpleCV/ImageClass.py", line 11312, in track
    new_ellipse, track_window = cv2.CamShift(prob, bb, term_crit)
SystemError: new style getargs format but argument is not a tuple

This is the complete program:

from SimpleCV import *

cam = VirtualCamera("/Users/mjm/Projects/python/simplecv/blobber/traffic.mov", "video")
display = Display()
trackingsets = []

while display.isNotDone():
    if display.mouseLeft:
        break

    img = cam.getImage().scale(0.5)

    # first time round
    if not trackingsets:
        blobs = img.findBlobs(threshval=-1, minsize=1000)
        for blob in blobs:
            bb = blob.boundingBox()
            ts = img.track(img=img, bb=bb, nframes=20)
            trackingsets.append(ts)
    else:
        new_trackingsets = []
        for ts in trackingsets:
            new_ts = img.track(ts=ts)
            new_trackingsets.append(new_ts)
        trackingsets = new_trackingsets

    for ts in trackingsets:
        ts.draw()

    img.show()

IMHO users would benefit a lot from complete tracking examples. Maybe start with tracking a single box; a more advanced example could show how to track one blob, and a final one could demonstrate tracking multiple blobs.

jayrambhia commented 11 years ago

Hey,

I'm sorry that you are facing such problems. I'm working on it. Meanwhile, you can go through these tracking examples implemented using SimpleCV. https://github.com/ingenuitas/SimpleCV/tree/develop/SimpleCV/examples/tracking

jayrambhia commented 11 years ago

Hey,

So this is just a temporary solution: when providing the bounding box, you need to convert it to a tuple.

bb = blob.boundingBox()
bb = tuple(bb)
ts = img.track(img=img, bb=bb, nframes=20)

You can update SimpleCV once the fix gets merged. Till then, I'm afraid you have to use this workaround.

mjmare commented 11 years ago

Hi Jay

Thanks for the prompt reply. The workaround helped, at least until I got to the next roadblock. I get an error:

File "/Users/mjm/Projects/python/simplecv/env/lib/python2.7/site-packages/SimpleCV/Features/TrackSet.py", line 57, in append f.sizeRatio = float(ts[-1].area)/float(ts[0].area) ZeroDivisionError: float division by zero

It appears that a FeatureSet has an area of 0, a situation the TrackSet obviously doesn't check for.

If I naively change this line in TrackSet.py:

    f.sizeRatio = float(ts[-1].area) / float(ts[0].area)

to:

    if ts[0].area == 0:
        f.sizeRatio = 1  # might be wrong
    else:
        f.sizeRatio = float(ts[-1].area) / float(ts[0].area)

then I get another error that I do not understand:

Traceback (most recent call last):
  File "/Users/mjm/Projects/python/simplecv/blobber/test.py", line 26, in <module>
    new_ts = img.track(ts=ts)
  File "/Users/mjm/Projects/python/simplecv/env/lib/python2.7/site-packages/SimpleCV/ImageClass.py", line 11312, in track
    new_ellipse, track_window = cv2.CamShift(prob, bb, term_crit)
error: /tmp/opencv-m4sW/opencv-2.4.4/modules/video/src/camshift.cpp:80: error: (-5) Input window has non-positive sizes in function cvMeanShift

Kind regards, Marcel

FWIW I'm trying to create a program that estimates the speed of cars in a video. To get the areas of interest I try to find the blobs in the initial image, get their bounding boxes and use those as input of Image.track. I then filter on TrackSets that have a reasonable area ("car sized") and a certain minimal pixel velocity ("moving cars"). I proceed until no more TrackSets are available and then start the process again.

The code makes more sense I guess:

from SimpleCV import *

cam = VirtualCamera("/Users/mjm/Projects/python/simplecv/blobber/traffic.mov", "video")
display = Display()
trackingsets = []

while display.isNotDone():
    if display.mouseLeft:
        break

    img = cam.getImage().scale(0.5)

    # first time round
    if not trackingsets:
        print "Find Blobs"
        blobs = img.findBlobs(threshval=-1, minsize=1000)
        for blob in blobs:
            bb = blob.boundingBox()
            bb = tuple(bb)  # workaround
            ts = img.track(img=img, bb=bb, num_frames=5, nframes=20)
            trackingsets.append(ts)
    else:
        print "Track sets"
        new_trackingsets = []
        for ts in trackingsets:
            new_ts = img.track(ts=ts)
            vel = new_ts.pixelVelocity()[-1]
            velsqr = np.square(vel).sum()  # squared length
            a = new_ts[-1].area
            # only use trackers that are not too small or big and those that are moving
            if (a > 10000) and (a < 100000) and (velsqr > 25):
                new_trackingsets.append(new_ts)
        trackingsets = new_trackingsets

    print len(trackingsets)

    for ts in trackingsets:
        ts.drawBB()
        ts.drawPath()

    img.show()

Do you think this is a good approach?

jayrambhia commented 11 years ago

When finding blobs, if you could pass a well-thresholded image, it'd be much easier to find good blobs. You can even do a bit of background subtraction. Once you have found appropriate blobs, instead of tracking them with CAMShift, you should try the LK method. It is based on optical flow and hence I think it'd be better suited for this application.
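Roughly, the idea looks something like this (a sketch only: it assumes a fairly static camera so the first frame can serve as a crude background model, the threshold value is arbitrary, and the "LK" method name assumes the restructured track() call, so check your version's docstring):

from SimpleCV import *

cam = VirtualCamera("traffic.mov", "video")
bg = cam.getImage().scale(0.5)             # crude background model: the first frame

img = cam.getImage().scale(0.5)
diff = (img - bg).grayscale()              # simple background subtraction
mask = diff.binarize(40).invert()          # threshold; tune the value/polarity for your footage
blobs = mask.findBlobs(minsize=1000)
if blobs:
    bb = tuple(blobs[-1].boundingBox())    # largest blob, tuple workaround from above
    ts = img.track("LK", img=img, bb=bb)   # LK (optical flow) tracking instead of CAMShift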

As far as ts[0].area is concerned, it means that the bounding box was not found in the first frame, so there's no point continuing the tracking as it doesn't know what to track. In #363 I have taken care of this. But you should keep a check on whether the tracker is still working, in terms of the bounding box. The Tracking class seems to be a little messed up; when I was working on it I never had such problems, but it's good that they are coming out so we can fix them.
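Until the fix lands, you can keep such a check yourself; a rough sketch using only ts[-1].area, which you are already using (pick a cut-off that suits your video):

MIN_AREA = 100  # hypothetical cut-off; tune for your video

new_trackingsets = []
for ts in trackingsets:
    # if the tracker has lost the object, the latest bounding box collapses to zero area
    if ts[-1].area < MIN_AREA:
        continue                             # drop this set instead of feeding it back in
    new_trackingsets.append(img.track(ts=ts))
trackingsets = new_trackingsets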

Please keep reporting the bugs that you find. We try our best to resolve them as soon as possible.

mjmare commented 11 years ago

Hi Jay

I'm working with SimpleCV from GitHub, develop branch. The area=0 error has gone now (thanks), but the other error remains. This is the output:

/Users/mjm/Projects/python/simplecv/env/bin/python /Users/mjm/Projects/python/simplecv/blobber/test.py
Find Blobs
Newly found sets: 5
5
Tracking sets
5
Tracking sets
3
Tracking sets
2
Tracking sets
2
Tracking sets
2
Tracking sets
1
Tracking sets
0
Find Blobs
Newly found sets: 5
5
Tracking sets
OpenCV Error: Bad argument (Input window has non-positive sizes) in cvMeanShift, file /tmp/opencv-m4sW/opencv-2.4.4/modules/video/src/camshift.cpp, line 80
ERROR: Traceback (most recent call last):
  File "/Users/mjm/Projects/python/simplecv/blobber/test.py", line 29, in <module>
    new_ts = img.track(ts=ts)
  File "/Users/mjm/Projects/python/simplecv/env/lib/python2.7/site-packages/SimpleCV/ImageClass.py", line 11363, in track
    new_ellipse, track_window = cv2.CamShift(prob, bb, term_crit)
error: /tmp/opencv-m4sW/opencv-2.4.4/modules/video/src/camshift.cpp:80: error: (-5) Input window has non-positive sizes in function cvMeanShift

Process finished with exit code 1

What happens is that the program initially finds 5 blobs, which in turn gives 5 tracking sets. These 5 get weeded out because they are too small/big or don't move enough. The program then tries to find blobs again. So it's only the second time around, when it tries to track the tracksets from the new blobs, that things go wrong.

But there is more funkiness. I'm now also drawing the blobs on the image, and I noticed that the tracksets, although based on the blobs' bounding boxes, seem to be in the wrong place, even after the first call to img.track. Here's a screenshot I took after putting a breakpoint after the first loop.

I've put the program and the test movie on Dropbox: https://www.dropbox.com/sh/3v7uu87l94s2v45/VyqauT0NYa

PS I'm not expecting you to debug my program, but I hope it will help you pinpoint the problem.

Kind regards, Marcel

jayrambhia commented 11 years ago

Hi Marcel,

The error that you are getting is due to the size of the bounding box: it's zero, (0, 0, 0, 0). For now you have to keep a manual check on the bounding box or its area, and only pass it on if it's greater than a certain value. In my latest pull request (#365), I have completely restructured the tracking part of SimpleCV. This update won't break anything; it is still the same Image.track, but with many improvements. You can set certain parameters required for better tracking. I have also added a new tracking algorithm, the Median Flow tracker, which is much more stable and faster than any of the tracking algorithms currently in SimpleCV. It is also based on optical flow, so it'd be better suited for your application.
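Roughly, calling the new tracker should look something like this (a sketch only; the exact method name and keyword arguments may differ, so check the #365 docstrings):

bb = tuple(blob.boundingBox())               # still a tuple, as per the earlier workaround
ts = img.track("MFTrack", img=img, bb=bb)    # Median Flow tracker instead of CAMShift
# on subsequent frames, keep feeding the existing TrackSet back in:
# ts = next_img.track("MFTrack", ts=ts)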

I have found a fix and will add the changes in #365.

Thanks for reporting these issues.

mjmare commented 11 years ago

Hi Jay,

Making progress. Caught a bug in your code ;-)

I was stymied by the discrepancy between the blob bounding box and the trackset bounding box. I have switched to MFTracker. I think line 90 in MFTracker.py is wrong:

    bb = [bb[0], bb[1], bb[0]+bb[3], bb[1]+bb[3]]

It should be:

    bb = [bb[0], bb[1], bb[0]+bb[2], bb[1]+bb[3]]
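With concrete numbers, the reason bb[2] (the width) belongs on the x side when converting (x, y, w, h) to corner form:

x, y, w, h = 10, 20, 100, 50
corners_ok  = [x, y, x + w, y + h]   # [10, 20, 110, 70]: width extends x, height extends y
corners_bug = [x, y, x + h, y + h]   # [10, 20, 60, 70]: line 90 used the height twice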

The bounding boxes are equal now and I'm actually able to track some cars!

BTW the docs for FeatureSet.boundingBox() are wrong: it returns (x, y, w, h) and not "A list of (x,y) corner tuples. The order is top left, bottom left, bottom right, top right." (i.e. four tuples).

BTW2 It would be helpful to include more info on the tracking methods and their parameters. The current docstrings are not enough (for me). Maybe point to some info elsewhere?

Finally, I'm left with questions on the big picture. Currently I'm first finding blobs in order to find suitable candidates for the bb to give to img.track. This feels wrong, because I select blobs based on color sameness (or something) and then track those areas.

What I think I need is to detect (blob) movement and then track those areas. It would seem to uninformed me that the MFTrack algorithm is suitable for that. So how would one go about setting suitable params (like the bb) for the MFTrack algo to achieve this effect? TIA

Marcel

jayrambhia commented 11 years ago

Hi Marcel,

Thanks for reporting that huge bug :). That was a horrible mistake. Anyway, I have made amends and am updating the code. The new tracking algorithm, the Median Flow tracker, is similar to the LK tracker, which is based on optical flow. In a nutshell, it calculates how much a pixel has moved from frame 1 to frame 2 and tracks the object accordingly. If you already know what optical flow is, then I'm really sorry for the previous sentence. That's why I think it'd be suitable for your project, as you want to track cars.

As far as giving the initial coordinates of the object to the tracker is concerned, I have an idea. First you start as usual: find blobs and feed the bounding boxes to the tracker. Over the next 5-10 frames, you compute how much each object has travelled. I tried your code on your video and realized that the blue sky and green background are recognized as blobs and fed to the tracker. Now, here's the catch: your sky blob doesn't move at all. So, after those 5-10 frames you check which blobs haven't moved and stop feeding them to the tracker from then on. Store some properties of these blobs somewhere, so that whenever they are found again the program discards them directly instead of feeding them to the tracker. This will improve efficiency and runtime speed.

Once a car appears, the program will feed its location to the tracker and it will be tracked. Once the car goes out of view, the bounding box will stay at the edge of the image where it was last seen. This box would otherwise stay there forever until some object appears in that box and moves, which is a pitfall I need to eliminate. But since the box hasn't moved for 5-10 frames, your program will discard it and not feed it to the tracker again.
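Roughly, in code the idea could look like this (a sketch only: it assumes the tracking features expose .x/.y centres like other SimpleCV Features, and all the thresholds are arbitrary placeholders):

import numpy as np

CHECK_AFTER = 10   # frames to wait before judging whether an object moves
MIN_TRAVEL = 5.0   # pixels of net displacement below which we call it static
ignored = []       # centres of blobs judged static (sky, grass, ...)

def is_static(ts):
    # True once a TrackSet has existed long enough and has barely moved
    if len(ts) < CHECK_AFTER:
        return False
    travel = np.hypot(ts[-1].x - ts[0].x, ts[-1].y - ts[0].y)
    return travel < MIN_TRAVEL

def near_ignored(blob, radius=30):
    # skip new blobs that sit where a static object was previously found
    return any(np.hypot(blob.x - x, blob.y - y) < radius for x, y in ignored)

# inside the main loop, after re-tracking:
#     for ts in trackingsets[:]:
#         if is_static(ts):
#             ignored.append((ts[-1].x, ts[-1].y))   # remember it and stop tracking it
#             trackingsets.remove(ts)
# and when finding new blobs:
#     blobs = [b for b in blobs if not near_ignored(b)]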

That was one approach. Another is to train a classifier to detect cars in the image; once a car is detected, you can feed it to the Median Flow tracker and it will track the car. Or, if you want to detect cars with a particular color, you can threshold the image, locate the car (using blobs or some method that you'd prefer), and feed it to the tracker.

I will try to add more docs and make it much more helpful ASAP.

Thanks a ton!