Open JosephBrooksbank opened 7 years ago
Can you share the generated Python file and the GRIP save file?
Here is my python running script: http://pastebin.com/X0edmjwT
Here is my grip python file: http://pastebin.com/1aa9ddH8
Here is the grip save file: here
EDIT: here is the generated python file for a configuration that works and identifies (many) contours.
You do have some problems in the main python script here:
for contour in pipeline.filter_contours_output:
    x, y, w, h = cv2.boundingRect(contour)
    center_x_positions.append(x + w / 2)
    center_y_positions.append(y + h / w)  # should be h / 2
    widths.append(w)
    heights.append(y)  # should be h
    areas.append(w * h)
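Put together, the corrected loop would read something like:

for contour in pipeline.filter_contours_output:
    x, y, w, h = cv2.boundingRect(contour)
    center_x_positions.append(x + w / 2)
    center_y_positions.append(y + h / 2)
    widths.append(w)
    heights.append(h)
    areas.append(w * h)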
Do GRIP and the Pi always disagree? What requirements do you change to make it find the contours you want?
Huh, I don't know how I didn't notice those. Nevertheless, it should still find the contours and display them with a drawContours function I call later.
GRIP and the Pi don't always disagree; linked in my first comment here is a generated pipeline that appears to have the same output as GRIP. When I increase the HSV or contour size ranges, the Pi finds contours much more reliably. Our test robot was just taken by the build team to fix a broken gearbox, but once I have it back (a controlled location/height for generating grip files from), I'll check which specific modifications cause the loss in reliability.
Research into issue: All data hosted here on github
Here is a more in-depth look at the issue. Without moving the camera or the vision target, I went through a series of controlled tests, changing one value per test. The python file is found in the root folder of the git repo.
The initial values, while looking great in grip (osx), do not output any values on the pi. These findings can be seen in the screenshots in this folder. The screenshots are labeled by their step in the system, with mjpg_stream being the data from the pi.
My first theory was that the pi was not detecting as many vertices as grip. I lowered the "minimum vertices" value to 1, and contours began to appear. However, they were all very small, and not on the vision target in particular. These contours are most likely nothing more than camera noise.
My next thought was that perhaps the size constraints on the contour areas were calculated differently in grip than on the pi, but this did not appear to have any effect. No contours visible.
Since the actual filtering of the contours proved to be of little importance, I started messing with the color threshold of the image. I have tried both HSL and HSV thresholds in the past without much change; however, for these tests I only used HSL for the sake of time. Perhaps in the future I will test with HSV.
This was an interesting one, as it caught the edges of the contours, creating weird inverted shapes. Not helpful, however.
Did not appear to change much, caught one small contour but appears to be random.
I had the best luck with removing the hue constraint (allowing all hues). The pi went from not being able to find any of the proper contours to finding basically the same contours as grip. They aren't perfectly identical, but very similar.
Something involving thresholds is being calculated differently on the pi than it is in grip, causing opencv to interpret the image incorrectly.
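If anyone wants to check this, the easiest test I can think of is to save the raw threshold output on the pi and compare it to what grip shows on the laptop. A rough sketch, assuming the generated pipeline stores the step result in an hsl_threshold_output attribute the same way it does for filter_contours_output:

import cv2

pipeline.process(frame)  # frame is whatever cap.read() returned
# the attribute name depends on your pipeline; check the generated file
cv2.imwrite('/tmp/pi_hsl_threshold.png', pipeline.hsl_threshold_output)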
What's the difference between raw_image.png and mjpeg_stream.png? raw_image is more saturated, so if you're not using the same camera, settings, and compression, that could be causing this problem.
raw_image is the image as seen in grip on osx; mjpeg_stream is the same settings viewed from the pi. I believe the slight difference is because of the compression mjpeg does when streaming.
Also, raw_image is slightly outdated; it was taken about half an hour before the rest.
Hey Brookshank, I've been pulling what little hair I've got out with the GRIP stuff as well, and many thanks to you for asking for help. I took your code and managed to get it working with different resolutions and it consistently identifies my target.
Things I did to the code (guessing you'll know the sections that the modifications were made in):
center_x_positions.append(x*image_scale + w*image_scale)  # X and Y are coordinates of the top-left corner of the bounding box
center_y_positions.append(y*image_scale + h*image_scale)
widths.append(w*image_scale)
heights.append(h*image_scale)
areas.append(w*image_scale * h*image_scale)
center = (x*image_scale + (w * image_scale)), (y*image_scale + (h * image_scale))
Added the static values at the top of the script, and did this in main:
cap.set(3, x_resolution)  # width
cap.set(4, y_resolution)  # height
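Roughly, the pieces fit together like this (the names x_resolution, y_resolution, and image_scale are just what I happened to use):

import cv2

# static values at the top of the script
x_resolution = 640   # capture width
y_resolution = 480   # capture height
image_scale = 0.5    # fraction used when shrinking the published/streamed values

cap = cv2.VideoCapture(0)
cap.set(3, x_resolution)  # 3 == CAP_PROP_FRAME_WIDTH
cap.set(4, y_resolution)  # 4 == CAP_PROP_FRAME_HEIGHT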
Really appreciate you sharing your code!!!! This was definitely the pick-me-up we were needing after all of the problems with GRIP and OpenCV.
Now on to the Pi for us and to make sure targeting works for the boiler!
Thanks for commenting on the issue! I have one question: in your code for center_x_positions and the like, did you mean x * image_scale, or is ximage_scale a new variable? I assume you meant the former, but I thought I'd clarify.
If you need any help getting things working on the pi let me know, I have much more experience in systems and linux than I do in coding so I would probably be more help there.
Hiya brookshank!
Sure thing! I'm just glad that someone other than us is having a rough time getting this stuff to work.
No - the scaling factor you're using needs to be applied to all of the variables (x, y, w, h). That way you've gotten the reduction in size applied. So just multiply the variables by the scaling factor. I forgot to mention that I'm using our generated GRIP Python code, and haven't made too many tweaks to it so it's still moderately vanilla (so now maybe French vanilla?).
As for x_resolution and y_resolution, I set the resolution at the top of our script (not elegant, I know) and then used your scaling factor for the contours. One word of caution: the default on the Microsoft cameras is 640x480, and if you decide you want something more square to fit on the drive station (250x250, for example), your math will be a little off; but if you set your resolution at the top and apply the scaling factor, it should be good. I didn't mention it, but I have the min area for the rectangle set to something a lot higher than zero to reduce false positive matches. We had most of the area covered by matches until that was done.
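If you'd rather do the area filtering in the script instead of in GRIP's filter step, a minimal sketch (MIN_AREA is just a name I made up, and the value needs tuning for your resolution):

import cv2

MIN_AREA = 200  # square pixels; tune for your camera resolution
filtered_contours = [c for c in pipeline.filter_contours_output
                     if cv2.contourArea(c) >= MIN_AREA]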
The testing I did last night looks good on Ubuntu, and I really hope we don't hit a problem with the Pi. Given the versions are the same, I hope we don't have anything odd there. The NetworkTables is all new to us, so I'm really hoping nothing funky there.
I've got our Pi reading from two of the Microsoft cameras, and have throttled them down to 10fps because of a huge heat problem at 20fps each (no, I didn't get the case with the fan, unfortunately), but am considering adding a second Pi to allow two cameras reading at a higher rate and to allow for the distance calculations as well. Guess we'll figure that out over this week.
Definitely appreciate the offer of help since this is a whole new world for us. Vision has been discussed for years, and this is the closest we've come to having an operational test case.
Cool! I hope everything works well for you. I'll test your changes tonight when I have access to the pi (ours is already mounted on our robot -- There are mounting holes on the rio that line up with the standoffs on the pi, makes it really easy to secure) and if I find anything else to change I'll post it here.
Thanks!
If you check our repo here, in the RaspberryPi folder, you can see everything necessary for getting a pi to run grip and mjpg streamer on startup. I've even included a little install script, so if you want to try it out just run the install script and it should move everything to where it needs to go. I'll look at the script to make sure it works properly today though, as I haven't actually used it yet.
Just trying out your changes now...
After setting the resolution of the camera with cap.set, it doesn't appear that the image_scale variable is necessary, as everything can be done at the same scale as the base resolution. Is there any reason why you are continuing to use it?
EDIT: I think I see what you did. I had been using image_scale to fix my mjpg stream (which, in all honesty, should have been a red flag for a larger bug) but you're only applying it to the Network Table data. Correct?
I hadn't gotten to the Network Table yet. That said, I had to include the x/y resolution to get the reflective tape area identified correctly, and since I didn't use your GRIP code (mostly because I used the one I'd worked on and was somewhat happy with the results), I didn't look to see if you'd set the resolution in there which I should have done.
My intent is to use your image scale percent to help reduce the image size for our dashboard and so our drivers can have more cameras on the robot. At this point we've got three cameras, and only two are webcams, but we may add another two if everything works out.
I've been messing with it today, and it seems that the x, y, w, h values returned by the grip file are not scaled to the resolution of the camera (at least in the python file I wrote; it might be different in yours, and I don't think it's a grip issue). Scaling all of the network table values by image_scale seems to give much better results.
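For anyone else doing this, a minimal sketch of publishing the scaled values with pynetworktables (the server address and table name here are placeholders; use whatever your robot code expects):

from networktables import NetworkTables

NetworkTables.initialize(server='10.TE.AM.2')  # placeholder roboRIO address
table = NetworkTables.getTable('GRIP/myContoursReport')  # placeholder table name

# publish everything scaled by image_scale, as described above
table.putNumberArray('centerX', [v * image_scale for v in center_x_positions])
table.putNumberArray('centerY', [v * image_scale for v in center_y_positions])
table.putNumberArray('width', [v * image_scale for v in widths])
table.putNumberArray('height', [v * image_scale for v in heights])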
So has this been resolved? I seem to be having a very similar problem using Java, also on a RasPi 3.
Are you having issues with the contours not showing up in the same way? I'm not sure how things work in Java with openCV, and my help probably won't be super useful because python isn't compiled, so I did a lot of editing of the grip file directly on the Raspberry Pi.
Probably the biggest thing I would try is messing with the camera settings using v4l2. The thing that helped me the most was turning the brightness on our camera way down. This can be done with v4l2-ctl --set-ctrl brightness=0
There are a lot of settings you can mess with in v4l2; try to get the camera on the pi to look as close to your PC as possible for best results. I'd suggest setting up mjpg-streamer so you can see what the pi sees while it's running headless. The code to run mjpg-streamer can be found in my Raspberry Pi folder: mjpg_streamer.sh is the script that runs the stream on port 80, and start_stream.sh is the script that runs it on startup. You'll want to see how to save images from openCV in java; it's probably pretty similar to python, which can be found here. The contours are drawn to the image here.
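For reference, the two Python pieces look roughly like this (the output path and color are arbitrary):

import cv2

# draw the filtered contours onto a copy of the frame, then write it out
annotated = frame.copy()
cv2.drawContours(annotated, pipeline.filter_contours_output, -1, (0, 255, 0), 2)
cv2.imwrite('/tmp/contours.png', annotated)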
Never mind, I figured it out.
For everyone reading this in the future, what was wrong?
I was failing to account for a different resolution between what I was creating the GRIP filter for and what I was capturing. Oops.
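For anyone else hitting this, a quick sanity check is to print the resolution the camera is actually capturing at and compare it to the resolution the GRIP project was built against; something like:

import cv2

cap = cv2.VideoCapture(0)
print('capturing at', cap.get(3), 'x', cap.get(4))  # 3/4 are frame width/height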
good for me
Running the generated code (in python, on a raspberry pi 3) does not find contours past a certain level of specificity.
Without moving the camera from its mounted position, it is possible to load the camera into grip and edit the settings to a point at which only the necessary contours are found.
However, when the code generated from these settings is run on the pi, and the requirements for contours are too specific, the pi does not find any contours at all.
I have verified that it can find contours if the requirements are less specific, both with networktable output and an mjpg stream of the contours.