AhmedSamara opened this issue 8 years ago
This looks promising now
It looks pretty promising.
Is this running on the BBB?
On Sun, Jan 24, 2016 at 5:43 PM, Ahmed Samara notifications@github.com wrote:
[image: matches] https://cloud.githubusercontent.com/assets/2405319/12539494/ebbd985c-c2c1-11e5-92b3-af8da39d49a5.png
http://stackoverflow.com/questions/7263621/how-to-find-corners-on-a-image-using-opencv
Here's another strategy for generic objects.
Two cameras might help; here are instructions for setting up the second:
After much struggling, it seems we won't be able to use the modified QRCodeStateEstimation library on the final robot: the ZBar library is incompatible with the BeagleBone, and we've also struggled to get the Boost library working.
Looking at the source code for that library, though, it's not that complicated, and it should be easy to recreate in Python.
All you really need is a set of known points and the ability to identify them in the images captured by the webcam.
This is the part that ZBar made really simple: when it finds a QR code, it also tells you where the vertices are. Reproducing that vertex detection is what I'm running into problems with now.
I was hoping to solve this problem using SURF points.
The change in how we're approaching this is explained in my senior design presentation:
Basically, the problem I'm running into is that even though I can now identify a bunch of known points on the QR code, there are still a few problems:
How do we isolate the points that lie on the QR code itself?