RobotCasserole1736 / RobotCasserole2016

Robot Source Code for 2016 FRC Game

Develop SmartDashboard Plugin (if needed) #25

Open imdunne8 opened 8 years ago

imdunne8 commented 8 years ago

GRIP may not run fast enough to be useful for vision processing. The alternative would be to use the algorithm developed in GRIP and implement it in OpenCV through a SmartDashboard plugin. It doesn't look like the SmartDashboard installer (which includes some extra jar files that the driver station installer leaves out) is readily available anymore, so everything necessary for that will have to be built from source. Links below document how to create the plugin using Netbeans, and a link to the SmartDashboard repository is available as well. We'll probably need to use Gradle to build the main project and then use Netbeans to build some of the extensions. There is obviously a lot of detail missing from this issue, so I'll work with students on this as needed.

How to create a SmartDashboard extension: https://docs.google.com/document/d/1Lm6Y4I9pY_qeHSgAlJZ3hXY5yU6-d_EkYHS5nSPMNSo/edit?usp=sharing
DSCamera2013.java: https://drive.google.com/file/d/0B0WihF4cgLY6ZnczVjBjSXlKd0U/view?usp=sharing
RunFile.java: https://drive.google.com/file/d/0B0WihF4cgLY6Ulh6UTlsZlRSY1k/view?usp=sharing
SmartDashboard source: https://usfirst.collab.net/gerrit/p/smart_dashboard.git
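
As a concreteness check, here's a rough sketch of what the processing side of such a plugin could look like: the steps GRIP generates (HSV threshold, contour finding) redone in raw OpenCV Java, publishing the result over NetworkTables. The HSV bounds and the "targetFound"/"targetCenterX" keys are placeholders I made up, not values from our GRIP project, and the widget boilerplate (the StaticWidget subclass described in the docs above) is omitted.

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Sketch only: GRIP's pipeline re-implemented in raw OpenCV, meant to be
// called on each camera frame from inside the SmartDashboard widget.
public class VisionPipeline {
    private final NetworkTable table = NetworkTable.getTable("SmartDashboard");

    public void process(Mat frame) {
        Mat hsv = new Mat();
        Mat mask = new Mat();

        // Same steps GRIP generates: convert to HSV, threshold, find contours.
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_BGR2HSV);
        Core.inRange(hsv, new Scalar(60, 100, 100), new Scalar(90, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL,
                Imgproc.CHAIN_APPROX_SIMPLE);

        // Keep the largest contour and publish its center X for the robot to read.
        Rect best = null;
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            if (best == null || r.area() > best.area()) {
                best = r;
            }
        }
        if (best != null) {
            table.putNumber("targetCenterX", best.x + best.width / 2.0);
            table.putBoolean("targetFound", true);
        } else {
            table.putBoolean("targetFound", false);
        }
    }
}
```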

jlee58540 commented 8 years ago

Or investigate a co-processor.

imdunne8 commented 8 years ago

Unfortunately I think we'd be in the same situation with a coprocessor. Some teams have been deploying GRIP to Raspberry Pis and similar devices, but it's GRIP itself that causes the delay, whether it's running on the RIO, a laptop, or a coprocessor. Based on what I remember of the 2013 SmartDashboard plugin, it was very quick to process and write to NetworkTables, definitely not a 1-2s lag. We could also look into writing OpenCV code that runs on the roboRIO, but we'll have to investigate processor load there.
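
For reference, the robot-side half of that handoff is just a NetworkTables read. A minimal sketch, assuming the same hypothetical "targetFound"/"targetCenterX" keys as the plugin sketch above and a 320px-wide image:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Sketch: robot-side consumer of whatever the dashboard plugin publishes.
public class VisionReader {
    private final NetworkTable table = NetworkTable.getTable("SmartDashboard");

    /** Target center X in pixels, or the image center (no correction) if nothing is seen. */
    public double getTargetCenterX() {
        if (!table.getBoolean("targetFound", false)) {
            return 160.0; // placeholder: center of a 320px-wide image
        }
        return table.getNumber("targetCenterX", 160.0);
    }
}
```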


gerth2 commented 8 years ago

We should check out what their next release is like; this may have been fixed as part of them swapping the camera-feed reading libraries to fix the no-camera-feed bug on Windows 7.

Alternatively, we could use the image to calculate an angle error, and then rotate closed-loop on a gyro to achieve the correction angle. It would still be really slow for precision alignment.
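
A minimal sketch of that idea, assuming a proportional loop is good enough: take one angle error from the image, add it to the current gyro heading, and turn until the error is inside a tolerance. Ports, the gain, and the tolerance are placeholders to be tuned on the robot.

```java
import edu.wpi.first.wpilibj.AnalogGyro;
import edu.wpi.first.wpilibj.RobotDrive;

// Sketch: closed-loop rotate on the gyro to burn off a vision-derived angle error.
public class GyroTurn {
    private static final double KP = 0.03;           // proportional gain (placeholder)
    private static final double TOLERANCE_DEG = 1.0; // "close enough" band (placeholder)

    private final AnalogGyro gyro = new AnalogGyro(0);      // placeholder channel
    private final RobotDrive drive = new RobotDrive(0, 1);  // placeholder PWM channels

    /** Call periodically; returns true once the heading error is within tolerance. */
    public boolean turnTo(double targetHeadingDeg) {
        double error = targetHeadingDeg - gyro.getAngle();
        if (Math.abs(error) < TOLERANCE_DEG) {
            drive.arcadeDrive(0.0, 0.0);
            return true;
        }
        // Clamp the turn command so a big error doesn't saturate the drivetrain.
        double turn = Math.max(-0.5, Math.min(0.5, KP * error));
        drive.arcadeDrive(0.0, turn);
        return false;
    }
}
```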

gerth2 commented 8 years ago

If anyone has Gradle experience, we could try pulling GRIP's bleeding-edge development branch tonight and building it ourselves.

gerth2 commented 8 years ago

However, looking at their changes on this commit: https://github.com/JLLeitschuh/GRIP/commit/dbc156d13b76fd0ec5e02687fb46877e67eb169b

I'm not seeing anything that would cause a giant performance improvement - the InputStream library swap they did, according to the Javadocs, shouldn't impact anything from a timing standpoint.

gerth2 commented 8 years ago

Also, I finally read the CD (Chief Delphi) post - looks like my gyro idea was not original. And figuring out angle from X/Y pixels will require distance-to-target, which is another layer of complication...
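
For what it's worth, the horizontal half of that doesn't need range: with a pinhole model, the yaw error falls straight out of the pixel offset and the camera's focal length. Distance-to-target only enters for elevation/shot power. A sketch with made-up camera numbers:

```java
// Sketch: pixel X offset -> yaw error via the pinhole model; no distance needed.
public class PixelToAngle {
    private static final double IMAGE_WIDTH_PX = 320.0; // placeholder resolution
    private static final double HORIZ_FOV_DEG = 60.0;   // placeholder field of view

    // Focal length in pixels, derived from the FOV: f = (w/2) / tan(fov/2)
    private static final double FOCAL_PX =
            (IMAGE_WIDTH_PX / 2.0) / Math.tan(Math.toRadians(HORIZ_FOV_DEG / 2.0));

    /** Yaw error in degrees; positive means the target is right of image center. */
    public static double yawErrorDeg(double targetCenterXPx) {
        double offsetPx = targetCenterXPx - IMAGE_WIDTH_PX / 2.0;
        return Math.toDegrees(Math.atan(offsetPx / FOCAL_PX));
    }
}
```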

jlee58540 commented 8 years ago

What do we need vision for again?

imdunne8 commented 8 years ago

Autonomous at a bare minimum. Based on how sensitive the prototype shooter was, it may be needed even for very close-up shots.


jlee58540 commented 8 years ago

I see "drive x distance from encoders with a gyro-maintained heading" and "turn y degrees" as the basic auto building blocks for this year. Do we have a line of sight on these basic elements? Maybe something that can be tested with last year's bot if we're using the same components, like the gyro? How about auto selection that's foolproof? Did we figure out an auto scripting method so we can make changes without re-downloading software? If these things are working, then let's go for vision. Otherwise, I don't see the value in vision.
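
For what that first building block could look like, here's a minimal sketch of "drive x distance with a gyro-maintained heading", with placeholder channels, gain, and distance-per-pulse that would need to be matched to the real robot.

```java
import edu.wpi.first.wpilibj.AnalogGyro;
import edu.wpi.first.wpilibj.Encoder;
import edu.wpi.first.wpilibj.RobotDrive;

// Sketch: encoder-distance drive that steers against gyro heading drift.
public class DriveStraight {
    private static final double KP_HEADING = 0.02; // steering gain (placeholder)

    private final AnalogGyro gyro = new AnalogGyro(0);      // placeholder channel
    private final Encoder leftEncoder = new Encoder(0, 1);  // placeholder DIO channels
    private final RobotDrive drive = new RobotDrive(0, 1);  // placeholder PWM channels

    public DriveStraight() {
        leftEncoder.setDistancePerPulse(0.01); // placeholder: distance per encoder tick
    }

    /** Zero the sensors at the start of the move. */
    public void start() {
        leftEncoder.reset();
        gyro.reset();
    }

    /** Call periodically; returns true once we've covered the requested distance. */
    public boolean driveTo(double distance, double speed) {
        if (leftEncoder.getDistance() >= distance) {
            drive.arcadeDrive(0.0, 0.0);
            return true;
        }
        // Any heading the gyro accumulates is drift; steer against it.
        double correction = -KP_HEADING * gyro.getAngle();
        drive.arcadeDrive(speed, correction);
        return false;
    }
}
```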

imdunne8 commented 8 years ago

By saying this you're basically throwing away any chance at making either a high or low goal in autonomous. That just isn't acceptable. Encoders and gyro will be nearly useless after going over most of the defenses. We have code we can copy/paste from last year if we want to use encoder/gyro based drive, but that will at most get us over a defense at an arbitrary location. If we want more than 10 points then we have to rely on vision. We have multiple students who have put in a lot of good work to get a vision algorithm developed and tuned and there's no reason to just throw all of that out. I agree that it's not a top priority, but I still fully plan for it to be in the code.

jlee58540 commented 8 years ago

What's the plan to get into position before you activate vision for final positioning?

imdunne8 commented 8 years ago

I think the plan is to do an encoder-based drive forward for as many defenses as we can. If that doesn't work for some defense, we may be able to use recorded driver input and play it back. We have an "is crossing the defense" algorithm planned, and if it works then hopefully we'll be able to use it to determine that we're fully over and ready to start vision. We may also be able to use vision to determine whether we actually are fully over the defense in auto (based on distance/angle), although that may be too small a variance to work reliably.
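
One possible shape for that "is crossing the defense" check, sketched against the roboRIO's built-in accelerometer (this is a guess at an approach, not the planned algorithm): flag the crossing while the robot is pitching around, and call it done once the reading settles back near 1g for a while.

```java
import edu.wpi.first.wpilibj.BuiltInAccelerometer;

// Sketch: declare the defense crossed after we see tilt and then settle flat again.
public class DefenseCrossingDetector {
    private static final double TILT_THRESHOLD_G = 0.15; // deviation from 1g (placeholder)
    private static final int SETTLED_LOOPS = 25;         // ~0.5s of 20ms loops (placeholder)

    private final BuiltInAccelerometer accel = new BuiltInAccelerometer();
    private boolean sawTilt = false;
    private int settledCount = 0;

    /** Call periodically; returns true once we've tilted and then settled (crossed). */
    public boolean update() {
        double deviationG = Math.abs(accel.getZ() - 1.0);
        if (deviationG > TILT_THRESHOLD_G) {
            sawTilt = true;
            settledCount = 0;
        } else if (sawTilt) {
            settledCount++;
        }
        return sawTilt && settledCount >= SETTLED_LOOPS;
    }
}
```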


jlee58540 commented 8 years ago

Let's assume low bar auto is our highest priority and the easiest to accomplish.

Encoder-based drive should work for the low bar at least. This is what I referred to in saying "drive x distance". We can test this on last year's robot. 10 points.

Next, how do we handle the approach angle? Turn based on the gyro? It would be nice to test gyro error while driving over the low bar, as I think it'll maintain heading. If it does, an additional "drive x" may be all that's needed to score a low goal. Sweet. That'll get us by if we have vision problems. Again, an easy test with last year's robot. 15 points.

Now, the high goal. I agree vision is best for aligning the high goal shot, if possible on the approach. If that doesn't work, we'd still have encoders and the gyro to fall back on. How effective we can be without vision remains to be seen; I suspect it'll be iffy, as you do.

If vision works, then use it to tweak the "drive x" approach angle to the goal from earlier and to aid with the correct stopping position for the shooter. Boom, 20 points.

gerth2 commented 8 years ago

(if needed)

Yup.