Closed DrewTheRat closed 8 years ago
see also #1
This can be accomplished either with a transparent .png added to the smartdash and dragged to the correct position, or by drawing rectangles on the image with NIVision or OpenCV before it's sent over the network.
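For the second approach, here is a rough sketch of what the drawing step could look like with the OpenCV Java bindings (this assumes OpenCV 3.x, where the drawing calls live in `Imgproc`; in 2.4 they were on `Core`, and the coordinates here are placeholders for whatever the pipeline reports):

```java
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class Overlay {
    // Draw a bounding box and a center crosshair onto the frame before it is streamed.
    public static void draw(Mat frame, double x, double y, double w, double h) {
        Scalar green = new Scalar(0, 255, 0); // BGR
        // Bounding box around the detected target (x, y, w, h come from the vision pipeline)
        Imgproc.rectangle(frame, new Point(x, y), new Point(x + w, y + h), green, 2);
        // Static crosshair at the image center
        double cx = frame.cols() / 2.0, cy = frame.rows() / 2.0;
        Imgproc.line(frame, new Point(cx - 10, cy), new Point(cx + 10, cy), green, 1);
        Imgproc.line(frame, new Point(cx, cy - 10), new Point(cx, cy + 10), green, 1);
    }
}
```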
Transparent .PNG is definitely the way to go. That will be really easy.
So only a static crosshair? I was thinking of having the target highlighted at the same time... so that we can easily align it to the crosshair (i.e. use the info from GRIP to draw the bounding box of the main target). Raj, how does NIVision compare to GRIP (which we currently use to easily access OpenCV's features)?
Drew, thinking about it, the static crosshair might be confusing because the center of the image is not necessarily where we want the target to be, is it? (That depends on where the camera is located, and on the angle and speed at which the boulder is ejected...)
GRIP is new this year so I'm not familiar with it, but if it breaks out OpenCV's drawing functionality, I'd use it just to have everything in one pipeline. I tried NIVision a little but found it somewhat unwieldy. The OpenCV docs are also really helpful, with the caveat that they're written for C++; in Java the pointers and such are wrapped in objects and the conversion can be unintuitive. Shouldn't be a problem since it's all packaged into GRIP though.
Also, NIVision runs on the roboRIO, while the OpenCV work that I did was part of a smartdash widget on the driver station laptop, operating on the image after it was already transmitted over the network.
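To illustrate the Java-wrapper caveat above: the C++ docs describe `findContours` in terms of `vector<Point>`, while the Java bindings hand back `MatOfPoint` objects, so a rough sketch of a contour-to-bounding-box conversion looks like this (OpenCV 3.x names):

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public class ContourExample {
    // The C++ docs talk about vector<vector<Point>>; in Java each contour is a MatOfPoint.
    public static List<Rect> boundingBoxes(Mat binaryImage) {
        List<MatOfPoint> contours = new ArrayList<>();
        Mat hierarchy = new Mat();
        Imgproc.findContours(binaryImage, contours, hierarchy,
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        List<Rect> boxes = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            boxes.add(Imgproc.boundingRect(contour)); // wraps cv::boundingRect
        }
        return boxes;
    }
}
```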
OK. We need to investigate GRIP further. I don't know whether it will let us draw on the images, but if it can we would just have to use the contours as the input to a step that draws rectangles. Of course that would not allow us to change the color of the rectangles based on our shooting criteria (since those are computed in the robot code), but maybe that is good enough combined with the gamepad feedback? (It seems that we can at least add a step so that GRIP behaves as a streaming device, so I suppose it is possible to display GRIP's output in the SmartDashboard.)
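For reference, a rough sketch of the robot-side half of that idea, assuming GRIP publishes a contours report to NetworkTables under `GRIP/myContoursReport` and using the 2016-era NetworkTables API; the table name, keys, and the alignment threshold are placeholders for whatever our pipeline actually publishes:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class TargetInfo {
    // GRIP's "Publish ContoursReport" step writes parallel arrays, one entry per contour.
    private final NetworkTable grip = NetworkTable.getTable("GRIP/myContoursReport");

    public void update() {
        double[] areas   = grip.getNumberArray("area",    new double[0]);
        double[] centerX = grip.getNumberArray("centerX", new double[0]);

        // Pick the contour with the largest area as the primary target.
        int best = -1;
        for (int i = 0; i < areas.length; i++) {
            if (best < 0 || areas[i] > areas[best]) best = i;
        }

        // Example criterion only: primary target roughly centered in a 320-wide image.
        boolean onTarget = best >= 0 && best < centerX.length
                && Math.abs(centerX[best] - 160) < 10;

        // Publish the decision so the dashboard (and the gamepad feedback) can read it.
        SmartDashboard.putBoolean("readyToShoot", onTarget);
    }
}
```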
I don't think the crosshairs give us much, really. I just thought they would look cooler. I think for actual game play we need to rely on our targeting logic and on telling the drive team when to shoot...
OK. I played a little with GRIP (you can download it by itself and use a webcam as input). It is a cool concept, but like the other FRC tools it lacks proper documentation. I have most of the steps in place to display the contours of the targets in the SmartDashboard... but not all of them... so it might be a dead end.
A workaround is to download the GRIP SmartDashboard Extension from https://github.com/WPIRoboticsProjects/GRIP-SmartDashboard/releases
Okay, now I am realizing what you are envisioning with this. I have renamed the issue to account for that. This would be good.
Thanks. Ideally we would not only highlight the targeted goal but also provide an indication that it is ready to be shot. I don't know, however, how we could provide that indication, as it would require overlaying something on the video stream dynamically. In its current implementation the GRIP SmartDashboard extension will not show that the goal is ready to be shot, and it will not even show which target the robot is aiming at if multiple targets are found. I have therefore created a request for enhancement at https://github.com/WPIRoboticsProjects/GRIP-SmartDashboard/issues asking that the target with the largest area be highlighted more than the others (as we always consider the target with the largest area to be the primary target).
Not sure if the source for the GRIP smartdash plugin is released for you to modify, but if not, creating a custom widget is fairly simple. The one we made in 2014 is on GitHub.
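For what it's worth, here is a rough sketch of the drawing part such a widget would need, using plain Swing plus the 2016-era NetworkTables client API. Video decoding is left out since the GRIP extension and the 2014 widget already show that part, the table and key names are assumptions about what our GRIP pipeline publishes, and wiring the panel into SmartDashboard as a widget would follow the 2014 project's structure:

```java
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JPanel;
import javax.swing.Timer;
import edu.wpi.first.wpilibj.networktables.NetworkTable;

// Paints the GRIP bounding boxes and turns the border green when the robot
// reports that it is ready to shoot. Assumes SmartDashboard has already set
// up the NetworkTables client connection to the robot.
public class TargetOverlayPanel extends JPanel {
    private final NetworkTable grip = NetworkTable.getTable("GRIP/myContoursReport");
    private final NetworkTable dash = NetworkTable.getTable("SmartDashboard");

    public TargetOverlayPanel() {
        new Timer(50, e -> repaint()).start(); // redraw about 20 times per second
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        double[] x    = grip.getNumberArray("centerX", new double[0]);
        double[] y    = grip.getNumberArray("centerY", new double[0]);
        double[] w    = grip.getNumberArray("width",   new double[0]);
        double[] h    = grip.getNumberArray("height",  new double[0]);
        double[] area = grip.getNumberArray("area",    new double[0]);

        // Primary target = largest area; draw it thicker than the others.
        int best = -1;
        for (int i = 0; i < area.length; i++) {
            if (best < 0 || area[i] > area[best]) best = i;
        }
        g.setColor(Color.GREEN);
        for (int i = 0; i < x.length && i < y.length && i < w.length && i < h.length; i++) {
            int left = (int) (x[i] - w[i] / 2), top = (int) (y[i] - h[i] / 2);
            g.drawRect(left, top, (int) w[i], (int) h[i]);
            if (i == best) g.drawRect(left - 1, top - 1, (int) w[i] + 2, (int) h[i] + 2);
        }

        // Border color comes from the readyToShoot flag published by the robot code.
        boolean ready = dash.getBoolean("readyToShoot", false);
        g.setColor(ready ? Color.GREEN : Color.RED);
        g.drawRect(0, 0, getWidth() - 1, getHeight() - 1);
    }
}
```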
The source is available at https://github.com/WPIRoboticsProjects/GRIP-SmartDashboard/tree/master/src/main/java/edu/wpi/grip/smartdashboard. You are right that we could customize it... I will look at the 2014 project.
OK, nice widget. So in 2014 the autonomous mode did not have access to target info (as it was computed on the PC rather than the robot itself)?
All the info was added to the network table, both so it could be displayed as a separate widget and so it was accessible to the robot.
Ah, very nice! This year we use GRIP to replace what the 2014 widget was doing (i.e. compute what to put in the network tables)... but we are missing the custom rendering to the SmartDashboard part. So we can either use the GRIP extension as an easy, not-too-bad approach or indeed write our own widget (more work, and I'm not sure we will have volunteers for it). Thanks.
Why not just display a green border around the video panel? We can probably do this by adding a larger green square behind the video. That would probably take less time to build, and it is more visible to the drive team.... I really think we need our display feedback to be as binary as possible: it should either say "yes, shoot now!" or "no, can't shoot." I am worried that trying to interpret the video screen could be distracting and slow us down during a match. This is also why I want to rumble the gamepad, so there is no confusion about when to take the shot...
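Both of those are cheap to wire up on the robot side; a sketch of what I mean, using the WPILib Java API as I remember it for 2016 (the joystick port and the key name are placeholders, and `RumbleType` has moved between WPILib versions, so check the season's javadoc):

```java
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class ShotFeedback {
    private final Joystick gamepad = new Joystick(0); // driver station port is an assumption

    // Call this periodically (e.g. from teleopPeriodic) with the targeting logic's result.
    public void signal(boolean readyToShoot) {
        // Dashboard side: the widget/extension can read this flag to flip the border green.
        SmartDashboard.putBoolean("readyToShoot", readyToShoot);

        // Gamepad side: rumble while the shot is lined up, stop as soon as it is not.
        float strength = readyToShoot ? 1.0f : 0.0f;
        gamepad.setRumble(Joystick.RumbleType.kLeftRumble, strength);
        gamepad.setRumble(Joystick.RumbleType.kRightRumble, strength);
    }
}
```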
OK
good enough, closing
Note: if this cannot be done directly in the SmartDashboard, it might require streaming the edited video, with the crosshair burned in, back from GRIP.