As of last meeting (held Wednesday because of the bad weather), @PizzaFriday was able to see contours of the reflective tape in GRIP. Today, we will:
@PizzaFriday and I came up with the following pipeline, written here in order as pseudocode. We implemented this in the GRIP user interface. Anything marked (D) is the default value.
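Since the actual steps and tuned values live in the GRIP UI, here is only a rough sketch of the shape such a pipeline takes when GRIP generates Java for it (HSV threshold → find contours → filter contours). Every constant below is a placeholder, not one of our tuned numbers, and the class name is made up:

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

/** Sketch of a GRIP-style reflective-tape pipeline; all values are placeholders. */
public class GripPipelineSketch {
    private final List<MatOfPoint> filterContoursOutput = new ArrayList<>();

    public void process(Mat source) {
        // HSV threshold: keep only pixels in the (assumed) tape color range.
        Mat hsv = new Mat();
        Imgproc.cvtColor(source, hsv, Imgproc.COLOR_BGR2HSV);
        Mat threshold = new Mat();
        Core.inRange(hsv, new Scalar(60, 100, 100), new Scalar(90, 255, 255), threshold);

        // Find external contours in the binary image.
        List<MatOfPoint> contours = new ArrayList<>();
        Mat hierarchy = new Mat();
        Imgproc.findContours(threshold, contours, hierarchy,
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Filter contours by area so small noise blobs are discarded.
        filterContoursOutput.clear();
        for (MatOfPoint contour : contours) {
            if (Imgproc.contourArea(contour) >= 100.0) {
                filterContoursOutput.add(contour);
            }
        }
    }

    public List<MatOfPoint> filterContoursOutput() {
        return filterContoursOutput;
    }
}
```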
From the [Vision processing on an Arm coprocessor](http://wpilib.screenstepslive.com/s/4485/m/24194/l/682949) wiki page, @PizzaFriday and I found that the `detect-raspbian` command works, while `detect-hf` does not. This means raspbian is the "proper artifact" and we do not need to build all artifacts for the Pi.
@kiallsop made the boilerplate Main.java that runs on the Pi and sends the image to the SmartDashboard. We've created a new PiVision branch in 5bdedf302a22ebe229d4cd20edc3cebd4ff17afe with his code and added the GRIP pipeline. The new build for the Pi is in 927d52019f5c787a55c4525f0a1d4795ba5d0f5e. Testing it now...
There's still some testing we need to do, but we're looking good right now! :)
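For reference, a minimal sketch of what a Pi-side Main.java like this looks like with the 2017 cscore and NetworkTables APIs, reusing the pipeline sketch above. This is not the actual file from the branch; the ports, resolution, and team number are assumptions, and the build must load the native OpenCV/cscore libraries for it to run:

```java
import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.CvSource;
import edu.wpi.cscore.MjpegServer;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.cscore.VideoMode;
import edu.wpi.first.wpilibj.networktables.NetworkTable;
import org.opencv.core.Mat;

public class Main {
    public static void main(String[] args) {
        // Run NetworkTables as a client so the roboRIO stays the server.
        NetworkTable.setClientMode();
        NetworkTable.setTeam(3182);
        NetworkTable.initialize();

        // USB camera on the Pi, plus an MJPEG server for the raw feed.
        UsbCamera camera = new UsbCamera("usb-cam", 0);
        camera.setVideoMode(VideoMode.PixelFormat.kMJPEG, 320, 240, 30);
        MjpegServer rawServer = new MjpegServer("raw-server", 1185);
        rawServer.setSource(camera);

        // Sink to grab frames into OpenCV, and a source/server pair so the
        // dashboard can view the processed stream (port is an assumption).
        CvSink sink = new CvSink("grab");
        sink.setSource(camera);
        CvSource processed = new CvSource("processed",
                VideoMode.PixelFormat.kMJPEG, 320, 240, 30);
        MjpegServer processedServer = new MjpegServer("processed-server", 1186);
        processedServer.setSource(processed);

        GripPipelineSketch pipeline = new GripPipelineSketch();
        Mat frame = new Mat();
        while (true) {
            if (sink.grabFrame(frame) == 0) {
                continue; // timed out waiting for a frame
            }
            pipeline.process(frame);
            processed.putFrame(frame);
        }
    }
}
```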
Programmers:
- The 3rd post in this thread has example Java code for the roboRIO and coprocessor to write and read NetworkTables variables (see the sketch after this list): https://www.chiefdelphi.com/forums/showthread.php?t=155206
- The coprocessor (Pi) NetworkTables code is very close to what is already in the Main.java Pi code (not the GRIP-generated code).
- There is a debug tool to inspect and change NetworkTables variables: http://wpilib.screenstepslive.com/s/3120/m/7912/l/84115-using-tableviewer-to-see-networktable-values
- It looks like the GRIP contours list
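As a sketch of the write/read pattern that thread describes, using the 2017 NetworkTables API. The table name "vision" and key "centerX" are placeholders (GRIP's own publish step defaults to "GRIP/myContoursReport"); adjust to whatever our pipeline actually publishes:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class NetworkTablesSketch {
    // Pi side (NetworkTables client): publish contour centers each frame.
    public static void publishCenters(double[] centerX) {
        NetworkTable table = NetworkTable.getTable("vision");
        table.putNumberArray("centerX", centerX);
    }

    // roboRIO side (NetworkTables server): read them back with a safe
    // default so the robot code still runs if the Pi hasn't published yet.
    public static double[] readCenters() {
        NetworkTable table = NetworkTable.getTable("vision");
        return table.getNumberArray("centerX", new double[0]);
    }
}
```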
I was hoping to attend for part of tonight but won't be able to. I can support Suffield on Saturday if we go, and would have the minivan which can seat 7 (including driver).
Finish up the vision system on the Raspberry Pi so that it can detect shapes and eventually guide the robot in competition. This issue can be closed when vision works and is integrated into driving.