Closed: ozrien closed this issue 3 years ago.
Omar, is there anything we as a community can do to help prevent this from being another kickoff release of an entire new API?
"...entire new api"
@schreiaj API hasn't changed, I was very careful to point that out. It's just compiled multiple times for different platforms. We've added API for new configs, but I don't believe we've broken anything.
In other words, all the heavy lifting was done last summer.
The only wrinkle is https://github.com/CrossTheRoadElec/Phoenix-frc-lib/issues/22 but we may just end up not changing anything for simplicity/compatibility.
But weigh in on the tracker so we can make the best decision based on feedback.
"Omar, is there anything we as a community can do"
@schreiaj We cut installers in the off season, just like always. You can try/test/provide feedback anytime.
@ozrien - Can we potentially have a master jar: a single dependency that contains all the libraries?
The reasoning is that this simplifies things a lot for teams that use CI systems (one large library instead of four to set up dependencies for).
@JamieSinn What's funny is that some would prefer separate libs; see the request here.
What is the effort in adding more libs to your CI? I thought it was just adding another maven path to the pull list. Maybe some CIs are more difficult than others.
@ozrien Understood "no api changes". But it's also things like build systems, import paths, and documentation... mostly documentation. That's what got me last year.
I'll actually have access to hardware this off season so hopefully I can work with Marshall to do some testing.
Thanks.
@schreiaj Build systems should be less of a problem now that GradleRIO is official. compile ctre()
can be changed to encompass multiple dependencies if/when the library is split and there isn't a master archive.
@ozrien - yes, that's basically the idea but for us we also have a deploy task to a rio. It complicates things regarding what we shade into the final jar and what we don't.
Keeping it an option for teams is something I think should be considered.
Just a thought.
If you're taking requests, we'd really love a way to run this on a linux-arm64 (Jetson TX1/TX2) target. I'm not sure if there's a way to hack the armhf libs into binaries built on the Jetson, but it would be a lot less painful not to have to.
If we haven't already, I'm sure we can loan / give remote access to / donate a TX1 or TX2 system to build on if that's the deciding factor.
@kjaget We are interested in the TX2, and we will try targeting linux-arm64. I don't remember what the CAN bus looks like on it; the first milestone would probably be TX2 + SocketCAN because it's easy. If that works, we'd target the native CAN bus controller next. For non-FRC platforms, there is a platform library that implements the CAN hooks, so someone would have to write a platform library that connects the Phoenix CAN calls to the native controller, which shouldn't be too bad.
I might want to borrow a TX2 at that point :)
Looks good, Omar. Thanks for taking the suggestion and acting on it; you're all stars.
Based on the above, this is probably what the changes will look like in GradleRIO (very tentative):
phoenix_cci() // Includes the Phoenix CCI (package member for Java, linked lib for Native)
phoenix_api() // Depends on phoenix_cci()
phoenix_robotics() // Depends on phoenix_api()
From there, we'll have some composites to make it a bit easier (these are what will be seen in most people's build scripts):
phoenix() // phoenix_api() + Phoenix WPIAPI
phoenix_all() // phoenix() + phoenix_robotics()
Or, visually: [dependency graph image]
@ozrien do you see any problems in the above dependency graph?
@JamieSinn Hopefully the above puts your mind at ease about the user-facing changes in build-system land. Most users will probably just pull in phoenix() or phoenix_all().
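Assuming the composite names above ship as proposed (everything here is tentative, so treat this as a sketch rather than final GradleRIO syntax), a team's build.gradle dependency block might end up looking like:

```groovy
dependencies {
    compile wpilib()
    compile phoenix()        // Phoenix WPIAPI + phoenix_api() + phoenix_cci()
    // compile phoenix_all() // adds phoenix_robotics() on top of phoenix()
}
```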
Phoenix Diagnostics, it seems, will probably be handled separately, similar to how we install the JDK, since it's been noted above that it doesn't need linking into the user program.
CAN bus on arm64 should be the same as it is on armhf. Linux kernel drivers for CAN make this easy-ish. Kevin can probably help with a TX2 though. ;)
Windows + simulation (with software simulation adapter)
Linux desktop + simulation (with software simulation adapter)
What will that simulation adapter look like?
Also, wpilib has Mac Desktop distributables, this probably should too.
Closing, as our dependency chain has been fairly stable for the past couple of years and our multi-platform builds are solid. Current platforms are:
Current dependency tree (athena) is:
Phoenix WPIAPI -> Phoenix API -> Phoenix CCI -> Diagnostics -> Core
For all other platforms, Diagnostics and above also depend on Platform and Canutils.
Phoenix-frc-lib is going to be compiled to support multiple platforms. The goal is to target...
The closed source CCI binary will also be compiled for these platforms, binaries+headers will be available for download (maven and/or launchpad-ppa and/or something else).
For clarity, this repo and its corresponding binary will be renamed "Phoenix frc lib" => "Phoenix api lib", since this will support other platforms beyond FRC.
This also means that devices such as the Talon SRX can be updated/controlled outside of the roboRIO via a Raspberry Pi (or similar) while the roboRIO provides the FRC-specific enable required by FRC firmware. This is similar to the HERO + roboRIO example found here (replace the HERO with whatever you want).
This is achieved by moving the platform-specific aspects (robot enable, CAN interface) into a separate library when compiling for non-FRC platforms, without changing any of the user-facing API.
Library breakout
frc-lib will likely be broken up into multiple libraries.
In the case of nonFRC, the breakout will likely be...
In the case of FRC, the breakout will likely be...
Most FRC teams will likely not care about the library breakout. Whether it's two libs (as it is now) or four (as proposed), most teams will likely select everything. However, it should be possible to select only what you want, depending on the installation strategy.