Closed jmackay2 closed 3 years ago
It took me a bit (I had never used GitHub CI before), but I was able to create a GitHub Action that downloads the drivers from IDS and builds the package.
How it works
industrial_ci offers a parameter called BEFORE_INIT. As the download URL for the IDS suite is static, no repackaging (manual extraction of the headers) for unofficial builds is needed - or am I missing something about why the unofficial builds were created in the first place? I have already prepared the script so it can download other IDS versions as well (in case we would like to test multiple or specific versions).
Quick demo
For playing around, I currently have it set up in my fork.
What's next?
This is surely just a first step towards a proper CI, but I think it could serve as a good base to work from. If you agree, I can integrate the workflow here as well - suggestions and improvements to the current version are welcome.
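For reference, a minimal sketch of what such a workflow could look like. BEFORE_INIT is a real industrial_ci hook, but the script path and job layout here are illustrative and will differ from the actual fork:

```yaml
# Sketch only: the download script name below is a placeholder.
name: CI
on: [push, pull_request]
jobs:
  industrial_ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: 'ros-industrial/industrial_ci@master'
        env:
          ROS_DISTRO: noetic
          BEFORE_INIT: './.github/download_ids_drivers.sh'
```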
Open questions
Should we include the ROS testing branch of each ROS distro as well?

I think you are complicating things a bit. I already have a CMake module for downloading the thin drivers, and another one for downloading the official drivers. We just need to switch to the official one and use your new URL. That should build just fine. Leave the CI to ROS; they use Jenkins to automatically verify every commit.
We need the url for both amd64 and armhf though
I already have a cmake module
Sure, it makes sense to update the CMake modules so the release can also be tested on the ROS CI Jenkins side for validation. Nonetheless, I would welcome an automated build check in this repo as well, so we can always see whether master builds correctly, and PRs can be sanity-checked before anybody has to start reviewing them. As soon as you fix the main CMake file and the modules, we could still integrate the GitHub Action provided above (just without the BEFORE_INIT bash script), in case you agree to have this kind of build check here as well.
We just need to switch to the official one and use your new url.
The flag --auto should help automate the process when running in CI, here:
https://github.com/anqixu/ueye_cam/blob/858385c05d8d5a39852eabdedda834007adda91b/cmake_modules/DownloadUEyeDrivers.cmake#L49
I am new to CIs, but I think some (especially Docker containers) run directly as root in a minimal environment, where sudo might not be available. Maybe an additional check (whether we are already root) is required?
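As a sketch of such a check (the helper name maybe_sudo is mine, not from the repo; POSIX sh assumed):

```shell
#!/bin/sh
# Run a command with sudo only when not already root, since minimal
# CI/Docker images often run as root and do not ship sudo at all.
maybe_sudo() {
  if [ "$(id -u)" -eq 0 ]; then
    "$@"                                  # already root: run directly
  elif command -v sudo >/dev/null 2>&1; then
    sudo "$@"                             # regular user: elevate
  else
    echo "error: need root or sudo for: $*" >&2
    return 1
  fi
}

maybe_sudo echo "driver install would run here"
```

The CMake module could invoke the install command through a wrapper like this instead of hard-coding sudo.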
We need the url for both amd64 and armhf though
# x86, 32 bit
https://en.ids-imaging.com/files/downloads/ids-software-suite/software/linux-desktop/ids-software-suite-linux-4.94-32.tgz
# x86_64 (amd64)
https://en.ids-imaging.com/files/downloads/ids-software-suite/software/linux-desktop/ids-software-suite-linux-4.94-64.tgz
# ARMv7, 32 bit, hard float (armhf)
https://en.ids-imaging.com/files/downloads/ids-software-suite/software/linux-armhf/ids-software-suite-linux-armhf-4.94.tgz
# ARMv8, 64 bit (arm64)
https://en.ids-imaging.com/files/downloads/ids-software-suite/software/linux-arm64/ids-software-suite-linux-arm64-4.94.tgz
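To pick between them automatically, something along these lines could work (a sketch; it assumes the 4.94 URLs above stay valid and Debian-style architecture names):

```shell
#!/bin/sh
# Map a Debian-style architecture name to the matching IDS Software Suite
# 4.94 download URL (URLs taken from the list above).
ids_url() {
  base=https://en.ids-imaging.com/files/downloads/ids-software-suite/software
  case "$1" in
    amd64|x86_64)  echo "$base/linux-desktop/ids-software-suite-linux-4.94-64.tgz" ;;
    i386|i686)     echo "$base/linux-desktop/ids-software-suite-linux-4.94-32.tgz" ;;
    armhf|armv7l)  echo "$base/linux-armhf/ids-software-suite-linux-armhf-4.94.tgz" ;;
    arm64|aarch64) echo "$base/linux-arm64/ids-software-suite-linux-arm64-4.94.tgz" ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Detect the host architecture and print the matching URL.
ids_url "$(dpkg --print-architecture 2>/dev/null || uname -m)"
```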
Yeah, I now realize that it was because of sudo that we had to do this unofficial installer mess.
I have never used the ROS buildfarm before, and looking at the docs I haven't found an answer to this:
I assume the ROS buildfarm runs as the root user inside the Docker container (not confirmed; the docs were not so clear about this). Why don't we just check who has invoked the CMake module (a regular user or already root) and use sudo only when a normal user calls the install command (and skip it on the buildfarm, i.e. when root)?
It doesn't look like it's root during the build, but rather the user SYSTEM.
https://build.ros.org/job/Mbin_ubv8_uBv8__ueye_cam__ubuntu_bionic_arm64__binary/127/consoleFull
I suppose we could conditionally install the official driver (if we confirm having sudo permissions) versus install the unofficial driver (if we don't). But that seems messier to support in the future...
Even though it's a nuisance to hand-build the updated driver packages presently, it hopefully should work. Given my limited work time, I'm going to merge #103 and release the updated package to ROS, as a blunt way to have the ROS release buildfarm test this fix.
@nullket I quickly scanned through your contribution, and the sample output of your CI. It looks great! It does seem that we are only targeting AMD64 arch, but that's perfectly okay for now (infinitely better than not having any CI).
My (limited) thoughts on your open questions:
I'd recommend master only. I hope that we might be able to manually trigger the action on other branches, but otherwise we can do quick-and-dirty devs+commits on branches other than master.
Similarly to the above, I recommend checking PRs against master only.
Yes, Noetic should be the key target at this point, given that ROS1 is nearing EOL, and the older ROS versions target distros that either have EOL-ed or are about to.
Can you please explain further what this means?
I agree with @anqixu, I would just build master and any PRs going onto master. I am not too sure what is different about the testing branch, but that is the default. I set mine up with both main and testing. It looks like the current Travis setup used both main and testing, as denoted by the ros-shadow-fixed matrix repo.
Thanks for the fix for 4.94 @anqixu !
Should we include the ros branches testing of each ros distro as well?
Can you please explain further what this means?
We can either check against the current released (stable) branch of ROS, or also against the testing branch, which you might know under the name ros-shadow-fixed. In the initially proposed industrial_ci solution (which I stated above) it only takes one more line to also check against the ROS testing repo. More info about the ROS testing repo can be found here.
Whatever tool we use, it might make sense to test against testing as well, as long as it is easy to implement. Concerning the other questions I asked: I totally agree with both of you.
I do not understand the ROS buildfarm completely yet. So you merged your PR; then you manually have to create a new release, and the (non-verbose) output and binary will be pushed to https://github.com/anqixu/ueye_cam-release? I haven't found the right ROS doc for this yet...
Concerning testing of PRs and master with CI: according to this ROS page we have three recommended options: vanilla Travis, ros_buildfarm, and industrial_ci. So I would say we should either use ros_buildfarm, to have everything set up like in the release build environment, or make use of the simplicity of the industrial_ci way. The ROS docs state that it might simplify the process (which is confirmed by my attempt, as the file to set it up is about 40 lines of code including comments).
So the question resulting from this: is someone willing to get CI for PR/master testing working with ros_buildfarm, or do we just use industrial_ci here in GitHub with your updated CMake module? For the sake of less work, I would favor the latter.
Update: with PR https://github.com/anqixu/ueye_cam/pull/104 I have integrated (already in master) industrial_ci, which works fine with the updated CMake module (no additional script needed anymore!).
The setup is really simple, as it is solely based on this file. The output of every test run can be seen here, while new PRs are automatically checked and the result is shown in the PR.
Edit: I have created a new PR to also add a badge to the readme, informing whether the master branch builds without any issues. The PR also demonstrates the build-test functionality for PRs: https://github.com/anqixu/ueye_cam/pull/105
Previously this would build even if the IDS uEye drivers were not installed, by downloading a temporary version of the headers/library. The temporary download was version 4.61, where the new 4.94 EVENT changes do not exist. This means that the temporary build does not work.
It is not really possible to continue using the previous source, which was the data of the ueye node.
This was discovered when setting up GitHub Actions to do build testing, which would be nice to have.