GurliGebis closed this issue 3 months ago
Just checked the Jenkinsfile for the two branches: https://github.com/dd010101/vyos-build/blob/equuleus/packages/linux-kernel/Jenkinsfile https://github.com/dd010101/vyos-build/blob/sagitta/packages/linux-kernel/Jenkinsfile
It looks like the equuleus one is completely different from all the other packages (I only checked a few) - do we need to change it in your vyos-build repo to use that one instead?
Commit where they changed it for current (back before sagitta was branched out from current): https://github.com/dd010101/vyos-build/commit/d127e81f0ca1ad7d358f376c0112a5c9c4d16a88
linux-kernel equuleus does build after indexing by itself - I can confirm this isn't the case: I deleted the linux-kernel job, added it back as a new job, and branch indexing didn't trigger a build.
linux-kernel equuleus uses the wrong vyos-build docker container - I can confirm this isn't the case either, I see that the build pulls what it should:
docker pull 172.17.17.17:5000/vyos/vyos-build:equuleus
equuleus: Pulling from vyos/vyos-build
Digest: sha256:418c6e91565a12a3ead7f59ac8fab7a9eaebdef97c07e178ef4561c2a3e2c925
Status: Image is up to date for 172.17.17.17:5000/vyos/vyos-build:equuleus
172.17.17.17:5000/vyos/vyos-build:equuleus
I also don't see the point of replacing the Jenkinsfile just because it's different - if there is something wrong then we should fix it, not replace it, otherwise merging future changes becomes harder.
Is there anything wrong with the Jenkinsfile though?
Why would the build start by itself? Well, I don't know, but I saw this before, and I originally thought that Jenkins launches the first build automatically - but it doesn't, well, most of the time anyway. I saw this "auto build" with different random packages (it wasn't the kernel for equuleus); it was rare, but it did happen.
Why would the build pull the wrong vyos-build docker container? I can only guess, but I would say that makes sense if you don't have your local vyos-build built yet - then docker will fall back to Docker Hub.
That's why I would like to know if you can reproduce this - if you create another linux-kernel pipeline (by hand), will it launch a build every time? Like three times in a row? It doesn't matter whether you add it by hand or via the script, so for debugging you can use the Jenkins GUI.
Here you see fresh first branch indexing of linux-kernel equuleus and it doesn't trigger build. It knows the condition wasn't met:
Jenkins should launch an auto-build only if it detects that:
1) a new commit happened (it compares the previous last commit hash with the current one)
2) the commit contains a changed file matching a specific pattern, and this pattern is specified in the Jenkinsfile in question by the changeset pattern
- https://github.com/dd010101/vyos-build/blob/equuleus/packages/linux-kernel/Jenkinsfile#L83-L84
Both of those conditions need to be true, thus I can't explain why any package would do that when added, and I don't think you can repeat it. The linux-kernel has these conditions as well - the same conditions as the other packages. The only other option is triggeredBy cause: "UserIdCause" - if the user launches the build.
Maybe there is another possibility - if the last commit matched the changeset pattern, then it could trigger the build even though no new commit happened, since Jenkins runs for the first time and thus has nothing to compare with. But the last commit doesn't match the patterns, so this doesn't explain what you see - it could explain what I saw in the past though. I'm not sure if Jenkins has a condition for this case where it would skip because the previous hash is null.
What we really want is to not do any actual build under any condition, or to queue the actual build after the indexing. The latter I don't know how to do, and the former I do, but that would require modifying every standalone Jenkinsfile and the shared library, and that's a lot of changes.
I do have the containers built - it was run using the vyos-build-container job first, after that I imported the jobs. Waiting for the initial provisioning of jobs is easy enough, my install script does that already. The problem is that the initial import causes the build for some reason. I will try wiping the Jenkins install and retry, just in case something is wrong, and get back.
Running docker image ls confirms the image already exists on the machine (the one I built), so I don't know why Jenkins does this - it is strange.
The pull means Jenkins didn't have the Declarative Pipeline (Docker) setting configured, I guess. Otherwise you would see:
docker pull 172.17.17.17:5000/vyos/vyos-build:equuleus
instead of yours:
docker pull vyos/vyos-build:equuleus
The pull should be skipped normally because the condition isn't met:
Stage "Define Agent" skipped due to when conditional
So the pull points to wrong settings, but it shouldn't run at all - and it doesn't run for me.
You don't need to wipe everything - just add linux-kernel2 as a copy of linux-kernel, and that's the same thing as you would do with the script - the branch indexing gets triggered on the empty pipeline.
What I saw was for sure a fluke - I was never able to repeat it again with the same job - it happened randomly.
You could modify all the conditions to include, for example, an environment variable that would neutralize the build the same way ARM64_BUILD_DISABLED works - you would set DISABLE_ALL_BUILDS=true before you add the jobs, and once you know the indexing is done, set DISABLE_ALL_BUILDS=false. This way there is no way to trigger the build by fluke - even if it gets triggered, it will be skipped due to the variable. But this isn't one condition in one place - at least 5 or more places have this condition - so if there is another global way without modifying the source code, that would be better.
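In the Jenkinsfiles this would be a `when` condition next to the existing changeset checks; as a rough shell sketch of the principle (the `run_build` function and its output are illustrative, not the real pipeline code):

```shell
#!/bin/bash
# Illustrative sketch of the DISABLE_ALL_BUILDS guard: every build
# entry point bails out early while the flag is set, so a fluke
# trigger during indexing does nothing.
run_build() {
    if [ "${DISABLE_ALL_BUILDS:-false}" = "true" ]; then
        echo "skipped $1 (DISABLE_ALL_BUILDS)"
        return 0
    fi
    echo "built $1"
}

DISABLE_ALL_BUILDS=true
run_build linux-kernel   # prints: skipped linux-kernel (DISABLE_ALL_BUILDS)
DISABLE_ALL_BUILDS=false
run_build linux-kernel   # prints: built linux-kernel
```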
I'll take a look at the settings - these are the images on the machine:
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
172.17.17.17:5000/vyos/vyos-build equuleus 7adf8e5adeca 5 hours ago 4.34GB
vyos/vyos-build equuleus 7adf8e5adeca 5 hours ago 4.34GB
vyos/vyos-build current 5e1a58e14e81 6 hours ago 2.95GB
172.17.17.17:5000/vyos/vyos-build current 5e1a58e14e81 6 hours ago 2.95GB
vyos/vyos-build sagitta d3b28d56cd86 6 hours ago 2.48GB
172.17.17.17:5000/vyos/vyos-build sagitta d3b28d56cd86 6 hours ago 2.48GB
registry 2.7 b8604a3fe854 2 years ago 26.2MB
Indeed you are right - I haven't gotten Declarative Pipeline (Docker) set up.
One more step to add to my install script - luckily, once I'm done, no one will have to repeat my mistakes.
Is the random automatic build even an issue then? If you correct the docker pull, then the early build isn't really a problem, right?
It depends, I'll test it soon (right now I'm using hostapd to verify that it can build packages). The idea is that it provisions all the jobs, waits for them to be done, and once they are done, it starts X jobs at a time (where X is the number of CPU cores and executors on the server). So if it starts to build the kernel, that will slow down that part quite a bit - but I'll see if it works now and get back to you.
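A minimal shell sketch of that batching idea, assuming `nproc` for the core count and a plain `echo` as a stand-in for triggering one Jenkins job and waiting for it:

```shell
#!/bin/bash
# Run at most CPU_CORES "builds" at a time; xargs -P caps the number
# of concurrent workers. The echo is a stand-in for the real trigger.
CONCURRENT_JOBS_COUNT="$(nproc)"
printf '%s\n' frr hostap dropbear ethtool |
    xargs -P "$CONCURRENT_JOBS_COUNT" -I {} sh -c 'echo "building {}"'
```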
If it does, I'm 95% done with the installer, up to the point of it resulting in all packages being built at the end. Then I just need a few steps, so you can ask it to build an ISO for you.
So currently you wait for all indexing to complete before triggering the build step? Thus an unexpected complication would make the indexing step take long?
There is also the option of a different scheduling principle to think about - being state-aware. One could track which package completed indexing, and start the build right after the indexing for that package is completed - per package - this way there is no waiting. One could limit it so that X concurrent indexings/builds are active at one time. Ideally one would track both whether indexing was done and whether the package was built by the indexing, and then one wouldn't even care if the build triggered by itself. What about adding a report of which builds failed as well? Maybe even retry a failed build a second time to give it a chance to fix itself? Should I be the one who writes such a scheduler?
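A rough shell sketch of that per-package idea (both helper functions are hypothetical stubs - in a real setup they would talk to the Jenkins API):

```shell
#!/bin/bash
# State-aware scheduling sketch: as soon as a package finishes
# indexing, its build is queued; a failed build gets one retry and
# is then reported.
indexing_done() { true; }            # stub: has branch indexing finished for $1?
start_build() { echo "build $1"; }   # stub: queue and wait for the build of $1

build_with_retry() {
    if ! start_build "$1"; then
        start_build "$1" || echo "FAILED $1" >&2
    fi
}

for pkg in frr hostap dropbear; do
    if indexing_done "$pkg"; then
        build_with_retry "$pkg"
    fi
done
```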
I think it would make sense for me to finish what I'm working on, and then once I'm done, you can take it and run with it - rewrite it if you want.
But yes, I'm waiting for them to finish provisioning before I trigger the actual initial build.
But all that is a job for another day - I'm off for today.
It seems to have resolved itself.
Here is a small treat - I can provision all the packages and wait for them to finish branch indexing:
Please enter you username here: GurliGebis
Please enter your Jenkins token here: REDACTED
Testing jenkins connection: OK
Provisioning jobs in Jenkins...
aws-gateway-load-balancer-tunnel-handler: OK
ddclient: OK
dropbear: OK
ethtool: OK
frr: OK
hostap: OK
hsflowd: OK
hvinfo: OK
ipaddrcheck: OK
iproute2: OK
isc-dhcp: OK
keepalived: OK
libnss-mapuser: OK
libnss-tacplus: OK
libpam-radius-auth: OK
libvyosconfig: OK
linux-kernel: OK
live-boot: OK
mdns-repeater: OK
minisign: OK
ndppd: OK
netfilter: OK
ocserv: OK
opennhrp: OK
openvpn-otp: OK
owamp: OK
pam_tacplus: OK
pmacct: OK
pyhumps: OK
python3-inotify: OK
radvd: OK
strongswan: OK
telegraf: OK
udp-broadcast-relay: OK
vyatta-bash: OK
vyatta-biosdevname: OK
vyatta-cfg-firewall: OK
vyatta-cfg-qos: OK
vyatta-cfg-quagga: OK
vyatta-cfg-system: OK
vyatta-cfg-vpn: OK
vyatta-cfg: OK
vyatta-cluster: OK
vyatta-config-mgmt: OK
vyatta-conntrack: OK
vyatta-nat: OK
vyatta-op-firewall: OK
vyatta-op-qos: OK
vyatta-op-vpn: OK
vyatta-op: OK
vyatta-wanloadbalance: OK
vyatta-zone: OK
vyos-1x: OK
vyos-cloud-init: OK
vyos-http-api-tools: OK
vyos-nhrp: OK
vyos-opennhrp: OK
vyos-strongswan: OK
vyos-user-utils: OK
vyos-utils: OK
vyos-world: OK
vyos-xe-guest-utilities: OK
wide-dhcpv6: OK
Waiting for jobs to be provisioned in Jenkins...
Jobs has been provisioned.
root@vyos-tester:~/vyos-jenkins#
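The "waiting for jobs to be provisioned" step can be sketched as a generic polling helper (the check command is an assumption - in practice it would be e.g. a curl against the Jenkins API):

```shell
#!/bin/bash
# Retry a check command once per second until it succeeds or the
# timeout (in seconds) runs out.
wait_for() {
    local timeout="$1"; shift
    while [ "$timeout" -gt 0 ]; do
        "$@" && return 0
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

# "true" stands in for a real check, e.g. an HTTP request to Jenkins.
wait_for 5 true && echo "Jobs have been provisioned."
```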
I have btw. created a PR that updates the jobs.json file to only contain the actual branches that are being built: #22
Good job! Looks like you are close to having everything? What are you still missing / wanting to do?
My todo list consists of:
And also a final script to set up nginx (that's the easy part), so the ISO builder can find the deb packages.
It looks like the equuleus one is completely different from all other packages (only checked a few) - do we need to change it in your vyos-build repo, and change it to use that one instead?
As luck would have it - the Jenkinsfile in sagitta is the wrong one. Look at the Jenkinsfile of equuleus: you can see two changeset patterns, one **/packages/linux-kernel/* and the other **/data/defaults.json. When they ported this non-standard Jenkinsfile to the standard buildPackage, they met a challenge - buildPackage doesn't support multiple changeset patterns - so how did they solve that? They didn't: they used only the first pattern, and that's wrong. Today the sagitta ISO build failed for me, since it didn't find the kernel it expected, and sure enough, they did update the kernel to 6.6.33 - but that matches only the second pattern they dropped, so Jenkins didn't know about it, and I had the previous 6.6.32 instead... Isn't that a funny coincidence?
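The lost two-pattern logic can be illustrated in shell terms (the file paths and the function name are made up for the example; Jenkins does this matching internally via the changeset patterns):

```shell
#!/bin/bash
# Illustration only: a kernel build should trigger when a changed
# file matches EITHER pattern, not just the first one.
matches_kernel_changeset() {
    case "$1" in
        */packages/linux-kernel/*|*/data/defaults.json) return 0 ;;
        *) return 1 ;;
    esac
}

for f in vyos-build/data/defaults.json vyos-build/packages/frr/Jenkinsfile; do
    if matches_kernel_changeset "$f"; then
        echo "would trigger: $f"
    fi
done
```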
I'm at a loss for words
I did find a workaround for the linux-kernel sagitta changeset issue, so now linux-kernel needs to use the fork (https://github.com/dd010101/vyos-build.git) as well. I don't have an easy way to test this - we shall see!
Nice!
Check this out - building in batches....
Please enter you username here: GurliGebis
Please enter your Jenkins token here: REDACTED
[ Completed ] Package: libnss-tacplus - Branch: sagitta
[ Completed ] Package: vyatta-op-vpn - Branch: equuleus
[ Completed ] Package: vyatta-bash - Branch: equuleus
[ Completed ] Package: vyatta-bash - Branch: sagitta
[ Completed ] Package: radvd - Branch: sagitta
[ Completed ] Package: vyatta-config-mgmt - Branch: equuleus
[ Completed ] Package: vyos-cloud-init - Branch: equuleus
[ Completed ] Package: vyos-cloud-init - Branch: sagitta
[ Completed ] Package: telegraf - Branch: equuleus
[ Completed ] Package: telegraf - Branch: sagitta
[ Completed ] Package: pam_tacplus - Branch: sagitta
[ Completed ] Package: minisign - Branch: equuleus
[ Completed ] Package: hsflowd - Branch: sagitta
[ Completed ] Package: vyatta-cfg-quagga - Branch: equuleus
[ Completed ] Package: ddclient - Branch: sagitta
[ Completed ] Package: vyatta-cfg-firewall - Branch: equuleus
[ Completed ] Package: opennhrp - Branch: sagitta
[ Completed ] Package: vyatta-biosdevname - Branch: equuleus
[ Completed ] Package: vyatta-biosdevname - Branch: sagitta
[ Completed ] Package: vyatta-cfg - Branch: equuleus
[ Completed ] Package: vyatta-cfg - Branch: sagitta
[ Completed ] Package: iproute2 - Branch: equuleus
[ Completed ] Package: vyatta-conntrack - Branch: equuleus
[ Completed ] Package: mdns-repeater - Branch: equuleus
[ Completed ] Package: keepalived - Branch: equuleus
[ Running ] Package: keepalived - Branch: sagitta
[ Running ] Package: netfilter - Branch: equuleus
[ Running ] Package: netfilter - Branch: sagitta
[ Completed ] Package: vyatta-cfg-system - Branch: equuleus
[ Completed ] Package: vyatta-cfg-system - Branch: sagitta
(It is a snapshot of how far along it is - it takes MAX(4, CPU_CORES) tasks at a time.)
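That MAX(4, CPU_CORES) default could look roughly like this in shell (a sketch assuming `nproc`, not the actual script code):

```shell
#!/bin/bash
# Never fewer than 4 concurrent tasks, more if the machine has
# more cores.
cores="$(nproc)"
if [ "$cores" -gt 4 ]; then
    CONCURRENT_JOBS_COUNT="$cores"
else
    CONCURRENT_JOBS_COUNT=4
fi
echo "Concurrent jobs: $CONCURRENT_JOBS_COUNT"
```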
All packages build fine using this method.
These are the kernel images I get out of it today (with it being built from your repo):
./public_html/repositories/sagitta/pool/main/v/vyos-linux-firmware
./public_html/repositories/sagitta/pool/main/v/vyos-linux-firmware/vyos-linux-firmware_20231211_all.deb
./public_html/repositories/sagitta/pool/main/l/linux-upstream
./public_html/repositories/sagitta/pool/main/l/linux-upstream/linux-headers-6.6.33-amd64-vyos_6.6.33-1_amd64.deb
./public_html/repositories/sagitta/pool/main/l/linux-upstream/linux-libc-dev_6.6.33-1_amd64.deb
./public_html/repositories/sagitta/pool/main/l/linux-upstream/linux-image-6.6.33-amd64-vyos_6.6.33-1_amd64.deb
./public_html/repositories/equuleus/pool/main/v/vyos-linux-firmware
./public_html/repositories/equuleus/pool/main/v/vyos-linux-firmware/vyos-linux-firmware_20201218_all.deb
./public_html/repositories/equuleus/pool/main/w/wireguard-linux-compat
./public_html/repositories/equuleus/pool/main/w/wireguard-linux-compat/wireguard-modules_1.0.20201112-1~bpo10+1_all.deb
./public_html/repositories/equuleus/pool/main/l/linux-5.4.268-amd64-vyos
./public_html/repositories/equuleus/pool/main/l/linux-5.4.268-amd64-vyos/linux-libc-dev_5.4.268-1_amd64.deb
./public_html/repositories/equuleus/pool/main/l/linux-5.4.268-amd64-vyos/linux-image-5.4.268-amd64-vyos_5.4.268-1_amd64.deb
./public_html/repositories/equuleus/pool/main/l/linux-5.4.268-amd64-vyos/linux-headers-5.4.268-amd64-vyos_5.4.268-1_amd64.deb
./public_html/repositories/equuleus/pool/main/l/linux-5.4.268-amd64-vyos/linux-tools-5.4.268-amd64-vyos_5.4.268-1_amd64.deb
Check this out - building in batches....
Very nice!
Could you make CONCURRENT_JOBS_COUNT overridable by the user, and include a note in the echo "Concurrent jobs: $CONCURRENT_JOBS_COUNT" output that the user can change it? For example if the user's system is too slow to handle the calculated number?
Sure, I'll write a note about it - I'll let the user set it using an environment variable.
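A shell-parameter-expansion default is the usual way to make such a variable overridable (sketch only; the fallback calculation is simplified to `nproc` here):

```shell
#!/bin/bash
# Use the user's CONCURRENT_JOBS_COUNT if set, otherwise fall back
# to the calculated default.
CONCURRENT_JOBS_COUNT="${CONCURRENT_JOBS_COUNT:-$(nproc)}"
echo "Concurrent jobs: $CONCURRENT_JOBS_COUNT (set CONCURRENT_JOBS_COUNT to override)"
```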
Done!
That's it for now - it might be a few days before I get time to work on the next steps. But the next step is NGINX - we're getting close to the finish line now.
@dd010101 look what I got:
####################################
# Unofficial VyOS ISO builder v1.0 #
####################################
Please enter which branch you want to build (equuleus or sagitta): sagitta
Please enter your email address: REDACTED
Cloning the VyOS build repository...
Checking out the sagitta branch...
Downloading apt signing key...
Building the ISO...
ISO build is complete.
The file is called: vyos-1.4-release-20240617-iso-amd64.iso.
Cleaning up...
Nice, we can continue in the other thread if everything is done here!
Just imported the jobs using the seed jobs bash file.
For sagitta, it works just fine, but for equuleus, when it does branch indexing, it starts pulling the non-locally-built vyos-build container and building the package.
Here is a shortened output of the branch indexing:
At this point I interrupted it (I let it run earlier, and it started to compile everything after doing the pull).
How can we fix it so that it just indexes (like it should), instead of pulling the docker image and building the package (it shouldn't do either)?
Here is the output from branch indexing for the sagitta branch: