Let's start with telegraf. Please post your Console Output from Jenkins. If you see a package missing then the build job for that package likely failed, and the Console Output will give you hints why.
Find the telegraf job, then select the sagitta branch, then select the number of the latest build run, and there you should see the option for Console Output, which tells you whether the build failed or what happened. Search for a build run that has an associated SHA1 hash (`Git SHA1: 96c70b0a0bb` for example). There may be other runs without a SHA1 - those aren't actual builds, they are runs for branch indexing, not for building - ignore those.
Another place to check is the uncron service: `journalctl --no-pager -b -u uncron.service`. Do you see mentions of those missing packages there? Do they exit cleanly with `Job exited with code 0`?
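If the log is long, something like this narrows it down (assuming the package names appear verbatim in uncron's log lines):

```bash
# look for the missing packages and their exit status in the uncron log
journalctl --no-pager -b -u uncron.service \
    | grep -E 'telegraf|vyos-xe-guest-utilities|exited with code'
```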
Funny, now repositories/sagitta/pool/main/t/telegraf/telegraf_1.28.3-1_amd64.deb is there, but with a fresh timestamp, built while I was writing this report. It is build #2 (#1 was "Branch indexing" yesterday evening, when most other packages were built). "telegraf" appears only in the latest uncron log, no trace of it earlier - I might have accidentally clicked something in Jenkins to run it. But the other three packages are still missing:

- vyos-xe-guest-utilities - only current and equuleus branches (no sagitta), only Branch indexing yesterday evening, nothing in the uncron log
- vyos-world - equuleus and sagitta branches, only Branch indexing yesterday evening, nothing in the uncron log
- vyos-user-utils - same as vyos-world

For the last two, I clicked the triangle on the right to add the sagitta branch build to the queue, and now the packages are built. Now only vyos-xe-guest-utilities remains missing, and I see it mentioned in the /usr/local/bin/uncron-add script. The script is exactly as shown in the guide and found in $PATH - not sure what is wrong.
One place where the guide might be a bit clearer: for some commands it may not be obvious whether they should be run as root or as an ordinary user, and in which working directory. /var/lib/jenkins is the home directory of user jenkins (yes, I remembered to create user jenkins with UID 1006 first), but there is also /home/sentrium (owned by jenkins) and /home/marekm owned by me (marekm, UID 1000, the first user created during the Debian install, following the common practice of using sudo with my own password while the root password is disabled). There is also a mention of /opt/apt.gpg.key, which is actually /home/sentrium/web/dev.packages.vyos.net/public_html/repositories/apt.gpg.key - there is no mention of /opt anywhere else in the guide, but somehow I have empty directories /opt/containerd/{bin,lib} and /opt/jenkins-cli containing many xml files named after packages and owned by root. Perhaps I did some step in the wrong directory and/or as the wrong user and have some permission issues.
If you see only branch indexing then for some reason the build was never triggered. The `seed-jobs.sh build` should trigger all builds, but there is a possible race condition - if you run `seed-jobs.sh build` too early (before branch indexing is completed) then Jenkins won't build those jobs, since it doesn't have anything to build yet. The easy solution is to do what you did - use the triangle/Build Now for the specific branch of the specific job/package. You only need to worry about triggering the build the first time - the next build is handled automatically by the Jenkins schedule, so if you solve the issue with Build Now then you don't need to do anything else.
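If you'd rather script it than click, the same thing can presumably be done with the Jenkins CLI jar (the tool seed-jobs.sh already uses); the server URL and credentials below are placeholders:

```bash
# trigger the sagitta branch of the telegraf multibranch job via the Jenkins CLI
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN \
    build 'telegraf/sagitta' -f -v   # -f follows the build, -v prints its console output
```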
Can you please run Build Now (the triangle) again and post the Console Output of the new run for the package that is still missing (vyos-xe-guest-utilities)? Also please look for mentions after the run in the log of uncron.service (`journalctl --no-pager -b -u uncron.service`). You should see something from uncron only if the Jenkins build succeeds.
The vyos-xe-guest-utilities is a special case where the team didn't create a proper sagitta branch, thus we need a hack via uncron-add to redirect the vyos-xe-guest-utilities current branch to sagitta - this is the only package with this trait.
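Purely for illustration, the redirect idea amounts to something like this (a minimal sketch, not the actual uncron-add from the guide; the variable names are made up):

```bash
#!/bin/sh
# sketch: vyos-xe-guest-utilities has no sagitta branch, so artifacts of its
# "current" build get published into the sagitta repository instead
PACKAGE="$1"
BRANCH="$2"
if [ "$PACKAGE" = "vyos-xe-guest-utilities" ] && [ "$BRANCH" = "current" ]; then
    BRANCH="sagitta"
fi
# ...hand the job off to uncron with $PACKAGE/$BRANCH as usual...
```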
Everything runs as root unless stated (or shown) otherwise. Most commands need to run as root since we are manipulating the system or other users. If you do a lot of work as root then perhaps `sudo -i` is a better choice than lots of sudo-prefixed commands. Do you think it's better to add sudo everywhere or just state to run as root?
If not stated, the cwd doesn't matter - there should be a cd wherever it does matter.
> /home/sentrium

That's the team's decision; can't do much about that. It doesn't make sense, but they just have it hardcoded in the source code because that's what they are using.
`/opt/containerd` is created by docker.
`/opt/jenkins-cli` is used by seed-jobs.sh. This can run as root or as a user - it doesn't really matter; this is just a working directory for the script, so only the script uses these files. You can delete it afterwards. Perhaps /tmp would be a better place.
`/opt/apt.gpg.key` is used inside the build container when you are building the ISO, thus it is not present on the host.
I don't think you did anything incorrect, since if you had, a lot of it would fail, not just one package - most things are shared. The issue you found doesn't sound like a permissions error. Thus we need to look at the logs for the specific packages to see how far the build gets and why it doesn't create a .deb. Perhaps the uncron-add hack is failing?
OK, vyos-xe-guest-utilities resolved by Build Now on the current branch. A few more packages were missing, probably because the script ran too early; resolved by Build Now too. Now the ISO build stops here:
The following packages have unmet dependencies:
vyos-1x : Depends: fuse-overlayfs but it is not going to be installed
Depends: owamp-client but it is not installable
Depends: owamp-server but it is not installable
Depends: podman but it is not going to be installed
Depends: python3-vici (>= 5.7.2) but it is not installable
Depends: twamp-client but it is not installable
Depends: twamp-server but it is not installable
E: Unable to correct problems, you have held broken packages.
owamp build fails, Jenkins console log says "dpkg-checkbuilddeps: error: Unmet build dependencies: dh-apparmor dh-exec libcap-dev"
python3-vici is missing and I don't see it in Jenkins at all. EDIT: python3-vici resolved - it's part of strongswan; Build Now in Jenkins once again.
The other packages are installed (wrong versions?):
$ dpkg -l podman fuse-overlayfs dh-apparmor dh-exec libcap-dev
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-================-==============-============-=================================================================
ii dh-apparmor 3.0.8-3 all AppArmor debhelper routines
ii dh-exec 0.27 amd64 Scripts to help with executable debhelper files
ii fuse-overlayfs 1.10-1 amd64 implementation of overlay+shiftfs in FUSE for rootless containers
ii libcap-dev:amd64 1:2.66-4 amd64 POSIX 1003.1e capabilities (development)
ii podman 4.3.1+ds1-8+b1 amd64 engine to run OCI-based containers in Pods
The `owamp` dependencies are broken but fixed by my fork of vyos-build (the docker container part) - https://github.com/dd010101/vyos-build/commit/4a6d29550a71560801914efa0b79984830213f6b
What vyos-build docker container are you using? Are you using your own, built from my fork? This sounds like you have the version from Docker Hub. Please rebuild the sagitta vyos-build docker container as outlined in Build patched vyos-build docker images.
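For reference, the rebuild boils down to roughly this (a hedged sketch: the fork and tag follow this thread, and the Dockerfile location assumes the usual vyos-build layout):

```bash
# build the patched sagitta vyos-build container image from the fork
git clone -b sagitta https://github.com/dd010101/vyos-build.git
cd vyos-build
docker build -t vyos/vyos-build:sagitta docker   # the Dockerfile lives in docker/
```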
> The other packages are installed (wrong versions?)

This is what my sagitta vyos-build container reports:
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-================-==============-============-=================================================================
ii dh-apparmor 3.0.8-3 all AppArmor debhelper routines
ii dh-exec 0.27 amd64 Scripts to help with executable debhelper files
ii fuse-overlayfs 1.10-1 amd64 implementation of overlay+shiftfs in FUSE for rootless containers
ii libcap-dev:amd64 1:2.66-4 amd64 POSIX 1003.1e capabilities (development)
ii podman 4.3.1+ds1-8+b1 amd64 engine to run OCI-based containers in Pods
This doesn't make sense - not installable when they are installed... Sounds like Jenkins is using the wrong container?
Please verify the section Configure environment variables and add global vyos-build Jenkins library, specifically the Declarative Pipeline (Docker) part where the custom docker registry is set. Also check the Console Output and focus on which vyos-build is being used and from where.
EDIT: I did rebuild the sagitta vyos-build docker container from my fork and also used it to rebuild owamp, and it went just fine. Thus I believe it's not generally broken and something on your side is misbehaving. The dpkg was run in the docker container you used to build the ISO, right? There is a possibility that Jenkins is using something else; please verify.
The default Jenkins view is bad for seeing which packages failed, since it shows the pipelines combined and you don't see the sagitta branch, for example. Also, Jenkins really sucks in that it shows the indexing as a build, but what can you do...
If you want better view then:
Install an extra plugin (View Job Filters):
Manage Jenkins -> Plugins -> Available plugins
Then create a custom view by navigating to the dashboard (click the Jenkins logo or Dashboard in the breadcrumb).
There you should see the list of jobs and, above them, All and +; use the plus to add another View.
Give it a name, choose List View, and configure the new view:
Job Filters
[✓] Recurse in subfolders
[✓] Use a regular expression to include jobs into the view
Regular expression: .*
Press [Add Job Filter] button and select Job Type Filter
Job Type: Multibranch Pipeline
Match Type: Exclude Matched - Filter out jobs that match this filter
Then save the view with [OK].
This should give you an overview of all branches, where you can sort by build time and also see if some jobs failed. A very short build time hints that there may be only branch indexing and the actual build is missing. You can also run builds from there.
The default view shows pipelines and that's useless, because the build time isn't the actual build time, and the triangle doesn't build, it just scans... Not useful at all.
I probably did something wrong with the build containers; after rebuilding them (running that script again) the owamp packages were built. Also, I was trying the native build (the first part of the official instructions) and got confused by these "installed but not installable" packages. I have to admit I have no experience with Docker containers; I use XCP-NG VMs all the time. Might be a good idea to document all steps when using containers, without referring to the official instructions (in case they are updated for 1.5 only). Also it was not clear how to mount apt.gpg.key until I RTFM'd a bit about the docker run -v option; I used this command (one long line):
docker run --rm -it --privileged -v $(pwd):/vyos -v /home/sentrium/web/dev.packages.vyos.net/public_html/repositories/apt.gpg.key:/opt/apt.gpg.key -w /vyos vyos/vyos-build:sagitta bash
The sagitta image finally built, boots in an XCP-NG VM, and vyos-smoketest runs (some tests pass, some fail). I have 7 network interfaces (that seems to be the XCP-NG limit) and 1GB RAM for this VM; with the default 256MB it fails to boot - the kernel can't start init and panics. I thought 256MB would be enough for a basic router if not using BGP.
It will take a while, but I will try to start over from scratch and do all the steps again (based on the current version of the guide, with any fixes made in the meantime). It's really complex; I'm impressed that you figured it out. To say "it's open source, all the code is on GitHub" was downright evil - in Soviet Russia, VyOS obfuscates you! :)
One thing not clear to me - what are the advantages of the native build vs using containers (both ways are described in the official instructions), and what needs to be done to make the native build work too?
> Might be a good idea to document all steps when using containers, without referring to the official instructions (in case they are updated for 1.5 only). Also it was not clear how to mount apt.gpg.key until I RTFM'd a bit about the docker run -v option...
I did add extra instructions on how to build the ISO - that should clear things up.
> The sagitta image finally built, boots in an XCP-NG VM, and vyos-smoketest runs (some tests pass, some fail). I have 7 network interfaces (that seems to be the XCP-NG limit) and 1GB RAM for this VM; with the default 256MB it fails to boot - the kernel can't start init and panics. I thought 256MB would be enough for a basic router if not using BGP.

From my experience 256MB is too low for a "full fat OS" - VyOS isn't aiming for the embedded sector (like RISC MIPS), thus I'm not surprised it doesn't even boot. 1GB is in my view plenty (they officially specify 1GB as the minimum). This doesn't apply here though - throw away everything you know - smoketest doesn't behave like a normal install.
By default the Makefile/automatic smoketest will assign 3GB to the test VM, and this will fail due to OOM if you assign more than 4 threads to the test VM, but it works with <=4 threads. This tells you that for some reason smoketest eats a lot of memory and 3GB is just about enough if you have 1-4 threads. Specifically the step `Running Testcase: /usr/libexec/vyos/tests/smoke/cli/test_service_ids_ddos-protection.py` fails due to the OOM killer. I have no idea why it eats so much memory, but it does, and if you give it 3GB with 3 threads then it's happy. Generally the only smoketest failure I saw was the OOM.
> It's really complex; I'm impressed that you figured it out. To say "it's open source, all the code is on GitHub" was downright evil - in Soviet Russia, VyOS obfuscates you! :)

It's complex but straightforward if you know which direction to take. Running an automated build system for many packages will never be simple. Believe it or not, the Jenkins route is much less complex than doing it with another build system.
Perhaps soon there will be an easy way - GurliGebis is making great progress on automating the steps in the instructions. That said, I do believe that if you go the current route then you get a much better understanding of what pieces are involved, and this will be a great benefit if you want to keep running a package mirror long term. If you abstract everything away then the complexity is still there - you just don't see it, and thus it will be harder to debug if something goes wrong.
Indeed, their suggestion that anyone should just figure this all out on their own without any pointers is insane. I'm not sure if it's negligent or sneaky - it doesn't matter why. They made a repeated suggestion that requires an unreasonable amount of knowledge and time, a trap for people to fall into - willingly or not, that's the result. And if everyone did figure this all out on their own, it would still be a huge waste of effort to rediscover everything again and again...
> One thing not clear to me - what are the advantages of the native build vs using containers (both ways are described in the official instructions), and what needs to be done to make the native build work too?

Just use the Docker method - it's easier. I see the native method as a legacy from before the Docker method was invented. Docker saves you the process of setting up the build OS and, more importantly, it saves you the overhead of keeping the build OS up to date. This is a huge benefit if you want to build multiple branches that use different Debian versions. There is no reason to use the native method if you can use Docker.
Started over - fresh VM with 4 CPUs, 16 GB RAM, 150 GB disk, Debian 12.5 installed from the netinst ISO, OS language set to en_US.UTF-8 to avoid locale issues, and in tasksel only SSH server and standard system utilities checked (no desktop environment). Following all the steps, nothing suspicious until "opam" (whatever that is):
"opam init" gives a warning that opam is out of date, and later ("Required setup, please read") asked if I want to modify ~/.bash_profile - I didn't.
"opam switch create default 4.13.1" says "[ERROR] There already is an installed switch named default"
I've just ignored these warnings/errors and continued - is that OK?
Got to the seed-jobs.sh script and setting up nginx for the repo while waiting for all Jenkins jobs to complete (a lot of CPU used; RAM usage without swap seems to stay below 5GB), before trying to build the ISO.
Unfortunately, package builds seem to fail now - console output pasted below for "ethtool", but others fail with similar messages. I haven't seen that previously; was something changed in the repo, or could I have made some mistake? Is it OK that the origin URLs at the top have "vyos" in place of "dd010101"?
"opam switch create default 4.13.1" says "[ERROR] There already is an installed switch named default"
Yes, that's okay. For some reason my opam didn't have the default environment so I added it. If you already have environment then just use it. I will update the instructions to mention that this is possibility.
Unfortunately, package builds seem to fail now
The failing builds seem to be because you don't have all Jenkins plugins?
Invalid agent type "docker" specified. Must be one of [any, label, none]
It says it doesn't know docker - this is additional plugin mentioned in Install Jenkins plugins.
In the Script Console, println(Jenkins.instance.pluginManager.plugins) gives this output; docker-workflow is there:
[Plugin:ionicons-api, Plugin:cloudbees-folder, Plugin:antisamy-markup-formatter, Plugin:asm-api, Plugin:json-path-api, Plugin:structs, Plugin:workflow-step-api, Plugin:token-macro, Plugin:build-timeout, Plugin:credentials, Plugin:plain-credentials, Plugin:variant, Plugin:ssh-credentials, Plugin:credentials-binding, Plugin:scm-api, Plugin:workflow-api, Plugin:commons-lang3-api, Plugin:timestamper, Plugin:caffeine-api, Plugin:script-security, Plugin:javax-activation-api, Plugin:jaxb, Plugin:snakeyaml-api, Plugin:json-api, Plugin:jackson2-api, Plugin:commons-text-api, Plugin:workflow-support, Plugin:plugin-util-api, Plugin:font-awesome-api, Plugin:bootstrap5-api, Plugin:jquery3-api, Plugin:echarts-api, Plugin:display-url-api, Plugin:checks-api, Plugin:junit, Plugin:matrix-project, Plugin:resource-disposer, Plugin:ws-cleanup, Plugin:ant, Plugin:javax-mail-api, Plugin:durable-task, Plugin:workflow-durable-task-step, Plugin:bouncycastle-api, Plugin:instance-identity, Plugin:workflow-scm-step, Plugin:workflow-cps, Plugin:workflow-job, Plugin:jakarta-activation-api, Plugin:jakarta-mail-api, Plugin:apache-httpcomponents-client-4-api, Plugin:mailer, Plugin:workflow-basic-steps, Plugin:gradle, Plugin:pipeline-milestone-step, Plugin:pipeline-build-step, Plugin:pipeline-groovy-lib, Plugin:pipeline-stage-step, Plugin:joda-time-api, Plugin:pipeline-model-api, Plugin:pipeline-model-extensions, Plugin:branch-api, Plugin:workflow-multibranch, Plugin:pipeline-stage-tags-metadata, Plugin:pipeline-input-step, Plugin:pipeline-model-definition, Plugin:workflow-aggregator, Plugin:jjwt-api, Plugin:okhttp-api, Plugin:github-api, Plugin:mina-sshd-api-common, Plugin:mina-sshd-api-core, Plugin:gson-api, Plugin:eddsa-api, Plugin:trilead-api, Plugin:git-client, Plugin:git, Plugin:github, Plugin:github-branch-source, Plugin:pipeline-github-lib, Plugin:pipeline-graph-analysis, Plugin:metrics, Plugin:pipeline-graph-view, Plugin:ssh-slaves, Plugin:matrix-auth, Plugin:pam-auth, Plugin:ldap, Plugin:email-ext, Plugin:theme-manager, Plugin:dark-theme, Plugin:authentication-tokens, Plugin:docker-commons, Plugin:docker-workflow, Plugin:copyartifact, Plugin:ssh-agent, Plugin:pipeline-utility-steps, Plugin:job-dsl]
Maybe you need the Docker plugin as well? Please try to install that plugin too.
Installed, still fails the same way. Current list of plugins:
[Plugin:ionicons-api, Plugin:cloudbees-folder, Plugin:antisamy-markup-formatter, Plugin:asm-api, Plugin:json-path-api, Plugin:structs, Plugin:workflow-step-api, Plugin:token-macro, Plugin:build-timeout, Plugin:credentials, Plugin:plain-credentials, Plugin:variant, Plugin:ssh-credentials, Plugin:credentials-binding, Plugin:scm-api, Plugin:workflow-api, Plugin:commons-lang3-api, Plugin:timestamper, Plugin:caffeine-api, Plugin:script-security, Plugin:javax-activation-api, Plugin:jaxb, Plugin:snakeyaml-api, Plugin:json-api, Plugin:jackson2-api, Plugin:commons-text-api, Plugin:workflow-support, Plugin:plugin-util-api, Plugin:font-awesome-api, Plugin:bootstrap5-api, Plugin:jquery3-api, Plugin:echarts-api, Plugin:display-url-api, Plugin:checks-api, Plugin:junit, Plugin:matrix-project, Plugin:resource-disposer, Plugin:ws-cleanup, Plugin:ant, Plugin:javax-mail-api, Plugin:durable-task, Plugin:workflow-durable-task-step, Plugin:bouncycastle-api, Plugin:instance-identity, Plugin:workflow-scm-step, Plugin:workflow-cps, Plugin:workflow-job, Plugin:jakarta-activation-api, Plugin:jakarta-mail-api, Plugin:apache-httpcomponents-client-4-api, Plugin:mailer, Plugin:workflow-basic-steps, Plugin:gradle, Plugin:pipeline-milestone-step, Plugin:pipeline-build-step, Plugin:pipeline-groovy-lib, Plugin:pipeline-stage-step, Plugin:joda-time-api, Plugin:pipeline-model-api, Plugin:pipeline-model-extensions, Plugin:branch-api, Plugin:workflow-multibranch, Plugin:pipeline-stage-tags-metadata, Plugin:pipeline-input-step, Plugin:pipeline-model-definition, Plugin:workflow-aggregator, Plugin:jjwt-api, Plugin:okhttp-api, Plugin:github-api, Plugin:mina-sshd-api-common, Plugin:mina-sshd-api-core, Plugin:gson-api, Plugin:eddsa-api, Plugin:trilead-api, Plugin:git-client, Plugin:git, Plugin:github, Plugin:github-branch-source, Plugin:pipeline-github-lib, Plugin:pipeline-graph-analysis, Plugin:metrics, Plugin:pipeline-graph-view, Plugin:ssh-slaves, Plugin:matrix-auth, Plugin:pam-auth, Plugin:ldap, Plugin:email-ext, Plugin:theme-manager, Plugin:dark-theme, Plugin:authentication-tokens, Plugin:docker-commons, Plugin:docker-workflow, Plugin:copyartifact, Plugin:ssh-agent, Plugin:pipeline-utility-steps, Plugin:job-dsl, Plugin:cloud-stats, Plugin:apache-httpcomponents-client-5-api, Plugin:docker-java-api, Plugin:docker-plugin]
That's unusual. Please restart Jenkins, verify the jenkins linux user is in the docker group, and verify docker works with some dummy command like `docker images`.
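Something along these lines should confirm it (the group and service names are the standard ones; adjust if your setup differs):

```bash
# verify the jenkins user can talk to the docker daemon
id jenkins                      # the group list should include "docker"
usermod -aG docker jenkins      # add the group if it's missing (as root)
systemctl restart jenkins       # group changes only apply to new sessions
sudo -u jenkins docker images   # dummy command: should list images, not a permission error
```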
I have only the docker-commons and docker-workflow plugins and it does work... So plugins aren't the issue.
A reboot and then "./seed-jobs.sh build" again seems to help. I added another plugin (View Job Filters) for a better view as you suggested; now I can see most builds still running, but a few (that failed previously) have already finished successfully. Thanks!
"Launch the vyos-build docker container" as given doesn't work:
# docker run --rm -it -v "$(pwd)":/vyos -v "$HOME/.gitconfig":/etc/gitconfig -v "$HOME/.bash_aliases":/home/vyos_bld/.bash_aliases -v "$HOME/.bashrc":/home/vyos_bld/.bashrc -v "/tmp/apt.gpg.key:/opt/apt.gpg.key" -w /vyos --privileged --sysctl net.ipv6.conf.lo.disable_ipv6=0 -e GOSU_UID=$(id -u) -e GOSU_GID=$(id -g) "vyos/vyos-build:$BRANCH" bash
Current UID/GID: 0/0
useradd warning: vyos_bld's uid 0 outside of the UID_MIN 1000 and UID_MAX 60000 range.
useradd: warning: the home directory /home/vyos_bld already exists.
useradd: Not copying any file from skel directory into it.
root@53bef03fae5b:/vyos# ./build-vyos-image --help
E: Could not retrieve vyos-1x from branch sagitta: GitCommandError(['git', 'checkout', 'sagitta'], 128, b"warning: unable to access '/etc/gitconfig': Is a directory\nwarning: unable to access '/etc/gitconfig': Is a directory\nwarning: unable to access '/etc/gitconfig': Is a directory\nfatal: unknown error occurred while reading the configuration files", b'')
The example aliases in the official guide seem to be wrong; I have nothing in $HOME/.gitconfig, and even just "./build-vyos-image --help" in the container fails as it can't access the repo. It works after removing all the -v options with $HOME, so the command to run is:
# docker run --rm -it -v "$(pwd)":/vyos -v "/tmp/apt.gpg.key:/opt/apt.gpg.key" -w /vyos --privileged --sysctl net.ipv6.conf.lo.disable_ipv6=0 -e GOSU_UID=$(id -u) -e GOSU_GID=$(id -g) "vyos/vyos-build:$BRANCH" bash
UID and GID are 0 when running as root, so those could probably be removed too - or should this step be run as an ordinary user? This built the ISO. Will test it tomorrow, time for some sleep. It's a bit worrying that the build process sometimes accesses the original git repo even when using your fork; this means it might break when they make changes (accidentally or not).
> E: Could not retrieve vyos-1x from branch sagitta: GitCommandError(['git', 'checkout', 'sagitta'], 128, b"warning: unable to access '/etc/gitconfig': Is a directory\nwarning: unable to access '/etc/gitconfig': Is a directory\nwarning: unable to access '/etc/gitconfig': Is a directory\nfatal: unknown error occurred while reading the configuration files", b'')
This is very common when someone freshly builds a system and doesn't generate the files before executing the docker command. You'll notice that your host machine now has folders in those locations.
Don't forget to mount /dev if you wish to build anything other than an .iso.
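In other words, something like this (the same command as above with one extra mount; everything else unchanged):

```bash
# same invocation, plus /dev mounted for non-ISO builds
docker run --rm -it -v "$(pwd)":/vyos -v /dev:/dev \
    -v "/tmp/apt.gpg.key:/opt/apt.gpg.key" -w /vyos --privileged \
    --sysctl net.ipv6.conf.lo.disable_ipv6=0 \
    -e GOSU_UID=$(id -u) -e GOSU_GID=$(id -g) "vyos/vyos-build:$BRANCH" bash
```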
> It's a bit worrying that the build process sometimes accesses the original git repo even when using your fork; this means it might break when they make changes (accidentally or not).

Please post the logs that show this is the case. Using the --vyos-mirror option, it should not know about the official repo. You say it sometimes accesses the repo, but any time that occurred, you'd get a failure to build.
> The example aliases in the official guide seem to be wrong; I have nothing in $HOME/.gitconfig, and even just "./build-vyos-image --help" in the container fails as it can't access the repo

This was pretty much copy-pasted from the official documentation. Those extra mounts are nice to have so you get your bash/git config inside the container as in your parent system. They aren't required at all. I will reduce the command to the essentials so it doesn't cause issues on fresh systems.
> UID and GID are 0 when running as root, so those could probably be removed too - or should this step be run as an ordinary user?

The UID/GID handling is handy to have in case someone runs it differently. If you run as root it doesn't make a difference, but you may choose to run as a user, for example, so it's nice to have a command that works in both cases. In reality it doesn't matter whether you run this as root or as a user - you just need the user who did the git clone/who owns the vyos-build files.
> It's a bit worrying that the build process sometimes accesses the original git repo even when using your fork; this means it might break when they make changes (accidentally or not).

That's intentional. Where possible we use the official GIT repository for packages, so there is no delay between upstream pushing a fix and us merging the fix into our fork. The build isn't 100% reproducible by its nature anyway, so I don't mind the fact that upstream can break the build for everyone; the build can break itself too. I think the more up-to-date nature is the bigger benefit, where you receive the fix for a broken build quicker. I'd like to keep this unless it causes major issues; then it would make sense to bother with forking everything.
> Please post the logs that show this is the case. Using the --vyos-mirror option

I believe this is in regard to the GIT repositories for building the packages themselves. The ISO build uses only your mirror - that's correct - but the packages use the official GIT repositories in most cases. We use our own GIT repository only where the official one is broken and can't be used.
@marekm72 Do you plan to host your package mirror publicly? It would be nice to have a public mirror for those who don't want to bother with, or don't have the capacity for, the build process. We already have a very easy and automated way to build the packages (look at the automated scripts in the readme) but it requires extra hardware. What are your thoughts on this?
I will look into it soon. How do you estimate the hardware requirements? I've been a bit busy recently with some urgent maintenance of my infrastructure (mainly wireless on a 12m high mast), mostly done now.
Not sure if these packages should be patched to remove their trademarked logo/artwork then, and there is another potential legal issue with GPL binary packages without corresponding source in the same APT repo. Reported here: https://vyos.dev/T6451 https://vyos.dev/T6483. For both there was some hope initially (triaged as Wishlist) until yesterday, when both were closed as Invalid. It may or may not be a coincidence that also yesterday my account was silenced on their forum (I have been accused of hijacking - no idea what they really mean).

Also, I'm not sure what their recent announcement of VyOS Stream means and how it will affect this work - will it be the end of VyOS like CentOS Stream was the end of CentOS (or so I heard; I could be wrong as I'm a long-time Debian user - is it something like Debian testing?). BTW, I wonder how many more users silenced from their forum it takes to reach some critical mass and have the issue get some publicity on LWN or something. Their main website still says things like: "However, simply making code available is not enough. We also keep the complete build toolchain available, and we strive to make it easy to use. You can build a VyOS image in just a few commands. There is no special maintainer toolchain we keep to ourselves: all image build tools are available to everyone interested."
> How do you estimate the hardware requirements?
You can find those here.
> Not sure if these packages should be patched to remove their trademarked logo/artwork

The trademarked material is, for example, in the vyos-build repository and potentially not contained in the .deb packages. I'm not sure, since it's not easy to find all this material - there is no list or anything like it. I know the grub logo is pulled from vyos-build when you build the ISO, thus it doesn't touch the .debs. What other material is there, though? If you point me to other material you saw in VyOS, I could track down where it comes from.
> there is another potential legal issue with GPL binary packages without corresponding source in the same APT repo

That's not the case. The GPL doesn't say anything about source packages. It only tells you that if you distribute the binary, then you shall provide the source code if someone asks you. This doesn't need to be public; you don't need to provide the source to everybody, only to the person you distributed the binary to, and it doesn't need to be automated - you can provide the source code via e-mail if you want. See the GPL FAQ and this.
Also, there is no benefit for VyOS in providing source packages, since you can easily build a given package via Debian tools from git; a source package would not make it easier. It's even less useful for the community, since the community doesn't have access to the repositories - even if there were source packages they would be locked away, and thus we need to build from git anyway.
> For both there was some hope initially (triaged as Wishlist) until yesterday, when both were closed as Invalid. It may or may not be a coincidence that also yesterday my account was silenced on their forum (I have been accused of hijacking - no idea what they really mean).

I'm sure they won't do anything at all for the build-it-yourself community. Thus I was expecting they would just leave the tickets as wishlist forever - not sure what happened to make them invalid. I do think the only way anything we describe will get implemented is if we do it ourselves, regardless of how much sense or value it has.
The forum is pretty much locked down. I saw someone report on other channels that forum registration is invite-only or something like that as well. They really want to control the content there to make it look like everything is going great. I don't think this is about hijacking (or any other reason they give); maybe you were slightly off topic in one instance, but other members can post off topic as much as they want and that's fine. It's basically a justification for getting rid of a source of unwanted opinions. They need to fabricate some reason to silence you, because silencing people just for their opinions would look bad.
> Also, I'm not sure what their recent announcement of VyOS Stream means and how it will affect this work - will it be the end of VyOS like CentOS Stream was the end of CentOS?

As I understand it, it's the same ideology as CentOS Stream in the sense that the VyOS release is now downstream of current and they want to change it so the release is upstream. This aims to keep experimental code from leaking into the release and gives them control over what is included in the release. Until now a VyOS release originated from current, thus it contained everything including the experimental code, and they say a situation like this did break sagitta - so they want to flip it around so this won't be possible in the future, since they will have the option not to include the experimental code. That's a totally understandable and fine development method.
CentOS Stream has negative publicity not for what it is but because of what Red Hat did with it: Red Hat provides only Stream, thus upstream, and they dropped the downstream version that originated from RHEL. Previously there were two CentOS versions and they dropped the stable one, and that's why people don't like it - not because the stream ideology has some inherent issue; it isn't an issue until you have only stream.
The VyOS project could do the same after they successfully develop the stream model. They don't say anything about that in the blog post, of course. The only reason they gave was to make LTS a true LTS, as opposed to what sagitta is now - a fork of unstable. Everything in the blog post is good, but let's say it also gives them more flexibility as well.
> BTW, I wonder how many more users silenced from their forum it takes to reach some critical mass and have the issue get some publicity on LWN or something.

I think they are playing it well, in the sense that you literally can't see anything from the outside; everything looks hunky-dory. Thus I don't expect any mass to form, since most people can't even see the issues - the forum is the main channel for this and they control it tightly. Don't underestimate the power of censorship!
> Their main website still says things like: "However, simply making code available is not enough. We also keep the complete build toolchain available...

To be fair, as it is now this still applies. This project is proof. They could remove the LTS code from GitHub and stick to strictly "just GPL" and then it wouldn't apply anymore 😄. They haven't shown any intention of doing that so far...
I don't think in terms of "if this" or "if that"; I take it for what it is now, and currently it's fine. They don't develop fast, so even if some big changes are coming, you will not see the result any time soon. That's why I would say if it's fine now, it will be for the near future. What happens with the next release? We shall see; that's likely where the big changes would become visible.
I'd prefer to keep the build stuff internal, simply because I'm new to Jenkins and don't know it well enough to keep it properly secured (nothing against Jenkins specifically; it's simply a large and complex thing that takes time to learn). Then I'd mirror the resulting repo to a public server (a different VM on a public IP, without all the build stuff); that probably reduces the requirements somewhat.
About the trademarked stuff, I don't know where else it may be hidden. I remember xcp-ng too had to find where Citrix had put it in XenServer. For sure not all places with the "vyos" string (case insensitive) are a problem, just like Linux has AF_UNIX sockets and there is no UNIX(R) trademark violation (but there is also the AF_LOCAL alias, so someone may have been afraid of it).
GPL - my main concern is providing "corresponding source", which means exactly the same source that was used to produce the binaries at any given point in time. The way Debian does it is IMHO exactly what Section 3 wants (in my understanding, but that part is written in fairly clear language understandable to non-lawyers), without requiring an archive of corresponding sources for all binary packages ever published (in case someone asks for source by email a year later). The key words: "offering equivalent access to copy the source code from the same place". I don't know what VyOS will do, but they might at some point close the rolling APT repo as well and say "we had to do it to comply with the GPL". LTS is not an issue here as it's already closed; if you don't distribute binaries then there's also no need to distribute sources.
Silencing - yeah, I'm from Poland and old enough to remember how it was before 1989 (I was 17 back then), though it was not as extreme as in the Soviet Union or now Russia, where people can go to jail for protesting against the war (officially a "special operation"). Not something you would expect from an open source project today... Forum registration was open when I registered; I didn't know it had become so closed now. Another instance is a discussion about UPnP - see https://vyos.dev/T5835 and search for "simplysoft", who created that task and is now Unknown Object (User). While I agree with syncer on the technical issues here (UPnP is generally a security hole), it's still wrong to say some of the things that were said there.
> I'd prefer to keep the build stuff internal... Then I'd mirror the resulting repo to a public server...

Then you may like this script. You may also just use rsync in cron with an interval like an hour, but I used this script for a different purpose: to make the mirroring both faster to propagate and more resilient, thanks to the postpone check. There are many options for how to do this, but this one is simple, fast and efficient.
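The plain rsync-in-cron variant would look something like this (host and target path are placeholders; the source path matches the repository location mentioned earlier in this thread):

```bash
# one-way sync of the repository tree to the public server
rsync -a --delete /home/sentrium/web/dev.packages.vyos.net/public_html/repositories/ \
    mirror@public-host:/var/www/repositories/
# run it hourly, e.g. via: crontab -e -> 0 * * * * /usr/local/bin/mirror-repos.sh
```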
> About the trademarked stuff, I don't know where else it may be hidden.

It's more about where you, as a VyOS user, see the logo. I'm not asking where it's hidden but where you see it in plain sight. I actually didn't find the logo in many places at all - I know just one; maybe I'm blind, which is why I'm asking if you see the logo elsewhere when you use VyOS. There are also other parts you'd want to change to make a clear distinction that the ISO isn't official.
Trademarked logo - I know only:
- Boot background - https://github.com/vyos/vyos-build/blob/sagitta/data/live-build-config/includes.binary/isolinux/splash.png

Other bits to change in order to make clear the ISO isn't official VyOS:
- Boot menu headline - https://github.com/vyos/vyos-build/blob/sagitta/data/live-build-config/includes.binary/isolinux/menu.cfg
- MOTD links to blog/bug tracker - https://github.com/vyos/vyos-build/blob/sagitta/data/defaults.toml
I believe with these files you can break the feeling that the ISO is official, by explicitly saying it's NOT in GRUB, in the MOTD, etc.
If you want to be pedantic then you'll find all the "VyOS" mentions, like:
- MOTD - https://github.com/vyos/vyos-1x/blob/sagitta/src/conf_mode/system_login_banner.py#L33
- Mentions like these - https://github.com/vyos/vyos-1x/blob/equuleus/op-mode-definitions/show-system.xml.in#L89

This enters the territory where you would need to fork a large portion of the VyOS code base, because mentions like these are hardcoded all over the place.
Specifically in the CLI interface:
./op-mode-definitions/show-environment.xml.in: <command>if ! grep -q hypervisor /proc/cpuinfo; then ${vyos_libexec_dir}/vyos-sudo.py ${vyos_op_scripts_dir}/show_sensors.py; else echo "VyOS running under hypervisor, no sensors available"; fi</command>
./op-mode-definitions/openconnect.xml.in: <help>Show full settings, including QR code and commands for VyOS</help>
./op-mode-definitions/openconnect.xml.in: <help>Show OTP authentication secret in Hex (used in VyOS config)</help>
./op-mode-definitions/show-system.xml.in: <help>Show full settings, including QR code and commands for VyOS</help>
./op-mode-definitions/show-system.xml.in: <help>Show information about non VyOS user accounts</help>
./op-mode-definitions/show-system.xml.in: <help>Show information about VyOS user accounts</help>
./op-mode-definitions/configure.xml.in: echo "Please do it as an administrator level VyOS user instead."
./op-mode-definitions/show-license.xml.in: <help>Show VyOS license information</help>
./op-mode-definitions/system-image.xml.in: <help>Show installed VyOS images</help>
./op-mode-definitions/system-image.xml.in: <help>Show details about installed VyOS images</help>
./op-mode-definitions/force-root-partition-auto-resize.xml.in: <help>Resize the VyOS partition</help>
./interface-definitions/pki.xml.in: <defaultValue>VyOS</defaultValue>
./interface-definitions/service_https.xml.in: <help>VyOS HTTP API configuration</help>
Making the ISO completely free of VyOS mentions isn't easy. This would really benefit from official support, but I doubt they would accept any rebranding functionality, even as a pull request, since then they would have to support it in the future and for new features. This is something that falls into the "nobody needs this, enterprise doesn't need this, nobody cares, nobody will benefit" category - except the people they like to delete...
I think the mentions miss the point though - the name alone isn't an issue per se; see all the places with Microsoft mentions. It's not an issue to use a trademarked name until you start to use it in a specific way, specifically when you start to pretend that whatever you are distributing to users is from the brand or is the brand.
Also, how could you use the .deb packages alone? In order to assemble anything usable you need pieces that aren't included in the .deb repository - thus I don't see how anybody can argue that the packages are damaging the brand, since what you provide is useless by itself; it's not a product you can use.
Also, how could you argue that an ISO that says NOT VYOS is from VyOS? In order to make clear the ISO isn't official, you don't need to remove every mention of the brand. You just need to clearly state what the product is or isn't, in a way people can't miss. I do believe the changes in vyos-build should be enough: if boot and MOTD say this is NOT VYOS, I don't think anybody would think it's VyOS even if they see a mention of the brand in the help text of the CLI. That's why I don't see the mentions of the name as a problem.
Thus the logo is the main issue, and that's not part of any .deb package; it's included in vyos-build and pulled in during the ISO build.
> GPL - my main concern is providing "corresponding source", which means exactly the same source that was used to produce the binaries at any given point in time.

GIT gives you the ability to check out any given version, and if you have the binary, you have a version, date or direct commit hash you can refer to. Thus if you have the GIT repository, you automatically have a way to get the "corresponding source" for any given binary from any point in time. That's not as user-friendly and direct as source packages if you only care about the version you have, but the GPL doesn't say how you should get the source; it just says you should have a way to get it.
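In practice that looks like this (the repository is an example; the exact ref depends on how the package version maps to tags/commits):

```bash
# fetch the corresponding source for a given binary from git
git clone https://github.com/vyos/vyos-1x.git
cd vyos-1x
git log --oneline            # locate the commit matching the binary's version/date
git checkout <commit-hash>   # check out exactly the sources the binary was built from
```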
> Another instance is a discussion about UPnP - see https://vyos.dev/T5835 and search for "simplysoft", who created that task and is now Unknown Object (User). While I agree with syncer on the technical issues here (UPnP is generally a security hole), it's still wrong to say some of the things that were said there.

It looks like the VyOS team has a very unhealthy culture where there is no option to disagree. The people who requested the feature didn't find agreement, and thus I would expect it to end with "we can agree to disagree" - but that doesn't look like an option; you need to agree in the end or you get deleted? I mean, this creates a very peaceful culture indeed, where no conflict exists, yet it's very unhealthy, because competing opinions keep each other in check. I don't think it's about the project itself - it's about specific people filling roles they have difficulties with. If a person has difficulty accepting criticism or dealing with opposing opinions, why does this person need to handle all the controversial communications? They should find other people for such tasks - people who aren't so easily offended and won't reach for the delete/silence button at the first sign of disagreement...
OK, I've just tested the automated build scripts, following the instructions starting from a fresh Debian 12 VM. All worked fine, took a few hours, and I'm sharing the results (equuleus and sagitta, APT repos and ISO images) here:
http://git.amelek.net/not-vyos/
Note, this is temporary and not yet updated regularly; I've just re-used the same VM as for the git mirror, as I haven't set up the new one properly yet.
https works too; I don't force it for the downloads, as the packages in the APT repository are signed, and for the ISOs I provide hash files to check download integrity.
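So a downloaded image can be checked like this (the file name is hypothetical; use whatever hash file sits next to the ISO):

```bash
# verify a downloaded ISO against its published hash file
sha256sum -c vyos-1.4.0-amd64.iso.sha256
```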
I plan to set it up properly later, with mirrors for more things, backups in case something disappears etc.
I put the ISOs here for convenience as well, but they may not stay for long, as the build process is not yet patched to remove the logo etc. They are only very lightly tested (boot in a VM with 1 CPU and 1 GB RAM, login with vyos/vyos, show version), so please test carefully and better not rush to upgrade production routers just yet.
The unofficial 1.4.0 image already includes the regreSSHion fix (already included in Debian); 1.3.8 has an old version of OpenSSH without this vulnerability.
In other news, they cut off my access to further LTS updates and rejected my monthly $25 via Open Collective (as the email from OC told me, without providing a reason), which I had been donating since March 2022. I have the official 1.3.8 and 1.4.0 images downloaded from before this happened, but from now on I will have to rely on the self-built images. I'll see if I can donate to accel-ppp instead, or whether they reject me there as well; I will still use accel-ppp even if I ever stop using VyOS in favor of a roll-my-own router (Debian or Alpine + BIRD + accel-ppp).
The reprepro-mirror.sh script has a small bug where it doesn't use $TARGET_PATH but only the default /tmp/repositories - a one line fix:

-targetPath=${targetPath:-$TARGET_PATH}
+targetPath=${TARGET_PATH:-$targetPath}
The automated scripts are excellent, and if VyOS(tm) provided something similar then they could claim "we strive to make it easy to use" - but that part was done by you in this project; they only provided the motivation. Thanks!
My suggestion to include Debian source packages in the APT repo was mainly for the rolling builds, as the changes may be too intrusive for LTS. I see the scripts create 3 build containers (equuleus, sagitta, current), so perhaps we could build a rolling packages APT repo as well (still available officially, but who knows for how long - it might not hurt to be prepared). All is good while the GIT repos with sources remain available, but remember "in the cloud" means "on someone else's servers" and anything can disappear, which doesn't remove the GPL obligations - so they might use that as an excuse. So far my question whether contributions from the community to Debianize more rolling packages would be accepted has been met with the sound of silence. On a positive note, at least they haven't deleted me from their bug tracker just yet :)
How different are the package lists for the official ISO builds vs the custom-built images? Would you be able to provide the package lists from the ISOs themselves for analysis? They can be found on the CD at /live/filesystem.packages.
@marekm72 if it is easy to Debianize a package, I think it would be worth a try (with just one package), to see what will happen.
The filesystem.packages files and diffs from the original LTS to the self-built images have been added to the directory with the images.
@marekm72
> build process is not yet patched to remove the logo etc
I added some branding removal, namely:
I don't plan to rename every VyOS mention - this should be enough to break the "this is VyOS" feeling by specifically saying in many places that this is not VyOS, and also including additional notes explaining the unofficial nature. There is no way anyone can argue that an average person would be fooled into thinking this is an official VyOS image by using it, even if they have no idea of the origin of the ISO or are using an already installed system.
If you know any other place where the VyOS logo is seen, or any other prominent place where the "VyOS" string is seen, then please let me know. The default user/hostname, help text for commands or example values aren't what I'm looking for - I know about these, but I don't count them as something that needs to be replaced. I'm looking specifically for places you see every time you boot/use the OS that have "VyOS" in this styling, or any place whatsoever where the logo or graphics are present.
See "If you want to distribute ISO" in the readme for how to activate the branding removal. This isn't enabled by default, since if you don't distribute the ISO then you have no reason to remove branding - that's why you should opt in only if you plan to distribute the ISOs.
If you want to convert an existing Jenkins to no-brand, then it's not enough to just export NOT_VYOS="yes"; you also need to configure an extra environment variable with the name NOT_VYOS and the value yes in Manage Jenkins -> System -> Global properties -> Environmental Variables -> Add. This is done automatically if you export NOT_VYOS="yes" and then run the scripts on a fresh OS.
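On a fresh OS the opt-in is just the export before running the scripts (a minimal sketch; the script you run afterwards depends on which of the automated scripts you use):

```bash
# opt in to branding removal; the automated scripts pick this up and also
# create the matching NOT_VYOS global environment variable in Jenkins
export NOT_VYOS="yes"
# ./<automated-script>.sh   # placeholder for whichever script you run next
```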
> In other news, they cut off my access to further LTS updates and rejected my monthly $25 via Open Collective...

One can imagine how easily such an offer would cover the cost of hosting the build infrastructure, and even the cost of developing the brand removal, if it was offered and promoted - this would solve the main issues for which they closed the repositories... But of course that would bypass the thousands-of-dollars paywall, so I'm surprised it even existed as long as it did... I didn't even know it existed before they said they would end it...
> The reprepro-mirror.sh script has a small bug
Fixed.
> The automated scripts are excellent, and if VyOS(tm) provided something similar then they could claim "we strive to make it easy to use" - but that part was done by you in this project; they only provided the motivation. Thanks!

This project is very much the result of a collective, not just me. The automated scripts were done completely by @GurliGebis - for me the manual guide was easy enough (compared to the effort that was required to develop it...) - if it were just me then there would be no automated scripts 😄.
Many other people contributed and we should thank them as well - pittagurneyi provided missing build scripts and feedback, Crushable1278 provided feedback and build information, as did veduco, ibehren1 provided feedback from testing as you did, and other people provided feedback/testing too.
> ...perhaps we could build a rolling packages APT repo as well (still available officially, but who knows for how long - it might not hurt to be prepared)

It's not worth it for now, since you can build rolling officially, and the build system is soon to change - they want to drop Jenkins and also switch to the stream branch development. Thus whatever the rolling build system is now, it will change for sure in the future - we would be working on something that is subject to change, and we can wait; I don't see a reason to do it yet. If they were to block it, that would be a good reason to do it right away.
> ...all is good while the GIT repos with sources remain available, but remember "in the cloud" means "on someone else's servers" and anything can disappear, which doesn't remove the GPL obligations...

If you erased the whole of GitHub (to erase every fork out there) then maybe, but perhaps not even then. There is no practical way to erase the GIT repository of a popular project unless you start thinking about the destruction of the internet and all digital media - perhaps this will eventually happen, but then the GPL will not matter... Popular GIT repositories are more decentralized and thus more resilient than any APT repository. GIT repositories are really the ultimate "corresponding source" you could ask for.
> So far my question whether contributions from the community to Debianize more rolling packages would be accepted has been met with the sound of silence.

I'm not sure what you mean by Debianize - all packages produce .debs with Debian tooling, thus they are Debian packages... The sources part is often broken; that's why you don't see it much. It's not too hard to fix - it's just a tarball and some definition... But it's unnecessarily difficult if it isn't officially supported. We have no easy way of fixing the build scripts - unless you maintain a fork of the GIT repository, or you invent some clever hacking via the Jenkins Pipeline library to modify the source code before it's built - that's how I did the removal of branding for vyos-1x and vyatta-cfg.
It's unnecessarily difficult for us to fix something like this that would be very easily fixed if it was done officially. Thus it makes sense to do it only if 1) it's a must-have and 2) there is no way it would be officially supported. The branding is a good example; the source packages don't meet either criterion, I think - if you make a pull request, I don't see why it would be refused. On the other hand, there is no way the rebranding would be accepted, since they would be working against themselves, and doing it fully would require changes across the whole code base.
> The filesystem.packages files and diffs from the original LTS to the self-built images have been added to the directory with the images.

Thank you for posting this information. It is extremely helpful. I was somehow convinced that additional magic was going into the official images. Keep in mind 1.4.0 was GA'd Jun 4 and 1.3.8 was released Jun 25, plus or minus a few days.
1.4.0: There are the expected upgraded package differences, but some other interesting notes:
-keepalived 1:2.2.7-1+b2
+keepalived 1:2.2.8-1
This was changed in February here: vyos/vyos-build@522c56d7fbac5af5d7c45aab656e0cd68b52ac1f. Why wasn't this package upgraded in their repo by June? Looking today, it seems keepalived no longer has a presence at all in the official sagitta repo. If keepalived isn't included in their repo by the next maintenance release (assuming they still use the same repo we can view but not use), we should expect the upstream, unpatched keepalived to be included. That's a tad worrisome, I'd think.
-python3-vici 5.9.8-1
+python3-vici 5.9.11-1
Vici continues to be downrev (but we know their build process for strongswan is broken, so...). From your diffs, I do see a concerning call-out:
-libpam-tacplus 1.4.3-cl5.1.0u5
+libpam-tacplus 1.7.0-0.1
...
-libtac2 1.4.3-cl5.1.0u5
+libtac5 1.7.0-0.1
This should indicate that the package from vyos-build packages/pam_tacplus was built and sourced, rather than dd's vyos-missing packages/libnss-tacplus. Perhaps you could review your build configuration to exclude this pam_tacplus package and remove it from your repo.
Actually, it's probably best to remove pam_tacplus entirely from the preconfigured jobs, or otherwise include it in a disabled state somehow:
https://github.com/dd010101/vyos-jenkins/blob/master/jobs/project-jobs.json#L184
Otherwise, we're right on track.
1.3.8: I don't really see any meaningful differences here. They do seem to include a single additional package though:
-apt-transport-https 1.8.2.3
Given there generally aren't any apt repos configured post-install/in the live image, I'm not sure of the purpose of including this.
Looks great.
Thank you again for this information. It is extremely helpful and confidence inspiring.
OK, removing the most obvious stuff and making it clear this is not officially supported should be a good start; it shows we have the good will to respect their trademarks. I understand it's not absolutely all mentions of the name, just like the already mentioned example of Linux having AF_UNIX sockets.
By "you" I mean plural (English has this ambiguity); I know that a few people (not just one) made this happen - thanks!
I've seen some messages during libtacplus package build (missing group or something like that) scroll by, but assumed it was harmless as the build didn't fail in the end.
Debianize here means having the build system (Jenkins or whatever the new one will be - they don't seem to be very open in discussing this; a lot more may be going on internally) not make Debian binary packages directly, but instead make Debian-format source packages, then build binary packages from those source packages using only standard Debian tools, and put both source and binary packages in the same APT repo. As I understand it, this meets the GPL requirement to provide corresponding source, but doesn't require the source to be kept available indefinitely - just for as long as the binaries are available. The key here is "offering equivalent access to copy the source code from the same place", so a common APT repo (deb + deb-src) clearly satisfies this, but I'm not really sure "the Internet" (source on GitHub, binaries in an APT repo) qualifies as "the same place". Of course I don't know what they really plan to do (they can do something first and announce it later, as happened with blocking the LTS package repos), but without this they might say "we don't have the resources to respond to requests for source by email, so we had to close the APT repo to comply with the GPL". I hope that doesn't happen, really.
@Crushable1278
I was somehow convinced that additional magic was going into the official images.
What makes you think that? Their development is very much open and I don't see why they would want to do it. It makes sense to remove the branch completely - that is, delete it or leave it as is - but if they continue the development, and they do, why would they have additional magic outside the code base? I mean, they have add-ons for that...
Why wasn't this package upgraded in their repo by June? Looking today and it seems keepalived no longer has a presence at all in the official Sagitta repo
It has presence in the rolling repo as 1:2.2.8-1
https://rolling-packages.vyos.net/current/dists/current/main/binary-amd64/Packages, so what does this tell us? The ISO has it, the rolling has it, sagitta doesn't - yet it has a build script, so....?
This should indicate that the package from vyos-build packages/pam_tacplus was built and sourced rather than dd's vyos-missing packages/libnss-tacplus. Perhaps you could review your build configuration to exclude this pam_tacplus package and remove it from your repo.
The libnss-tacplus isn't the libpam-tacplus you are looking at. The libpam-tacplus is built from vyos-build/packages/pam_tacplus and this gives the 1.7. So why doesn't their repo have 1.7, if their build script produces 1.7? It seems like they haven't built it for a while for some reason... The rolling has 1.4.3-cl5.1.0u5 as well, built from hash 1b6c1f14cc288b71c6c7005a60d2d7d7b617f6a0, which doesn't match the 4f91b0d pinned in the build script, and this pin dates back 2 years. What gives?
@marekm72
..whatever the new one will be - they don't seem to be very open in discussing this
There is no information about the future build system; we just know it won't be Jenkins. That's why we don't want to invest time into the Jenkins setup that is currently used for rolling.
...make Debian-format source packages, then build binary packages from these source packages using only standard Debian tools, and put both source and binary packages in the same APT repo...
Why would you do it this way? The binary you are talking about is in most cases built by dpkg-buildpackage already... If you have the source code and you build the binary from it, why would you not make a tarball of the source code you used and provide that as the source package? dpkg-buildpackage does it for you. Nearly all the .debs without sources are built by Debian tools that are capable of making the source package at the same time as well.
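For illustration, a minimal sketch of what that looks like with the standard tooling - the package name and version here are hypothetical:
# run inside an unpacked, debianized source tree (one with a debian/ directory)
# -F requests a full build: the source package plus all binary packages
dpkg-buildpackage -F --no-sign
# both artifacts land one directory up, ready to be published side by side
ls ../example_1.0-1.dsc ../example_1.0-1_amd64.deb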
The key here is "offering equivalent access to copy the source code from the same place"
I think that's an option, not a requirement. If I read section 3), then I see multiple options, and you are not required to use the same place at all - but if you do, then you automatically satisfy 3). That's not the only way to satisfy 3), though. You can satisfy 3) by a written offer; there is no requirement on how you deliver the source code at all. Do they have a written offer? I'm not sure... They should! To satisfy the offer you could just include a link to their GitHub repositories, and that's a completely valid way to provide corresponding source - there is no need to satisfy "the same place". I'm not sure how they deliver the offer though...
If you want to host a repository, then I would suggest making an index.html that says what it is - or rather what it isn't - and links to the VyOS GitHub and this GitHub; then you satisfy "the same place" too, right? It's the same HTTP protocol, it's the same URL... I'm not sure what would describe "the same place" better than an identical protocol and address...
Check the generate-mirror-html.sh:
cd ./extras/mirror
./generate-mirror-html.sh
This will generate an HTML page for your mirror providing the corresponding source code offer and other important information. I would suggest not using "not VyOS" but rather NOTvyos, to avoid imitating the "VyOS" name style and also to make it clearer.
The Debian source packages aren't there to satisfy the GPL - they are a user-friendly way to make modifications or inspect what code you are using; you can easily satisfy the GPL without them too.
https://github.com/dd010101/vyos-missing/blob/sagitta/packages/libnss-tacplus/build.sh#L20
libpam-tacplus gets built as part of the libnss-tacplus build process because of libnss' unique dependencies. We know they haven't built it for a while using the official packages/pam_tacplus because the commit it's pinned to doesn't build.
The provided package lists are confirmation of what is in the official .iso, which lines up closely with the official nightly images, as expected.
But the vyos-build/packages/pam_tacplus produces the version I see in the apt...
Package: libpam-tacplus
Version: 1.7.0-0.1
So the libnss-tacplus doesn't explain anything at all. I don't see any way you would build 1.4 from the sagitta vyos-build/packages/pam_tacplus since it gives 1.7. If the libnss-tacplus produces 1.7 as well, then it doesn't change anything. I still have no way to build 1.4 from the vyos-build/packages/pam_tacplus...
Right. You don't. It gets ignored/discarded, but that's not documented officially. The packages must have been built by a maintainer (unless there's a source for the actual precompiled debs that I can't find) and sideloaded. Of course version 1.7.0 is > 1.4.3, so apt pulls that and its dependencies instead of/in addition to the older ones.
Are there functional differences between the two? I admittedly have no desire to figure it out one way or another, so I default to their choice. inotify is a different case, as the changes between -1 and -3 were ~3 lines that were essentially all comments - easily verified.
That's weird - so the vyos-missing is right and the vyos-build is wrong, I guess... The repository is getting both, and whichever package is built last is the package you will see.
We have no way to retroactively remove a job, so this isn't an easy thing to do - everyone would need to remove the pam_tacplus manually or spawn a fresh OS/rerun the scripts again.
EDIT: as luck would have it, we were forced to fork the pam_tacplus since it was broken, so we can simply rewrite the pam_tacplus job to build from https://github.com/vyos/libpam-tacplus instead.
EDIT2: made it a no-op, since the vyos-missing already builds it, so it's redundant to do it again. I also bumped the vyos-missing/libnss-tacplus so it will rebuild libpam-tacplus, and this will result in 1.4.3 for everyone with an existing Jenkins installation.
All packages built successfully but still:
[2024-07-05 19:56:17] lb chroot_install-packages install
P: Begin installing packages (install pass)...
Reading package lists...
Building dependency tree...
Reading state information...
Package telegraf is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Unable to locate package linux-image-6.6.36-amd64-vyos
E: Couldn't find any package by glob 'linux-image-6.6.36-amd64-vyos'
E: Couldn't find any package by regex 'linux-image-6.6.36-amd64-vyos'
E: Unable to locate package vyos-linux-firmware
E: Unable to locate package vyos-intel-qat
E: Unable to locate package vyos-intel-ixgbe
E: Unable to locate package vyos-intel-ixgbevf
E: Unable to locate package openvpn-dco
E: Package 'telegraf' has no installation candidate
E: An unexpected failure occurred, exiting...
P: Begin unmounting filesystems...
P: Saving caches...
Reading package lists...
Building dependency tree...
Reading state information...
I: Checking if packages required for VyOS image build are installed
I: using build flavors directory data/build-flavors
I: Cleaning the build workspace
I: Setting up additional APT entries
I: Configuring live-build
I: Starting image build
Traceback (most recent call last):
File "/vyos/./build-vyos-image", line 621, in <module>
cmd("lb build 2>&1")
File "/vyos/scripts/image-build/utils.py", line 84, in cmd
raise OSError(f"Command '{command}' failed")
OSError: Command 'lb build 2>&1' failed
ISO build failed
@koljenovic please follow here https://github.com/dd010101/vyos-jenkins/issues/32
@marekm72 I also noticed the cgit reports a bogus idle time. cgit expects an agefile, otherwise it falls back to the last modification time of the default branch - but VyOS uses many branches, so this doesn't work. That's why I added agefile support to the github-mirror.sh; the agefile contains the timestamp of the latest commit across all branches, so it shows the correct idle time.
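For reference, a minimal sketch of how such an agefile can be produced for a bare mirror - the repository path is hypothetical; info/web/last-modified is cgit's default agefile location:
cd /srv/git/vyos-build.git
mkdir -p info/web
# timestamp of the newest commit across all branches, in a format cgit can parse
git for-each-ref --sort=-committerdate --count=1 \
    --format='%(committerdate:iso8601)' refs/heads > info/web/last-modified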
Thanks - the fixed script is now installed; it shows the correct idle time and GH URLs in the descriptions. Also now making duplicity backups (incremental backup before each mirror run every 2 hours, full backup every month) on a local filesystem (just testing for now, will look into better options later).
Incrementing every 2 hours seems wasteful, because most of the time nothing will change and this will create empty increments - those don't take much space, but they are wasteful and create long incremental chains, and those are messy to deal with when you search for a specific version.
I'm using duply for configuring duplicity - duply has profiles where you define your backup (a profile is basically a bash script defining duplicity arguments) and then you use duply's much simpler CLI that uses those profiles. duply is a simple bash wrapper for duplicity, but it helps a bit - without it you basically need to write your own bash scripts/aliases so you don't repeat all the arguments over and over.
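To give a rough idea, a sketch of what a duply profile looks like - the profile name "mirrors" and the paths are assumptions, not the actual setup:
# ~/.duply/mirrors/conf
GPG_KEY='disabled'                    # skip encryption for a local test target
SOURCE='/home/git/mirrors'            # directory to back up
TARGET='file:///opt/vyos-backup'      # where duplicity stores the volumes
MAX_FULLBKP_AGE=1M                    # duply adds --full-if-older-than 1M
DUPL_PARAMS="$DUPL_PARAMS --skip-if-no-change"
With that in place, duply mirrors backup does an incremental run, or a full one once the chain is older than a month.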
Perhaps we could do backups only if something changed and the last backup was X ago? Thanks to the idle time we have timestamps for all branches/repositories, so it's easy to calculate the most recent timestamp of them all; then you can compare this timestamp to the timestamp of the last backup and decide if the delta between them is enough to justify a backup. duply/duplicity does the full backup automatically if you use --full-if-older-than, so we don't even need to decide whether we want a full or an increment - we just need to know if we do/don't want a backup in a specific run.
Thus I updated the github-mirror.sh to store the latest commit timestamp for each namespace. And then I wrote a new script, github-mirror-backup.sh, that fetches the timestamp from each namespace, compares it with the backup timestamp and decides what to do.
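Something along these lines, as a sketch of the decision logic - the timestamp file names and the 6-hour threshold are assumptions, not the actual github-mirror-backup.sh:
#!/bin/bash
set -e
latest=$(cat /srv/git/.latest-commit-timestamp)            # newest commit, unix time
last=$(cat /srv/git/.last-backup-timestamp 2>/dev/null || echo 0)
now=$(date +%s)
# back up only if something changed since the last backup
# and the last backup is at least 6 hours old
if [ "$latest" -gt "$last" ] && [ $((now - last)) -ge 21600 ]; then
    duply mirrors backup
    echo "$now" > /srv/git/.last-backup-timestamp
fi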
Check the git-mirror.md - it has one additional line in the update-mirrors.sh example and also details about duply.
Duplicity is simple, and super simple with duply; it's not the quickest, but it also doesn't create heavy load on the server, and it supports a lot of backends. There are also alternatives like restic or duplicacy, but I like duplicity better. The efficiency is comparable; some alternatives just trade high CPU load for a faster backup, for example.
Good point about empty incremental backups, I've just added --skip-if-no-change. For now I use duplicity 2.1.4-3~bpo12+1 from bookworm-backports (as bookworm has a much older version). It's just a single line added near the top of update-mirrors.sh:
duplicity --full-if-older-than 1M --no-encryption --skip-if-no-change backup "$ROOT_PATH" file:///opt/vyos-backup
I will look into duply as well, but for this use case duplicity alone seems fairly simple.
I think it makes sense to throttle the backups even further - back up only if the last backup was 3-6 hours ago and a change also happened, so it doesn't make unnecessary backups of partial work. So: did a change happen? Is the last backup older than X hours? Then back up. Then you can also mirror even every hour without worrying that it will spam backups because of an active work session. You want to mirror much more often than you back up, so your mirror is usable and there isn't too big a delay if a wrong commit happened and a fix followed.
The single line isn't enough - you also need cleanup, and the procedure for restoring/fetching/listing contents will make you tired of the arguments real fast; that's why I think using duply as a frontend is very reasonable. You can roll your own scripts for sure, but why bother when you can use something that already exists.
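For comparison, the day-to-day operations then shrink to short commands like these ("mirrors" again being a hypothetical profile name):
duply mirrors backup                  # incr, or full once MAX_FULLBKP_AGE is exceeded
duply mirrors purge --force           # delete chains older than MAX_AGE
duply mirrors list                    # list files in the latest backup
duply mirrors restore /tmp/restored   # restore the whole latest backup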
It's just a single line added near the top of update-mirrors.sh
On the top? On the bottom? I think you want to back up after mirroring, yes?
A backup before the new mirror run is also a backup after the previous one, so it doesn't really change much. OK, it was a quick hack - my thinking was that if the backup fails for any reason, the script terminates (because of "set -e") and the mirror is not updated, which can easily be seen on the web. I agree that the backup doesn't need to run as often as the mirror, but it also needs some protection from running simultaneously with the mirror (as that would back up inconsistent data). I'll look more into duply for sure, especially since I need a better solution for automated backups of my own internal things as well (still doing too much manually).
If you run it in cron and you have MAILTO/mailing correctly set, then you will get a notification on failure (thanks to stderr), so you don't need to rely on the web UI. A notification is better anyway, since you may not look at the web UI for a long time.
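Roughly like this in the crontab - the address and path are placeholders; cron mails any output, so a failure message on stderr becomes a notification:
MAILTO=admin@example.org
# mirror every hour; the script is quiet on success
0 * * * * /usr/local/bin/update-mirrors.sh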
Yes, it makes sense to run the backup after mirroring - that's how I updated the update-mirrors.sh. This way there can't be a conflict between the mirroring and the backup.
Backups... Backups... Backups...
I have periodic automated differential/incremental backups for everything and everywhere, even for the phone and vyos (for those I just transfer files elsewhere via rsync and then use duply on another machine to do the backup, since that's sometimes easier than trying to run duplicity everywhere). This has saved me work many times, because of accidental deletes for example. I take the disk usage as cheap insurance!
I think backups should be automated, versioned and also sometimes manually verified to be actually restorable - otherwise I would think there is no backup :), well, no backup you can depend on. The differential/incremental mechanism allows me to keep many backups with a long history that would otherwise be prohibited by the disk usage.
On Linux I use duply+duplicity for both directory and rootfs backups, and I have recovered a full rootfs from duplicity a number of times. It's basically a fancy tar of files, but that's all you need anyway. As a backend I use FTP or object storage like Backblaze B2 - this can be a very cost-effective way to have an offsite backup in a different country or continent. duply/duplicity has baked-in GPG encryption support, so the privacy of the data isn't really a concern: for not-so-trusted storage you can use encryption easily.
The rootfs backup profile looks the same, it just has a whole array of excludes for /dev, /proc and such. A full rootfs restore requires additional steps like setting up grub/boot and adjusting fstab, since the backup contains only the rootfs files - thus grub/boot is missing and the fstab UUID will be different because you create a new rootfs partition. A few things to take care of; it's a manual process but overall straightforward.
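The excludes might look something like this in the profile's exclude file (~/.duply/rootfs/exclude, duplicity filelist syntax; the exact set is a matter of taste):
# pseudo-filesystems and volatile data - no point backing these up
- /dev
- /proc
- /sys
- /run
- /tmp
- /var/tmp
- /var/cache
- /mnt
- /media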
There are downsides to differential/incremental backups too - if you have a long chain, the restore time will be longer. You also can't delete part of a chain - you can only delete the whole chain (the full backup and all its increments). Thus the longer the chain, the more of a disadvantage you get. Super long chains are definitely not recommended (depending on the backup frequency, 1 week to 1 month is generally a good value). Another edge case is large files that change all the time - like VM images; don't back those up, back up the VM internally instead - duply would make a full copy of the image every time. I take the downsides as a very good tradeoff, because if I made a full copy each time then the history would be very shallow, so there is no other option than to use some differential/incremental backup strategy.
If you back up manually via duply (you don't have automated backups) then you don't want --full-if-older-than, since that would create a full backup every time if the interval between runs is longer than MAX_FULLBKP_AGE. Thus it makes sense not to use --full-if-older-than at all and to run duply some full and duply some incr as you wish. I use this on systems that aren't servers and don't run periodically every day.
If you want fancy backups then look up PAR2; duplicity has support for it as well. PAR2 can calculate parity like RAID5 but on the file level, and thus you can protect your backups from bit rot of the storage by detecting/repairing corruption. The parity calculation is CPU-heavy, so it's not free, but with this combination you can create very good backups indeed, especially if you combine it with the 3-2-1 rule.
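In duplicity this is just a matter of wrapping the target URL with the par2+ backend - a sketch based on the one-liner above, using the default 10% redundancy:
duplicity --full-if-older-than 1M --no-encryption --skip-if-no-change \
    --par2-redundancy 10 backup "$ROOT_PATH" par2+file:///opt/vyos-backup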
EDIT: Some numbers of real world efficiency - those are rootfs backups of servers where majority of the storage are binary files like compressed images:
Full backup size 39.34 GB, 4 backups per day, 318 backups, resulting in 176.4 GB backup size. Keep 3 full backups/chains. Long chain.
Full backup size 91.64 GB (104 GB raw), 2 backups per day, 48 backups, resulting in 237.57 GB backup size. Keep 2 full backups/chains. Short chain.
I also have a larger sample of many duply backups I use for work - many are tiny directory backups, some are rootfs backups. The sum of the raw capacity is 526.67 GB, the space used by backups is 1.3 TB, and the total number of backups is 6774. Who knows what that would occupy if all of them were full copies - it's not easy to calculate, since not every project has the same number of backups, but whatever that number is, it would be something like 100x. This supports the argument that most real-world data doesn't change often, or at all.
Duplicity uses gzip, so you get some compression, but most of the capacity is occupied by already-compressed files, so the gain from compression is minimal. The biggest gain is the differential/incremental nature. My ballpark number is that I need on average 3x the used capacity to store a long history of backups; the exact numbers of course depend on how much the data changes, but in the real world the data doesn't change that much every day, so it yields the 3x multiplier even with hundreds of backups over a couple of months.
I think everything here is already completed or outdated, so closing this.
I've tried to follow the guide (a lot of steps, so I can't rule out making a mistake somewhere) and I'd like to report partial success - got as far as starting the ISO build, but it fails because a few packages in the newly created repo are missing:
Missing packages: telegraf, vyos-xe-guest-utilities, vyos-world, vyos-user-utils
Comparing the lists of files (extracted from directory listings vs the newly built repo) gives quite a large diff; excluding arm64 and some version changes (as expected for "LTS+"), a few more are missing:
and a few are added as well: