dd010101 / vyos-jenkins

Scripts for building custom VyOS stream (1.5 circinus) packages/images. Also legacy scripts for building frozen 1.3 equuleus/1.4 sagitta packages/images.

podman dependencies problem running build-iso.sh #35

Closed carlo-luzi closed 4 months ago

carlo-luzi commented 4 months ago
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 podman : Depends: libc6 (>= 2.38) but 2.36-9+deb12u7 is to be installed
          Depends: libgpgme11t64 (>= 1.4.1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
E: An unexpected failure occurred, exiting...
P: Begin unmounting filesystems...
P: Saving caches...
Reading package lists...
Building dependency tree...
Reading state information...
I: Checking if packages required for VyOS image build are installed
I: using build flavors directory data/build-flavors
I: Cleaning the build workspace
I: Setting up additional APT entries
I: Configuring live-build
I: Starting image build
Traceback (most recent call last):
  File "/vyos/./build-vyos-image", line 621, in <module>
    cmd("lb build 2>&1")
  File "/vyos/scripts/image-build/utils.py", line 84, in cmd
    raise OSError(f"Command '{command}' failed")
OSError: Command 'lb build 2>&1' failed
ISO build failed

This is a clean, up-to-date Debian 12 install in a VM; all automated scripts from 1 to 8 ran without problems.

For further reference, here is the build-iso.sh output: build-iso.log

carlo-luzi commented 4 months ago

I also attach the uncron service log: uncron.log

Crushable1278 commented 4 months ago

This is fixed in Current and has not been backported to Circinus or Sagitta yet.

https://github.com/vyos/vyos-build/commit/16753c9d3a61385e1efa0e1eb524ee070e812551

Of course, as PR 690's discussion indicates, Bookworm ships glibc 2.36 while Trixie ships glibc 2.38. Podman 4.9.5 was released to address a few CVEs. Because we run Bookworm, we cannot take this new build, or any subsequent builds with the newer glibc requirement, and are therefore perpetually vulnerable to these and future CVEs.

The team will need to come up with a solution - probably building from source.

Podman 4.9.5 release: https://github.com/containers/podman/releases/tag/v4.9.5
Release notes: https://github.com/containers/podman/blob/v4.9.5/RELEASE_NOTES.md

The CVE at hand is here: https://nvd.nist.gov/vuln/detail/CVE-2024-3727

Crushable1278 commented 4 months ago

The fix is now merged into Circinus.

dd010101 commented 4 months ago

Yet another build broken by a pull request stuck in waiting... 🤔

The fix is also merged into Sagitta now, so it should be resolved? Seems to be working for me...

carlo-luzi commented 4 months ago

Three hours ago I restarted the automated sequence and the build was successful. Tomorrow I'll do a test install with the ISO.

carlo-luzi commented 4 months ago

It seems to work. For reference, here are the versions of some packages affected by the upstream fix:

vyos@vyos:~$ dpkg -l | grep -e libgpgme11t64 -e podman -e netavark
ii  netavark                             1.4.0-4.1                        amd64        Rust based network stack for containers
ii  podman                               4.3.1+ds1-8+deb12u1              amd64        engine to run OCI-based containers in Pods
vyos@vyos:~$

Crushable1278 commented 4 months ago

That doesn't seem quite right.

$ dpkg -l | grep -e libgpgme11t64 -e podman -e netavark
ii  libgpgme11t64:amd64                  1.18.0-4.1+b1                    amd64        GPGME - GnuPG Made Easy (library)
ii  netavark                             1.4.0-4.1                        amd64        Rust based network stack for containers
ii  podman                               4.9.4+ds1-1                      amd64        tool to manage containers and pods

4.3.1 is Bookworm's version of podman, not Trixie's.

The fix pinned the version to 4.9.4* - given that, how'd you pull in 4.3.1?

Crushable1278 commented 4 months ago

I'm seeing the same as well... it's 4.3.1, not honoring Trixie's pin. @dd010101 Can you confirm this fix is bunk?

dd010101 commented 4 months ago

I have 4.3.1 too.

Does 4.9.4 not exist anymore? Is that why it falls back to the Bookworm version? I'm not sure what Debian's policy is - do they keep outdated versions in the repositories?

trystan-s commented 4 months ago

Even their most recent daily rolling build has podman 4.3.1+ds1-8+deb12u1 so it's safe to say their commit didn't work.

Crushable1278 commented 4 months ago

http://ftp.debian.org/debian/pool/main/libp/libpod/ 4.9.4 exists.

I set the pin to Pin: version 4.9.4+ds1-1 and it still failed.
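For reference, a pin like that is just an APT preferences stanza - a minimal sketch of the shape (the exact file name and priority that vyos-build writes may differ):

cat > /etc/apt/preferences.d/podman <<'EOF'
Package: podman
Pin: version 4.9.4+ds1-1
Pin-Priority: 1001
EOF
apt-cache policy podman   # shows which candidate the pin actually selects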

Don't get me started on the nightly images... they are published even after smoketests fail anyway (well, one of them does).

dd010101 commented 4 months ago

http://ftp.debian.org/debian/pool/main/libp/libpod/

No, it doesn't exist. So is this a caching thing - has it been removed from some mirrors?

I see this:

golang-github-containers-libpod-dev_3.0.1+dfsg1-3+deb11u5_all.deb  2023-12-30 20:15  1.4M
libpod_3.0.1+dfsg1-3+deb11u5.debian.tar.xz  2023-12-29 23:43  19K
libpod_3.0.1+dfsg1-3+deb11u5.dsc  2023-12-29 23:43  4.8K
libpod_3.0.1+dfsg1.orig.tar.xz  2021-02-24 17:28  2.1M
libpod_4.3.1+ds1-8+deb12u1.debian.tar.xz  2024-03-25 21:37  20K
libpod_4.3.1+ds1-8+deb12u1.dsc  2024-03-25 21:37  4.3K
libpod_4.3.1+ds1.orig.tar.xz  2022-11-14 00:30  2.5M
libpod_4.9.4+ds1-1.debian.tar.xz  2024-03-28 11:32  22K
libpod_4.9.4+ds1-1.dsc  2024-03-28 11:32  5.4K
libpod_4.9.4+ds1.orig.tar.xz  2024-03-28 11:32  2.7M
libpod_4.9.5+ds1-1.debian.tar.xz  2024-07-04 21:27  22K
libpod_4.9.5+ds1-1.dsc  2024-07-04 21:27  5.4K
libpod_4.9.5+ds1.orig.tar.xz  2024-07-04 21:27  2.7M
libpod_5.0.2+ds1-3.debian.tar.xz  2024-05-05 11:54  24K
libpod_5.0.2+ds1-3.dsc  2024-05-05 11:54  5.4K
libpod_5.0.2+ds1.orig.tar.xz  2024-04-28 18:21  2.7M
podman-docker_4.3.1+ds1-8+deb12u1_amd64.deb  2024-04-01 12:19  19K
podman-docker_4.3.1+ds1-8+deb12u1_arm64.deb  2024-04-01 12:24  19K
podman-docker_4.3.1+ds1-8+deb12u1_armel.deb  2024-04-01 12:19  19K
podman-docker_4.3.1+ds1-8+deb12u1_armhf.deb  2024-04-01 12:24  19K
podman-docker_4.3.1+ds1-8+deb12u1_i386.deb  2024-04-01 12:19  19K
podman-docker_4.3.1+ds1-8+deb12u1_mips64el.deb  2024-04-01 13:29  19K
podman-docker_4.3.1+ds1-8+deb12u1_mipsel.deb  2024-04-01 13:14  19K
podman-docker_4.3.1+ds1-8+deb12u1_ppc64el.deb  2024-04-01 12:14  19K
podman-docker_4.3.1+ds1-8+deb12u1_s390x.deb  2024-04-01 12:13  19K
podman-docker_4.9.5+ds1-1_amd64.deb  2024-07-04 22:02  25K
podman-docker_4.9.5+ds1-1_arm64.deb  2024-07-04 21:57  25K
podman-docker_4.9.5+ds1-1_armel.deb  2024-07-04 22:02  25K
podman-docker_4.9.5+ds1-1_armhf.deb  2024-07-04 22:02  25K
podman-docker_4.9.5+ds1-1_i386.deb  2024-07-04 21:57  25K
podman-docker_4.9.5+ds1-1_mips64el.deb  2024-07-05 04:50  25K
podman-docker_4.9.5+ds1-1_ppc64el.deb  2024-07-04 21:57  25K
podman-docker_4.9.5+ds1-1_riscv64.deb  2024-07-05 00:49  25K
podman-docker_4.9.5+ds1-1_s390x.deb  2024-07-04 22:38  25K
podman-docker_5.0.2+ds1-3_amd64.deb  2024-05-05 20:13  24K
podman-docker_5.0.2+ds1-3_arm64.deb  2024-05-05 20:28  24K
podman-docker_5.0.2+ds1-3_armel.deb  2024-05-05 20:59  24K
podman-docker_5.0.2+ds1-3_armhf.deb  2024-05-05 20:59  24K
podman-docker_5.0.2+ds1-3_i386.deb  2024-05-05 20:23  24K
podman-docker_5.0.2+ds1-3_mips64el.deb  2024-05-06 04:17  24K
podman-docker_5.0.2+ds1-3_ppc64el.deb  2024-05-05 20:13  24K
podman-docker_5.0.2+ds1-3_riscv64.deb  2024-05-13 15:29  24K
podman-docker_5.0.2+ds1-3_s390x.deb  2024-05-05 22:07  24K
podman-remote_4.9.5+ds1-1_amd64.deb  2024-07-04 22:02  7.9M
podman-remote_4.9.5+ds1-1_arm64.deb  2024-07-04 21:57  6.8M
podman-remote_4.9.5+ds1-1_armel.deb  2024-07-04 22:02  6.5M
podman-remote_4.9.5+ds1-1_armhf.deb  2024-07-04 22:02  6.5M
podman-remote_4.9.5+ds1-1_i386.deb  2024-07-04 21:57  6.9M
podman-remote_4.9.5+ds1-1_mips64el.deb  2024-07-05 04:50  5.7M
podman-remote_4.9.5+ds1-1_ppc64el.deb  2024-07-04 21:57  6.3M
podman-remote_4.9.5+ds1-1_riscv64.deb  2024-07-05 00:49  6.6M
podman-remote_4.9.5+ds1-1_s390x.deb  2024-07-04 22:38  6.8M
podman-remote_5.0.2+ds1-3_amd64.deb  2024-05-05 20:13  7.2M
podman-remote_5.0.2+ds1-3_arm64.deb  2024-05-05 20:28  6.2M
podman-remote_5.0.2+ds1-3_armel.deb  2024-05-05 20:59  5.8M
podman-remote_5.0.2+ds1-3_armhf.deb  2024-05-05 20:59  5.8M
podman-remote_5.0.2+ds1-3_i386.deb  2024-05-05 20:23  6.3M
podman-remote_5.0.2+ds1-3_mips64el.deb  2024-05-06 04:17  5.2M
podman-remote_5.0.2+ds1-3_ppc64el.deb  2024-05-05 20:13  5.7M
podman-remote_5.0.2+ds1-3_riscv64.deb  2024-05-13 15:29  6.0M
podman-remote_5.0.2+ds1-3_s390x.deb  2024-05-05 22:07  6.1M
podman_3.0.1+dfsg1-3+deb11u5_amd64.deb  2023-12-30 20:10  9.1M
podman_3.0.1+dfsg1-3+deb11u5_arm64.deb  2023-12-30 20:20  7.8M
podman_3.0.1+dfsg1-3+deb11u5_armel.deb  2023-12-30 20:25  7.8M
podman_3.0.1+dfsg1-3+deb11u5_armhf.deb  2023-12-30 20:15  7.8M
podman_3.0.1+dfsg1-3+deb11u5_i386.deb  2023-12-30 20:15  8.4M
podman_3.0.1+dfsg1-3+deb11u5_mips64el.deb  2023-12-30 22:17  7.0M
podman_3.0.1+dfsg1-3+deb11u5_mipsel.deb  2023-12-30 22:02  7.0M
podman_3.0.1+dfsg1-3+deb11u5_ppc64el.deb  2023-12-30 20:10  7.5M
podman_3.0.1+dfsg1-3+deb11u5_s390x.deb  2023-12-30 20:36  8.1M
podman_4.3.1+ds1-8+deb12u1_amd64.deb  2024-04-01 12:19  10M
podman_4.3.1+ds1-8+deb12u1_arm64.deb  2024-04-01 12:24  8.8M
podman_4.3.1+ds1-8+deb12u1_armel.deb  2024-04-01 12:19  9.0M
podman_4.3.1+ds1-8+deb12u1_armhf.deb  2024-04-01 12:24  8.9M
podman_4.3.1+ds1-8+deb12u1_i386.deb  2024-04-01 12:19  9.6M
podman_4.3.1+ds1-8+deb12u1_mips64el.deb  2024-04-01 13:29  8.0M
podman_4.3.1+ds1-8+deb12u1_mipsel.deb  2024-04-01 13:14  8.1M
podman_4.3.1+ds1-8+deb12u1_ppc64el.deb  2024-04-01 12:14  8.4M
podman_4.3.1+ds1-8+deb12u1_s390x.deb  2024-04-01 12:13  9.2M
podman_4.9.5+ds1-1_amd64.deb  2024-07-04 22:02  12M
podman_4.9.5+ds1-1_arm64.deb  2024-07-04 21:57  11M
podman_4.9.5+ds1-1_armel.deb  2024-07-04 22:02  11M
podman_4.9.5+ds1-1_armhf.deb  2024-07-04 22:02  11M
podman_4.9.5+ds1-1_i386.deb  2024-07-04 21:57  11M
podman_4.9.5+ds1-1_mips64el.deb  2024-07-05 04:50  9.6M
podman_4.9.5+ds1-1_ppc64el.deb  2024-07-04 21:57  10M
podman_4.9.5+ds1-1_riscv64.deb  2024-07-05 00:49  11M
podman_4.9.5+ds1-1_s390x.deb  2024-07-04 22:38  11M
podman_5.0.2+ds1-3_amd64.deb  2024-05-05 20:13  12M
podman_5.0.2+ds1-3_arm64.deb  2024-05-05 20:28  10M
podman_5.0.2+ds1-3_armel.deb  2024-05-05 20:59  10M
podman_5.0.2+ds1-3_armhf.deb  2024-05-05 20:59  10M
podman_5.0.2+ds1-3_i386.deb  2024-05-05 20:23  11M
podman_5.0.2+ds1-3_mips64el.deb  2024-05-06 04:17  9.3M
podman_5.0.2+ds1-3_ppc64el.deb  2024-05-05 20:13  10M
podman_5.0.2+ds1-3_riscv64.deb  2024-05-13 15:29  10M
podman_5.0.2+ds1-3_s390x.deb  2024-05-05 22:07  11M
trystan-s commented 4 months ago

From that link I only see versions 3.0.1, 4.3.1, 4.9.5, and 5.0.2

dd010101 commented 4 months ago

Yep, I see no reason why Debian would keep old versions around - imagine the storage capacity that would require... The fix that pins to an old version is therefore wrong, since it pins a version that is subject to removal and will soon be gone from all mirrors.

How to fix it properly? I see only one way - build podman 4.9.5 from source.
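Roughly, that means rebuilding the Debian source package ourselves. A sketch of the generic workflow, assuming the devscripts tooling is installed and that Bookworm can satisfy the build dependencies (the Go toolchain requirement is exactly where this is likely to hurt):

# fetch the 4.9.5 source from the pool (dget downloads the .dsc plus tarballs and unpacks them)
dget http://ftp.debian.org/debian/pool/main/libp/libpod/libpod_4.9.5+ds1-1.dsc
cd libpod-4.9.5+ds1
sudo apt-get build-dep ./   # will fail if Bookworm's build dependencies are too old
dpkg-buildpackage -us -uc -b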

Crushable1278 commented 4 months ago

I saw the source package, not the compiled binary package. Silly me.

Wow, so I got it from here on Monday: http://http.us.debian.org/debian/pool/main/libp/libpod/podman_4.9.4+ds1-1_arm64.deb

This is bad. Really bad.

You'll need to put the appropriate package in your vyos-build/packages directory prior to building the ISO. podman_4.9.4+ds1-1.zip
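In other words, something along these lines before kicking off the ISO build (the paths and the exact .deb name inside the archive are just an example, adjust as needed):

# the ISO build picks up local .deb files placed in vyos-build/packages/
unzip podman_4.9.4+ds1-1.zip
cp podman_4.9.4+ds1-1_amd64.deb /path/to/vyos-build/packages/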

dd010101 commented 4 months ago

Is the VyOS team aware of this? I guess they need a backlog task to fix this properly eventually, since pinning to 4.9.4 forever isn't a long-term solution - building from source is the way forward. This would change the priority of such a task...

Crushable1278 commented 4 months ago

I'm sure they're not. With the status of our accounts, who will let them know?

Why was the decision made in the first place to move to Trixie's podman (4.7 per the task)? There was almost no text in that task, let alone justification.

dd010101 commented 4 months ago

Because of bug https://vyos.dev/T5829

I'm sure they're not. With the status of our accounts, who will let them know?

I don't have any accounts anymore (forum/dev); everything on the VyOS side was deleted 😄

Crushable1278 commented 4 months ago

T5829 doesn't list which version of Bookworm's podman was in use at the time Sagitta was noted as impacted, but it states the issue is fixed in upstream's 4.7.2. I can't seem to find any relevant commits from the podman PR or release notes indicating it was fixed. I'm sure this just means I'm not looking properly.

I'm essentially trying to see whether Bookworm's version picked up the fix between the time it was reported as an issue and the April release of 4.3.1. Maybe it's not an issue anymore and we can go back to Bookworm's?

Someone would need to reproduce the test case from T5829 on a new build - or the 20240710 nightly.

dd010101 commented 4 months ago

I can't seem to find any relevant commits from the podman PR or release notes indicating it was fixed. I'm sure this just means I'm not looking properly.

This could have been fixed by 4.7.0, for example - hard to find out. Only c-po knows how he concluded that the newer version fixes this.

I'm essentially trying to see whether Bookworm's version picked up the fix between the time it was reported as an issue and the April release of 4.3.1. Maybe it's not an issue anymore and we can go back to Bookworm's?

Then it's easier to look at the Debian packaging changes - do you see a relevant back-port? The only newer version is the latest one, so check the changelog and the patches.
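A quick way to skim the Debian packaging changelog for a relevant back-port, if someone wants to check (a sketch; apt changelog fetches from Debian's changelog server, so it needs network access):

apt changelog podman | head -n 80   # look for container/network fixes in the +deb12u1 entries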

Someone would need to reproduce the test case from T5829 on a new build - or the 20240710 nightly.

The funny thing is that I can't make it work even with a previous sagitta build...

[attachment: error]

Yes, it's broken with 4.3.1 but also with 4.7.4, so this doesn't tell us much...

The steps:

# you need an installed VyOS system and internet access for the container pull
configure
set system name-server 8.8.8.8
set interfaces ethernet eth0 address dhcp
commit

exit

add container image alpine

configure

set container name alp01 image 'alpine'
set container name alp01 network NET01 address '10.0.0.12'
set container network NET01 prefix '10.0.0.0/24'
commit

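# the second commit below - adding IPv6 to the already-committed IPv4-only network - is the step that fails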
set container network NET01 prefix '2001:db8::/64'
set container name alp01 network NET01 address '2001:db8::12'
commit

The 20240710 nightly fails as well. EDIT: The 20240630 rolling fails as well. So let's just say the podman version is pretty much irrelevant, because even with 4.7.4 you will have a really hard time using VyOS with dual-stack containers...

My sagitta build fails with both versions. The rolling also fails with both.

Thus I conclude - it doesn't matter; it didn't work before and it doesn't work now...

Crushable1278 commented 4 months ago

I apologize that this is a bit off track. One thing I noticed while checking this silly issue was that the version dump produced by the error handler does not reuse the same code as the show version command handler.

Something I do in my Sagitta+ images is further extend the debranding and strip the hardcoded mention of VyOS from the reported version string when executing show version, primarily found in vyos-1x. This can be done in src/op_mode/version.py for Sagitta+ and src/op_mode/show_version.py for Equuleus. But importantly, testing here showed me that one must also consider python/vyos/airbag.py.
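To illustrate the idea only (a hypothetical sketch - the replacement name is made up, and a real patch should touch only the user-visible strings in those files, not identifiers):

# hypothetical debranding pass over a vyos-1x checkout before packaging
sed -i 's/VyOS/MyRouterOS/g' src/op_mode/version.py python/vyos/airbag.py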

I know you've patched out Equuleus' default_motd, found within vyos-build. Sagitta+ default_motd can be found within vyos-1x here: data/templates/login/default_motd.j2.

There are obviously a million other places that could be changed. For me, I figured show version would be one of the most visible debranding changes possible. Just a thought.

All in all, I'm immeasurably disappointed that a bug was found, a commit was added to allegedly fix said bug, but no smoketest was ever written to verify the fix or to ensure the bug stayed fixed. Somehow, I used to just use VyOS and not worry about what was broken - hah!

As always, it's been a pleasure probing broken code with you guys.

carlo-luzi commented 4 months ago

Thank you all very much; in the end, maybe this is the price to pay for building a "FrankenDebian".

dd010101 commented 4 months ago

@Crushable1278

Something I do in my Sagitta+ images is further extend the debranding and strip the hardcoded mention of VyOS from the reported version string

Something like this https://github.com/dd010101/vyos-build/commit/a6aada10e66c3d70776a7111bfd49098bb278bd8?

[attachment: vyos]

There are countless mentions of "VyOS", but the point isn't to get rid of all of them; it's enough to remove the most visible ones in each stage (install/boot/login). The version command is also a good candidate. There are also pointless mentions like this one: https://github.com/vyos/vyos-1x/blob/sagitta/op-mode-definitions/show-system.xml.in#L126. Even where the name could easily be replaced by a generic term, it obviously wasn't made to be de-branded.

All in all, I'm immeasurably disappointed that a bug was found, a commit was added to allegedly fix said bug, but no smoketest was ever written to verify the fix or to ensure the bug stayed fixed.

I would hope that the fix worked at least for a while and then broke again. This isn't a blocking bug though. You can still use containers with dual-stack; you just need to define both the IPv4 and IPv6 ranges for the network in one commit. If you define the network with only IPv4 and then add IPv6, it fails, but you can create a new network with both and reassign it to all containers - a lot of legwork, but workable.
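For reference, the shape of the workaround - both address families declared for the network within a single commit (same example names as in the repro steps above):

configure
set container network NET01 prefix '10.0.0.0/24'
set container network NET01 prefix '2001:db8::/64'
set container name alp01 image 'alpine'
set container name alp01 network NET01 address '10.0.0.12'
set container name alp01 network NET01 address '2001:db8::12'
commit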

My experience with the smoketests is that they create way more false positives than they catch bugs. Like an identical test failing on one machine and not on another, running out of memory, race conditions between commit/check... It could be better...

I also understand that different functionalities have different priorities, and perhaps non-breaking container bugs aren't a top priority for VyOS, so they don't warrant the same attention - like writing a test for each bug - that other parts get. You can confirm this by looking at the test https://github.com/vyos/vyos-1x/blob/sagitta/smoketest/scripts/cli/test_container.py: containers have a lot of functionality and options, and only a few are tested. The funny thing is that there are a couple of tests, and one of them is "test_dual_stack_network", yet it passes because it does IPv4+IPv6 together and not IPv4 then IPv6 😄.

Crushable1278 commented 4 months ago

https://vyos.dev/T6598

This was just filed.

Let's see what the outcome is.

dd010101 commented 4 months ago

A new custom build script - https://github.com/vyos/vyos-build/pull/709/files - as expected. Work in progress.

dd010101 commented 4 months ago

The Sagitta podman package was added in https://github.com/vyos/vyos-build/pull/719. Yet the build is broken - because of https://github.com/vyos/vyos-build/pull/721. Waiting...

Crushable1278 commented 4 months ago

The container already has the go path defined, making PR 721 redundant. What error are you seeing? I'm building without issue.

[14:56:23 root@89e9c60f2ce2 vyos {0}]# echo $PATH
/opt/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[14:56:26 root@89e9c60f2ce2 vyos {0}]# which go
/opt/go/bin/go
Created package {:path=>"../podman_4.9.5_amd64.deb"}

I noticed that the golang-github-containers-common dependency that was present before with 4.9.4/4.3.1 is no longer present, leaving this package excluded from the final image. Not a lot in the package and containers seem to function without it, but it is a difference compared with older official and unofficial builds.
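Easy enough to check on a booted image, the same way as earlier in this thread:

dpkg -l | grep -e golang-github-containers-common -e podman -e netavark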

I expect we'll see a 1.4.1 release in the coming week given the amount of bug fixes that went in this week around Sagitta smoketests.

dd010101 commented 4 months ago

19:52:12  + cd podman
19:52:12  + echo 'I: installing dependencies'
19:52:12  I: installing dependencies
19:52:12  + make install.tools
19:52:12  /usr/bin/bash: line 1: go: command not found
19:52:12  /usr/bin/bash: line 1: go: command not found
19:52:12  env: ‘go’: No such file or directory
19:52:12  env: ‘go’: No such file or directory

It's failing for me, since the freshly built sagitta container is broken with respect to the go path. So I don't think it's redundant - why do you think it is?

Crushable1278 commented 4 months ago

https://github.com/vyos/vyos-build/blob/d9b1177b78f4744fb9775e71fa5fe318f9ccbaa6/docker/Dockerfile#L373

The container environment already has go defined via .bashrc:

docker.io/vyos/vyos-build:sagitta
Current UID/GID: 0/0
useradd warning: vyos_bld's uid 0 outside of the UID_MIN 1000 and UID_MAX 60000 range.
useradd: warning: the home directory /home/vyos_bld already exists.
useradd: Not copying any file from skel directory into it.
dircolors: no SHELL environment variable, and no shell type option given
[18:40:34 root@3f8565d4f4e9 vyos {0}]# echo $PATH
/opt/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[18:41:06 root@3f8565d4f4e9 vyos {0}]# go
Go is a tool for managing Go source code.

Usage:

        go <command> [arguments]
...
stripped
...
[18:41:07 root@3f8565d4f4e9 vyos {2}]# which go
/opt/go/bin/go

How can it be that the go path is available when running the container manually, but missing when running it through automation?

Automation:

PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WHICH GO: 

Manual:

PATH: /opt/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WHICH GO: /opt/go/bin/go

There seems to be something overriding the home directory under automation, causing the .bashrc file to be missed and/or not contain the necessary path.
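A minimal way to see the difference outside of Jenkins (a sketch - it assumes the go path comes from the ~/.bashrc of the user the container starts as, and that the image's entrypoint passes the command through):

docker run --rm docker.io/vyos/vyos-build:sagitta bash -c 'echo $PATH; which go'    # non-interactive: ~/.bashrc is skipped
docker run --rm docker.io/vyos/vyos-build:sagitta bash -ic 'echo $PATH; which go'   # forced interactive: ~/.bashrc is sourced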

dd010101 commented 4 months ago

This was discussed before: .bashrc is not read by non-interactive shells, and thus anything in there is disregarded. Kudos to pittagurneyi for finding this.

Thus the build is broken unless you run it interactively, and Jenkins doesn't.
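So the non-interactive build step has to bring the path in itself instead of relying on .bashrc - something along these lines in the podman build step (a sketch; /opt/go/bin matches the path shown above):

# make the Go toolchain visible to the non-interactive shell before building podman
export PATH=/opt/go/bin:$PATH
which go            # should now resolve to /opt/go/bin/go
make install.tools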