Closed joekohlsdorf closed 4 months ago
That's odd, we haven't updated the build containers since February or so. Do you know when GLIBC 2.32 was released?
On Tue, Jun 27, 2023 at 11:15 AM joe @.***> wrote:
The published binary for v1.1.0 for AMD64 requires GLIBC 2.32 or 2.34 but Debian Bullseye has 2.31. Debian Bookworm was only published ~2 weeks ago so in my opinion it isn't reasonable to already expect everyone to be on the latest version.
Error message:
./kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by ./kubectl-hns)
./kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by ./kubectl-hns)

To reproduce with Docker, put the following in a Dockerfile and run docker build .:

FROM debian:bullseye
ADD https://github.com/kubernetes-sigs/hierarchical-namespaces/releases/download/v1.1.0/kubectl-hns_linux_amd64 /kubectl-hns
RUN chmod +x ./kubectl-hns
RUN ./kubectl-hns help
The simplest fix is probably to use an older OS to compile the binary.
I was previously running 1.1.0rc2, which works. I just tested rc3 and found that it has the same issue. So if you updated the build containers in February just after publishing rc2, that is likely the change that introduced this incompatibility.
Running into the same thing with GitHub self hosted runners.
/home/runner/.krew/bin/kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/runner/.krew/bin/kubectl-hns)
/home/runner/.krew/bin/kubectl-hns: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/runner/.krew/bin/kubectl-hns)
Is there a fix or workaround?
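As a first diagnostic step (a sketch, not from this thread), you can compare the GLIBC symbol versions the published binary requires against what the host's libc provides; the install path below is an example and should be adjusted:

```shell
# Diagnostic sketch: compare the GLIBC symbol versions a binary
# requires with the glibc version the host actually ships.
BIN=./kubectl-hns   # adjust, e.g. ~/.krew/bin/kubectl-hns

# Symbol versions the binary was linked against:
objdump -T "$BIN" | grep -o 'GLIBC_[0-9.]*' | sort -u

# The host's glibc version:
ldd --version | head -n1
```

If the highest GLIBC_2.xx version listed by objdump exceeds what ldd reports, the binary cannot run on that host.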
Hi, same problem here on Rocky 8.7.
Ok I'll go see if I can downgrade somehow
Sorry I haven't gotten to this yet :( Will have another look.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Hi, same problem here.

[root@cs-xndb1 ~]# kubectl hns --help
/usr/local/sbin/kubectl-hns: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /usr/local/sbin/kubectl-hns)
/usr/local/sbin/kubectl-hns: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/sbin/kubectl-hns)
[root@cs-xndb1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
@mist714 raised a PR for this, which was merged on 14 November 2023 (PR 236). It builds the Linux binaries with CGO_ENABLED=0, so it should avoid these glibc issues completely. However, that merge came after v1.1.0 was built and released, so the published binaries are still linked against glibc. For the time being, people may want to build from source themselves. Maintainers, could somebody please roll an interim v1.1.1 or similar with at least this change in, to help folks out? (I can confirm that a binary built with CGO_ENABLED=0 worked for my use case.)
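Until a release with that change ships, building from source might look like the sketch below. The clone URL is from this thread; the Go package path is an assumption, so check the repo's Makefile for the real build target:

```shell
# Sketch of building the plugin with CGO disabled, as in the merged fix.
# NOTE: ./cmd/kubectl-hns is a guessed package path, not confirmed.
git clone https://github.com/kubernetes-sigs/hierarchical-namespaces.git
cd hierarchical-namespaces
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o kubectl-hns ./cmd/kubectl-hns
# A CGO-free build has no dynamic glibc dependency:
file kubectl-hns   # should report "statically linked"
```

Because the resulting binary is statically linked, it runs the same on Bullseye, CentOS 7, and newer distros alike.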
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Looks like little interest in this feature.
@iamasmith It's fixed in HEAD, just need a new release.
@joekohlsdorf yes, I think it was me that pointed this out