linkerd / linkerd2

Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.
https://linkerd.io
Apache License 2.0

Change default `cr.l5d.io` to `ghcr.io`? #12642

Closed sheeeng closed 1 month ago

sheeeng commented 1 month ago

What is the issue?

I want to change all occurrences of the default registry cr.l5d.io to ghcr.io.

diff --git a/core/linkerd-cni/HelmRelease.yaml b/core/linkerd-cni/HelmRelease.yaml
index 8e4a1ddae..7d50772f7 100644
--- a/core/linkerd-cni/HelmRelease.yaml
+++ b/core/linkerd-cni/HelmRelease.yaml
@@ -21,5 +21,7 @@ spec:
       chart: linkerd2-cni
       version: 30.12.2
   values:
+    image:
+      name: ghcr.io/linkerd/cni-plugin # Defaults to `cr.l5d.io`. # https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/charts/linkerd2-cni/values.yaml#L60
     repairController:
       enabled: true
diff --git a/core/linkerd-viz/HelmRelease.yaml b/core/linkerd-viz/HelmRelease.yaml
index 6c8eadfb4..f91e3cf27 100644
--- a/core/linkerd-viz/HelmRelease.yaml
+++ b/core/linkerd-viz/HelmRelease.yaml
@@ -24,6 +24,7 @@ spec:
       chart: linkerd-viz
       version: 30.12.11
   values:
+    defaultRegistry: ghcr.io/linkerd # Defaults to `cr.l5d.io`. https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/viz/charts/linkerd-viz/values.yaml#L22
     tap:
       externalSecret: true
       injectCaFrom: linkerd-viz/linkerd-tap
diff --git a/core/linkerd/HelmRelease.yaml b/core/linkerd/HelmRelease.yaml
index 7e49fa351..4b67bd6de 100644
--- a/core/linkerd/HelmRelease.yaml
+++ b/core/linkerd/HelmRelease.yaml
@@ -52,6 +52,10 @@ spec:
       version: 1.16.11
   values:
     cniEnabled: true
+    controllerImage: ghcr.io/linkerd/controller # Defaults to `cr.l5d.io`. https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/charts/linkerd-control-plane/values.yaml#L355
+    debugContainer:
+      image:
+        name: ghcr.io/linkerd/debug # Defaults to `cr.l5d.io`. https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/charts/linkerd-control-plane/values.yaml#L385
     identity:
       externalCA: true
       issuer:
@@ -67,6 +71,15 @@ spec:
       injectCaFrom: linkerd/linkerd-sp-validator
     podMonitor:
       enabled: true
+    policyController:
+      image:
+        name: ghcr.io/linkerd/policy-controller # Defaults to `cr.l5d.io`. https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/charts/linkerd-control-plane/values.yaml#L85
+    proxy:
+      image:
+        name: ghcr.io/linkerd/proxy # Defaults to `cr.l5d.io`. https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/charts/linkerd-control-plane/values.yaml#L145
+    proxyInit:
+      image:
+        name: ghcr.io/linkerd/proxy-init # Defaults to `cr.l5d.io`. https://github.com/linkerd/linkerd2/blob/84cda9e9ffdd0addc702aed912d6cfba56a9e77e/charts/linkerd-control-plane/values.yaml#L282
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:

However, some containers are still using images from cr.l5d.io.

$ kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" |\
tr -s '[[:space:]]' '\n' |\
sort |\
uniq -c
    104 cr.l5d.io/linkerd/proxy:stable-2.14.10
     18 ghcr.io/linkerd/cni-plugin:v1.3.0
     12 ghcr.io/linkerd/controller:stable-2.14.10
      1 ghcr.io/linkerd/metrics-api:stable-2.14.10
      3 ghcr.io/linkerd/policy-controller:stable-2.14.10
     34 ghcr.io/linkerd/proxy:stable-2.14.10
      4 ghcr.io/linkerd/tap:stable-2.14.10
      1 ghcr.io/linkerd/web:stable-2.14.10
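
For reference, the `tr | sort | uniq -c` pipeline can be checked in isolation against a hard-coded image list (the three sample images below are illustrative, not taken from the cluster):

```shell
# Collapse whitespace-separated image names to one per line,
# then count the unique ones, exactly as the kubectl pipeline does.
printf '%s %s %s' \
  'cr.l5d.io/linkerd/proxy:stable-2.14.10' \
  'ghcr.io/linkerd/proxy:stable-2.14.10' \
  'cr.l5d.io/linkerd/proxy:stable-2.14.10' |
  tr -s '[[:space:]]' '\n' |
  sort |
  uniq -c
# The cr.l5d.io image is counted twice, the ghcr.io one once.
```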

What else should I debug or modify?

How can it be reproduced?

Apply the same HelmRelease changes shown in the diff under "What is the issue?" above, then list the container images in use; some pods still run images from cr.l5d.io.

Logs, error output, etc

(Same kubectl image listing as in the issue description above.)

Output of `linkerd check -o short`

Not available at the moment.

Environment

$ kubectl version
...
Server Version: v1.28.5

Possible solution

No response

Additional context

No response

Would you like to work on fixing this bug?

yes

sheeeng commented 1 month ago

It seems the reconciliation takes time. I used `kubectl rollout restart` to force these changes onto the existing workloads.

# grep wrapper: exit 0 even when nothing matches, so the loop keeps going.
c1grep() { grep "$@" || test $? = 1; }

# Print every l5d image still in use in a namespace, once per resource,
# and (when uncommented) restart the resource to pick up the new registry.
rollout_restart() {
  local ns=$1
  local resource_type=$2
  for resource in $(kubectl get "$resource_type" -n "$ns" -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
  do
    echo "Namespace: ${ns} | ${resource_type^}: ${resource}"
    kubectl get pods -n "$ns" -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" |
      tr -s '[[:space:]]' '\n' |
      sort |
      uniq -c |
      c1grep 'l5d'
    # kubectl rollout restart "$resource_type/$resource" -n "$ns"
  done
}

for ns in $(kubectl get -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' namespaces)
do
  # `kubectl rollout restart` only supports these workload types;
  # replicasets and cronjobs were dropped from the original list.
  rollout_restart "$ns" "deployment"
  rollout_restart "$ns" "statefulset"
  rollout_restart "$ns" "daemonset"
done
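
The `c1grep` wrapper matters because plain `grep` exits with status 1 when it finds no match, which would abort a script running under `set -e`. A minimal standalone check (no cluster required):

```shell
#!/bin/sh
set -e  # abort on any non-zero exit status

# grep wrapper: treat "no match" (exit 1) as success; real grep
# errors (exit >= 2) still propagate and abort the script.
c1grep() { grep "$@" || test $? = 1; }

# No "l5d" substring here: plain grep would exit 1 and kill the script,
# but c1grep swallows the non-match and execution continues.
printf 'ghcr.io/linkerd/proxy:stable-2.14.10\n' | c1grep 'l5d'
echo "still running"
```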