linkerd / linkerd2

Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.
https://linkerd.io
Apache License 2.0

Pods with injected proxy can not be started on Openshift (OKD 4.9) #8553

Closed ErmakovDmitriy closed 1 year ago

ErmakovDmitriy commented 2 years ago

What is the issue?

I deployed Linkerd following the article https://buoyant.io/2022/01/20/running-linkerd-on-openshift-4/. The deployment itself works. linkerd check returns several warnings and one error about linkerd-viz (I do not think my issue is related to them).

All Linkerd and Linkerd-CNI Pods are running and ready.

When I try to deploy the example application (emojivoto), I get errors from the ReplicaSets as below, because the OpenShift SCC prohibits creating Pods whose containers run with an arbitrary UID:

Error creating: pods "emoji-66ccdb4d86-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider "containerized-data-importer": Forbidden: not usable by user or serviceaccount, spec.containers[1].securityContext.runAsUser: Invalid value: 2102: must be in the ranges: [1001180000, 1001189999], provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "bridge-marker": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "kubevirt-controller": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "linux-bridge": Forbidden: not usable by user or serviceaccount, provider "nmstate": Forbidden: not usable by user or serviceaccount, provider "kubevirt-handler": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]

My expectation is that we should be able to run Pods with the arbitrary UIDs that OpenShift assigns, without having to use an OpenShift SCC such as anyuid or anything even more privileged.
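
For context, the restricted SCC validates runAsUser with a MustRunAsRange strategy, where the allowed range comes from the target namespace's openshift.io/sa.scc.uid-range annotation. A rough sketch of the relevant fields (the real cluster object has many more):

# Sketch of the relevant part of OpenShift's default "restricted" SCC.
# The allowed UID range is taken from the namespace's
# openshift.io/sa.scc.uid-range annotation, so the proxy's fixed UID 2102
# is rejected at admission time.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: restricted
runAsUser:
  type: MustRunAsRange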

How can it be reproduced?

To reproduce the issue, one should follow the article https://buoyant.io/2022/01/20/running-linkerd-on-openshift-4/, then download the example "emojivoto" application and edit it so that the emojivoto namespace carries the proxy inject annotation:

apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  labels:
    # This is because I use non-default Namespace selector for proxy, see below in "Additional context"
    config.linkerd.io/admission-webhooks: enabled
  annotations:
    linkerd.io/inject: enabled

Important note: the Linkerd-Viz Helm chart was deployed with the OpenShift SecurityContextConstraint "anyuid" allowed for it, and I can see its metrics in the Viz dashboard.

The emojivoto Pods use the default (restricted) SCC, so they cannot use the proxy's UID 2102.

Logs, error output, etc

Deployment status of one of the emojivoto components:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"emoji","app.kubernetes.io/part-of":"emojivoto","app.kubernetes.io/version":"v11"},"name":"emoji","namespace":"emojivoto"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"emoji-svc","version":"v11"}},"template":{"metadata":{"labels":{"app":"emoji-svc","version":"v11"}},"spec":{"containers":[{"env":[{"name":"GRPC_PORT","value":"8080"},{"name":"PROM_PORT","value":"8801"}],"image":"docker.l5d.io/buoyantio/emojivoto-emoji-svc:v11","name":"emoji-svc","ports":[{"containerPort":8080,"name":"grpc"},{"containerPort":8801,"name":"prom"}],"resources":{"requests":{"cpu":"100m"}}}],"serviceAccountName":"emoji"}}}}
  creationTimestamp: "2022-05-25T07:58:45Z"
  generation: 1
  labels:
    app.kubernetes.io/name: emoji
    app.kubernetes.io/part-of: emojivoto
    app.kubernetes.io/version: v11
  name: emoji
  namespace: emojivoto
  resourceVersion: "201218085"
  uid: 255f4c65-839d-4d11-997e-5ff517d4a20b
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: emoji-svc
      version: v11
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: emoji-svc
        version: v11
    spec:
      containers:
      - env:
        - name: GRPC_PORT
          value: "8080"
        - name: PROM_PORT
          value: "8801"
        image: docker.l5d.io/buoyantio/emojivoto-emoji-svc:v11
        imagePullPolicy: IfNotPresent
        name: emoji-svc
        ports:
        - containerPort: 8080
          name: grpc
          protocol: TCP
        - containerPort: 8801
          name: prom
          protocol: TCP
        resources:
          requests:
            cpu: 100m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: emoji
      serviceAccountName: emoji
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2022-05-25T07:58:45Z"
    lastUpdateTime: "2022-05-25T08:06:44Z"
    message: ReplicaSet "emoji-66ccdb4d86" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2022-05-25T09:34:17Z"
    lastUpdateTime: "2022-05-25T09:34:17Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-05-25T09:34:17Z"
    lastUpdateTime: "2022-05-25T09:34:17Z"
    message: 'pods "emoji-66ccdb4d86-" is forbidden: unable to validate against any
      security context constraint: [provider "anyuid": Forbidden: not usable by user
      or serviceaccount, provider "containerized-data-importer": Forbidden: not usable
      by user or serviceaccount, spec.containers[1].securityContext.runAsUser: Invalid
      value: 2102: must be in the ranges: [1001180000, 1001189999], provider "nonroot":
      Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid":
      Forbidden: not usable by user or serviceaccount, provider "bridge-marker": Forbidden:
      not usable by user or serviceaccount, provider "machine-api-termination-handler":
      Forbidden: not usable by user or serviceaccount, provider "kubevirt-controller":
      Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden:
      not usable by user or serviceaccount, provider "hostaccess": Forbidden: not
      usable by user or serviceaccount, provider "linux-bridge": Forbidden: not usable
      by user or serviceaccount, provider "nmstate": Forbidden: not usable by user
      or serviceaccount, provider "kubevirt-handler": Forbidden: not usable by user
      or serviceaccount, provider "node-exporter": Forbidden: not usable by user or
      serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]'
    reason: FailedCreate
    status: "True"
    type: ReplicaFailure
  observedGeneration: 1
  unavailableReplicas: 1

output of linkerd check -o short

Linkerd core checks

linkerd-existence

‼ cluster networks can be verified
    the following nodes do not expose a podCIDR:

List of servers

see https://linkerd.io/2/checks/#l5d-cluster-networks-verified for hints

control-plane-version

‼ control plane and cli versions match
    control plane running stable-2.11.2 but cli running edge-22.5.2
    see https://linkerd.io/2/checks/#l5d-version-control for hints

linkerd-control-plane-proxy

‼ control plane proxies and cli versions match
    linkerd-destination-5ccc4764f4-hf6rt running stable-2.11.2 but cli running edge-22.5.2
    see https://linkerd.io/2/checks/#l5d-cp-proxy-cli-version for hints

Linkerd extensions checks

linkerd-viz

‼ viz extension proxies and cli versions match
    grafana-6496fff755-r7nwc running stable-2.11.2 but cli running edge-22.5.2
    see https://linkerd.io/2/checks/#l5d-viz-proxy-cli-version for hints

Status check results are √

Environment

Kubernetes version: v1.22.1-1839+b93fd35dd03051-dirty
OKD version: 4.9.0-0.okd-2022-02-12-140851 (using OVN-Kubernetes)
Linkerd: stable-2.11.2, installed via Helm chart

Possible solution

Maybe we should allow the injected proxy to run with "any" UID, which would then be assigned by OpenShift (OKD) through its own mechanism. This could even improve security: the proxy containers would no longer all run with the same UID across the cluster, and workloads could run on OKD without giving containers/Pods too much access.
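
For illustration only, a minimal sketch (not what the injector currently produces) of what the injected proxy container's securityContext could look like under this proposal, letting the restricted SCC pick the UID from the namespace's range:

# Hypothetical sketch: the injected proxy container without a fixed runAsUser,
# so OpenShift's restricted SCC assigns a UID from the namespace's
# openshift.io/sa.scc.uid-range at admission time instead of rejecting the Pod.
containers:
- name: linkerd-proxy
  image: cr.l5d.io/linkerd/proxy:stable-2.11.2
  securityContext:
    runAsNonRoot: true
    allowPrivilegeEscalation: false
    # runAsUser intentionally omitted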

Additional context

No response

Would you like to work on fixing this bug?

No response

ErmakovDmitriy commented 2 years ago

Some steps to troubleshoot

In order to test further, I deployed a new OKD 4.10 cluster with only 3 master nodes (I have quite limited resources), using OpenShift SDN for the network (the cluster in the bug report used OVN-Kubernetes).

The CNI was redeployed with logLevel: debug, but journalctl -u kubelet | grep linkerd shows no logs from the CNI plugin.

Probably, for some reason, the CNI plugin is not even called by OpenShift (Multus). I deleted the linkerd-cni binary on one of the nodes and Pods were still started normally on it.

To check and compare further, I deployed the Maistra Operator (Istio-based) on the same cluster. It works well out of the box. Comparing the Istio-CNI and Linkerd-CNI Pod configurations, I noticed that Istio uses:

# -- Directory on the host where the CNI configuration will be placed
destCNINetDir:    "/etc/cni/multus/net.d/"
# -- Directory on the host where the CNI plugin binaries reside
destCNIBinDir:    "/opt/multus/bin/"

while the article about Linkerd (see the bug description) recommends using:

destCNIBinDir: /var/lib/cni/bin
destCNINetDir: /etc/kubernetes/cni/net.d

Kubelet in my Openshift is configured with default settings as below:

May 27 15:07:43 master0.okd.dev.local hyperkube[1442]: I0527 15:07:43.627811    1442 flags.go:64] FLAG: --cni-bin-dir="/opt/cni/bin"
May 27 15:07:43 master0.okd.dev.local hyperkube[1442]: I0527 15:07:43.627814    1442 flags.go:64] FLAG: --cni-cache-dir="/var/lib/cni/cache"
May 27 15:07:43 master0.okd.dev.local hyperkube[1442]: I0527 15:07:43.627817    1442 flags.go:64] FLAG: --cni-conf-dir="/etc/cni/net.d"
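
For comparison, a rough sketch of the linkerd2-cni Helm values that would point at the same Multus-managed directories as Istio does (the key names are the ones from the article's snippet above; whether Multus actually picks the plugin up from there is exactly what I am trying to find out):

# Hypothetical values pointing linkerd2-cni at the Multus directories
# instead of the kubelet defaults recommended by the article:
destCNINetDir: /etc/cni/multus/net.d/   # delegate CNI configuration directory used by Multus
destCNIBinDir: /opt/multus/bin/         # delegate CNI plugin binary directory used by Multus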

Possible cause of the issue

I have checked the Maistra Operator and Istio documentation:

It may be that Multus, which OpenShift uses, does not support CNI chaining, so Maistra uses NetworkAttachmentDefinitions roughly this way:

  1. A Namespace is assigned to be controlled by one of the Maistra Istio control planes
  2. A NetworkAttachmentDefinition for the Istio CNI is created in the namespace
  3. Pods annotated with the Istio proxy inject annotation are modified by a mutating webhook to contain the proxy container and a k8s.v1.cni.cncf.io/networks: istio-net-attach-definition-name annotation

This way, Multus applies the Istio CNI to the Pod; a sketch of such a Pod is below.
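
A rough sketch (the Pod name, namespace, and app image are placeholders; the NetworkAttachmentDefinition name is the placeholder used above):

apiVersion: v1
kind: Pod
metadata:
  name: example-app                    # placeholder
  namespace: mesh-member-namespace     # placeholder
  annotations:
    # Added by the Maistra mutating webhook; tells Multus to also invoke the
    # Istio CNI described by the NetworkAttachmentDefinition with this name.
    k8s.v1.cni.cncf.io/networks: istio-net-attach-definition-name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder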

I could not reproduce this with Linkerd, despite putting the necessary configuration (linkerd-cni.conf) into /etc/cni/multus/net.d/linkerd.conf and creating a NetworkAttachmentDefinition for Multus.

My "try-to-reproduce" configurations:

linkerd-cni.conf:

    {
      "cniVersion": "0.3.0",
      "name": "linkerd-cni",
      "type": "linkerd-cni",
      "log_level": "debug",
      "policy": {
        "type": "k8s",
        "k8s_api_root": "https://172.30.0.1:443",
        "k8s_auth_token": "<token>"
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/multus/net.d//ZZZ-linkerd-cni-kubeconfig"
      },
      "linkerd": {
        "incoming-proxy-port": 4143,
        "outgoing-proxy-port": 4140,
        "proxy-uid": 2102,
        "ports-to-redirect": [],
        "inbound-ports-to-ignore": [
          "4191",
          "4190"
        ],
        "simulate": false,
        "use-wait-flag": true
      }
    }

NetworkAttachmentDefinition:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: linkerd-cni
spec:
  config: ''

and one of the Linkerd Pods was annotated with k8s.v1.cni.cncf.io/networks: linkerd-cni.

This led to errors when creating Pods, like the one below:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_linkerd-destination-696d48756d-p4wxs_linkerd_cfbd3a42-c6f1-4eb4-9cc1-b57a0bc95b8d_0(2661b3cd99b38bd18ca498929cf2be89db85a3b1f2e33e859fb63910ea1058a1): error adding pod linkerd_linkerd-destination-696d48756d-p4wxs to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [linkerd/linkerd-destination-696d48756d-p4wxs/cfbd3a42-c6f1-4eb4-9cc1-b57a0bc95b8d:linkerd-cni]: error adding container to network "linkerd-cni": unexpected end of JSON input

On the bright side, I managed to get the CNI plugin called.

ErmakovDmitriy commented 2 years ago

I have a small update about the topic.

I tried to make a proof of concept for Multus + Linkerd based on what I observed with Maistra Service Mesh.

Here is an operator: https://github.com/ErmakovDmitriy/linkerd-multus-operator and a modified linkerd-cni (changed to support a non-chained call): https://github.com/ErmakovDmitriy/linkerd2/tree/cni-in-cluster-config

It defines one namespaced API resource, AttachDefinition, which at the moment just tells the operator that a NetworkAttachmentDefinition for Multus should be created in the namespace.

The operator also provides a mutating webhook which adds the k8s.v1.cni.cncf.io/networks Multus attach annotation to Pods if both:

  1. The Pod or its Namespace has the linkerd.io/inject annotation set to enabled or ingress
  2. The Multus NetworkAttachmentDefinition is present in the Pod's Namespace.

The operator supports only one hard-coded AttachDefinition name, linkerd-cni. The Multus NetworkAttachmentDefinition will have the same name, as below:

[dev:~]$ 
[dev:~]$ k -n emojivoto2 get attachdefinitions.cni.linkerd.io 
NAME          AGE
linkerd-cni   10s
[dev:~]$ k -n emojivoto2 get network-attachment-definitions.k8s.cni.cncf.io 
NAME          AGE
linkerd-cni   14s
[dev:~]$ 
[dev:~]$ k -n emojivoto2 get attachdefinitions.cni.linkerd.io linkerd-cni -o yaml 
apiVersion: cni.linkerd.io/v1alpha1
kind: AttachDefinition
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cni.linkerd.io/v1alpha1","kind":"AttachDefinition","metadata":{"annotations":{},"name":"linkerd-cni","namespace":"emojivoto2"},"spec":{"createMultusNetworkAttachmentDefinition":true,"proxyConfig":null}}
  creationTimestamp: "2022-06-07T17:19:44Z"
  generation: 1
  name: linkerd-cni
  namespace: emojivoto2
  resourceVersion: "295036"
  uid: b1e52c31-fd2f-466d-8f28-63d03c3bce32
spec:
  createMultusNetworkAttachmentDefinition: true
[dev:~]$ 

To run the Linkerd CNI plugin in the control plane namespace, an AttachDefinition was created and the k8s.v1.cni.cncf.io/networks=linkerd-cni annotation was added to the control plane Deployments manually, as below:

### [dev:~]$ k -n linkerd get attachdefinitions.cni.linkerd.io linkerd-cni -o yaml
apiVersion: cni.linkerd.io/v1alpha1
kind: AttachDefinition
metadata:
  name: linkerd-cni
  namespace: linkerd
spec:
  createMultusNetworkAttachmentDefinition: true
linkerd upgrade --disable-heartbeat --linkerd-cni-enabled --proxy-cpu-limit=100m --proxy-cpu-request=10m --proxy-log-level=info --proxy-memory-limit=64Mi --proxy-memory-request=32Mi  --set=proxy.await=true,proxy.outboundConnectTimeout=2000ms,podAnnotations.'k8s\.v1\.cni\.cncf\.io/networks=linkerd-cni' --force | kubectl apply -f -

After that, the control plane has proper iptables rules and the same works for linkerd-viz.

There are still some strange issues though: before the control plane Pods destination and policy can start, the PostStartHook must fail first. Only after that do I see the identity Pod issue a certificate:

identity time="2022-06-08T15:46:04Z" level=info msg="issued certificate for linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local until 2022-06-09 15:46:24 +0000 UTC: 41eb8bb99743b5dd2cd4f81274f1c422578b2f58738c82aa61205a3881cf1c89"                                                                               
identity time="2022-06-08T15:46:06Z" level=info msg="issued certificate for linkerd-proxy-injector.linkerd.serviceaccount.identity.linkerd.cluster.local until 2022-06-09 15:46:26 +0000 UTC: 2cc0e0bd2006e265f290fb9e2edf1c4f37b028db3970cb836ff40802254e3123" 

The same behavior is observed for the emojivoto demo application; see the logs from the vote container:

[     0.002232s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.002951s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.002984s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.003005s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.003023s]  INFO ThreadId(01) linkerd2_proxy: Tap interface on 0.0.0.0:4190
[     0.003041s]  INFO ThreadId(01) linkerd2_proxy: Local identity is default.emojivoto2.serviceaccount.identity.linkerd.cluster.local
[     0.003060s]  INFO ThreadId(01) linkerd2_proxy: Identity verified via linkerd-identity-headless.linkerd.svc.cluster.local:8080 (linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.003078s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     7.006495s]  WARN ThreadId(01) policy:watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.130.0.52:8090}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[     7.008595s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.128.0.64:8080}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[     9.113006s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.128.0.64:8080}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[     9.117314s]  WARN ThreadId(01) policy:watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.130.0.52:8090}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[    11.326637s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.128.0.64:8080}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[    11.336645s]  WARN ThreadId(01) policy:watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.130.0.52:8090}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[    13.686981s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied
[    13.754995s]  WARN ThreadId(01) policy:watch{port=4191}:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}:endpoint{addr=10.130.0.52:8090}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[    13.764811s]  WARN ThreadId(02) identity:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:endpoint{addr=10.128.0.64:8080}: linkerd_reconnect: Failed to connect error=connect timed out after 2s
[    13.792726s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied
[    14.012945s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied
[    14.417956s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied
[    14.920323s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied
[    15.004970s]  WARN ThreadId(01) linkerd_app: Waiting for identity to be initialized...
[    15.423357s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied
[    15.926151s]  WARN ThreadId(02) rustls::conn: Sending fatal alert AccessDenied

I am going to try to understand why it behaves this way, though at the moment I have no ideas (the OpenShift DNS Pods were restarted).

ErmakovDmitriy commented 2 years ago

Here is the emojivoto YAML with the added proxy UID annotation: https://pastebin.com/93DrHdE5
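
In rough terms (a sketch only, assuming the config.linkerd.io/proxy-uid annotation; the pastebin contains the full YAML), the change sets the proxy UID per workload to a value inside the namespace's allowed range:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: emoji
  namespace: emojivoto
spec:
  template:
    metadata:
      annotations:
        # Example value only; it must fall inside the range from the
        # namespace's openshift.io/sa.scc.uid-range annotation
        # (here, [1001180000, 1001189999] from the error above).
        config.linkerd.io/proxy-uid: "1001180000"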

ErmakovDmitriy commented 2 years ago

I have probably found the issue with the Pods (I will do more tests).

The linkerd-cni generates iptables rules as below:

[root@master1 core]# iptables-save -t nat 
# Generated by iptables-save v1.8.7 on Wed Jun  8 20:52:37 2022
*nat
:PREROUTING ACCEPT [57:3420]
:INPUT ACCEPT [57:3420]
:OUTPUT ACCEPT [269:19605]
:POSTROUTING ACCEPT [489:32805]
:PROXY_INIT_OUTPUT - [0:0]
:PROXY_INIT_REDIRECT - [0:0]
-A PREROUTING -m comment --comment "proxy-init/install-proxy-init-prerouting/1654721284" -j PROXY_INIT_REDIRECT
-A OUTPUT -m comment --comment "proxy-init/install-proxy-init-output/1654721284" -j PROXY_INIT_OUTPUT
-A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -m comment --comment "proxy-init/ignore-proxy-user-id/1654721284" -j RETURN
-A PROXY_INIT_OUTPUT -o lo -m comment --comment "proxy-init/ignore-loopback/1654721284" -j RETURN
-A PROXY_INIT_OUTPUT -p tcp -m comment --comment "proxy-init/redirect-all-outgoing-to-proxy-port/1654721284" -j REDIRECT --to-ports 4140
-A PROXY_INIT_REDIRECT -p tcp -m multiport --dports 4191,4190 -m comment --comment "proxy-init/ignore-port-4191,4190/1654721284" -j RETURN
-A PROXY_INIT_REDIRECT -p tcp -m comment --comment "proxy-init/redirect-all-incoming-to-proxy-port/1654721284" -j REDIRECT --to-ports 4143
COMMIT
# Completed on Wed Jun  8 20:52:37 2022
[root@master1 core]# 

which include the proxy UID. When linkerd-cni is used on OpenShift, the proxy must run with one of the high ("above 1000000000") UIDs allowed in the Namespace, for example in my emojivoto2 (some lines are omitted):

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    linkerd.io/inject: enabled
    openshift.io/sa.scc.mcs: s0:c26,c15
    openshift.io/sa.scc.supplemental-groups: 1000680000/10000
    openshift.io/sa.scc.uid-range: 1000680000/10000

At the moment, I just set the ProxyUID in the AttachDefinition (https://github.com/ErmakovDmitriy/linkerd-multus-operator/blob/main/api/v1alpha1/attachdefinition_types.go#L115) as below:

# k -n emojivoto2 get attachdefinitions.cni.linkerd.io linkerd-cni -o yaml                                                  (main✱) 
apiVersion: cni.linkerd.io/v1alpha1
kind: AttachDefinition
metadata:
  name: linkerd-cni
  namespace: emojivoto2
spec:
  createMultusNetworkAttachmentDefinition: true
  proxyConfig:
    proxyUID: 1000681000

which generated the Multus NetworkAttachmentDefinition with the ProxyUID:

# k -n emojivoto2 get network-attachment-definitions.k8s.cni.cncf.io linkerd-cni -o yaml                                    (main✱) 
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: linkerd-cni
  namespace: emojivoto2
spec:
  config: '{"cniVersion":"0.3.0","name":"linkerd-cni","type":"linkerd-cni","ipam":{},"dns":{},"log_level":"debug","linkerd":{"incoming-proxy-port":4143,"outgoing-proxy-port":4140,"proxy-uid":1000681000,"inbound-ports-to-ignore":["4191","4190"]},"kubernetes":{"kubeconfig":"/etc/cni/net.d/ZZZ-linkerd-cni-kubeconfig"}}'

and then the iptables rules for a Pod in the emojivoto2 namespace became:

# nsenter --net=/var/run/netns/df742ce3-0f55-4aee-87d8-eec15bc4aeea -- iptables-save -t nat
# Generated by iptables-save v1.8.7 on Wed Jun  8 21:15:02 2022
*nat
:PREROUTING ACCEPT [22:1320]
:INPUT ACCEPT [22:1320]
:OUTPUT ACCEPT [79:4892]
:POSTROUTING ACCEPT [79:4892]
:PROXY_INIT_OUTPUT - [0:0]
:PROXY_INIT_REDIRECT - [0:0]
-A PREROUTING -m comment --comment "proxy-init/install-proxy-init-prerouting/1654722776" -j PROXY_INIT_REDIRECT
-A OUTPUT -m comment --comment "proxy-init/install-proxy-init-output/1654722776" -j PROXY_INIT_OUTPUT
-A PROXY_INIT_OUTPUT -m owner --uid-owner 1000681000 -m comment --comment "proxy-init/ignore-proxy-user-id/1654722776" -j RETURN
-A PROXY_INIT_OUTPUT -o lo -m comment --comment "proxy-init/ignore-loopback/1654722776" -j RETURN
-A PROXY_INIT_OUTPUT -p tcp -m comment --comment "proxy-init/redirect-all-outgoing-to-proxy-port/1654722776" -j REDIRECT --to-ports 4140
-A PROXY_INIT_REDIRECT -p tcp -m multiport --dports 4191,4190 -m comment --comment "proxy-init/ignore-port-4191,4190/1654722776" -j RETURN
-A PROXY_INIT_REDIRECT -p tcp -m comment --comment "proxy-init/redirect-all-incoming-to-proxy-port/1654722776" -j REDIRECT --to-ports 4143
COMMIT
# Completed on Wed Jun  8 21:15:02 2022

so that the Proxy could connect to the control plane and the Pod started.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
