tektoncd / pipeline

A cloud-native Pipeline resource.
https://tekton.dev
Apache License 2.0

Task's container gets default env var from Dockerfile instead of override from step spec #3666

Closed rvadim closed 2 years ago

rvadim commented 3 years ago

Actual Behavior

$ cat task-test.yaml
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test-gvisor
spec:
  description: ""
  steps:
    - name: test
      image: gcr.io/kaniko-project/executor:debug
      script: |
        #!/busybox/sh
        echo $DOCKER_CONFIG
      env:
      - name: DOCKER_CONFIG
        value: /tekton/home/.docker/
$ cat testrun.yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: test-gvisor
spec:
  taskRef:
    name: test-gvisor
  podTemplate:
    securityContext:
      runAsUser: 0
    runtimeClassName: gvisor
$ kubectl delete -f task-test.yaml -f testrun.yaml ; kubectl create -f task-test.yaml -f testrun.yaml
task.tekton.dev "test-gvisor" deleted
taskrun.tekton.dev "test-gvisor" deleted
task.tekton.dev/test-gvisor created
taskrun.tekton.dev/test-gvisor created
$ tkn tr logs -f test-gvisor
[test] /kaniko/.docker/

Expected Behavior

$ tkn tr logs -f test-gvisor                                                                                                    
[test] /tekton/home/.docker/

As you can see, a real-world example with Kaniko will fail with a Permission denied error when trying to push to the registry.

Steps to Reproduce the Problem

  1. Setup and configure GKE cluster
  2. Configure GKE Sandbox (https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods)
  3. Run Task/TaskRun from the Actual Behavior

Additional Info

$ cat Dockerfile
FROM ubuntu:20.04

ENV HELLO_WORLD "Value from image"
$ docker build -t rvadim/test:1 .
Sending build context to Docker daemon  3.072kB
Step 1/2 : FROM ubuntu:20.04
 ---> 1e4467b07108
Step 2/2 : ENV HELLO_WORLD "Value from image"
 ---> Using cache
 ---> c05b6cc8a7fa
Successfully built c05b6cc8a7fa
Successfully tagged rvadim/test:1
$ docker push rvadim/test:1
The push refers to repository [docker.io/rvadim/test]
095624243293: Layer already exists
a37e74863e72: Layer already exists
8eeb4a14bcb4: Layer already exists
ce3011290956: Layer already exists
1: digest: sha256:5d2567c0ec7bcbe50d0014e495e5830f6670855c00f572cecaafca3e4256b90a size: 1152
$ cat test1.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test-1
  name: test-1
spec:
  containers:
  - command:
    - /bin/bash
    - -c
    - 'echo $HELLO_WORLD'
    image: rvadim/test:1
    imagePullPolicy: Always
    name: test-1
    env:
    - name: HELLO_WORLD
      value: "Hello world!"
  runtimeClassName: gvisor 
  securityContext:
    runAsUser: 0 # Just to force the proper PSP
$ kubectl delete -f test1.yaml ; kubectl create -f test1.yaml 
pod "test-1" deleted
pod/test-1 created
$ kubectl logs test-1
Hello world!
$ cat test1.yaml 
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test-1
  name: test-1
spec:
  containers:
  - command:
    - /busybox/sh
    - -c
    - 'echo $DOCKER_CONFIG'
    image: gcr.io/kaniko-project/executor:debug
    #imagePullPolicy: Always
    name: test-1
    env:
    - name: DOCKER_CONFIG
      value: "/tekton/home/.docker"
  runtimeClassName: gvisor 
  securityContext:
    runAsUser: 0
$ kubectl delete -f test1.yaml; kubectl create -f test1.yaml
pod "test-1" deleted
pod/test-1 created
$ kubectl logs test-1
/tekton/home/.docker

If you test it without the Tekton entrypoint, it works as expected.

Workaround

Manually set DOCKER_CONFIG in the debug image:

...
  - name: build-image
    image: gcr.io/kaniko-project/executor:debug
    command:
      - /busybox/sh
      - -c
      - export DOCKER_CONFIG=/tekton/home/.docker && /kaniko/executor --dockerfile=/workspace/output/Dockerfile --context=/workspace/output --destination=myregistry/my-image:latest --force
vdemeester commented 3 years ago

@rvadim interestingly scary 😓 My guess is that it happens only when using script, which would mean we are not passing the env variable in that case.

cc @sbwsg

vdemeester commented 3 years ago

I wonder if there is something happening with gvisor though… Tried this out on master

~/s/g/tektoncd/pipeline:master? (kind-pipeline) λ tkn taskrun logs -f
? Select taskrun:  [Use arrows to move, type to filter]
> test-gvisor started 9 seconds ago
  demo-pipeline-run-1-build-skaffold-app-fd6kn started 2 days ago
  demo-pipeline-run-1-build-skaffold-web-4bpf2 started 2 days ago
  demo-pipeline-run-1-skaffold-unit-tests-pbkbv started 2 days ago
  demo-pipeline-run-1-fetch-from-git-h84x8 started 2 days ago

? Select taskrun: test-gvisor started 9 seconds ago
[test] /tekton/home/.docker/
rvadim commented 3 years ago

@vdemeester I think the problem is not in script; I get the same result with command

$ cat testrun.yaml task-test.yaml 
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: test-gvisor
spec:
  taskRef:
    name: test-gvisor
  podTemplate:
    securityContext:
      runAsUser: 0
    runtimeClassName: gvisor

---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: test-gvisor
spec:
  description: ""
  steps:
    - name: test
      image: gcr.io/kaniko-project/executor:debug
      command:
      - /busybox/sh
      - -c 
      - echo $DOCKER_CONFIG
      env:
      - name: DOCKER_CONFIG
        value: /tekton/home/.docker/
$ kubectl delete -f task-test.yaml -f testrun.yaml ; kubectl create -f task-test.yaml -f testrun.yaml
$ tkn tr list
NAME               STARTED          DURATION     STATUS
test-gvisor        18 seconds ago   7 seconds    Succeeded
$ tkn tr logs -f test-gvisor
[test] /kaniko/.docker/
$ kubectl get pods test-gvisor-pod-grxvl -o yaml
...
spec:
  containers:
  - args:
    - -wait_file
    - /tekton/downward/ready
    - -wait_file_content
    - -post_file
    - /tekton/tools/0
    - -termination_path
    - /tekton/termination
    - -entrypoint
    - /busybox/sh
    - --
    - -c
    - echo $DOCKER_CONFIG
    command:
    - /tekton/tools/entrypoint
    env:
    - name: HOME
      value: /tekton/home
    - name: DOCKER_CONFIG
      value: /tekton/home/.docker/
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: IfNotPresent
    name: step-test
...
vdemeester commented 3 years ago

@vdemeester I think the problem is not in script; I get the same result with command

Interesting, I definitely can't reproduce on a non-gvisor environment, all is as expected there (printing /tekton/home/.docker)

rvadim commented 3 years ago

Interesting, I definitely can't reproduce on a non-gvisor environment, all is as expected there (printing /tekton/home/.docker)

Yes, it works perfectly on the regular runtime. The problem occurs only with gVisor.

Some additional info:

$ /home/containerd/usr/local/sbin/runsc -version
runsc version google-327477495
spec: 1.0.1-dev
System Info:
  Boot ID:                    3113a1ac-b063-4be2-b647-879cc7de9e49
  Kernel Version:             4.19.112+
  OS Image:                   Container-Optimized OS from Google
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.3.2
  Kubelet Version:            v1.17.13-gke.2001
  Kube-Proxy Version:         v1.17.13-gke.2001
vdemeester commented 3 years ago

Alright, I can reproduce this on a cluster with gVisor (thanks @Sh4d1). This is definitely a problem with gVisor and our entrypoint (I guess)… It's a bit less critical than I initially thought, as it works as expected with other CRI implementations (at least cri-o, containerd, dockershim).

cc @bobcatfish @dibyom (if you know who to ping from the gvisor community on this)

vdemeester commented 3 years ago

Pod spec is

apiVersion: v1
kind: Pod
metadata:
  annotations:
    pipeline.tekton.dev/release: devel
    tekton.dev/ready: READY
  creationTimestamp: "2021-01-08T10:29:00Z"
  labels:
    app.kubernetes.io/managed-by: tekton-pipelines
    tekton.dev/task: test-gvisor
    tekton.dev/taskRun: test-gvisor
  name: test-gvisor-pod-qp88w
  namespace: default
  ownerReferences:
  - apiVersion: tekton.dev/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: TaskRun
    name: test-gvisor
    uid: ecb4b560-e65d-4a65-bf9f-67a1142c6fb6
  resourceVersion: "7159984"
  selfLink: /api/v1/namespaces/default/pods/test-gvisor-pod-qp88w
  uid: 7902b0b5-b463-4e94-822e-304b7e413915
spec:
  containers:
  - args:
    - -wait_file
    - /tekton/downward/ready
    - -wait_file_content
    - -post_file
    - /tekton/tools/0
    - -termination_path
    - /tekton/termination
    - -entrypoint
    - /tekton/scripts/script-0-kmctj
    - --
    command:
    - /tekton/tools/entrypoint
    env:
    - name: HOME
      value: /tekton/home
    - name: DOCKER_CONFIG
      value: /tekton/home/.docker/
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: IfNotPresent
    name: step-test
    resources:
      requests:
        cpu: "0"
        ephemeral-storage: "0"
        memory: "0"
    terminationMessagePath: /tekton/termination
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tekton/scripts
      name: tekton-internal-scripts
    - mountPath: /tekton/tools
      name: tekton-internal-tools
    - mountPath: /tekton/downward
      name: tekton-internal-downward
    - mountPath: /tekton/creds
      name: tekton-creds-init-home-jfgg8
    - mountPath: /workspace
      name: tekton-internal-workspace
    - mountPath: /tekton/home
      name: tekton-internal-home
    - mountPath: /tekton/results
      name: tekton-internal-results
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-zs754
      readOnly: true
    workingDir: /workspace
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - args:
    - -c
    - |
      tmpfile="/tekton/scripts/script-0-kmctj"
      touch ${tmpfile} && chmod +x ${tmpfile}
      cat > ${tmpfile} << 'script-heredoc-randomly-generated-6hhzg'
      #!/busybox/sh
      echo $DOCKER_CONFIG

      script-heredoc-randomly-generated-6hhzg
    command:
    - sh
    image: gcr.io/distroless/base@sha256:92720b2305d7315b5426aec19f8651e9e04222991f877cae71f40b3141d2f07e
    imagePullPolicy: IfNotPresent
    name: place-scripts
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tekton/scripts
      name: tekton-internal-scripts
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-zs754
      readOnly: true
  - command:
    - /ko-app/entrypoint
    - cp
    - /ko-app/entrypoint
    - /tekton/tools/entrypoint
    image: gcr.io/vde-tekton/entrypoint-bff0a22da108bc2f16c818c97641a296@sha256:0908b45793f2847874d4165441271c98b4b5f2274e83e89ed2ce304058b3a0f1
    imagePullPolicy: IfNotPresent
    name: place-tools
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /tekton/tools
      name: tekton-internal-tools
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-zs754
      readOnly: true
  nodeName: scw-vincent-default-0d076a252d93429cb4bd1ddc3d
  priority: 0
  restartPolicy: Never
  runtimeClassName: untrusted
  schedulerName: default-scheduler
  securityContext:
    runAsUser: 0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: tekton-internal-workspace
  - emptyDir: {}
    name: tekton-internal-home
  - emptyDir: {}
    name: tekton-internal-results
  - emptyDir: {}
    name: tekton-internal-scripts
  - emptyDir: {}
    name: tekton-internal-tools
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations['tekton.dev/ready']
        path: ready
    name: tekton-internal-downward
  - emptyDir:
      medium: Memory
    name: tekton-creds-init-home-jfgg8
  - name: default-token-zs754
    secret:
      defaultMode: 420
      secretName: default-token-zs754
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-01-08T10:29:11Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-01-08T10:29:14Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-01-08T10:29:14Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-01-08T10:29:00Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://dca76bbafe8d0ff93bd83499ecdd04ae8ae5e7e0bf1330a19d81759a6a71e588
    image: gcr.io/kaniko-project/executor:debug
    imageID: gcr.io/kaniko-project/executor@sha256:473d6dfb011c69f32192e668d86a47c0235791e7e857c870ad70c5e86ec07e8c
    lastState: {}
    name: step-test
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: containerd://dca76bbafe8d0ff93bd83499ecdd04ae8ae5e7e0bf1330a19d81759a6a71e588
        exitCode: 0
        finishedAt: "2021-01-08T10:29:12Z"
        message: '[{"key":"StartedAt","value":"2021-01-08T10:29:12.787Z","type":"InternalTektonResult"}]'
        reason: Completed
        startedAt: "2021-01-08T10:29:11Z"
  hostIP: 10.70.128.65
  initContainerStatuses:
  - containerID: containerd://57899340f185f50eec913e5413f81dd8bb7587ea838945d8bf8989a4f54054b3
    image: sha256:4dc0ba3700ab0dcc162f67da9481a906f7753f171c08f602db6667fd74151f07
    imageID: gcr.io/distroless/base@sha256:92720b2305d7315b5426aec19f8651e9e04222991f877cae71f40b3141d2f07e
    lastState: {}
    name: place-scripts
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: containerd://57899340f185f50eec913e5413f81dd8bb7587ea838945d8bf8989a4f54054b3
        exitCode: 0
        finishedAt: "2021-01-08T10:29:03Z"
        reason: Completed
        startedAt: "2021-01-08T10:29:03Z"
  - containerID: containerd://4639e4b7cfec30c5fe58f49c54bb56058510fc58d08605736a04ce6640bd5353
    image: sha256:28e119d4d03e86b4774e663dcbf22cbf8bb81a00b6fe6352be3185f620d8380a
    imageID: gcr.io/vde-tekton/entrypoint-bff0a22da108bc2f16c818c97641a296@sha256:0908b45793f2847874d4165441271c98b4b5f2274e83e89ed2ce304058b3a0f1
    lastState: {}
    name: place-tools
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: containerd://4639e4b7cfec30c5fe58f49c54bb56058510fc58d08605736a04ce6640bd5353
        exitCode: 0
        finishedAt: "2021-01-08T10:29:10Z"
        reason: Completed
        startedAt: "2021-01-08T10:29:10Z"
  phase: Succeeded
  podIP: 100.64.2.153
  podIPs:
  - ip: 100.64.2.153
  qosClass: BestEffort
  startTime: "2021-01-08T10:29:00Z"

Logs from a patched version that prints the environment in the entrypoint as it starts are…

~/s/g/t/pipeline master *3 !1 ?2 ⛄ λ tkn taskrun logs -f
[test] PATH=/usr/local/bin:/kaniko:/busybox
[test] HOSTNAME=test-gvisor-pod-qp88w
[test] HOME=/root
[test] USER=/root
[test] SSL_CERT_DIR=/kaniko/ssl/certs
[test] DOCKER_CONFIG=/kaniko/.docker/
[test] DOCKER_CREDENTIAL_GCR_CONFIG=/kaniko/.config/gcloud/docker_credential_gcr_config.json
[test] KUBERNETES_SERVICE_PORT=443
[test] KUBERNETES_SERVICE_PORT_HTTPS=443
[test] KUBERNETES_PORT=tcp://10.32.0.1:443
[test] KUBERNETES_PORT_443_TCP=tcp://10.32.0.1:443
[test] KUBERNETES_PORT_443_TCP_PROTO=tcp
[test] KUBERNETES_PORT_443_TCP_PORT=443
[test] KUBERNETES_PORT_443_TCP_ADDR=10.32.0.1
[test] KUBERNETES_SERVICE_HOST=10.32.0.1
[test] /kaniko/.docker/

So the entrypoint itself never receives the overridden DOCKER_CONFIG.

vdemeester commented 3 years ago

With the following diff…

diff --git a/cmd/entrypoint/main.go b/cmd/entrypoint/main.go
index 1b3256606..0f1977363 100644
--- a/cmd/entrypoint/main.go
+++ b/cmd/entrypoint/main.go
@@ -18,6 +18,7 @@ package main

 import (
    "flag"
+   "fmt"
    "io"
    "log"
    "os"
@@ -64,6 +65,9 @@ func cp(src, dst string) error {
 }

 func main() {
+   for _, e := range os.Environ() {
+       fmt.Println(e)
+   }
    // Add credential flags originally used in creds-init.
    gitcreds.AddFlags(flag.CommandLine)
    dockercreds.AddFlags(flag.CommandLine)
diff --git a/pkg/pod/script.go b/pkg/pod/script.go
index 288c6875c..5e3d7ebc7 100644
--- a/pkg/pod/script.go
+++ b/pkg/pod/script.go
@@ -122,8 +122,10 @@ func convertListOfSteps(steps []v1beta1.Step, initContainer *corev1.Container, p
 touch ${tmpfile} && chmod +x ${tmpfile}
 cat > ${tmpfile} << '%s'
 %s
+echo "--------"
+cat %s
 %s
-`, tmpFile, heredoc, script, heredoc)
+`, tmpFile, heredoc, script, tmpFile, heredoc)

        // Set the command to execute the correct script in the mounted
        // volume.

… I've got

[test] PATH=/usr/local/bin:/kaniko:/busybox
[test] HOSTNAME=test-gvisor-pod-6xrvg
[test] HOME=/root
[test] USER=/root
[test] SSL_CERT_DIR=/kaniko/ssl/certs
[test] DOCKER_CONFIG=/kaniko/.docker/
[test] DOCKER_CREDENTIAL_GCR_CONFIG=/kaniko/.config/gcloud/docker_credential_gcr_config.json
[test] KUBERNETES_PORT_443_TCP_ADDR=10.32.0.1
[test] KUBERNETES_SERVICE_HOST=10.32.0.1
[test] KUBERNETES_SERVICE_PORT=443
[test] KUBERNETES_SERVICE_PORT_HTTPS=443
[test] KUBERNETES_PORT=tcp://10.32.0.1:443
[test] KUBERNETES_PORT_443_TCP=tcp://10.32.0.1:443
[test] KUBERNETES_PORT_443_TCP_PROTO=tcp
[test] KUBERNETES_PORT_443_TCP_PORT=443
[test] KUBERNETES_SERVICE_PORT=443
[test] KUBERNETES_PORT=tcp://10.32.0.1:443
[test] USER=/root
[test] HOSTNAME=test-gvisor-pod-6xrvg
[test] DOCKER_CREDENTIAL_GCR_CONFIG=/kaniko/.config/gcloud/docker_credential_gcr_config.json
[test] SHLVL=1
[test] HOME=/root
[test] KUBERNETES_PORT_443_TCP_ADDR=10.32.0.1
[test] PATH=/usr/local/bin:/kaniko:/busybox
[test] KUBERNETES_PORT_443_TCP_PORT=443
[test] SSL_CERT_DIR=/kaniko/ssl/certs
[test] KUBERNETES_PORT_443_TCP_PROTO=tcp
[test] KUBERNETES_SERVICE_PORT_HTTPS=443
[test] KUBERNETES_PORT_443_TCP=tcp://10.32.0.1:443
[test] DOCKER_CONFIG=/kaniko/.docker/
[test] KUBERNETES_SERVICE_HOST=10.32.0.1
[test] PWD=/workspace
[test] /kaniko/.docker/
[test] --------
[test] #!/busybox/sh
[test] env
[test] echo $DOCKER_CONFIG
[test] 
[test] echo "--------"
[test] cat /tekton/scripts/script-0-s8t8h

For some reason, gvisor doesn't pass the correct environment variables to the entrypoint…

vdemeester commented 3 years ago

This reproduces the problem with pure k8s (no Tekton involved except for the image)…

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test-1
  name: test-1
spec:
  initContainers:
  - command:
    - /ko-app/entrypoint
    - cp
    - /ko-app/entrypoint
    - /bar/entrypoint
    image: gcr.io/vde-tekton/entrypoint-bff0a22da108bc2f16c818c97641a296@sha256:0908b45793f2847874d4165441271c98b4b5f2274e83e89ed2ce304058b3a0f1
    imagePullPolicy: IfNotPresent
    name: place-tools
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /bar
      name: foo
  containers:
  - command:
    - /foo/entrypoint
    image: gcr.io/kaniko-project/executor:debug
    #imagePullPolicy: Always
    name: test-1
    env:
    - name: DOCKER_CONFIG
      value: "/tekton/home/.docker"
    volumeMounts:
    - mountPath: /foo
      name: foo
  runtimeClassName: untrusted
  securityContext:
    runAsUser: 0
  volumes:
  - emptyDir: {}
    name: foo
vdemeester commented 3 years ago

Alright, thanks to @Sh4d1 (again 😉), we may be onto something. So…

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: test-1
  name: test-1
  namespace: test
spec:
  containers:
  - command:
    - env
    image: gcr.io/kaniko-project/executor:debug
    #imagePullPolicy: Always
    name: test-1
    env:
    - name: DOCKER_CONFIG
      value: "/tekton/home/.docker"
    - name: TEST
      value: lol
  runtimeClassName: untrusted
  securityContext:
    runAsUser: 0

Gives…

PATH=/usr/local/bin:/kaniko:/busybox
HOSTNAME=test-1
HOME=/root
USER=/root
SSL_CERT_DIR=/kaniko/ssl/certs
DOCKER_CONFIG=/kaniko/.docker/
DOCKER_CREDENTIAL_GCR_CONFIG=/kaniko/.config/gcloud/docker_credential_gcr_config.json
DOCKER_CONFIG=/tekton/home/.docker
TEST=lol
KUBERNETES_PORT_443_TCP=tcp://10.32.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.32.0.1
KUBERNETES_SERVICE_HOST=10.32.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.32.0.1:443

Note the two DOCKER_CONFIG entries. This doesn't happen without untrusted (aka gVisor): there, only one DOCKER_CONFIG is present, and it's the last one (the expected one).

vdemeester commented 3 years ago

Reported upstream https://github.com/google/gvisor/issues/5226

afrittoli commented 3 years ago

The issue seems to be fixed upstream in gvisor. @rvadim would you like to retest this with the latest gvisor?

afrittoli commented 3 years ago

Removing the priority as it is not a Tekton issue

tekton-robot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot commented 2 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten with a justification. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

tekton-robot commented 2 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

tekton-robot commented 2 years ago

@tekton-robot: Closing this issue.

In response to [this](https://github.com/tektoncd/pipeline/issues/3666#issuecomment-968184071):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen` with a justification.
> Mark the issue as fresh with `/remove-lifecycle rotten` with a justification.
> If this issue should be exempted, mark the issue as frozen with `/lifecycle frozen` with a justification.
>
> /close
>
> Send feedback to [tektoncd/plumbing](https://github.com/tektoncd/plumbing).

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.