Describe the bug
The pod Status value on the 'Pods' view is not reported correctly for multi-container pods.

To Reproduce
Steps to reproduce the behavior:
Deploy a pod with two containers: a 'good' container which exits with code 0 after 5 seconds, and a 'bad' container which exits with code 1 after 15 seconds.
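The original manifest is not attached here; the following is a minimal sketch reconstructed from the container descriptions and the kubectl describe output in the Logs section (restartPolicy: Never is an assumption, inferred from the pod reaching the Failed phase without any restarts):

```yaml
# Hypothetical reconstruction of the 'errorpod' manifest described in this report.
apiVersion: v1
kind: Pod
metadata:
  name: errorpod
spec:
  restartPolicy: Never        # assumption: containers are not restarted and the pod ends in phase Failed
  containers:
    - name: good
      image: nginx
      args: ["bash", "-c", "sleep 5; exit 0"]    # exits successfully after 5 seconds
    - name: bad
      image: nginx
      args: ["bash", "-c", "sleep 15; exit 1"]   # fails after 15 seconds
```

Apply it (e.g. kubectl apply -f errorpod.yaml) and watch the Status column on the Pods view in Lens while it runs.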
1. After about 5 seconds the Pods view shows "Completed".
2. Run kubectl describe pod/errorpod: the main pod Status value in the output is Running.
3. Wait for both containers to finish.
4. Run kubectl describe pod/errorpod again: the main pod Status is now Failed, but Lens still shows Completed.
The 'bad' container returns exit code 1, which fails the pod, but Lens still reports it as Completed.
If the 'bad' container finishes before the 'good' one, Lens first reports Status=Error due to the bad container, then changes to Completed when the 'good' container finishes with exit code 0.
If either container is commented out and the pod re-deployed, the Status column is 100% reliable (it always reports Completed or Error).
Expected behavior
The Status column on the Pods view should report the same value as the pod Status value in the output of kubectl describe pod/errorpod.
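For comparison, the pod phase that kubectl sees can also be read directly from the pod status; this is only an illustrative check and not part of the original report:

```sh
kubectl get pod errorpod -o jsonpath='{.status.phase}'
# Prints Running while the 'bad' container is still sleeping,
# then Failed once it has exited with code 1.
```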
Environment (please complete the following information):
- Lens Version: 5.2.5-latest-20211001.2
- Kubernetes: Docker Desktop
- OS: Windows 10 (Ubuntu on WSL 2)
Logs (output of kubectl describe pod/errorpod):
**After the 'good' container has completed:**
Name: errorpod
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Wed, 03 Nov 2021 14:39:46 +1100
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.1.1.76
IPs:
IP: 10.1.1.76
Containers:
good:
Container ID: docker://b4392200db58853597b3c448d53feeb8feaf0bfee8b1bf41379ed7401a953a9e
Image: nginx
Image ID: docker-pullable://artifactory.unibet.com.au/docker/nginx@sha256:7250923ba3543110040462388756ef099331822c6172a050b12c7a38361ea46f
Port: <none>
Host Port: <none>
Args:
bash
-c
sleep 5; exit 0
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 03 Nov 2021 14:39:47 +1100
Finished: Wed, 03 Nov 2021 14:39:52 +1100
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvlp2 (ro)
bad:
Container ID: docker://11e64e6f86444e8993a97aac9cb1396f460538abf14b10fe52b3d293201160d9
Image: nginx
Image ID: docker-pullable://artifactory.unibet.com.au/docker/nginx@sha256:7250923ba3543110040462388756ef099331822c6172a050b12c7a38361ea46f
Port: <none>
Host Port: <none>
Args:
bash
-c
sleep 15; exit 1
State: Running
Started: Wed, 03 Nov 2021 14:39:47 +1100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvlp2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-qvlp2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8s default-scheduler Successfully assigned default/errorpod to docker-desktop
Normal Pulled 7s kubelet Container image "nginx" already present on machine
Normal Created 7s kubelet Created container good
Normal Started 7s kubelet Started container good
Normal Pulled 7s kubelet Container image "nginx" already present on machine
Normal Created 7s kubelet Created container bad
Normal Started 7s kubelet Started container bad
**After the 'bad' container has completed:**
Name: errorpod
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Wed, 03 Nov 2021 14:39:46 +1100
Labels: <none>
Annotations: <none>
Status: Failed
IP: 10.1.1.76
IPs:
IP: 10.1.1.76
Containers:
good:
Container ID: docker://b4392200db58853597b3c448d53feeb8feaf0bfee8b1bf41379ed7401a953a9e
Image: nginx
Image ID: docker-pullable://artifactory.unibet.com.au/docker/nginx@sha256:7250923ba3543110040462388756ef099331822c6172a050b12c7a38361ea46f
Port: <none>
Host Port: <none>
Args:
bash
-c
sleep 5; exit 0
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 03 Nov 2021 14:39:47 +1100
Finished: Wed, 03 Nov 2021 14:39:52 +1100
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvlp2 (ro)
bad:
Container ID: docker://11e64e6f86444e8993a97aac9cb1396f460538abf14b10fe52b3d293201160d9
Image: nginx
Image ID: docker-pullable://artifactory.unibet.com.au/docker/nginx@sha256:7250923ba3543110040462388756ef099331822c6172a050b12c7a38361ea46f
Port: <none>
Host Port: <none>
Args:
bash
-c
sleep 15; exit 1
State: Terminated
Reason: Error
Exit Code: 1
Started: Wed, 03 Nov 2021 14:39:47 +1100
Finished: Wed, 03 Nov 2021 14:40:02 +1100
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvlp2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-qvlp2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18s default-scheduler Successfully assigned default/errorpod to docker-desktop
Normal Pulled 17s kubelet Container image "nginx" already present on machine
Normal Created 17s kubelet Created container good
Normal Started 17s kubelet Started container good
Normal Pulled 17s kubelet Container image "nginx" already present on machine
Normal Created 17s kubelet Created container bad
Normal Started 17s kubelet Started container bad