Closed roffe closed 6 years ago
looks like this is not actively maintained, will look for other solutions
Sorry I missed your issue.
kube-slack does not read Kubernetes events. It reads Kubernetes pod metadata (what you get in `kubectl describe pod`, or more precisely `kubectl get pod -o yaml`) and reports the exact message Kubernetes gave as the reason.
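To make that concrete, here is a rough sketch of where that message lives in the pod object. The sample data is made up, but `status.containerStatuses[*].state.waiting.reason`/`.message` are the real Kubernetes API field names you'd see in `kubectl get pod -o yaml`; this is not kube-slack's actual code:

```python
# Hypothetical pod status, shaped like `kubectl get pod -o json` output.
# The field names are real Kubernetes API fields; the values are samples.
pod = {
    "status": {
        "containerStatuses": [
            {
                "name": "app",
                "state": {
                    "waiting": {
                        "reason": "CrashLoopBackOff",
                        "message": "back-off 5m0s restarting failed container",
                    }
                },
            }
        ]
    }
}

def failure_reasons(pod):
    """Collect (container, reason, message) for containers stuck in a waiting state."""
    out = []
    for cs in pod.get("status", {}).get("containerStatuses", []):
        waiting = cs.get("state", {}).get("waiting")
        if waiting:
            out.append((cs["name"], waiting.get("reason"), waiting.get("message")))
    return out

print(failure_reasons(pod))
```

Whatever string sits in that `message` field is what gets forwarded verbatim.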
I think this is working as intended. If the container refuses to start due to an entrypoint spawning problem, Docker reports an error during `docker start`, and Kubernetes just reports whatever it gets. If the container starts but then crashes, that is asynchronous and the output goes into the logging system, which pod metadata is not part of.
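A sketch of that asynchronous case, again with hypothetical sample data over real Kubernetes field names (`state.terminated.exitCode`/`.reason`/`.message`): once the container has started and then exits, the pod status carries a `terminated` state with an exit code, but its `message` is typically empty, because the script's stderr went to the container log rather than to pod metadata.

```python
# Hypothetical containerStatuses entry for a container that started,
# then crashed. Field names are real Kubernetes API fields.
crashed = {
    "name": "hook",
    "state": {
        "terminated": {
            "exitCode": 1,
            "reason": "Error",
            # no "message" key: the script's stderr is only in `kubectl logs`
        }
    },
}

terminated = crashed["state"]["terminated"]
text = terminated.get("message", "")
print(repr(text))  # an empty string, so a notifier reading only pod metadata has nothing to show
```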
I'm using a lot of [container lifecycle hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks) and these output errors when something goes wrong, but sometimes the slackbot does not print the message.
For example, here it printed the bash error in full because the file wasn't `chmod +x`:
But when the script was executed and output an error, the Slack message is "".
The event looks perfectly fine in the k8s API when I get it with `kubectl get events`.