While running a lot of parallel jobs, I have some jobs that don't start at all and show up in the UI as "Canceled". I managed to reduce the number of canceled jobs by:
- Increasing the registry QPS on the kubelet (see the sketch after this list)
- Moving the docker image to a private registry
- Pulling the image at kubernetes node startup (managed by karpenter on EKS)
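For reference, this is roughly how I raised the registry pull QPS. It is only a sketch using the standard kubelet configuration fields; how the file actually reaches the node (Karpenter EC2NodeClass user data, a launch template, etc.) depends on your setup, and the numbers here are just the ones I happened to pick:

```yaml
# KubeletConfiguration snippet (sketch): loosen the throttle on image pulls.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registryPullQPS: 20   # default is 5; 0 disables the limit entirely
registryBurst: 40     # burst size for pulls; only used when registryPullQPS > 0
```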
Sometimes I still get Canceled jobs in the UI, and the pod logs this:
```
2023-12-14 11:30:14 DEBUG Loaded config agent_version=3.52.0 agent_build=6806
2023-12-14 11:30:14 DEBUG Enabled experiment "kubernetes-exec" agent_version=3.52.0 agent_build=6806
# Using experimental Kubernetes support
🚨 Error: Failed to start kubernetes client: error connecting to kubernetes runner: dial unix /workspace/buildkite.sock: connect: no such file or directory
```
Describing the job shows:
```
Events:
  Type     Reason            Age    From            Message
  ----     ------            ----   ----            -------
  Normal   SuccessfulCreate  7m53s  job-controller  Created pod: buildkite-018c6813-a298-4984-8707-ad9a72dd306e-6mfjh
  Normal   SuccessfulDelete  3m28s  job-controller  Deleted pod: buildkite-018c6813-a298-4984-8707-ad9a72dd306e-6mfjh
  Warning  DeadlineExceeded  3m28s  job-controller  Job was active longer than specified deadline
```
I noticed that activeDeadlineSeconds is 1 and backoffLimit is 0 on the job. These settings should be configurable so the pod can retry in case of image pull issues etc., but I'm not sure whether that's relevant to this problem.
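In case it helps anyone compare, the two values can be read straight off the controller-created Job; the job name below is just the one from my events output, so substitute your own:

```sh
# Inspect the Job the controller created (name taken from the events above; replace with yours).
kubectl get job buildkite-018c6813-a298-4984-8707-ad9a72dd306e \
  -o jsonpath='activeDeadlineSeconds={.spec.activeDeadlineSeconds} backoffLimit={.spec.backoffLimit}{"\n"}'
```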