When killing the Job with `killTimestamp`, we see that the Job reaches the Killed phase even while the container is still running.
Once the container completes, we can see that the logs stop (meaning that the container exited), and the Pod's `containerStatuses` state moves from `running` to `terminated`.
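To illustrate the mismatch, the Pod status at that point looks roughly like the following (a hand-written sketch with made-up names and timestamps, not output copied from a live cluster):

```yaml
status:
  phase: Failed              # the Pod is already considered terminated...
  reason: DeadlineExceeded
  containerStatuses:
    - name: job-container
      state:
        running:             # ...but the container is in fact still running
          startedAt: "2022-01-01T00:00:00Z"
```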
The implications of this include:

- Incorrect handling of graceful termination: the task appears to have been terminated immediately, rather than shut down gracefully.
- The concurrency policy may be violated: a container is still pending termination, but another Job has already started.
Possible solutions:

- Easy fix: Do not depend on the Pod's `status.phase` alone to determine the task state. In this case, we need to look at the `containerStatuses` AND the `phase` to determine if all containers are dead AND they will not be recreated (see the sketch after this list).
- Abandon the active deadline approach: There are other problems with using an active deadline and force deletion at the same time. Alternatively, we could keep the active deadline behavior behind a config/feature flag.
Currently, we use active deadlines to kill Pods, which apparently does not ensure that the container has already terminated before the Pod's phase moves to a terminal state (e.g. Failed with reason DeadlineExceeded).
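To make the easy fix concrete, a check along the following lines could be used when deriving the task state. This is a minimal sketch, not the actual controller code; the helper name `isPodFullyTerminated` is made up, and only the `corev1` types are real:

```go
package podutil

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodFullyTerminated returns true only when the Pod has reached a terminal
// phase AND every container has actually exited. Checking the phase alone is
// not enough: when the active deadline is exceeded, the Pod can report
// Failed while its containers are still running.
func isPodFullyTerminated(pod *corev1.Pod) bool {
	// A terminal phase guarantees that the containers will not be
	// restarted or recreated once they exit.
	if pod.Status.Phase != corev1.PodSucceeded && pod.Status.Phase != corev1.PodFailed {
		return false
	}

	// Additionally require every container to have reached a terminated
	// state, so we never mark the task dead while a container is running.
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Terminated == nil {
			return false
		}
	}
	return true
}
```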
The following JobConfig allows us to replicate this issue. We use https://github.com/irvinlim/signalbin to test interactions with signal handlers.
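The original manifest was not captured here, so the following is a hand-written sketch of such a JobConfig (the image reference and most field values are assumptions; refer to the signalbin README for its actual usage):

```yaml
apiVersion: execution.furiko.io/v1alpha1
kind: JobConfig
metadata:
  name: signalbin-test
spec:
  concurrency:
    policy: Forbid
  template:
    spec:
      taskTemplate:
        pod:
          spec:
            containers:
              - name: job-container
                # Hypothetical image reference; the container should trap
                # SIGTERM and delay its exit so the behavior is observable.
                image: irvinlim/signalbin
```

A Job created from this JobConfig can then be killed by setting `spec.killTimestamp` on it, which triggers the behavior described above.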