lilongfeng0902 closed this issue 2 years ago
@lilongfeng0902: This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/kind bug /sig apps /area workload-api/job
Maybe my question is unreasonable… Thanks.
What happened?
First, create a Job, as shown below:
Then, while the Job is running, execute "systemctl stop docker"; the pod becomes Completed. The pod's YAML is shown below:
Then you can execute "systemctl restart docker", but the Job changes to Completed status; its status YAML looks like this:
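The original YAML files did not survive in this copy of the issue. A minimal Job that matches the described scenario might look like the following sketch (the name, image, and sleep duration are assumptions, not taken from the original report):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sleep-job            # hypothetical name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: sleeper
        image: busybox       # assumed image
        # Long-running command, so the container runtime can be
        # stopped while the pod is still mid-run.
        command: ["sleep", "600"]
```

The key point is a pod that is clearly still running when the runtime is stopped, so that a Completed status afterwards is unambiguously wrong.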
What did you expect to happen?
In fact, the Job has not completed; it has merely stopped. I would expect that once the runtime service is healthy again, the Job resumes running normally.
How can we reproduce it (as minimally and precisely as possible)?
Firstly, create a Job and wait until it is running. Secondly, stop the runtime service by executing "systemctl stop docker.service". Thirdly, wait until the pod changes to Completed status, then start the Docker service again. Finally, you will reproduce the problem.
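The steps above can be sketched as a command sequence (the manifest file and job name "sleep-job" are assumptions for illustration; the systemctl commands must run on the node hosting the pod):

```shell
# 1. Create the Job and wait for its pod to be running.
kubectl apply -f sleep-job.yaml
kubectl wait --for=condition=Ready pod -l job-name=sleep-job

# 2. On the node running the pod, stop the container runtime.
systemctl stop docker.service

# 3. Watch until the pod is reported as Completed (Ctrl-C to stop watching),
#    then restart the runtime.
kubectl get pod -l job-name=sleep-job -w
systemctl start docker.service

# 4. The Job now reports Complete even though the workload never finished.
kubectl get job sleep-job -o yaml
```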
Anything else we need to know?
Logs related to the issue:
This problem might be a special case of https://github.com/kubernetes/kubernetes/issues/28486. Is it necessary to handle it?
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)