When the tower job fails to launch, the current ansiblejob status output is:
status:
  ansibleJobResult:
    status: error
  conditions:
  - ansibleResult:
      changed: 2
      completion: 2020-09-04T16:31:44.004288
      failures: 0
      ok: 6
      skipped: 1
    lastTransitionTime: "2020-09-04T16:31:33Z"
    message: Awaiting next reconciliation
    reason: Successful
    status: "True"
    type: Running
  k8sJob:
    created: true
    env:
      secretNamespacedName: default/toweraccess
      templateName: Demo Job Template
      verifySSL: false
    message: Monitor the K8s job status and log for more details
    namespacedName: default/demo-job-wdq2b
Besides the status: error, it is not clear what went wrong. With this PR, the k8sJob status has been improved to explicitly state which kubectl commands the user can run to debug the cause of the error:
k8sJob:
  created: true
  env:
    secretNamespacedName: default/toweraccess
    templateName: Demo Job Template
    verifySSL: false
  message: |-
    Monitor the job.batch status for more details with the following commands:
    'kubectl -n default get job.batch/demo-job-wdq2b'
    'kubectl -n default describe job.batch/demo-job-wdq2b'
    'kubectl -n default logs -f job.batch/demo-job-wdq2b'
  namespacedName: default/demo-job-wdq2b
When the user runs the logs command, it will show the cause of the error.
Signed-off-by: Mike Ng ming@redhat.com