This PR is to address 2 issues:

1) When the operator restarts, all the AnsibleJobs that previously ran will trigger again. With this PR, it will end the play and stamp the status with the following message:
```
...
message: This job instance is already running or has reached its end state.
...
```
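As a minimal sketch, a guard like the following could sit at the top of the operator's reconcile play. The `job_already_processed` fact, the `tower.ansible.com/v1alpha1` group/version, and the task placement are illustrative assumptions, not the PR's actual code:

```yaml
# Hypothetical guard at the top of the reconcile play (not the PR's verbatim code).
- name: Stamp the terminal message on an AnsibleJob that already ran
  operator_sdk.util.k8s_status:
    api_version: tower.ansible.com/v1alpha1   # assumed CRD group/version
    kind: AnsibleJob
    name: "{{ ansible_operator_meta.name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
    status:
      message: This job instance is already running or has reached its end state.
  when: job_already_processed | default(false)   # hypothetical fact

- name: End the play so the tower job is not launched again
  ansible.builtin.meta: end_play
  when: job_already_processed | default(false)
```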
2) Currently, it's not clear that `awx.awx.tower_job_launch` failed to launch (e.g., because of a bad auth token). When the tower job fails to launch and you look at the status, it only shows that the k8s Job for the job runner was created successfully; it carries no information about the tower_job launch failure. This is very confusing for users: you have to dig into the log of the k8s Job to see that the tower_job failed.
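For illustration, the pre-PR status after a failed launch reads as all-clear. The fields below are a hedged reconstruction from the description above, not the original output:

```yaml
# Hedged illustration: nothing in this status says the tower job failed to launch.
status:
  conditions:
    - type: Running          # reflects the reconcile loop, not the tower job
      status: "True"
      reason: Successful
  k8sJob:
    created: true            # only says the runner Job was created
```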
Changes in this PR:

- Used `generateName` to append a suffix for `kubectl create` (see the first sketch after this list).
- Set `status.ansibleJobResult.status` to `error` when the job launch errors; the new status for a failed launch is sketched after this list.
- A tower job that ran successfully will have a status like the second example in the sketch after this list.
- Added `changed` and `failed` to `status.ansibleJobResult` in this PR.
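Two hedged sketches of these changes; all names and values are illustrative assumptions, not copied from the PR. First, `generateName` lets the API server mint a fresh suffix on every `kubectl create`:

```yaml
apiVersion: tower.ansible.com/v1alpha1    # assumed CRD group/version
kind: AnsibleJob
metadata:
  generateName: demo-job-    # `kubectl create -f` twice yields demo-job-<rand1>, demo-job-<rand2>
spec:
  tower_auth_secret: toweraccess          # hypothetical values
  job_template_name: Demo Job Template
```

Second, the resulting `status.ansibleJobResult`, reconstructed only from the fields this PR names (`status`, `changed`, `failed`); exact values are assumptions:

```yaml
# Failed launch (e.g., bad auth token)
status:
  ansibleJobResult:
    changed: false
    failed: true
    status: error
---
# Successful tower job run
status:
  ansibleJobResult:
    changed: true
    failed: false
    status: successful
```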
Signed-off-by: Mike Ng <ming@redhat.com>