Problem Statement

The recommended way to launch a Workflow job template is to create an AnsibleWorkflow CR, like the following:

However, we still support the old (now deprecated) way of launching Workflows using an AnsibleJob CR, for example:

The backwards-compatibility shim for that is currently broken, so the yaml shown directly above will fail every time.

Solution

This was a multi-layer problem. The issues are outlined below:

1. The job_runner.tar.gz that gets copied into the runner container had gotten out of date. We should automate updating it so that it cannot silently drift out of date again.

2. Beyond that, there was a bug in the code itself that caused the operator to look for an AnsibleWorkflow object when it shouldn't have, which surfaced as a misleading permissions error.

However, we uncovered another issue that needs fixing, which I think we should handle as a separate issue:

The Role, RoleBinding, and ServiceAccount are all created with ownerRefs. Unfortunately, this means that when the AnsibleJob or AnsibleWorkflow CR that created them is deleted, they are deleted as well. This is generally not a problem, because they are created on the fly as needed by new CRs. However, it can introduce a race condition if an AnsibleJob is deleted while another is in flight. The easy fix would be to remove the ownerRefs immediately after the resources are created; however, that would leave them behind if the operator itself is deleted.

Furthermore, if this were deployed in a cluster-scoped fashion, there would be a separate ServiceAccount/Role/RoleBinding set in each managed namespace, rather than a single central set using a ClusterRole in the namespace the operator is installed in.
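For illustration only (the yaml examples referenced above are not reproduced in this excerpt), the two launch styles might look roughly like the sketch below. The apiVersion, secret name, and spec field names (tower_auth_secret, workflow_template_name) are assumptions based on typical awx-resource-operator CRDs, not taken from this issue:

```yaml
# Recommended: launch a workflow via an AnsibleWorkflow CR.
# Field names below are assumed, not confirmed by this issue.
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleWorkflow
metadata:
  name: demo-workflow
spec:
  tower_auth_secret: tower-auth          # hypothetical secret name
  workflow_template_name: Demo Workflow  # hypothetical template name
---
# Deprecated: launch the same workflow via an AnsibleJob CR.
# This is the path that goes through the broken compatibility shim.
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: demo-workflow-job
spec:
  tower_auth_secret: tower-auth
  workflow_template_name: Demo Workflow
```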
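To make the ownerRef cascade concrete: a Role created on behalf of an AnsibleJob would carry an ownerReference like the hedged sketch below (names, uid, and rules are hypothetical), and Kubernetes garbage collection deletes it automatically when the owning CR is deleted:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-runner            # hypothetical name
  namespace: demo
  ownerReferences:
  - apiVersion: tower.ansible.com/v1alpha1
    kind: AnsibleJob
    name: demo-job            # the CR whose reconcile created this Role
    uid: 00000000-0000-0000-0000-000000000000
rules:                        # illustrative rules only
- apiGroups: ["tower.ansible.com"]
  resources: ["ansiblejobs"]
  verbs: ["get", "list", "watch"]
```

The "easy fix" discussed above would strip the ownerReferences list right after creation (e.g. a JSON patch removing /metadata/ownerReferences), at the cost of orphaning these resources when the operator itself is uninstalled.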