Closed simongdavies closed 3 years ago
For the pod affinity label, is the plan that the k8s operator will generate a unique value, set that on the porter job container, and then pass through this label so that the k8s driver will also use the same label on the invocation image pod?
@carolynvs The approach I have used in the controller is to set `AFFINITY_MATCH_LABELS` to `installation=inst.Name`. I think this should work?
We can chat about that once we have a PR against the operator with this change, but people can run actions against an installation at the same time, though I'd advise anyone against it right now. Ideally it is more unique than the installation name. In another place where I needed a unique value, I used the CRD revision that triggered the event appended to the installation name.
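A minimal Go sketch of that suggestion, appending the triggering CRD revision to the installation name so that concurrent actions against the same installation get distinct affinity values. The helper name and format here are illustrative, not the operator's actual code:

```go
package main

import "fmt"

// uniqueAffinityValue builds a per-action affinity label value by appending
// the CRD resource revision that triggered the event to the installation
// name. Two actions against the same installation then carry different
// values and will not accidentally co-schedule against each other's pods.
// (Hypothetical helper; the real operator may derive the value differently.)
func uniqueAffinityValue(installation, revision string) string {
	return fmt.Sprintf("%s-%s", installation, revision)
}

func main() {
	fmt.Println(uniqueAffinityValue("mysql", "42"))
}
```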
This change introduces a setting to specify labels to be used for affinity for the Kubernetes driver.
The driver uses a PVC to copy input and output data between the driver and the job that runs the invocation image. In some cases the job must run on the same node as another pod: for example, when the PVC is bound with ReadWriteOnce access mode to the node running the driver, the job running the invocation image must be scheduled on the same node as the pod that created the PVC. This change enables a client to set pod affinity appropriately.
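Concretely, when affinity labels are supplied, the job pod would carry roughly this kind of affinity stanza. This is a hand-written illustration of the Kubernetes pod affinity shape, not output captured from the driver, and the `installation: mysql` label value is an example:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            installation: mysql   # example label taken from AFFINITY_MATCH_LABELS
        topologyKey: kubernetes.io/hostname   # co-schedule on the same node
```

The `kubernetes.io/hostname` topology key is what forces the job onto the same node as the pod carrying the matching labels, which is exactly what a ReadWriteOnce PVC requires.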
To specify affinity with a pod, the client should set the environment variable `AFFINITY_MATCH_LABELS` to label name/value pairs separated by whitespace (e.g. 'A=B X=Y'). These labels are used to set pod affinity constraints for the job running the invocation image.

Signed-off-by: Simon Davies simongdavies@users.noreply.github.com
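A minimal Go sketch of parsing that environment variable format into the label map that would populate a pod affinity term's `matchLabels`. The function name is illustrative and the actual driver's parsing may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAffinityMatchLabels splits a value like "A=B X=Y" on whitespace
// and returns the resulting label map. Malformed tokens without an "="
// are skipped in this sketch.
func parseAffinityMatchLabels(value string) map[string]string {
	labels := map[string]string{}
	for _, pair := range strings.Fields(value) {
		if k, v, ok := strings.Cut(pair, "="); ok {
			labels[k] = v
		}
	}
	return labels
}

func main() {
	fmt.Println(parseAffinityMatchLabels("installation=mysql A=B"))
}
```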