Closed uaw013 closed 2 weeks ago
Any suggestions, Mr. Arthur Barr?
Kubernetes Jobs are used to run a brand new container. So the Job above (assuming the indentation is a copy/paste problem) would run a brand new Pod with a container called "install-script", run the shell command, and then exit. It wouldn't have any relationship whatsoever with any existing queue manager or other containers; nowhere in that Job do you specify which queue manager you want it to apply to.
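To illustrate the point, here is a minimal sketch of a Job of the kind described: it creates a fresh Pod, runs its command, and exits, with no connection to any running queue manager. The image and command are placeholders I've made up, not taken from the original question:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: install-script
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: install-script
        # Placeholder image and command -- adjust for your own script.
        image: busybox:1.36
        command: ["sh", "-c", "echo 'running install script'"]
```

Nothing in this manifest references an existing Pod; the Job controller simply schedules a brand new Pod for the command.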
If you want to run a command in an existing container, you can use the kubectl exec command to inject an additional process into that container. You may come up against security rules inside the container, and any changes the script makes that aren't persisted to a volume (if the container uses one) will be lost when the container is restarted.
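For example, running a script inside an existing container with kubectl exec might look like the following. The Pod name "qmgr-0", container name "qmgr", and script path are hypothetical placeholders; substitute the names from your own deployment:

```shell
# "qmgr-0" and "qmgr" are placeholder names for your queue manager
# Pod and container -- adjust them to match your deployment.
kubectl exec qmgr-0 -c qmgr -- /bin/sh -c '/path/to/install-script.sh'

# Or open an interactive shell in the container to investigate first:
kubectl exec -it qmgr-0 -c qmgr -- /bin/sh
```

Note that, as described above, any filesystem changes this makes outside a mounted volume disappear when the container restarts.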
I've been trying to run a custom shell script using a Kubernetes Job, but it doesn't seem to be working. The YAML file for the Job is included below.
It appears that the problem might be related to the state of the replica Pods: they are in the "Running" state but not in the "Ready" state. This is how the system is designed. You can refer to the IBM community discussion for more details: IBM MQ NativeHA on Kubernetes with IBM Messaging/MQ Helm Chart.
What is the best practice to do this?
Here is the YAML file I'm using: