Currently, pulp.json in the deployment_config directory contains one deployment config with 3 containers defined. As a result, scaling the pod to 2 replicas gives you 2 workers, 2 webservers, and 2 resource managers.
The original idea was to separate them, which is why there are 3 other files (independent deployment configs) in this directory: https://github.com/bmbouter/pulp3-openshift/tree/master/deployment_config
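For anyone reading along, here is a rough sketch of what one of those separated deployment configs looks like: a single container per deployment config, so each component can be scaled independently. The names (`pulp-worker`, the image, the claim name) are placeholders, not the exact values in the repo:

```yaml
# Hypothetical sketch of one separated deployment config (one container only)
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: pulp-worker            # placeholder name
spec:
  replicas: 1
  selector:
    app: pulp-worker
  template:
    metadata:
      labels:
        app: pulp-worker
    spec:
      containers:
      - name: pulp-worker
        image: pulp/pulp-worker    # placeholder image
        volumeMounts:
        - name: pulp-storage
          mountPath: /var/lib/pulp
      volumes:
      - name: pulp-storage
        persistentVolumeClaim:
          claimName: pulp-claim    # the shared claim all components mount
```

Scaling this deployment config to 2 then gives you 2 workers only, instead of 2 of everything.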
I got an error when using them with OpenShift. Specifically, the volume claim needed by the deployment configs couldn't be mounted on multiple Pods. The additional components are ready for testing, but they wouldn't start; the volume mount would hang every time for containers 2+.
Note that the pv.yaml I use creates the volume with the ReadWriteOnce access mode, which allows it to be mounted read/write by only a single node. Is that the issue? I'm not able to test that because the OpenShift instance I have access to doesn't allow me to create volume claims with ReadWriteMany, which is the access mode that permits read/write mounts from many nodes.
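For comparison, this is roughly what the claim would look like with the ReadWriteMany access mode, assuming the backing storage class supports it (NFS and GlusterFS typically do; many block-storage provisioners do not). The claim name and size are placeholders:

```yaml
# Hypothetical claim requesting ReadWriteMany so multiple Pods can mount it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pulp-claim           # placeholder name
spec:
  accessModes:
  - ReadWriteMany            # instead of ReadWriteOnce
  resources:
    requests:
      storage: 1Gi           # placeholder size
```

The access mode on the claim has to be one the underlying PersistentVolume offers, which is why I can't test this on my current instance.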