Closed: rhcarvalho closed this issue 9 years ago.
is the issue simply that the config data is not on a persistent volume? if it were, would this survive a restart? or is there something else that needs to be initialized each time the mongo container comes up?
During a redeploy the replica set is essentially undone as the pods get killed. Upon restart, they need to be reconnected -- https://github.com/openshift/mongodb/blob/master/2.4/contrib/common.sh#L130-L135
I'll try to make it so that we use a post-deploy hook instead of a run-once pod and see how far it gets us. It should work even if we have ephemeral storage (data will be lost, but not the connectivity in the redeployed cluster).
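For reference, a minimal sketch of what such a post-deploy lifecycle hook could look like on the MongoDB deployment config. The container name, command, and replica set name below are assumptions, not the actual implementation; the hook would have to re-run the reconnection logic that common.sh currently performs from the run-once pod:

```yaml
# Fragment of a DeploymentConfig spec: a Recreate strategy with a "post" hook
# that runs in a new pod after every (re)deployment completes.
strategy:
  type: Recreate
  recreateParams:
    post:
      failurePolicy: Retry
      execNewPod:
        containerName: mongodb                      # assumed container name
        command: ["bash", "-c", "initiate_replset"] # hypothetical script wrapping the rs.initiate()/reconnect logic
        env:
        - name: MONGODB_REPLICA_NAME
          value: rs0                                # assumed replica set name
```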
I think this would be a good scenario for an extended test... or at least to add as a test case for our QE team.
@rhcarvalho QE has added such a scenario; the test cases contain "origin_devexp_625" in their titles: https://tcms-openshift.rhcloud.com/case/4101/ and https://tcms-openshift.rhcloud.com/case/4102/ .
The replication example does not survive a redeploy. The current approach, based on a run-once pod, likely has no future if we are going to support redeploys.
Steps to reproduce:
On a new project, create cluster from the template:
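The commands looked roughly like this; the project name and template path are assumptions based on the repository layout, so substitute whatever you actually use:

```
oc new-project mongo-replica-test
oc new-app -f examples/replica/mongodb-clustered.json   # path to the replication template is an assumption
```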
Wait until replica set is deployed and stand-alone pod shuts down:
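Watching the pods is enough for this step:

```
# Wait until the replica member pods are Running and the initial
# stand-alone (run-once) pod has terminated.
oc get pods -w
```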
List pods and connect to one of them as the 'admin' user:
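A sketch of this step; the pod name is a placeholder, and the admin password is taken from the MONGODB_ADMIN_PASSWORD environment variable set inside the container:

```
oc get pods
oc rsh mongodb-1-abcde                             # placeholder pod name; pick any member
# inside the pod:
mongo admin -u admin -p "$MONGODB_ADMIN_PASSWORD"
# inside the mongo shell, rs.status() lists the members of the replica set
rs.status()
```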
Ok, we have a replica set. Now, let's continue...
Redeploy:
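Assuming the deployment config created by the template is named mongodb:

```
# `oc deploy --latest` triggers a new deployment; newer clients use `oc rollout latest dc/mongodb`.
oc deploy mongodb --latest
```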
Again, list pods and try to connect to one of them as the 'admin' user:
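Same commands as before, now against a pod from the new deployment (pod name again a placeholder); this is where authentication breaks:

```
oc get pods
oc rsh mongodb-2-fghij
mongo admin -u admin -p "$MONGODB_ADMIN_PASSWORD"
# The login is rejected: the admin user only existed in the data directory of
# the original pods, which was not backed by a persistent volume.
```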
It failed because there is no data persistence, so with the redeploy all the data and configuration were gone.
Connect without authentication:
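From inside the same pod, connecting without credentials succeeds:

```
oc rsh mongodb-2-fghij        # same placeholder pod as above
mongo
```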
As we can see, we now have an independent MongoDB instance, running without authentication and without any of the configuration applied originally by the `mongodb-service` pod.