This document is part of a repository that provides example code and instructions for deploying and configuring a Conjur cluster in OpenShift. There are additional instructions for running a webapp for demonstration purposes. The proposed architecture consists of a master and two standbys.
We use a Vagrant box since Minishift does not support Origin 1.3. Run the following to set up the environment.
vagrant up --provision
vagrant ssh # then change directory to ./scripts
If not using Vagrant, please modify utils with the relevant credentials for logging into OpenShift.
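As an illustration, the credentials consumed by the scripts might look like the following (a sketch only; apart from CLUSTER_URL, which matches the login command used in the deployment steps below, the variable names are assumptions):

# hypothetical contents of utils; adjust for your own OpenShift cluster
CLUSTER_URL=https://openshift.example.com:8443
OPENSHIFT_USERNAME=admin
OPENSHIFT_PASSWORD=admin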
If using the scripts, please ensure the conjur-appliance:4.9-stable image is available in your Docker engine.
Deploying a Conjur cluster in OpenShift can be broken down into the following steps:
- Logging into OpenShift: oc login $CLUSTER_URL -u admin -p admin
- Creating the conjur project
- Building the conjur-appliance:local and haproxy images
- Creating the conjur-master service, which uses HAProxy

Please consult 0_init.sh.
Isolate the Conjur cluster by creating a dedicated OpenShift project for it.
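Creating such a project might look like this (a minimal sketch, assuming the project is named conjur as in the overview; 0_init.sh contains the actual commands):

oc new-project conjur
oc project conjur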
Appropriate privileges should be granted to ensure the relevant operations can be carried out, e.g. unpacking Conjur seed files.
Below are some privilege considerations:
In order to unpack Conjur seed files, processes in the Conjur container need to run as root. Granting the anyuid SCC is one way to achieve this.
oc adm policy add-scc-to-user anyuid -z default
HAProxy needs to be able to list the master/standby pods to update its config.
oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:$CONJUR_CONTEXT:default
User "developer" needs the edit role on a project.
oc policy add-role-to-user edit developer
Visit your Appliance URL
Please consult 1_build_all.sh.
This section assumes you have the appliance image conjur-appliance:4.9-stable in your OpenShift Docker engine.
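A quick way to confirm the image is present (a sketch; load or pull the image first if it is missing):

docker images | grep conjur-appliance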
./etc/conjur.json is added to the appliance during the build. This file is used to specify the amount of memory to allocate to Postgres. Consult ./build/conjur_server and ./build/haproxy for the image definitions.
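The image builds themselves might look roughly like this (a hedged sketch; 1_build_all.sh and the Dockerfiles under ./build are authoritative, and the haproxy tag below is an assumption):

docker build -t conjur-appliance:local ./build/conjur_server
docker build -t haproxy:local ./build/haproxy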
Please consult 2_start_cluster.sh.
The following steps should be carried out within the Conjur project.
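Switching into the project first might look like this (a sketch, using the $CONJUR_CONTEXT project referenced in the privilege grants above):

oc project $CONJUR_CONTEXT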
Please consult ./conjur-service/conjur-cluster.yaml.
oc create -f ./conjur-service/conjur-cluster.yaml
MASTER_POD_NAME=$(oc get pods -l app=conjur-node --no-headers | awk 'NR==1 { print $1 }') # pick one conjur-node pod to act as master
oc label --overwrite pod $MASTER_POD_NAME role=master
oc exec $MASTER_POD_NAME -- evoke configure master \
-j /etc/conjur.json \
-h $CONJUR_MASTER_DNS_NAME \
--master-altnames conjur-master \
--follower-altnames conjur-follower \
-p $CONJUR_ADMIN_PASSWORD \
$CONJUR_CLUSTER_ACCOUNT
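Before seeding the standbys it is worth confirming the master is healthy, for example via the appliance health endpoint (a sketch, assuming curl is available in the appliance container):

oc exec $MASTER_POD_NAME -- curl -sk https://localhost/health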
oc exec $MASTER_POD_NAME -- evoke seed standby > standby-seed.tar
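The seed file must be present inside each standby pod at /tmp/standby-seed.tar before it can be unpacked. One way to copy it there (an assumption, not shown in the original fragments) is, for each standby pod named $pod_name in the loop below:

oc cp standby-seed.tar $pod_name:/tmp/standby-seed.tar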
for pod_name in $(oc get pods -l role=unset --no-headers | awk '{ print $1 }'); do
  oc label --overwrite pod $pod_name role=standby
  oc exec $pod_name -- evoke unpack seed /tmp/standby-seed.tar
  oc exec $pod_name -- evoke configure standby -j /etc/conjur.json -i $MASTER_POD_IP
done
oc exec $MASTER_POD_NAME -- evoke replication sync
oc create -f ./conjur-service/haproxy-conjur-master.yaml
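HAProxy routes to the master and standby pods, so it can help to list them and their IPs (a sketch):

oc get pods -l app=conjur-node -o wide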
Consult ./etc/update_haproxy.sh for a working example.

This section demonstrates an example app consuming the Conjur cluster running on OpenShift for the purposes of machine identity and secrets retrieval.
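At its core, once the app has authenticated as its Conjur host identity (using the host's API key), retrieving a secret looks something like this (a sketch; webapp/db-password is a hypothetical variable id, and the real flow is in the webapp demo):

conjur variable value webapp/db-password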
Please consult ./0_webapp_init.sh and ./webapp_demo/build/build.sh.
Please consult ./1_load_policies.sh and webapp_demo/policy.
Please consult ./2_deploy.sh.

Retrieve the Conjur SSL certificate:
cat /opt/conjur/etc/ssl/conjur.pem
Store this in a ConfigMap.

Rotate the host API key:
conjur host rotate_api_key -h $host_id
and store it in a Secret.
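Creating those OpenShift objects might look like this (a sketch; the object names, key names, and the conjur.pem and $api_key values are assumptions, and 2_deploy.sh shows the real steps):

oc create configmap conjur-cert --from-file=ssl-certificate=conjur.pem
oc create secret generic webapp-conjur-identity --from-literal=api-key=$api_key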