Configure Docker to allow the insecure registry used by `oc cluster up` and change the log driver to json-file (more reliable): edit `/etc/sysconfig/docker` and change `OPTIONS=` to include `--insecure-registry 172.30.0.0/16 --log-driver=json-file`.
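For reference, a minimal sketch of the resulting `OPTIONS=` line in `/etc/sysconfig/docker` (the `--selinux-enabled` flag is only an illustrative pre-existing option; keep whatever options your distribution already sets):

```
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16 --log-driver=json-file'
```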
Enable and start Docker, and install the required Python packages (use whichever libselinux package name your Fedora release provides):
sudo systemctl enable --now docker
sudo pip install kubernetes openshift
sudo dnf install libselinux-python
sudo dnf install python2-libselinux
Clone this repository to `$HOME/go/src/github.com/openshift/cluster-operator` and install the cfssl tools:
go get -u github.com/cloudflare/cfssl/cmd/...
Download a recent `oc` client binary from origin/releases (doesn't have to be 3.10), or build `oc` from source, and put the `oc` binary somewhere in your path.
Create a `kubectl` symlink to the `oc` binary (if you don't already have it); this is necessary for the `kubectl_apply` ansible module to work. Put the `kubectl` symlink somewhere in your path:
ln -s oc kubectl
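A minimal sketch, assuming you keep local binaries in `$HOME/bin` and that directory is already on your PATH (the download location of the `oc` binary is just an example):

```
mkdir -p "$HOME/bin"
mv ~/Downloads/oc "$HOME/bin/oc"            # example path to the downloaded oc binary
chmod +x "$HOME/bin/oc"
ln -s "$HOME/bin/oc" "$HOME/bin/kubectl"    # kubectl symlink needed by the kubectl_apply module
```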
Start a local cluster and give the admin user cluster-admin:
oc cluster up --image="docker.io/openshift/origin"
oc login -u system:admin
oc adm policy add-cluster-role-to-user cluster-admin admin
oc login -u admin -p password
Make sure the following files are in place:

* `$HOME/.aws/credentials` - your AWS credentials; the default section will be used, but it can be overridden by vars when running the create cluster playbook.
* `$HOME/.ssh/libra.pem` - the SSH private key to use for AWS.

WARNING: By default when using deploy-devel-playbook.yml to deploy cluster operator, fake images will be used. This means that no actual cluster will be created. If you want to create a real cluster, pass `-e fake_deployment=false` to the playbook invocation.
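For reference, a minimal `$HOME/.aws/credentials` file in the standard AWS credentials format (the key values below are the placeholder examples from the AWS documentation):

```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```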
Deploy cluster operator to the cluster:
ansible-playbook contrib/ansible/deploy-devel-playbook.yml
deploy-devel-playbook.yml automatically kicks off an image compile. To re-compile and push a new image:
oc start-build cluster-operator -n openshift-cluster-operator
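To follow the resulting build's logs (assuming the build config name from the start-build command above):

```
oc logs -f bc/cluster-operator -n openshift-cluster-operator
```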
Alternatively, when using minishift, you can build the images locally and push them to the integrated registry:
eval $(minishift docker-env)
NO_DOCKER=1 make images
make integrated-registry-push
Create a cluster:
ansible-playbook contrib/ansible/create-cluster-playbook.yml
You can pass `-e cluster_name`, `-e cluster_namespace`, or other variables you can override as defined at the top of the playbook. You can then check the provisioning status of your cluster by running `oc describe cluster <cluster_name>`.
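For example (the cluster name and namespace values below are just placeholders):

```
ansible-playbook contrib/ansible/create-cluster-playbook.yml \
  -e cluster_name=mycluster \
  -e cluster_namespace=myproject
oc describe cluster mycluster
```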
If you are actively working on controller code you can save some time by compiling and running locally:
First disable the controller manager (or the specific controllers you plan to run locally) in the cluster, using one of the following:

* oc scale -n openshift-cluster-operator --replicas=0 dc/cluster-operator-controller-manager
* oc edit -n openshift-cluster-operator DeploymentConfig cluster-operator-controller-manager and add an argument for --controllers=-disableme, or --controllers=c1,c2,c3 for just the controllers you want.
* oc delete -n openshift-cluster-operator DeploymentConfig cluster-operator-controller-manager
Then compile and run locally:
make build
go install ./cmd/cluster-operator
bin/cluster-operator controller-manager --log-level debug --k8s-kubeconfig ~/.kube/config
To run only a subset of controllers, add --controllers clusterapi,machineset,etc. Use --help to see the full list.

The Cluster Operator uses its own Ansible image, which layers our playbooks and roles on top of the upstream OpenShift Ansible images. Typically our Ansible changes only require work in this repo. See the build/cluster-operator-ansible directory for the Dockerfile and playbooks we layer in.
To build the cluster-operator-ansible image you can just run `make images` normally.
WARNING: This image is built using OpenShift Ansible v3.10. This can be adjusted by specifying the CO_ANSIBLE_URL and CO_ANSIBLE_BRANCH environment variables to use a different branch/repository for the base openshift-ansible image.
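For example, to base the image on a different openshift-ansible branch (the repository URL and branch name below are only illustrative):

```
CO_ANSIBLE_URL=https://github.com/openshift/openshift-ansible.git \
CO_ANSIBLE_BRANCH=release-3.11 \
make images
```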
You can run cluster-operator-ansible playbooks standalone by creating an inventory like:
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
ansible_become=true
ansible_ssh_user=centos
openshift_deployment_type=origin
openshift_release="3.10"
oreg_url=openshift/origin-${component}:v3.10.0
openshift_aws_ami=ami-833d37f9
[masters]
[etcd]
[nodes]
You can then run ansible with the above inventory file and your cluster ID:
ansible-playbook -i ec2-hosts build/cluster-operator-ansible/playbooks/cluster-operator/node-config-daemonset.yml -e openshift_aws_clusterid=dgoodwin-cluster
We're using the Cluster Operator deployment Ansible as a testing ground for the kubectl-ansible modules that wrap `kubectl apply` and `oc process`. These roles are vendored in, similar to how golang vendoring works, using a tool called gogitit. The required gogitit manifest and cache are committed, but only the person updating the vendored code needs to install the tool or worry about the manifest. For everyone else the roles are just available normally, which means developers do not need to periodically re-run `ansible-galaxy install`.
Updating the vendored code can be done with:
$ cd contrib/ansible/
$ gogitit sync
For OpenShift CI, our roles template, which we do not have permissions to apply ourselves, had to be copied to https://github.com/openshift/release/blob/master/projects/cluster-operator/cluster-operator-roles-template.yaml. Our copy in this repo is authoritative; whenever the auth/roles definitions change we need to remember to copy the file, submit a PR, and request that someone run the make target for us.
You can build the development utilities binary `coutil` by running `make coutil`. Once built, the binary will be placed in `bin/coutil`.

Utilities are subcommands under `coutil` and include:
* `aws-actuator-test` - allows invoking AWS actuator actions (create, update, delete) without requiring a cluster to be present.
* `extract-jenkins-logs` - extracts container logs from a cluster operator e2e run, given a Jenkins job URL.
* `playbook-mock` - used by the fake-ansible image to track invocations of ansible by cluster operator controllers.
* `wait-for-apiservice` - given the name of an API service, waits for the API service to be functional.
* `wait-for-cluster-ready` - waits for a cluster operator ClusterDeployment to be provisioned and functional, reporting on its progress along the way.