# Open Cluster Management - Governance Policy Addon Controller
The governance policy addon controller manages installations of policy addons on managed clusters
by using ManifestWorks. The addons can be enabled, disabled, and configured via their
ManagedClusterAddOn resources. For more information on the addon framework, see the
addon-framework enhancement/design.
The addons managed by this controller are:

- `config-policy-controller`
- `governance-policy-framework`
Go to the Contributing guide to learn how to get involved.
Check the Security guide if you need to report a security issue.
These instructions assume:
From the base of this repository, a default installation can be applied to the hub cluster with
`kubectl apply -k config/default`. You might want to customize the namespace the controller is
deployed to, or the specific image used by the controller. This can be done either by editing
`config/default/kustomization.yaml` directly, or by using kustomize commands like
`kustomize edit set namespace [mynamespace]` or
`kustomize edit set image policy-addon-image=[myimage]`.
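For example, the customization steps above can be combined into one sketch (the namespace and
image names here are placeholders):

```shell
# From the base of the repository
cd config/default

# Point the deployment at a custom namespace and image (placeholder values)
kustomize edit set namespace my-policy-addon-ns
kustomize edit set image policy-addon-image=quay.io/my-repo/my-image:latest

# Render and apply the customized manifests to the hub cluster
kustomize build . | kubectl apply -f -
```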
This example CR would deploy the Configuration Policy Controller to a managed cluster called
`my-managed-cluster`:
```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: config-policy-controller
  namespace: my-managed-cluster
spec:
  installNamespace: open-cluster-management-agent-addon
```
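Assuming that CR is saved to a file (the filename here is arbitrary), it can be applied to the hub
cluster with:

```shell
kubectl apply -f config-policy-addon.yaml
```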
To modify the image used by the Configuration Policy Controller on this managed cluster, you can
add an annotation either by modifying and applying the YAML directly, or via a kubectl command
like:

```shell
kubectl -n my-managed-cluster annotate managedclusteraddon config-policy-controller addon.open-cluster-management.io/values='{"global":{"imageOverrides":{"config_policy_controller":"quay.io/my-repo/my-configpolicy:imagetag"}}}'
```
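For the YAML route, the same override can be set directly in the ManagedClusterAddOn metadata (the
image reference is a placeholder):

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: config-policy-controller
  namespace: my-managed-cluster
  annotations:
    addon.open-cluster-management.io/values: '{"global":{"imageOverrides":{"config_policy_controller":"quay.io/my-repo/my-configpolicy:imagetag"}}}'
spec:
  installNamespace: open-cluster-management-agent-addon
```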
Any values in the Helm chart's values.yaml can be modified with the
`addon.open-cluster-management.io/values` annotation. However, the structure of that annotation
makes it difficult to apply multiple changes: separate `kubectl annotate` commands will override
each other, as opposed to being merged.
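One workaround is to combine all desired overrides into a single JSON value and apply them in one
command. The keys below are illustrative only and must correspond to actual entries in the chart's
values.yaml:

```shell
# Set all value overrides in a single annotation; --overwrite replaces any
# previous value of the annotation rather than merging with it.
kubectl -n my-managed-cluster annotate managedclusteraddon config-policy-controller --overwrite \
  addon.open-cluster-management.io/values='{"global":{"imageOverrides":{"config_policy_controller":"quay.io/my-repo/my-configpolicy:imagetag"}},"replicas":1}'
```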
To address this issue, there are some separate annotations that can be applied independently:

- `addon.open-cluster-management.io/on-multicluster-hub` - set to "true" on the
  governance-policy-framework addon when deploying it on a self-managed hub. It has no effect on
  other addons. Alternatively, this annotation can be set on the hub's ManagedCluster object.
- `log-level` - set to an integer to adjust the logging levels on the addon. A higher number will
  generate more logs. Note that logs from libraries used by the addon will be 2 levels below this
  setting; to get a `v=5` log message from a library, annotate the addon with `log-level=7`.
- `policy.open-cluster-management.io/sync-policies-on-multicluster-hub` - set this to "true" only
  when the hub is imported by another hub. This is a very advanced use case and should almost
never be used. Alternatively, this annotation can be set on the hub's ManagedCluster object.

To set up a local KinD cluster for development, you'll need to install `kind`. Then you can use
the `kind-deploy-controller` make target to set everything up, including starting a kind cluster,
installing the registration-operator, and importing a cluster.
Alternatively, you can run:

- `./build/manage-clusters.sh` to deploy a hub and a configurable number of managed clusters
  (defaults to one) using Kind
- `make kind-bootstrap-cluster`, a wrapper for the `./build/manage-clusters.sh` script
- `make kind-bootstrap-cluster-dev`, a wrapper for the `./build/manage-clusters.sh` script that
  stops short of deploying the controller so that the controller can be run locally.

Note: You may need to run `make clean` if there are existing stale kubeconfig files at the root
of the repo.
Before the addons can be successfully distributed to the managed cluster, the work-agent must be
started. This usually happens automatically within 5 minutes of importing the managed cluster, and
can be waited for programmatically with the `wait-for-work-agent` make target.
To deploy basic ManagedClusterAddOns to all managed clusters, you can run
`make kind-deploy-addons`.
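Putting those targets together, a minimal bootstrap-to-addons sequence might look like this
(assuming the repo's make targets and a local container runtime are available):

```shell
# Create a hub and managed cluster with kind, and deploy the controller
make kind-bootstrap-cluster

# Block until the work-agent is running, so ManifestWorks can be delivered
make wait-for-work-agent

# Deploy basic ManagedClusterAddOns to all managed clusters
make kind-deploy-addons
```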
To delete the created clusters, you can use the `make kind-bootstrap-delete-clusters` target, a
wrapper for the `./build/manage-clusters.sh` script.
Two make targets are used to update the controller running in the kind clusters with any local
changes. The `kind-load-image` target will re-build the image and load it into the kind cluster.
The `kind-regenerate-controller` target will update the deployment manifests with any local
changes (including RBAC changes), and restart the controller on the cluster to update it.
In general, the addon-controller will revert changes made to its managed ManifestWorks to match
what is rendered by the Helm charts. To more quickly test changes to deployed resources without
rebuilding the controller image, the `policy-addon-pause=true` annotation can be added to the
ManagedClusterAddOn resource. This allows changes to the ManifestWork on the hub cluster to
persist, but direct changes to resources on a managed cluster will still be reverted to match the
ManifestWork.
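For example, to pause reconciliation of the Configuration Policy Controller's ManifestWork on a
hypothetical cluster named `my-managed-cluster`:

```shell
# Pause the addon-controller's reconciliation of this addon's ManifestWork
kubectl -n my-managed-cluster annotate managedclusteraddon config-policy-controller policy-addon-pause=true

# Remove the annotation (trailing "-") to resume normal reconciliation
kubectl -n my-managed-cluster annotate managedclusteraddon config-policy-controller policy-addon-pause-
```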
If there is trouble overriding an image or other configurations in ACM, the
`klusterletaddonconfig-pause` annotation is required on the KlusterletAddonConfig so that the
`addon.open-cluster-management.io/values` annotation overrides can be added to the
ManagedClusterAddOn:

```shell
oc annotate klusterletaddonconfig -n ${CLUSTER_NAME} ${CLUSTER_NAME} klusterletaddonconfig-pause=true --overwrite=true
```
The e2e tests are intended to be run against a kind cluster. After setting one up with the steps
above (and waiting for the work-agent), the tests can be run with the `e2e-test` make target.