As part of Oracle's resolution to make Oracle Database Kubernetes native (that is, observable and operable by Kubernetes), Oracle released Oracle Database Operator for Kubernetes (OraOperator, or the operator). OraOperator extends the Kubernetes API with custom resources and controllers for automating Oracle Database lifecycle management.
In this v1.1.0 production release, OraOperator supports the following database configurations and infrastructure:
- Oracle Autonomous Database (ADB) and Autonomous Container Database (ACD), including Autonomous Database backup and restore
- Containerized Oracle Single Instance Database (SIDB), including Oracle Data Guard configurations
- Containerized Oracle Sharded Database
- Oracle Multitenant Database (CDB/PDB)
- Oracle Base Database Service (DBCS)
- Oracle REST Data Services (ORDS)
- Oracle Database Observability

Oracle will continue to extend OraOperator to support additional Oracle Database configurations.
This release also adds support for the kubectl wait command, which allows the user to wait for a specific condition on an ADB resource; see the sketch below.

This release of Oracle Database Operator for Kubernetes (the operator) supports the following lifecycle operations:
The upcoming releases will support new configurations, operations, and capabilities.
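For example, a minimal sketch of waiting on an ADB resource, assuming kubectl 1.23 or later (for the jsonpath form) and that the AutonomousDatabase status exposes a lifecycleState field; the resource name and namespace are placeholders:

kubectl wait --for=jsonpath='{.status.lifecycleState}'=AVAILABLE autonomousdatabase/<adb-name> -n <namespace> --timeout=10m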
This production release has been installed and tested on the following Kubernetes platforms:
Oracle strongly recommends that you ensure your system meets the following Prerequisites.
The operator uses webhooks for validating user input before persisting it in etcd. Webhooks require TLS certificates that are generated and managed by a certificate manager.
Install the certificate manager with the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
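Before proceeding, you can confirm that the certificate manager is ready; by default it installs into the cert-manager namespace:

kubectl get pods -n cert-manager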
OraOperator supports the following two modes of deployment:
Cluster-scoped deployment: This is the default mode, in which OraOperator is deployed to operate in a cluster, and to monitor all the namespaces in the cluster.
Grant the serviceaccount:oracle-database-operator-system:default service account cluster-wide access for the resources by applying cluster-role-binding.yaml:

kubectl apply -f rbac/cluster-role-binding.yaml
Next, apply oracle-database-operator.yaml to deploy the operator:

kubectl apply -f oracle-database-operator.yaml
Namespace-scoped deployment: In this mode, OraOperator can be deployed to operate in a namespace, and to monitor one or more namespaces.
Grant the serviceaccount:oracle-database-operator-system:default service account resource access in the required namespaces. For example, to monitor only the default namespace, apply default-ns-role-binding.yaml:

kubectl apply -f rbac/default-ns-role-binding.yaml
To watch additional namespaces, create different role binding files for each namespace, using default-ns-role-binding.yaml as a template, and changing the metadata.name and metadata.namespace fields, as in the fragment below.
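For example, a fragment of a hypothetical role binding for watching a namespace named oracle; everything else is copied unchanged from default-ns-role-binding.yaml:

metadata:
  name: oracle-database-operator-oracle-rolebinding  # assumed name; any unique name works
  namespace: oracle                                  # the namespace to watch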
Next, edit oracle-database-operator.yaml to set the WATCH_NAMESPACE environment variable to the namespaces being monitored. Use comma-delimited values for multiple namespaces:

- name: WATCH_NAMESPACE
  value: "default"
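For example, to also watch the hypothetical namespaces oracle and project1:

- name: WATCH_NAMESPACE
  value: "default,oracle,project1"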
Finally, apply the edited oracle-database-operator.yaml to deploy the operator:

kubectl apply -f oracle-database-operator.yaml
To expose services on each node's IP and port (the NodePort), apply node-rbac.yaml. Note that this step is not required for LoadBalancer services.
kubectl apply -f rbac/node-rbac.yaml
After you have completed the preceding prerequisite changes, you can install the operator. To install the operator in the cluster quickly, apply the modified oracle-database-operator.yaml file from the preceding step. Run the following command:
kubectl apply -f oracle-database-operator.yaml
Ensure that the operator pods are up and running. For high availability, operator pod replicas are set to a default of 3. You can scale this setting up or down; see the sketch after the output below.
$ kubectl get pods -n oracle-database-operator-system

NAME                                                            READY   STATUS    RESTARTS   AGE
pod/oracle-database-operator-controller-manager-78666fdddb-s4xcm   1/1     Running   0          11d
pod/oracle-database-operator-controller-manager-78666fdddb-5k6n4   1/1     Running   0          11d
pod/oracle-database-operator-controller-manager-78666fdddb-t6bzb   1/1     Running   0          11d
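A minimal sketch of scaling the replica count, assuming the deployment is named oracle-database-operator-controller-manager (inferred from the pod names above):

kubectl scale deployment oracle-database-operator-controller-manager -n oracle-database-operator-system --replicas=1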
Check the resources:
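For example, assuming the default oracle-database-operator-system namespace:

kubectl get all -n oracle-database-operator-system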
You should see that the operator is up and running, along with the shipped controllers.
For more details, see Oracle Database Operator Installation Instructions.
The following quickstarts are designed for specific database configurations:
The following quickstart is designed for non-database configurations:
YAML file templates are available under /config/samples. You can copy and edit these template files to configure them for your use cases.
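For example, to work from a copy of a template (the sample path here is an assumption; use any file under /config/samples):

cp config/samples/sidb/singleinstancedatabase.yaml my-database.yaml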
To uninstall the operator, first decide whether you want to delete the custom resource definitions (CRDs) and Kubernetes APIServices introduced into the cluster by the operator. Choose one of the following options:
To delete all the CRD instances deployed to the cluster by the operator, run the following commands, where <namespace> is the namespace in which the instances were created:
kubectl delete oraclerestdataservice.database.oracle.com --all -n <namespace>
kubectl delete singleinstancedatabase.database.oracle.com --all -n <namespace>
kubectl delete shardingdatabase.database.oracle.com --all -n <namespace>
kubectl delete dbcssystem.database.oracle.com --all -n <namespace>
kubectl delete autonomousdatabase.database.oracle.com --all -n <namespace>
kubectl delete autonomousdatabasebackup.database.oracle.com --all -n <namespace>
kubectl delete autonomousdatabaserestore.database.oracle.com --all -n <namespace>
kubectl delete autonomouscontainerdatabase.database.oracle.com --all -n <namespace>
kubectl delete cdb.database.oracle.com --all -n <namespace>
kubectl delete pdb.database.oracle.com --all -n <namespace>
kubectl delete dataguardbrokers.database.oracle.com --all -n <namespace>
kubectl delete databaseobserver.observability.oracle.com --all -n <namespace>
After deleting the CRD instances, remove the RBAC manifests that you applied during installation:

cat rbac/* | kubectl delete -f -
After all CRD instances are deleted, it is safe to remove the CRDs, APIServices, and the operator deployment. To remove these objects, use the following command:
kubectl delete -f oracle-database-operator.yaml --ignore-not-found=true
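To confirm that the operator's CRDs were removed, you can list any that remain in its API groups:

kubectl get crd | grep -E 'database.oracle.com|observability.oracle.com'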
Note: If the CRD instances are not deleted before the operator is deleted with the preceding command, then the operator deployment and instance objects (pods, services, PVCs, and so on) are deleted, but the CRD deletion stops responding. This is because the CRD instances have finalizers that prevent their deletion, and those finalizers can be removed only by the operator pod, which is deleted when the APIServices are deleted.
This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.
You can submit a GitHub issue, or submit an issue and then file an Oracle Support service request. To file an issue or a service request, use the following product ID: 14430.
Please consult the security guide for our responsible security vulnerability disclosure process.
Kubernetes secrets are the usual means of storing credentials and passwords used for access. The operator reads the secrets programmatically, which limits exposure of sensitive data. However, to protect your sensitive data, Oracle strongly recommends that you set and get sensitive data from Oracle Cloud Infrastructure Vault, or from third-party vaults.
The following is an example of a YAML file fragment for specifying Oracle Cloud Infrastructure Vault as the repository for the admin password:

adminPassword:
  ociSecretOCID: ocid1.vaultsecret.oc1...
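If you instead keep a password in a Kubernetes secret, a minimal sketch of creating one (the secret name, key, and literal value are placeholders):

kubectl create secret generic admin-password --from-literal=password='<password>'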
Examples in this repository where passwords are entered on the command line are for demonstration purposes only.
Copyright (c) 2022, 2024 Oracle and/or its affiliates. Released under the Universal Permissive License v1.0 as shown at https://oss.oracle.com/licenses/upl/