The Oracle Database Operator for Kubernetes (a.k.a. OraOperator) helps developers, DBAs, DevOps and GitOps teams reduce the time and complexity of deploying and managing Oracle Databases. It eliminates the dependency on a human operator or administrator for the majority of database operations.
I like the approach of making the new database observability feature more comfortable by offering an OraOperator CR (DatabaseObserver) that creates the deployment, service, and ServiceMonitor for me.
But I ran into two issues which I'd like to post; perhaps you can enhance the DatabaseObserver a little for us, or document my workarounds:
1) My default Prometheus install (the one that ships with OCNE 1.8/1.9) requires ServiceMonitor resources to carry a certain label before they are included in the scrape configuration. The label must be "release: prometheus".
Unfortunately, this label is already in use by OraOperator: it always sets "release: stable" on the service and ServiceMonitor, no matter what has been specified in the DatabaseObserver "prometheus" section.
Request: please drop the logic that hard-codes the "release: stable" label on OraOperator-managed resources, or at least honor a custom "release" label such as "release: prometheus".
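For illustration, this is roughly what I would expect to be honored (a sketch; the `prometheus.labels` field is taken from the DatabaseObserver sample manifests, and the apiVersion may differ in your operator version):

```yaml
apiVersion: observability.oracle.com/v1alpha1   # assumed; check your operator version
kind: DatabaseObserver
metadata:
  name: obs-sample
spec:
  prometheus:
    port: metrics
    labels:
      release: prometheus   # desired: should land on the generated service and
                            # ServiceMonitor instead of the hard-coded "release: stable"
```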
Workaround 1: patch the ServiceMonitor resource after creation to overwrite the "release: stable" label. This is risky because the patch would be lost on the next resource update or reconciliation (which OraOperator unfortunately does not perform).
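For example (resource name and namespace are from my environment, substitute your own):

```sh
kubectl patch servicemonitor obs-sample -n observability \
  --type merge -p '{"metadata":{"labels":{"release":"prometheus"}}}'
```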
Workaround 2: ignore the OraOperator-created ServiceMonitor resource and create your own custom one, pointing to the same service as the one generated by OraOperator (see the sample at the end of this post).
2) Since I monitor many different databases running on-premises, in the cloud, and on Kubernetes, I would like to adapt the "instance" label in Prometheus. By default it only shows the Kubernetes-internal IP and port of the container being scraped; I would like a combination of CDB + PDB as the instance name. ServiceMonitor resources support relabeling, i.e. I can take some Kubernetes metadata labels and use them to overwrite the "instance" label/attribute in Prometheus.
Unfortunately, DatabaseObserver does not allow a relabeling configuration for Prometheus.
Request: please add a relabeling configuration to the DatabaseObserver object and simply pass it through to the generated ServiceMonitor object.
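Something along these lines (a hypothetical sketch, `relabelings` is not an existing DatabaseObserver field; the syntax simply mirrors the ServiceMonitor endpoint spec, so it could be copied through 1:1):

```yaml
spec:
  prometheus:
    port: metrics
    relabelings:                # hypothetical passthrough field
    - sourceLabels: [__meta_kubernetes_service_label_app]
      targetLabel: instance
      action: replace
```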
Workaround: create a custom ServiceMonitor resource that points to the same service as the DatabaseObserver-generated one. It is unmanaged by OraOperator, though: if OraOperator ever reconciled its managed objects (which it currently does not), this custom object would not be cleaned up.
Here is a sample ServiceMonitor that uses the correct "release: prometheus" label and relabels the "instance" attribute in Prometheus based on another Kubernetes label I provide (called "app"). For documentation, or (even better) as a basis for a fix:
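(The concrete names, namespace, service selector, and port name are from my environment; adjust them to match the service generated by OraOperator.)

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: obs-sample-custom
  namespace: observability
  labels:
    release: prometheus             # required so my Prometheus includes it in scraping
spec:
  selector:
    matchLabels:
      app: obs-sample               # label carried by the OraOperator-generated service
  endpoints:
  - port: metrics                   # assumed port name on the generated service
    interval: 30s
    relabelings:
    # Overwrite the default "instance" (pod IP:port) with my "app" label,
    # which carries the CDB + PDB combination, e.g. "mycdb-mypdb".
    - sourceLabels: [__meta_kubernetes_service_label_app]
      targetLabel: instance
      action: replace
```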