An operator that watches ingresses/routes and creates liveness alerts for your apps/microservices in uptime checkers.

We want to monitor ingresses in a Kubernetes cluster, and routes in an OpenShift cluster, via an uptime checker. The problem is having to manually track newly added or removed ingresses/routes and register or deregister them in the checker.
This operator continuously watches ingresses/routes based on the defined `EndpointMonitor` custom resources and automatically adds/removes monitors in any of the supported uptime checkers. With this solution, you can verify that your services are up and running without having to manually register them in the uptime checker.
The operator currently supports several uptime checkers; refer to the documentation for the full list.
Configure the uptime checker in a `config.yaml` file based on your uptime provider, then create a secret named `imc-config` that holds the file under the `config.yaml` key:
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: imc-config
data:
  config.yaml: >-
    <BASE64_ENCODED_CONFIG.YAML>
type: Opaque
```
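Rather than hand-building the manifest above, you can also let `kubectl` create the secret directly with `kubectl create secret generic imc-config --from-file=config.yaml`, which performs the base64 encoding for you. If you do fill in the manifest yourself, the encoded value can be produced like this (a minimal sketch; the placeholder `config.yaml` content is illustrative only):

```shell
# Write a placeholder config.yaml (illustrative only; use your real provider config)
printf 'providers: []\n' > config.yaml

# Base64-encode it for the Secret's data field (GNU base64; -w0 disables line wrapping)
base64 -w0 config.yaml
```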
Following are the available options that you can use to customize the controller:

| Key | Description |
|---|---|
| providers | An array of uptime providers that you want to add to your controller |
| enableMonitorDeletion | A safeguard flag used to enable or disable monitor deletion when an ingress is deleted (useful for production environments where you don't want monitors removed on ingress deletion) |
| resyncPeriod | Resync period in seconds; periodically re-syncs the monitors with the routes/ingresses. Defaults to 0 (disabled) |
| creationDelay | A duration string that adds a delay before creating a new monitor (e.g., to allow DNS to catch up first) |
| monitorNameTemplate | Template for the monitor name, e.g., `{{.Namespace}}-{{.Name}}` |
Replace `BASE64_ENCODED_CONFIG.YAML` with your `config.yaml` file encoded in base64. For sample `config.yaml` files, refer to Sample Configs. The name of the secret can be customized via the `CONFIG_SECRET_NAME` environment variable.
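For illustration, a minimal `config.yaml` for the UptimeRobot provider might look like the following (a sketch based on the options documented above; the `apiKey`, `apiURL`, and `alertContacts` values are placeholders, so verify the exact fields for your provider against the Sample Configs):

```yaml
providers:
  - name: UptimeRobot
    apiKey: <your-api-key>
    apiURL: https://api.uptimerobot.com/v2/
    alertContacts: <alert-contact-id>
enableMonitorDeletion: true
resyncPeriod: 0
monitorNameTemplate: "{{.Namespace}}-{{.Name}}"
```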
The `EndpointMonitor` custom resource can be used to manage monitors for static URLs or route/ingress references.
Monitor a static URL:

```yaml
apiVersion: endpointmonitor.stakater.com/v1alpha1
kind: EndpointMonitor
metadata:
  name: stakater
spec:
  forceHttps: true
  url: https://stakater.com
```
Monitor via an OpenShift route reference:

```yaml
apiVersion: endpointmonitor.stakater.com/v1alpha1
kind: EndpointMonitor
metadata:
  name: frontend
spec:
  forceHttps: true
  urlFrom:
    routeRef:
      name: frontend
```
Monitor via a Kubernetes ingress reference:

```yaml
apiVersion: endpointmonitor.stakater.com/v1alpha1
kind: EndpointMonitor
metadata:
  name: frontend
spec:
  forceHttps: true
  urlFrom:
    ingressRef:
      name: frontend
```
NOTE: For provider-specific additional configuration, refer to the Docs and go through the configuration guidelines for your uptime provider.
The following quickstart lets you set up Ingress Monitor Controller to register uptime monitors for endpoints.

If you have configured Helm on your cluster, you can deploy IngressMonitorController via Helm using the commands below. For details on the chart, see the IMC Helm Chart.
```shell
# Install CRDs
kubectl apply -f https://raw.githubusercontent.com/stakater/IngressMonitorController/master/charts/ingressmonitorcontroller/crds/endpointmonitor.stakater.com_endpointmonitors.yaml

# Install chart
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install ingressmonitorcontroller stakater/ingressmonitorcontroller
```
Alternatively, clone the repository and deploy with `make`:

```shell
git clone git@github.com:stakater/IngressMonitorController.git
make deploy
```
The operator supports the following environment variables:

| Key | Default | Description |
|---|---|---|
| WATCH_NAMESPACE | Namespace in which the operator is deployed | Use a comma-separated list of namespaces, or leave the field empty to watch all namespaces (cluster scope) |
| CONFIG_SECRET_NAME | imc-config | Name of the secret that holds the configuration |
| REQUEUE_TIME | 300 seconds | Integer value specifying the number of seconds after which the resource should be reconciled again |
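These are set as environment variables on the operator's container; a hypothetical Deployment excerpt (names from the table above, values illustrative):

```yaml
# Excerpt from the operator's Deployment spec (illustrative values)
env:
  - name: WATCH_NAMESPACE
    value: "default,staging"   # comma-separated; an empty string watches all namespaces
  - name: CONFIG_SECRET_NAME
    value: "imc-config"
  - name: REQUEUE_TIME
    value: "300"               # seconds between reconciliations
```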
You can find more detailed documentation on configuration, extension, and support for other uptime checkers here.

If you'd like to contribute any fixes or enhancements, please refer to the documentation here.
File a GitHub issue, or join and talk to us on the #tools-ingressmonitor channel to discuss the Ingress Monitor Controller.
Use `registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0` instead of kube-rbac-proxy. This issue can be tracked here.

Apache2 © Stakater
The IngressMonitorController is maintained by Stakater. Like it? Please let us know at hello@stakater.com

See our other projects, or contact us at hello@stakater.com for professional services and queries.
Stakater Team and the Open Source community! :trophy: