kubernetes / cloud-provider-openstack

Apache License 2.0

[occm] Support Octavia/Amphora Prometheus endpoint creation using annotations #2465

Open antonin-a opened 1 year ago

antonin-a commented 1 year ago

Component: openstack-cloud-controller-manager (occm)

FEATURE REQUEST?:

/kind feature

As a Kubernetes + occm user I would like to be able to create a Prometheus endpoint (a listener with the special protocol "PROMETHEUS") so that I can easily monitor my Octavia Load Balancers using Prometheus.

What happened: Currently the only way to do this is through the OpenStack CLI / API, e.g.:

openstack loadbalancer listener create --name stats-listener --protocol PROMETHEUS --protocol-port 9100 --allowed-cidr 10.0.0.0/8 $os_octavia_id
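
For illustration only, a rough Go sketch of the equivalent API call, assuming gophercloud v1's loadbalancer v2 listeners package; the package name, helper function, client, load balancer ID and CIDR are placeholders, not existing occm code:

package octavia

import (
    "fmt"

    "github.com/gophercloud/gophercloud"
    "github.com/gophercloud/gophercloud/openstack/loadbalancer/v2/listeners"
)

// createPrometheusListener mirrors the CLI command above: it adds a listener
// with the special PROMETHEUS protocol to an existing Octavia load balancer.
func createPrometheusListener(client *gophercloud.ServiceClient, lbID string) (*listeners.Listener, error) {
    opts := listeners.CreateOpts{
        Name:           "stats-listener",
        LoadbalancerID: lbID,
        Protocol:       listeners.Protocol("PROMETHEUS"),
        ProtocolPort:   9100,
        AllowedCIDRs:   []string{"10.0.0.0/8"},
    }
    listener, err := listeners.Create(client, opts).Extract()
    if err != nil {
        return nil, fmt.Errorf("creating PROMETHEUS listener: %w", err)
    }
    return listener, nil
}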

What you expected to happen: To be able to create the Prometheus endpoint using annotations at load balancer creation (Kubernetes Service of type LoadBalancer).

Annotations we suggest adding:

apiVersion: v1
kind: Service
metadata:
  name: octavia-metrics
  annotations:
    loadbalancer.openstack.org/metrics-enable: "true"
    loadbalancer.openstack.org/metrics-port: "9100"
    loadbalancer.openstack.org/metrics-allow-cidrs: "10.0.0.0/8, fe80::/10"
    # Auto-computed field based on the Octavia VIP, as it is required for the
    # Prometheus configuration or any other solution (currently it is not
    # possible to retrieve the private IP of public LBs).
    loadbalancer.openstack.org/vip-address: "10.4.2.3"
  labels:
    app: test-octavia
spec:
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
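
To make the proposal concrete, here is a minimal Go sketch of how occm could read these annotations off the Service; the package, constant, struct and function names are purely illustrative, not existing occm code:

package octavia

import (
    "strconv"
    "strings"

    corev1 "k8s.io/api/core/v1"
)

// Hypothetical annotation keys, taken from the proposal above.
const (
    annotationMetricsEnable     = "loadbalancer.openstack.org/metrics-enable"
    annotationMetricsPort       = "loadbalancer.openstack.org/metrics-port"
    annotationMetricsAllowCIDRs = "loadbalancer.openstack.org/metrics-allow-cidrs"
)

type metricsListenerConfig struct {
    Enabled      bool
    Port         int
    AllowedCIDRs []string
}

// metricsConfigFromService turns the proposed annotations into a config for
// the PROMETHEUS listener. Port 9100 is an assumed default.
func metricsConfigFromService(svc *corev1.Service) (metricsListenerConfig, error) {
    cfg := metricsListenerConfig{Port: 9100}
    ann := svc.Annotations

    // A missing or malformed metrics-enable value simply leaves the feature off.
    cfg.Enabled, _ = strconv.ParseBool(ann[annotationMetricsEnable])

    if raw, ok := ann[annotationMetricsPort]; ok {
        port, err := strconv.Atoi(raw)
        if err != nil {
            return cfg, err
        }
        cfg.Port = port
    }
    if raw, ok := ann[annotationMetricsAllowCIDRs]; ok {
        // Comma-separated list, whitespace tolerated, as in the example above.
        for _, c := range strings.Split(raw, ",") {
            if c = strings.TrimSpace(c); c != "" {
                cfg.AllowedCIDRs = append(cfg.AllowedCIDRs, c)
            }
        }
    }
    return cfg, nil
}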

Anything else we need to know?: Related Octavia documentation: https://docs.openstack.org/octavia/latest/user/guides/monitoring.html#monitoring-with-prometheus

As an OpenStack public cloud provider we are currently working on a custom CCM implementation, so we could potentially submit the PR associated with this request, but we'd like to at least validate the proposed implementation before starting development.

dulek commented 1 year ago

I see this as a valid feature request. I think I'd rather skip the metrics-enable annotation and assume that if metrics-port is set, we should enable the metrics listener. What I don't like here is exposing the VIP address to the end user. I guess using the FIP to reach the metrics doesn't work due to security concerns?
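
Purely as an illustration of that suggestion, the parsing sketch above would reduce to a single check on the port annotation (the function name is hypothetical):

package octavia

import (
    "strconv"

    corev1 "k8s.io/api/core/v1"
)

// metricsPortFromService returns the requested metrics port and whether a
// PROMETHEUS listener should be created at all: the mere presence of the
// metrics-port annotation enables the feature, so no separate enable flag.
func metricsPortFromService(svc *corev1.Service) (int, bool, error) {
    raw, ok := svc.Annotations["loadbalancer.openstack.org/metrics-port"]
    if !ok {
        return 0, false, nil // annotation absent: no metrics listener
    }
    port, err := strconv.Atoi(raw)
    return port, err == nil, err
}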

Lucasgranet commented 1 year ago

Hello @dulek,

Most of the time, your Prometheus scraper will be deployed in your K8s cluster. If you're scraping from a node (of the cluster), your request will go through the router to reach the FIP. In that case, you will need to add the router's egress IP (whether it is an OpenStack router or not) to the Prometheus listener's allowed-CIDR list to allow the client.

IMO, exposing the VIP is the better fit for an integration inside a K8s cluster.

Lucas,

dulek commented 1 year ago

@Lucasgranet: Fair enough, I guess this is the only way forward then.

@jichenjc, do you think exposing LB VIP IP on the Service might potentially be dangerous?

antonin-a commented 11 months ago

Hello @dulek, any update on this one?

dulek commented 11 months ago

> Hello @dulek, any update on this one?

I've asked @jichenjc for an opinion in my previous comment. @zetaab might have something to say too.

All that being said, I don't have free cycles to work on this, as it's not a use case for us. We'd definitely welcome a contribution from your side.

jichenjc commented 11 months ago

> do you think exposing LB VIP IP on the Service might potentially be dangerous?

> loadbalancer.openstack.org/vip-address: "10.4.2.3" # Auto-computed field based on Octavia VIP as it is required for Prometheus configuration or any other solution (currently it is not possible to retrieve private IP of public LBs)

Sorry, I only saw this just now. I am not a security expert, but it seems harmless, since we have to provide the LB info for some connections anyway. However, a normal app user (the one who creates the Service) would need to understand the details of the LB underneath, which I don't think I have seen in other service creation templates before.

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

kbudde commented 8 months ago

/remove-lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

antonin-a commented 5 months ago

/remove-lifecycle stale

We will work on it

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten