lcc3108 / istio-dex-kubernetes-dashboard-example


Unable to Authenticate cluster UIs (kubernetes-dashboard) using dex+oauth2-proxy in istio. #1

Closed. amalendur closed this issue 2 years ago.

amalendur commented 2 years ago

Hi, thank you for such wonderful documentation. In our environment we are trying to achieve the same thing, but without any luck.

We have deployed Istio on top of an AWS EKS cluster, and we deploy all the cluster components through Helm (using template files).

We are using the following versions:

EKS Version : 1.20
Helm Version: 3.5.3
Kubectl Version: 1.21.0(client)/1.20.7(server)
Istio Version: 1.11.1

Note: the namespaces (auth, kubernetes-dashboard) are labeled with istio-injection=enabled.
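
For reference, labels of this kind are applied with kubectl (namespace names as per the note above):

kubectl label namespace auth istio-injection=enabled
kubectl label namespace kubernetes-dashboard istio-injection=enabled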

dex config template#

grpc: false
certs:
  grpc:
    create: false
  web:
    create: false
ports:
  web:
    servicePort: 5556
  telemetry:
    servicePort: 5558
config:
  issuer: https://dex.example.com
  connectors:
  - type: mockCallback
    id: mock
    name: <cluster_name>
  enablePasswordDB: true
  staticClients:
  - id: 'oidc-auth-client'
    redirectURIs:
    - "https://dashboard.example.com/oauth2/callback"
    name: "oidc-auth-client"
    secret: KYCI4XWYZGhu8cAC0hq0xtf1XWcQxPBU1HyzOZdGxi8=
  staticPasswords:
  - email: "admin@example.com"
    hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
    username: "admin"
    userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
  telemetry:
    http: 0.0.0.0:5558
  frontend:
    theme: coreos
ingress:
  enabled: false
service:
  type: ClusterIP
livenessProbe:
  enabled: true
  initialDelaySeconds: 1
  failureThreshold: 1
  httpPath: "/healthz"
  periodSeconds: 10
  timeoutSeconds: 5

oauth2-proxy config template#

service:
  type: ClusterIP
  port: 4180
config:
  clientID: 'oidc-auth-client'
  clientSecret: KYCI4XWYZGhu8cAC0hq0xtf1XWcQxPBU1HyzOZdGxi8=
  cookieSecret: KYCI4XWYZGhu8cAC0hq0xtf1XWcQxPBU1HyzOZdGxi8=
extraArgs:
  provider: oidc
  provider-display-name: dex
  proxy-websockets: true
  oidc-issuer-url: https://dex.example.com
  cookie-secure: true
  cookie-name: auth
  cookie-refresh: 1h
  cookie-expire: 4h
  cookie-httponly: true
  email-domain: "*"
  pass-host-header: true
  ping-path: /ping
  set-authorization-header: true
  skip-provider-button: true
  http-address: 0.0.0.0:4180
  upstream: static://200
  scope: openid profile email offline_access groups
  client-secret: KYCI4XWYZGhu8cAC0hq0xtf1XWcQxPBU1HyzOZdGxi8=
  client-id: 'oidc-auth-client'
  whitelist-domain: .example.com
  cookie-domain: .example.com
ingress:
  enabled: false
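
A note on the secrets above: oauth2-proxy expects the cookie secret to decode to 16, 24, or 32 bytes; a value of that shape can be generated with, for example:

# 32 random bytes, base64-encoded
openssl rand -base64 32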

Gateway with wildcard domain:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - hosts:
    - "*.example.com"
    port:
      name: https
      number: 443
      protocol: HTTP

Virtual services:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dex
  namespace: auth
spec:
  gateways:
  - istio-system/gateway
  hosts:
  - dex.example.com
  http:
  - route:
    - destination:
        host: dex
        port:
          number: 5556
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: gateway-error,connect-failure,refused-stream

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: oauth2-proxy
  namespace: auth
spec:
  gateways:
  - istio-system/gateway
  hosts:
  - auth.example.com
  http:
  - route:
    - destination:
        host: oauth2-proxy
        port:
          number: 4180
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: gateway-error,connect-failure,refused-stream

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  gateways:
    - istio-system/gateway
  hosts:
  - dashboard.example.com
  http:
  - route:
    - destination:
        host: kubernetes-dashboard
        port:
          number: 80
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: gateway-error,connect-failure,refused-stream

The EnvoyFilter I initially tried:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: kubernetes-dashboard
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
              subFilter:
                name: istio.metadata_exchange
            sni: dashboard.example.com
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.ext_authz
          typed_config:
            "@type": "type.googleapis.com/envoy.config.filter.http.ext_authz.v2.ExtAuthz"
            http_service:
              server_uri:
                uri: "http://oauth2-proxy.auth.svc.cluster.local/"
                timeout: 1.5s
                cluster: outbound|4180||oauth2-proxy.auth.svc.cluster.local
              authorizationRequest:
                allowedHeaders:
                  patterns:
                    - exact: "cookie"
                    - exact: "authorization"
              authorizationResponse:
                allowedClientHeaders:
                  patterns:
                    - exact: "set-cookie"
                    - exact: "authorization"
                allowedUpstreamHeaders:
                  patterns:
                    - exact: "set-cookie"
                    - exact: "authorization"

The second EnvoyFilter I tried:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: kubernetes-dashboard
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: istio-ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.http_connection_manager
            subFilter:
              name: istio.metadata_exchange
          sni: dashboard.example.com
    patch:
      operation: INSERT_AFTER
      value:
        name: envoy.filters.http.ext_authz
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
          http_service:
            authorizationRequest:
              allowedHeaders:
                patterns:
                - exact: accept
                - exact: authorization
                - exact: cookie
                - exact: from
                - exact: proxy-authorization
                - exact: user-agent
                - exact: x-forwarded-access-token
                - exact: x-forwarded-email
                - exact: x-forwarded-for
                - exact: x-forwarded-host
                - exact: x-forwarded-proto
                - exact: x-forwarded-user
                - prefix: x-auth-request
                - prefix: x-forwarded
            authorizationResponse:
              allowedClientHeaders:
                patterns:
                - exact: authorization
                - exact: location
                - exact: proxy-authenticate
                - exact: set-cookie
                - exact: www-authenticate
                - prefix: x-auth-request
                - prefix: x-forwarded
              allowedUpstreamHeaders:
                patterns:
                - exact: authorization
                - exact: location
                - exact: proxy-authenticate
                - exact: set-cookie
                - exact: www-authenticate
                - prefix: x-auth-request
                - prefix: x-forwarded
            server_uri:
              cluster: outbound|4180||oauth2-proxy.auth.svc.cluster.local
              timeout: 1.5s
              uri: http://oauth2-proxy.auth.svc.cluster.local

Neither is working. When we try to access the k8s dashboard (https://dashboard.example.com), it redirects to "https://dashboard.example.com/#/overview?namespace=default" and no log is generated in oauth2-proxy. I found the following log in istio-ingressgateway:

{"level":"debug","time":"2021-10-26T07:58:22.579160Z","scope":"envoy router","msg":"[C117][S17481825958129025397] cluster 'outbound|80||kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local' match for URL '/api/v1/namespace'"}
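
The log above shows the request being routed straight to the dashboard cluster, i.e. the ext_authz filter never ran. One way to check whether the EnvoyFilter actually landed on the gateway listener (assuming the default istio=ingressgateway pod label):

istioctl proxy-config listener -n istio-system \
  $(kubectl get pod -n istio-system -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') \
  -o json | grep -n ext_authz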

Highly appreciate your help :)

amalendur commented 2 years ago

In our environment, we deploy Istio on top of an AWS EKS cluster and we use an AWS ACM certificate (for a single wildcard domain [*.example.com]). The istio-ingressgateway is configured with that ACM certificate.

istio-ingressgateway:
    type: LoadBalancer
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <acm_arn>
    externalTrafficPolicy: "Local"

I have tried the following gateway config:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - "*.example.com"
    port:
      name: https
      number: 443
      protocol: HTTP

Unfortunately, no luck.

Would you mind if I ask about your test cluster? Was it on EKS?

amalendur commented 2 years ago

Thank You for the guidance.

I have removed the AWS ACM certificate from our solution and am now using cert-manager.

Istio-ingressgateway#

istio-ingressgateway:
    type: LoadBalancer
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    externalTrafficPolicy: "Local" 

Dex config#

apiVersion: cert-manager.io/v1beta1
kind: Certificate
metadata:
  name: dex
  namespace: istio-system
spec:
  secretName: dex-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod-istio
  commonName: dex.example.com
  dnsNames:
    - dex.example.com
---
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: dex
  namespace: auth
spec:
  servers:
    - hosts:
        - dex.example.com
      port:
        name: http
        number: 80
        protocol: HTTP
      tls:
        httpsRedirect: true
    - hosts:
        - dex.example.com
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: dex-tls
        mode: SIMPLE
  selector:
    app: istio-ingressgateway

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dex
  namespace: auth
spec:
  gateways:
  - dex
  hosts:
  - dex.example.com
  http:
  - route:
    - destination:
        host: dex
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: gateway-error,connect-failure,refused-stream

oauth2-proxy config#

apiVersion: cert-manager.io/v1beta1
kind: Certificate
metadata:
  name: oauth2-proxy
  namespace: istio-system
spec:
  secretName: oauth2-proxy-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod-istio
  commonName: auth.example.com
  dnsNames:
    - auth.example.com

---
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: oauth2-proxy
  namespace: auth
spec:
  servers:
    - hosts:
        - auth.example.com
      port:
        name: http
        number: 80
        protocol: HTTP
      tls:
        httpsRedirect: true
    - hosts:
        - auth.example.com
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: oauth2-proxy-tls
        mode: SIMPLE
  selector:
    app: istio-ingressgateway

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: oauth2-proxy
  namespace: auth
spec:
  gateways:
  - oauth2-proxy
  hosts:
  - auth.example.com
  http:
  - route:
    - destination:
        host: oauth2-proxy
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: gateway-error,connect-failure,refused-stream

k8s-dashboard config#

apiVersion: cert-manager.io/v1beta1
kind: Certificate
metadata:
  name: kubernetes-dashboard
  namespace: istio-system
spec:
  secretName: kubernetes-dashboard-tls
  issuerRef:
    name: letsencrypt-prod-istio
    kind: ClusterIssuer
  commonName: dashboard.example.com
  dnsNames:
    - dashboard.example.com

---
kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  servers:
    - hosts:
        - dashboard.example.com
      port:
        name: http
        number: 80
        protocol: HTTP
      tls:
        httpsRedirect: true
    - hosts:
        - dashboard.example.com
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: kubernetes-dashboard-tls
        mode: SIMPLE
  selector:
    app: istio-ingressgateway

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  gateways:
    - kubernetes-dashboard
  hosts:
  - dashboard.example.com
  http:
  - route:
    - destination:
        host: kubernetes-dashboard
    retries:
      attempts: 3
      perTryTimeout: 5s
      retryOn: gateway-error,connect-failure,refused-stream

I have created CNAME record with the istio-loadbalancer in AWS.

$ nslookup dex.example.com
Server:   xx.xxx.xx.xxx
Address:  xx.xxx.xx.xxx#53

Non-authoritative answer:
dex.example.com canonical name = ...
Name: ...
Address: x.xx.xxx.xxx

Versions#

EKS Version : 1.20
Helm Version: 3.5.3
Kubectl Version: 1.21.0(client)/1.20.7(server)
Istio Version: 1.11.1

It seems the NLB with HTTPS is not working:

$ curl -I "https://dex.example.com" -vvv
*   Trying x.xx.xxx.xxx...
* TCP_NODELAY set
* Connected to dex.example.com (x.xx.xxx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to dex.example.com:443 
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to dex.example.com:443 

Not sure what is wrong. Am I missing anything here? Did you apply any additional configuration? Any ideas, please?

lcc3108 commented 2 years ago

@amalendur

1. Is your NLB backend protocol set to TCP? To check in the AWS console: EC2 -> Load Balancers -> select your NLB -> Listeners -> click the forwarding rule -> check that the protocol is TCP. If it is not TCP, set the Istio gateway service annotation service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp.

2. Please check whether the certificate has been issued:

   kubectl get certificate -A

   If Ready is True, it has been issued.

3. The certificate must exist in the same namespace as the Istio ingress gateway.
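
To see why a Certificate is stuck, describing it and listing the ACME challenges usually shows the failing step:

kubectl describe certificate dex -n istio-system
kubectl get challenges -A
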
lcc3108 commented 2 years ago

@amalendur

I got an email saying you left a comment, but I can't see it.

amalendur commented 2 years ago

Hi, my sincere apologies. It seems the issue is with the certificates.

$ kubectl get clusterissuer -A
NAME                     READY   AGE
letsencrypt-prod-istio   True    37m
$ kubectl get certificates -A
NAMESPACE      NAME                   READY   SECRET                     AGE
istio-system   dex                    False   dex-tls                    35m
istio-system   kubernetes-dashboard   False   kubernetes-dashboard-tls   33m
istio-system   oauth2-proxy           False   oauth2-proxy-tls           34m

$ kubectl get secret -A | grep -i Opaque
istio-system           dex-qp82r                                          Opaque                                1      3m40s
istio-system           kubernetes-dashboard-tdrrc                         Opaque                                1      100s
istio-system           oauth2-proxy-d2fgs                                 Opaque                                1      3m16s

cert-manager pod log#

E1104 07:54:27.791293       1 sync.go:185] cert-manager/controller/challenges "msg"="propagation check failed" "error"="failed to perform self check GET request 'http://dex.example.com/.well-known/acme-challenge/Ck8verbvNZs8Fduang6U7QUU1nDBPOvyu0DmsPzYaic': Get \"https://dex.example.com/.well-known/acme-challenge/Ck8verbvNZs8Fduang6U7QUU1nDBPOvyu0DmsPzYaic\": EOF" "dnsName"="dex.example.com" "resource_kind"="Challenge" "resource_name"="dex-swghq-1613814992-1527568861" "resource_namespace"="istio-system" "resource_version"="v1" "type"="HTTP-01"

corresponding acme log#

I1104 07:53:12.670340       1 solver.go:39] cert-manager/acmesolver "msg"="starting listener"  "expected_domain"="dex.example.com" "expected_key"="Ck8verbvNZs8Fduang6U7QUU1nDBPOvyu0DmsPzYaic.Chh6NaxNaWxXCuraK6tpvsrTZJJvAaTkzL-_dc37P6Y" "expected_token"="Ck8verbvNZs8Fduang6U7QUU1nDBPOvyu0DmsPzYaic" "listen_port"=8089

oauth2-proxy log#

[2021/11/04 08:54:53] [main.go:50] Get "https://dex.example.com/.well-known/openid-configuration": EOF

I have created a CNAME Route53 record in AWS for the wildcard domain (*.example.com) pointing to the Istio load balancer.

cert-manager installation#

kubectl label namespace cert-manager istio-injection=enabled --overwrite=true
helm repo add cert-manager https://charts.jetstack.io
helm upgrade --install cert-manager cert-manager/cert-manager --version 1.5.2 -n cert-manager

Here is my ClusterIssuer configuration#

apiVersion: cert-manager.io/v1beta1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-istio
  namespace: istio-system
spec:
  acme:
    email: k8s@inkubate.io
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-istio
    solvers:
    - http01:
        ingress:
          class: istio
      selector:
        dnsNames:
        - "example.com"
        - "*.example.com"
        - "dex.example.com"
        - "auth.example.com"
        - "dashboard.example.com"

No idea what is wrong. Could you please suggest what I'm missing here?

lcc3108 commented 2 years ago

cert-manager uses an Ingress by default to obtain a certificate. Run the command below:

kubectl get ingress -A

The priority between the Istio gateway and that Ingress is the issue here: a host-specific Istio gateway takes precedence, so when cert-manager sends its HTTP-01 self-check to http://dex.example.com, the certificate is never issued because Istio redirects the request to https://dex.example.com.
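
You can confirm that the redirect is intercepting the challenge with a plain HTTP request (the token path below is just a placeholder):

$ curl -sI http://dex.example.com/.well-known/acme-challenge/test | head -1
# a 301 here means the HTTPS redirect is swallowing the HTTP-01 challenge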

Recently cert-manager gained integration with Istio, but I've never used it, so I'll explain the approach I have used.

If an Istio gateway specifies a specific host, it takes priority, so remove the HTTP-to-HTTPS redirect from that gateway.

Then create a wildcard HTTP-to-HTTPS redirect gateway to handle requests that are not matched by any host-specific Istio HTTP gateway:

kind: Gateway
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  servers:
    # - hosts:
    #     - dashboard.example.com
    #   port:
    #     name: http
    #     number: 80
    #     protocol: HTTP
    #   tls:
    #     httpsRedirect: true
    - hosts:
        - dashboard.example.com
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: kubernetes-dashboard-tls
        mode: SIMPLE
  selector:
    app: istio-ingressgateway
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: http-general-gateway
  namespace: istio-system
  labels:
    app: ingressgateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        protocol: HTTP
        name: http
      hosts:
        - "*"
      tls:
        httpsRedirect: true

Additionally, please check the NLB backend protocol that I asked about above: in the AWS console, go to EC2 -> Load Balancers -> select your NLB -> Listeners -> click the forwarding rule -> check that the protocol is set to TCP.
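
The same check is possible from the AWS CLI (the load balancer ARN is a placeholder):

aws elbv2 describe-listeners \
  --load-balancer-arn <your-nlb-arn> \
  --query 'Listeners[].Protocol'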

amalendur commented 2 years ago

Hello,

I have implemented the above config (gateways).

$ kubectl get gw -A
NAMESPACE              NAME                   AGE
auth                   dex                    35m
auth                   oauth2-proxy           26m
istio-system           http-general-gateway   4m53s
kubernetes-dashboard   kubernetes-dashboard   25m

Istio-ingress#

gateways:
  istio-ingressgateway:
    type: LoadBalancer
    externalTrafficPolicy: "Local"
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

But it seems the issue is with the certificates...

$ kubectl get certificates -n istio-system -o wide
NAME                   READY   SECRET                     ISSUER                   STATUS                                         AGE
dex                    False   dex-tls                    letsencrypt-prod-istio   Issuing certificate as Secret does not exist   24m
kubernetes-dashboard   False   kubernetes-dashboard-tls   letsencrypt-prod-istio   Issuing certificate as Secret does not exist   22m
oauth2-proxy           False   oauth2-proxy-tls           letsencrypt-prod-istio   Issuing certificate as Secret does not exist   23m

The ClusterIssuer seems OK:

$ kubectl get clusterissuer -A -o wide
NAME                     READY   STATUS                                                 AGE
letsencrypt-prod-istio   True    The ACME account was registered with the ACME server   41m

Sample certificate config:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dex
  namespace: istio-system
spec:
  secretName: dex-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod-istio
  commonName: dex.example.com
  dnsNames:
    - dex.example.com

Not sure what is wrong. Did you perform any additional configuration?

Best Regards

lcc3108 commented 2 years ago

Hello,

You currently have:

gateways:
  istio-ingressgateway:
    type: LoadBalancer
    externalTrafficPolicy: "Local"
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

This should be changed as follows so that TLS termination happens at the Istio gateway:

gateways:
  istio-ingressgateway:
    type: LoadBalancer
    externalTrafficPolicy: "Local"
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"

And check the Ingress:

$ kubectl get ingress -n istio-system
# select one of the cm-acme-http-solver-* ingresses here
$ CERT_PATH=$(kubectl get ingress cm-acme-http-solver-example -o jsonpath='{.spec.rules[0].http.paths[0].path}')
$ echo $CERT_PATH

You can then test http://<fqdn>$CERT_PATH.

amalendurakshit commented 2 years ago

Hi,

Finally, I was able to fix the issue. Now I can expose most of the cluster URLs (including the k8s dashboard) except the Grafana dashboard (I am getting "invalid API key"). Were you able to expose Grafana in your cluster? Any suggestions?

Best Regards,

lcc3108 commented 2 years ago

I'm glad you solved the issue. Can you explain how you solved it? By "grafana-dashboard" do you mean Grafana? It can be exposed like the Kubernetes dashboard. If not, please explain the situation in more detail.

amalendur commented 2 years ago

Hi,

Regarding the certificate issue, I created an IAM role with the identity provider and attached an assume-role policy to fix it (referring to this document).
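
Presumably this is the IRSA pattern. A minimal sketch of the usual wiring, assuming the role is bound to the cert-manager service account (all values are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cert-manager
  namespace: cert-manager
  annotations:
    # placeholder ARN; the role's trust policy must reference the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<cert-manager-role>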

Now I am facing an issue authenticating Grafana.

For my test cluster, I'm using the grafana-6.11.0 Helm chart (https://github.com/grafana/helm-charts/tree/grafana-6.11.0) with the following template (I have uploaded the Istio dashboards inside the dashboards folder):

grafana.ini:
  paths:
    data: /var/lib/grafana/
    logs: /var/log/grafana
    plugins: /var/lib/grafana/plugins
    provisioning: /etc/grafana/provisioning
  analytics:
    check_for_updates: true
  log:
    mode: console
  grafana_net:
    url: https://grafana.net
  auth.proxy:
    enabled: true
    header_name: X-WEBAUTH-USER
    header_property: username
    auto_sign_up: true
persistence:
  type: pvc
  enabled: true
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  size: 1Gi
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
  finalizers:
    - kubernetes.io/pvc-protection
ingress:
  enabled: false
nodeSelector:
  nodetype: utility
service:
  type: ClusterIP
  port: 80
adminUser: admin
adminPassword: <admin_password>
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server:9090
      access: proxy
      isDefault: true
      editable: true
      jsonData:
        timeInterval: 5s
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - disableDeletion: false
      folder: istio
      name: istio
      options:
        path: /var/lib/grafana/dashboards/istio
      orgId: 1
      type: file
    - disableDeletion: false
      folder: istio
      name: istio-services
      options:
        path: /var/lib/grafana/dashboards/istio-services
      orgId: 1
      type: file
dashboards:
  istio:
    istio-mesh:
      file: dashboards/istio-mesh-dashboard.json
    istio-control-plane:
      file: dashboards/istio-control-plane-dashboard.json
    istio-service:
      file: dashboards/istio-service-dashboard.json
    istio-wasm-extension:
      file: dashboards/istio-wasm-extension-dashboard.json
    istio-workload:
      file: dashboards/istio-workload-dashboard.json       

The oauth2-proxy config is as follows:

extraArgs:
  client-secret: <some_secret>
  client-id: '<cluster_name>-oidc-auth-client'
  provider: oidc
  http-address: 0.0.0.0:4180
  email-domain: "*"
  cookie-refresh: 1h
  cookie-secure: false
  set-xauthrequest: true
  pass-access-token: true
  set-authorization-header: true
  pass-authorization-header: true
  pass-host-header: true
  upstream: static://200
  reverse-proxy: ""
  whitelist-domain: .<domain>
  cookie-domain: .<domain>
  cookie-name: _oauth2_proxy
  cookie-samesite: lax
  skip-provider-button: true
  redirect-url: https://auth.<domain>/oauth2/callback
  oidc-issuer-url: https://dex.<domain>
  cookie-expire: 4h
  redis-connection-url: redis://redis-master.auth.svc.cluster.local:6379
  redis-password: "<redis_password>"

The Grafana logs say the following:

t=2021-12-21T08:07:53+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Prometheus uid=
t=2021-12-21T08:07:53+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=http subUrl= socket=
t=2021-12-21T08:12:30+0000 lvl=eror msg="invalid API key" logger=context error="invalid API key"
t=2021-12-21T08:12:30+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/ status=401 remote_addr=xx.xx.xxx.xxx time_ms=0 size=29 referer=https://dex.<domain>/

lcc3108 commented 2 years ago

If you are using Dex and the application supports OIDC login, oauth2-proxy is not required. Grafana supports OIDC login, so you can proceed with the settings below.

grafana.ini:

  auth:
    sigv4_auth_enabled: true
  auth.generic_oauth:
    enabled: true
    client_id: <YOUR_DEX_CLIENT_ID>
    client_secret: <YOUR_DEX_SECRET>
    auth_url: https://<YOUR_DEX_URL>/auth
    token_url: https://<YOUR_DEX_URL>/token
    api_url: https://<YOUR_DEX_URL>/userinfo
    name: Dex
    scopes: openid,profile,email
    allow_sign_up: true

Grafana docs

amalendur commented 2 years ago

Hi,

Thank you for sharing this. I had already tried this (forgot to mention it in my previous post), as follows:

grafana.ini:
  paths:
    data: /var/lib/grafana/
    logs: /var/log/grafana
    plugins: /var/lib/grafana/plugins
    provisioning: /etc/grafana/provisioning
  analytics:
    check_for_updates: true
  log:
    mode: console
  grafana_net:
    url: https://grafana.net
  server:
    domain: grafana.<domain>
    enforce_domain: false
    enable_gzip: true
    root_url: https://grafana.<domain>:3000
  auth:
    sigv4_auth_enabled: true
  auth.generic_oauth:
    enabled: true
    client_id: <cluster_name>-oidc-auth-client
    client_secret: <DEX_CLIENT_ID>
    auth_url: https://dex.<domain>/auth
    token_url: https://dex.<domain>/token
    api_url: https://dex.<domain>/userinfo
    name: Dex
    scopes: openid,profile,email
    allow_sign_up: true
    tls_skip_verify_insecure: false
    cookie_samesite: none

It was not working:

t=2021-12-21T14:04:51+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Prometheus uid=
t=2021-12-21T14:04:51+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=http subUrl= socket=
t=2021-12-21T14:06:02+0000 lvl=eror msg="Failed to look up user based on cookie" logger=context error="user token not found"
t=2021-12-21T14:06:02+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/ status=302 remote_addr=xxx.xx.xxx.xxx time_ms=25 size=29 referer=
t=2021-12-21T14:06:02+0000 lvl=eror msg="Failed to look up user based on cookie" logger=context error="user token not found"
t=2021-12-21T14:06:09+0000 lvl=eror msg="Failed to look up user based on cookie" logger=context error="user token not found"
t=2021-12-21T14:06:09+0000 lvl=info msg="Successful Login" logger=http.server User=admin@localhost

If you don't mind, could you please share your solution? Personally, I prefer to use oauth2-proxy to authenticate the Grafana dashboard like the rest of the endpoint URLs in the cluster. Did you use oauth2-proxy to authenticate Grafana?

Best Regards.

lcc3108 commented 2 years ago

I am also using the settings I posted above. If you have set them as described, do you see a "Sign in with Dex" button as in the picture below?

[image: Grafana login page with a "Sign in with Dex" button]

As far as I know, Grafana needs to be restarted to reload the ConfigMap after changing the settings. Did you go through that process?
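
For reference, a restart that picks up the new ConfigMap can be forced like this (the deployment and namespace names are assumptions):

kubectl rollout restart deployment/grafana -n monitoring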

amalendur commented 2 years ago

Hi,

When I try to open the Grafana URL (https://grafana.domain.com), it redirects to Dex (as expected) and opens the Dex console; then, when I log in with the credentials, I get "{"message":"invalid API key"}".

t=2021-12-21T16:35:17+0000 lvl=eror msg="invalid API key" logger=context error="invalid API key"
t=2021-12-21T16:35:17+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=0 uname= method=GET path=/ status=401 remote_addr=xxx.xx.xxx.xxx time_ms=0 size=54 referer=

To be sure, here is the custom grafana.ini content:

$ cat /etc/grafana/grafana.ini 
[analytics]
check_for_updates = true
[auth.proxy]
auto_sign_up = true
cookie_samesite = none
enabled = true
header_name = X-WEBAUTH-USER
header_property = username
[grafana_net]
url = https://grafana.net
[log]
mode = console
[paths]
data = /var/lib/grafana/
logs = /var/log/grafana
plugins = /var/lib/grafana/plugins
provisioning = /etc/grafana/provisioning
[server]
domain = grafana.domain.com
enable_gzip = true
enforce_domain = false
root_url = https://grafana.domain.com:80

lcc3108 commented 2 years ago

I think Dex + oauth2-proxy is basically a pattern for services that do not provide their own authentication.

The first method is to have Grafana authenticate against Dex directly using the options below, removing oauth2-proxy and the destination rules. I recommend this method because Grafana supports Dex directly.

Please remove all auth options except the settings below.

  auth:
    sigv4_auth_enabled: true
  auth.generic_oauth:
    enabled: true
    client_id: <YOUR_DEX_CLIENT_ID>
    client_secret: <YOUR_DEX_SECRET>
    auth_url: https://<YOUR_DEX_URL>/auth
    token_url: https://<YOUR_DEX_URL>/token
    api_url: https://<YOUR_DEX_URL>/userinfo
    name: Dex
    scopes: openid,profile,email
    allow_sign_up: true
  server:
      # The full public facing url you use in browser, used for redirects and emails
    root_url: https://<YOUR_GRAFANA_URL>

And you have to override the redirect URI in the Dex settings:

- id: <YOUR_DEX_CLIENT_ID>
  redirectURIs:
  - 'https://<YOUR_GRAFANA_URL>/login/generic_oauth'
  name: grafana
  secret: <YOUR_DEX_SECRET>

I think the [auth.proxy] setting is not compatible with Dex, because it is meant for a proxy that does HTTP basic auth.

The second method is to allow anonymous access in Grafana and use oauth2-proxy in front of it.
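
A minimal sketch of the Grafana side of that second method (the Viewer role is an assumption):

grafana.ini:
  auth.anonymous:
    enabled: true
    org_role: Viewer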

amalendurakshit commented 2 years ago

Thank You. It's working now.

Best Regards.

lcc3108 commented 2 years ago

Happy New Year!

This week was very busy. I saw the email notification for a comment you left; I hope the problem was resolved, since the comment seems to have been deleted.

amalendurakshit commented 2 years ago

Hello,

Wish you a very happy and prosperous New Year 2022 :) In your project, did you configure AWS Elasticsearch Service? In our previous EKS cluster we used AWS Elasticsearch Service for logging, and we want to achieve the same with Istio by using it as a reverse proxy (similar to nginx proxy_pass). Unfortunately, we are getting a 404. By default, AWS Kibana is not exposed to the Internet, so to expose it we are using the following configuration:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kibana
  namespace: istio-system
spec:
  hosts:
  - aws.local
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  endpoints:
    - address: vpc-xxxxxxxx.es.amazonaws.com
      ports:
        https: 443

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kibana
  namespace: logging
spec:
  hosts:
  - aws.local
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: aws.local
        port:
          number: 443

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kibana
  namespace: logging
spec:
  host: aws.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kibana
  namespace: istio-system
spec:
  secretName: kibana-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - kibana.example.com

---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kibana
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - hosts:
    - kibana.example.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - kibana.example.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: kibana-tls
      mode: SIMPLE

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kibana-proxy
  namespace: logging
spec:
  hosts:
  - kibana.example.com
  gateways:
  - istio-system/kibana
  http:
  - route:
    - destination:
        host: aws.local
        port:
          number: 443
    match:
    - uri:
        exact: "vpc-xxxxxxxx.es.amazonaws.com/_plugin/kibana"

Best Regards.

lcc3108 commented 2 years ago

I don't use AWS Elasticsearch, but try the YAML below.

First, change the VirtualService to one of the two below:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kibana-proxy
  namespace: logging
spec:
  hosts:
  - kibana.example.com
  gateways:
  - istio-system/kibana
  http:
  - route:
    - destination:
        host: vpc-xxxxxxxx.es.amazonaws.com
        port:
          number: 443

or

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kibana-proxy
  namespace: logging
spec:
  hosts:
  - kibana.example.com
  gateways:
  - istio-system/kibana
  http:
  - route:
    - destination:
        host: vpc-xxxxxxxx.es.amazonaws.com
        port:
          number: 443
    rewrite:
      uri: "/_plugin/kibana"

And change the DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kibana
  namespace: logging
spec:
  host: vpc-xxxxxxxx.es.amazonaws.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE

amalendur commented 2 years ago

Hello,

I have changed the config as follows, per your suggestion:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: kibana-egress
  namespace: istio-system
spec:
  hosts:
  - "vpc-xxxxxxxx.es.amazonaws.com"
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kibana
  namespace: istio-system
spec:
  secretName: kibana-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - kibana.example.com

---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: kibana
  namespace: logging
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - hosts:
    - kibana.example.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - kibana.example.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: kibana-tls
      mode: SIMPLE

---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kibana-proxy
  namespace: logging
spec:
  hosts:
  - kibana.example.com
  gateways:
  - logging/kibana
  http:
  - route:
    - destination:
        host: vpc-xxxxxxxx.es.amazonaws.com
        port:
          number: 443
    rewrite:
      uri: "/_plugin/kibana"

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kibana
  namespace: logging
spec:
  host: vpc-xxxxxxxx.es.amazonaws.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE

I am getting the following:

<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
</body>
</html>

Best Regards,

amalendur commented 2 years ago

Hello,

I have added headers in the VirtualService as follows:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: kibana-proxy
  namespace: logging
spec:
  hosts:
  - kibana.example.com
  gateways:
  - logging/kibana
  http:
  - route:
    - destination:
        host: vpc-xxxxxxxx.es.amazonaws.com
        port:
          number: 443
    rewrite:
      uri: "/_plugin/kibana"
    headers:
      request:
        set:
          x-forwarded-proto: https
          x-forwarded-port: "443"

Getting the same result:

<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
</body>
</html>

Best Regards,

lcc3108 commented 2 years ago

It seems the HTTPS traffic is not going through. Since I am on vacation tomorrow, I will test it in my personal cluster.
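
In the meantime, one thing worth checking is whether the DestinationRule's TLS origination actually reached the gateway's cluster config; a sketch (pod label and FQDN as above):

istioctl proxy-config cluster -n istio-system \
  $(kubectl get pod -n istio-system -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}') \
  --fqdn vpc-xxxxxxxx.es.amazonaws.com -o json | grep -i transport_socket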

lcc3108 commented 2 years ago

There is no AWS Elasticsearch in my local cluster, but when I tested with a typical HTTPS site, the settings worked fine.

To be exact, please check whether there are any other Gateway, VirtualService, DestinationRule, or ServiceEntry resources besides the above settings.
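
A quick way to list them all:

kubectl get gateway,virtualservice,destinationrule,serviceentry -A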

amalendur commented 2 years ago

Hello,

For Kibana I don't have any other config settings.

Best Regards,

lcc3108 commented 2 years ago

Would you like to try this?

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: kibana-egress
  namespace: istio-system
spec:
  hosts:
  - "vpc-xxxxxxxx.es.amazonaws.com"
  location: MESH_EXTERNAL
  ports:
  - name: http
    number: 443
    protocol: HTTP
  resolution: DNS

amalendur commented 2 years ago

Hello,

I have already tried this as well, without any luck:

<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
</body>
</html>

Best Regards,