provectus / kafka-ui

Open-Source Web UI for Apache Kafka Management
Apache License 2.0

Still getting `blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.` error #4017

Open eka43 opened 1 year ago

eka43 commented 1 year ago


Describe the bug (actual behavior)

Hi, I tried to set up SSO, but I am still getting the errors below in the browser's DevTools Network tab.

Access to script at '(url) &client_id=(client id)&scope=openid%20profile%20email%20groups&state=(code)
&redirect_uri=(url)' (redirected from '(url)/assets/index-43f1ddca.js') 
from origin '(url)' has been blocked by CORS policy
: No 'Access-Control-Allow-Origin' header is present on the requested resource.

(url) GET (keycloak url)/auth?response_type=code&client_id=(client id)&scope=openid%20profile%20email%20groups&state=(code)&redirect_uri=(uri) net::ERR_FAILED 303

I also get an "Invalid credentials" error (see the attached screenshot). Right after deploying the app I can access the application (still with the CORS error), but once I log out and log in again, the "Invalid credentials" error appears.

Expected behavior

No response

Your installation details

This is my config file with Keycloak:

# kafka-ui/charts/kafka-ui/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-ui
  namespace: kube-system
  labels:
    app.kubernetes.io/instance: kafka-ui
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/version: v0.7.1
    helm.sh/chart: kafka-ui-0.7.1
data:
  config.yml: |-
    auth:
      type: OAUTH2
      oauth2:
        client:
          keycloak:
            clientId: (client id)
            clientSecret: (Client password)
            scope: openid,profile,email,groups
            issuer-uri: (issuer-uri)
            authorization-grant-type: authorization_code
            client-name: (keycloak realm name)
            user-name-attribute: preferred_username
            provider: keycloak
            custom-params:
              type: oauth
              roles-field: groups
    management:
      health:
        ldap:
          enabled: false              
    rbac:
      roles:
        - name: "readonly"
          clusters:
            - a
            - b
            - c               
          subjects:
            - provider: oauth
              type: role
              value: "role1"                                     
          permissions:
            - resource: clusterconfig
              actions: [ "view" ]
            - resource: topic
              value: ".*"
              actions: 
                - VIEW
                - MESSAGES_READ
            - resource: consumer
              value: ".*"
              actions: [ view ]
            - resource: schema
              value: ".*"
              actions: [ view ]
            - resource: connect
              value: ".*"
              actions: [ view ]
            - resource: acl
              value: ".*"
              actions: [ view ]      
        - name: "admin"
          clusters:
            - a
            - b
            - c            
          subjects:
            - provider: oauth
              type: role
              value: "role2"
          permissions:
            - resource: applicationconfig
              actions: all
            - resource: clusterconfig
              actions: all
            - resource: topic
              value: ".*"
              actions: all
            - resource: consumer
              value: ".*"
              actions: all
            - resource: schema
              value: ".*"
              actions: all
            - resource: connect
              value: ".*"
              actions: all
            - resource: ksql           
              actions: [ execute ]
            - resource: acl
              value: ".*"            
              actions: [ view, edit ]
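
For comparison, here is a minimal sketch of the same auth.oauth2 block with the Keycloak issuer URI spelled out. The host, realm, and client values below are placeholders, not taken from the report; on current Keycloak releases the issuer is usually https://<keycloak-host>/realms/<realm>, while legacy WildFly-based distributions add an /auth prefix:

# Illustration only: placeholder host/realm/client values, not from the original report
auth:
  type: OAUTH2
  oauth2:
    client:
      keycloak:
        clientId: kafka-ui                     # must match the client ID configured in Keycloak
        clientSecret: <client-secret>
        scope: openid,profile,email,groups
        # Newer Keycloak:  https://keycloak.example.com/realms/my-realm
        # Legacy Keycloak: https://keycloak.example.com/auth/realms/my-realm
        issuer-uri: https://keycloak.example.com/realms/my-realm
        authorization-grant-type: authorization_code
        user-name-attribute: preferred_username
        client-name: keycloak
        provider: keycloak
        custom-params:
          type: oauth
          roles-field: groups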

Deployment

# kafka-ui/charts/kafka-ui/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-ui
  namespace: kube-system
  labels:
    app.kubernetes.io/instance: kafka-ui
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kafka-ui
    app.kubernetes.io/version: v0.7.1
    helm.sh/chart: kafka-ui-0.7.1 
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: kafka-ui
      app.kubernetes.io/name: kafka-ui      
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: kafka-ui
        app.kubernetes.io/name: kafka-ui
    spec:
      serviceAccountName: kafka-ui
      serviceAccount: kafka-ui      
      containers:
        - name: kafka-ui
          image: 'docker.io/provectuslabs/kafka-ui:v0.7.1'
          imagePullPolicy: IfNotPresent
          env:
            - name: DYNAMIC_CONFIG_ENABLED
              value: "true"
            - name: SPRING_CONFIG_ADDITIONAL-LOCATION
              value: /kafka-ui/config.yml                                              
            - name: KAFKA_CLUSTERS_0_NAME
              value: {{ .Values.kafkaUiClusterName0 }}
            - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
              value: {{ .Values.kafkaUiBootstrapservers0 }}
            - name: KAFKA_CLUSTERS_0_USERNAME
              valueFrom:
                secretKeyRef:
                  name: kafka-secret-0
                  key: username              
            - name: KAFKA_CLUSTERS_0_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kafka-secret-0
                  key: password
            - name: KAFKA_UI_KEYCLOAK_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: kafka-ui-secret
                  key: clientId                  
            - name: KAFKA_UI_KEYCLOAK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kafka-ui-secret
                  key: clientSecret              
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL
              value: SASL_SSL
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM
              value: SCRAM-SHA-512
            - name: KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG
              value: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="${KAFKA_CLUSTERS_0_USERNAME}" password="${KAFKA_CLUSTERS_0_PASSWORD}";'
          {{- if .Values.kafkaUiClusterName1 }} 
            - name: KAFKA_CLUSTERS_1_NAME
              value: {{ .Values.kafkaUiClusterName1 }}
            - name: KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS
              value: {{ .Values.kafkaUiBootstrapservers1 }}           
            - name: KAFKA_CLUSTERS_1_USERNAME
              valueFrom:
                secretKeyRef:
                  name: kafka-secret-1
                  key: username              
            - name: KAFKA_CLUSTERS_1_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kafka-secret-1
                  key: password                   
            - name: KAFKA_CLUSTERS_1_PROPERTIES_SECURITY_PROTOCOL
              value: SASL_SSL
            - name: KAFKA_CLUSTERS_1_PROPERTIES_SASL_MECHANISM
              value: SCRAM-SHA-512
            - name: KAFKA_CLUSTERS_1_PROPERTIES_SASL_JAAS_CONFIG
              value: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="${KAFKA_CLUSTERS_1_USERNAME}" password="${KAFKA_CLUSTERS_1_PASSWORD}";'
          {{- end}}
          {{- if .Values.kafkaUiClusterName2 }} 
            - name: KAFKA_CLUSTERS_2_NAME
              value: {{ .Values.kafkaUiClusterName2 }}
            - name: KAFKA_CLUSTERS_2_BOOTSTRAPSERVERS
              value: {{ .Values.kafkaUiBootstrapservers2 }}            
            - name: KAFKA_CLUSTERS_2_USERNAME
              valueFrom:
                secretKeyRef:
                  name: kafka-secret-2
                  key: username              
            - name: KAFKA_CLUSTERS_2_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: kafka-secret-2
                  key: password                   
            - name: KAFKA_CLUSTERS_2_PROPERTIES_SECURITY_PROTOCOL
              value: SASL_SSL
            - name: KAFKA_CLUSTERS_2_PROPERTIES_SASL_MECHANISM
              value: SCRAM-SHA-512
            - name: KAFKA_CLUSTERS_2_PROPERTIES_SASL_JAAS_CONFIG
              value: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="${KAFKA_CLUSTERS_2_USERNAME}" password="${KAFKA_CLUSTERS_2_PASSWORD}";'
          {{- end}}                                                                           
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 200m
              memory: 512Mi
              ephemeral-storage: 20Gi
            requests:
              cpu: 200m
              memory: 256Mi
              ephemeral-storage: 15Gi              
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: http
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: http
              scheme: HTTP
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          volumeMounts:
            - name: kafka-ui-yaml-conf
              mountPath: /kafka-ui/
      volumes:
        - name: kafka-ui-yaml-conf
          configMap: 
            name: kafka-ui
            defaultMode: 420    

Steps to reproduce

Deploy the application with these Helm charts, with Keycloak hosted outside the cluster.

Screenshots

No response

Logs

Pod logs

21:24:33,248 |-INFO in ch.qos.logback.classic.LoggerContext[default] - This is logback-classic version 1.4.7
21:24:33,548 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
21:24:33,549 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.xml]
21:24:33,644 |-INFO in ch.qos.logback.classic.BasicConfigurator@6adbc9d - Setting up default configuration.
21:24:40,045 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@4550bb58 - URL [jar:file:/kafka-ui-api.jar!/BOOT-INF/classes!/logback-spring.xml] is not of type file
21:24:41,248 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - Processing appender named [STDOUT]
21:24:41,248 |-INFO in ch.qos.logback.core.model.processor.AppenderModelHandler - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
21:24:41,546 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - This appender no longer admits a layout as a sub-component, set an encoder instead.
21:24:41,546 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - To ensure compatibility, wrapping your layout in LayoutWrappingEncoder.
21:24:41,546 |-WARN in ch.qos.logback.core.ConsoleAppender[STDOUT] - See also http://logback.qos.ch/codes.html#layoutInsteadOfEncoder for details
21:24:41,546 |-INFO in ch.qos.logback.classic.model.processor.RootLoggerModelHandler - Setting level of ROOT logger to INFO
21:24:41,547 |-INFO in ch.qos.logback.classic.jul.LevelChangePropagator@4ec4f3a0 - Propagating INFO level on Logger[ROOT] onto the JUL framework
21:24:41,547 |-INFO in ch.qos.logback.core.model.processor.AppenderRefModelHandler - Attaching appender named [STDOUT] to Logger[ROOT]
21:24:41,547 |-INFO in ch.qos.logback.core.model.processor.DefaultProcessor@223191a6 - End of configuration.
21:24:41,547 |-INFO in org.springframework.boot.logging.logback.SpringBootJoranConfigurator@49139829 - Registering current configuration as safe fallback point

 _   _ ___    __             _                _          _  __      __ _
| | | |_ _|  / _|___ _ _    /_\  _ __ __ _ __| |_  ___  | |/ /__ _ / _| |_____
| |_| || |  |  _/ _ | '_|  / _ \| '_ / _` / _| ' \/ -_) | ' </ _` |  _| / / _`|
 \___/|___| |_| \___|_|   /_/ \_| .__\__,_\__|_||_\___| |_|\_\__,_|_| |_\_\__,|
                                 |_|                                             

2023-07-06 21:24:42,344 WARN  [main] c.p.k.u.u.DynamicConfigOperations: Dynamic config file /etc/kafkaui/dynamic_config.yaml doesnt exist or not readable
2023-07-06 21:24:42,349 INFO  [main] c.p.k.u.KafkaUiApplication: Starting KafkaUiApplication using Java 17.0.6 with PID 1 (/kafka-ui-api.jar started by kafkaui in /)
2023-07-06 21:24:42,349 DEBUG [main] c.p.k.u.KafkaUiApplication: Running with Spring Boot v3.0.6, Spring v6.0.8
2023-07-06 21:24:42,350 INFO  [main] c.p.k.u.KafkaUiApplication: No active profile set, falling back to 1 default profile: "default"
2023-07-06 21:25:12,451 DEBUG [main] c.p.k.u.s.SerdesInitializer: Configuring serdes for cluster (cluster_name)
2023-07-06 21:25:20,348 INFO  [main] o.s.b.a.e.w.EndpointLinksResolver: Exposing 2 endpoint(s) beneath base path '/actuator'
2023-07-06 21:25:20,755 INFO  [main] o.h.v.i.u.Version: HV000001: Hibernate Validator 8.0.0.Final
2023-07-06 21:25:22,455 INFO  [main] o.s.b.a.s.r.ReactiveUserDetailsServiceAutoConfiguration: 

Using generated security password: 462c3827-be0f-410e-8e7d-61cca66c22d4

2023-07-06 21:25:24,244 INFO  [main] c.p.k.u.c.a.OAuthSecurityConfig: Configuring OAUTH2 authentication.
2023-07-06 21:25:29,146 INFO  [main] o.s.b.w.e.n.NettyWebServer: Netty started on port 8080
2023-07-06 21:25:29,345 INFO  [main] c.p.k.u.KafkaUiApplication: Started KafkaUiApplication in 53.895 seconds (process running for 60.082)
2023-07-06 21:25:32,047 DEBUG [parallel-1] c.p.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: (cluster_name)
2023-07-06 21:25:32,344 INFO  [parallel-1] o.a.k.c.a.AdminClientConfig: AdminClientConfig values: 
    bootstrap.servers = [bootstrap address(omitted)]
    client.dns.lookup = use_all_dns_ips
    client.id = kafka-ui-admin-1688678732-1
    connections.max.idle.ms = 300000
    default.api.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = [hidden]
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = SCRAM-SHA-512
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = SASL_SSL
    security.providers = null
    send.buffer.bytes = 131072
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS

2023-07-06 21:25:33,443 INFO  [parallel-1] o.a.k.c.s.a.AbstractLogin: Successfully logged in.
2023-07-06 21:25:33,744 INFO  [parallel-1] o.a.k.c.u.AppInfoParser: Kafka version: 3.3.1
2023-07-06 21:25:33,744 INFO  [parallel-1] o.a.k.c.u.AppInfoParser: Kafka commitId: e23c59d00e687ff5
2023-07-06 21:25:33,744 INFO  [parallel-1] o.a.k.c.u.AppInfoParser: Kafka startTimeMs: 1688678733650
2023-07-06 21:25:43,545 DEBUG [parallel-1] c.p.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster: (cluster_name)
2023-07-06 21:25:59,246 DEBUG [parallel-1] c.p.k.u.s.ClustersStatisticsScheduler: Start getting metrics for kafkaCluster: (cluster_name)
2023-07-06 21:26:00,044 DEBUG [parallel-1] c.p.k.u.s.ClustersStatisticsScheduler: Metrics updated for cluster:(cluster_name)

Additional context

I checked the Keycloak login events and they seemed okay (screenshot attached).

github-actions[bot] commented 1 year ago

Hello there eka43! 👋

Thank you and congratulations 🎉 for opening your very first issue in this project! 💖

In case you want to claim this issue, please comment down below! We will try to get back to you as soon as we can. 👀

mikhail-putilov commented 11 months ago

That behavior is most probably related to the context path. Try setting SERVER_SERVLET_CONTEXT_PATH; I had a similar issue, and it was not resolved until Kafka UI was started under a context path.

FYI, my working setup currently has auth disabled (`auth.type: disabled`) together with SERVER_SERVLET_CONTEXT_PATH.

values.yaml is:

fullnameOverride: kafka-ui
yamlApplicationConfigConfigMap: # feed our config map as an additional config
  name: kafka-ui-additional
  keyName: config.yaml
envs:
  config:
    # This part of YAML will go into readiness/liveness probes, and in a configmap
    # Without it, it won't work behind reverse proxy
    SERVER_SERVLET_CONTEXT_PATH: /kafka-ui

And the kafka-ui-additional ConfigMap contains a file named config.yaml:

kafka:
  clusters:
    - name: yaml
      bootstrapServers: ...
auth:
  type: disabled
management:
  health:
    ldap:
      enabled: false
server:
  forward-headers-strategy: native # to make it work behind reverse-proxy

Then open your Kafka UI under https://your-host-where-kafka-ui-is-hosted/kafka-ui
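
For the OAuth2-based setup from the original report, a rough sketch of applying the same idea might look like this (the /kafka-ui path is only an example; it has to match whatever path the reverse proxy or ingress exposes):

# Sketch only, based on the suggestion above; not a verified configuration.
# 1) Extra env var for the kafka-ui container in deployment.yaml:
- name: SERVER_SERVLET_CONTEXT_PATH
  value: /kafka-ui                       # example context path

# 2) Extra block appended to config.yml in the kafka-ui ConfigMap:
server:
  forward-headers-strategy: native       # trust X-Forwarded-* headers behind a reverse proxy

Note that the redirect URI registered for the client in Keycloak would then also need to include the /kafka-ui context path.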