tchiotludo / akhq

Kafka GUI for Apache Kafka to manage topics, topics data, consumers group, schema registry, connect and more...
https://akhq.io/
Apache License 2.0

Missing required configuration "bootstrap.servers" which has no default value #193

Closed hagay-david-devops closed 4 years ago

hagay-david-devops commented 4 years ago

Hi, although bootstrap.servers is configured right, the log prompts the following: Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.

Any idea? Thanks.

tchiotludo commented 4 years ago

I need your configuration files to help you!

hagay-david-devops commented 4 years ago

Sure; assume the bootstrap servers are the real ones.

```
image: repository: tchiotludo/kafkahq tag: latest annotations: {}
prometheus.io/scrape: 'true'
#prometheus.io/port: '8080'
#prometheus.io/path: '/metrics'
extraEnv: []
configuration: | kafkahq: server: access-log: enabled: false name: org.kafkahq.log.access connections: my-cluster-plain-text: properties: bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"
secrets: | kafkahq: connections: my-cluster-plain-text: properties:
bootstrap.servers: "kafka:9092"
      bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"
extraVolumes: []
extraVolumeMounts: []
service: enabled: true type: LoadBalancer port: 8090 annotations:
ingress: enabled: false annotations: {}
paths: [] hosts:
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
```

hagay-david-devops commented 4 years ago

Are any other configs needed besides values.yaml? The configmap.yaml is the original one.

tchiotludo commented 4 years ago

Can you send it with formatting? Understanding YAML without indentation is "impossible" :smile:

hagay-david-devops commented 4 years ago
image:
  repository: tchiotludo/kafkahq
  tag: latest
  annotations: {}
    #prometheus.io/scrape: 'true'
    #prometheus.io/port: '8080'
    #prometheus.io/path: '/metrics'
  extraEnv: []
  ## You can put directly your configuration here...
  # - name: KAFKAHQ_CONFIGURATION
  #   value: |
  #       kafkahq:
  #         secrets:
  #           docker-kafka-server:
  #             properties:
  #               bootstrap.servers: "kafka:9092"

## Or you can also use configmap for the configuration...
configuration: |
  kafkahq:
    server:
      access-log: 
        enabled: false 
        name: org.kafkahq.log.access
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

##... and secret for connection information
secrets: |
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          #bootstrap.servers: "kafka:9092"
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092" 
        #schema-registry:
        #  url: "http://schema-registry:8085"
        #  basic-auth-username: basic-auth-user
        #  basic-auth-password: basic-auth-pass
        #connect:
        #  url: "http://connect:8083"
        #  basic-auth-username: basic-auth-user
        #  basic-auth-password: basic-auth-pass

# Any extra volumes to define for the pod (like keystore/truststore)
extraVolumes: []
# Any extra volume mounts to define for the kafkaHQ container
extraVolumeMounts: []

service:
  enabled: true
  #type: ClusterIP
  type: LoadBalancer
  port: 8090
  annotations:
    # cloud.google.com/load-balancer-type: "Internal"

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
    - kafkahq.demo.com
  tls: []
  #  - secretName: kafkahq-tls
  #    hosts:
  #      - kafkahq.demo.com

resources: {}
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
tchiotludo commented 4 years ago

From what I can see, you must provide the cluster in either `configuration` or `secrets`, not both.

Change to this and it should work:


configuration: |
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"
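
Conversely, a sketch of declaring the same connection only under `secrets` in values.yaml (either location works on its own, just not both at once):

```yaml
# Sketch: the same connection kept only in secrets, with the
# configuration block left without a connections section.
secrets: |
  kafkahq:
    connections:
      my-cluster-plain-text:
        properties:
          bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"
```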
hagay-david-devops commented 4 years ago

Hi, I'm afraid that's not the case here. I commented out the whole secrets section and reinstalled the chart, but I get the same error. I can post the whole Java log if needed.

tchiotludo commented 4 years ago

Can you connect inside the pod, look at /app/application.yml and /app/application-secrets.yml, and send me the files, please?

hagay-david-devops commented 4 years ago
kafkahq:
  server:
    access-log: 
      enabled: false 
      name: org.kafkahq.log.access
  connections:
    my-cluster-plain-text:
      properties:
        bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"
hagay-david-devops commented 4 years ago

/app/application-secrets.yml does not exist, as per your request to keep only one method.

tchiotludo commented 4 years ago

OK, all seems good... really weird.

Can you look at this one: https://github.com/tchiotludo/kafkahq/issues/184

Especially, try adding this to your configuration:

endpoints:
    env:
        enabled: true
        sensitive: true

And, inside your pod, run `curl http://localhost:8080/env` to see if the configuration is applied?
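
For reference, in Micronaut the `endpoints` block belongs at the root of application.yml, as a sibling of `kafkahq`, not nested under it — a sketch of the assumed layout:

```yaml
# Sketch: assumed layout of /app/application.yml; endpoints is a
# root-level Micronaut key, a sibling of kafkahq, not nested under it.
kafkahq:
  connections:
    my-cluster-plain-text:
      properties:
        bootstrap.servers: "kafkacluster01:9092,kafkacluster02:9092,kafkacluster03:9092"

endpoints:
  env:
    enabled: true
    # sensitive: true requires authentication; a 401 on /env is expected
    # unless this is set to false or credentials are supplied
    sensitive: true
```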

hagay-david-devops commented 4 years ago

from Host

DV-99224:deploy hagayd$ curl -v http://LB-IP:8090
*   Trying 192.168.50.38...
* TCP_NODELAY set
* Connected to 192.168.50.38 (192.168.50.38) port 8090 (#0)
> GET / HTTP/1.1
> Host: 192.168.50.38:8090
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Location: /my-cluster-plain-text/topic
< Date: Mon, 20 Jan 2020 13:14:56 GMT
< transfer-encoding: chunked
< connection: close
< 
* Closing connection 0
hagay-david-devops commented 4 years ago

Inside the POD

root@kafkahq-67c97cccf4-hqk4j:/app# curl -v http://localhost:8080/
* Expire in 0 ms for 6 (transfer 0x559d284badc0)
* (dozens of repeated "Expire in … ms" curl debug lines omitted)
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x559d284badc0)
* Expire in 200 ms for 4 (transfer 0x559d284badc0)
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Location: /my-cluster-plain-text/topic
< Date: Mon, 20 Jan 2020 13:17:45 GMT
< transfer-encoding: chunked
< connection: close
< 
* Closing connection 0
tchiotludo commented 4 years ago

`curl -v http://localhost:8080/env`, please! The "/" will redirect to your cluster.

hagay-david-devops commented 4 years ago
DV-99224:deploy hagayd$ curl -v http://LB-IP:8090/env
*   Trying LB-IP...
* TCP_NODELAY set
* Connected to LB-IP (LB-IP) port 8090 (#0)
> GET /env HTTP/1.1
> Host: LB-IP:8090
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 401 Unauthorized
< Date: Mon, 20 Jan 2020 13:24:27 GMT
< transfer-encoding: chunked
< connection: close
< 
* Closing connection 0
DV-99224:deploy hagayd$ 
tchiotludo commented 4 years ago

Have you added this to your configuration files?

endpoints:
    env:
        enabled: true
        sensitive: true

If yes, your configuration is not taken by Micronaut, which would be really weird...

Maybe remove the list from bootstrap.servers and put only one server, to see if it's better?

hagay-david-devops commented 4 years ago

Yes, I have added the above section to the configuration, and I also tried using only one bootstrap server. When hitting curl -v http://LB-IP:8090/env I land on the login page. Is that OK?

hagay-david-devops commented 4 years ago

Is there a default username and password, or only when enabling basic-auth?

tchiotludo commented 4 years ago

No, it's not OK... A redirect to the login page means the endpoints configuration is not taken, so whatever you set up will not work, since the configuration is not being used, for a really weird reason... To be honest, I'm completely blind here and can't help you...

unixunion commented 4 years ago

I am getting this same error trying to spin up the dev compose: even though I have kafka → properties → bootstrap.servers set, the Kafka client throws an exception when accessing KafkaHQ.

Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.
    at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:476)

tchiotludo commented 4 years ago

@unixunion: you are just trying with the docker-compose-dev.yml, right?

unixunion commented 4 years ago

Correct, and I have put an application.yml in the mounted root, and I had to add MICRONAUT_CONFIG_FILES to the compose too. I tried the latest tag, master, and dev branches; same issue.

I added this to the environment of the kafkahq service in the compose file:

MICRONAUT_CONFIG_FILES: /app/application.yml
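
A minimal sketch of how that might look in the compose file (service name, image tag, and mount path are assumptions based on this thread, not the repository's actual docker-compose-dev.yml):

```yaml
# Sketch only: hypothetical fragment; the real docker-compose-dev.yml differs.
services:
  kafkahq:
    image: tchiotludo/kafkahq:latest
    environment:
      # point Micronaut at the mounted configuration file
      MICRONAUT_CONFIG_FILES: /app/application.yml
    volumes:
      # application.yml from the host, mounted where the variable points
      - ./application.yml:/app/application.yml
    ports:
      - "8080:8080"
```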

I can't seem to get the entire YAML formatted in here. Pastebin: https://pastebin.com/cgKL85ep

    kafkahq_1          |    at java.base/java.lang.Thread.run(Thread.java:834)
    kafkahq_1          | Caused by: org.apache.kafka.common.config.ConfigException: Missing required configuration "bootstrap.servers" which has no default value.
    kafkahq_1          |    at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:476)
    kafkahq_1          |    at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
    kafkahq_1          |    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
    kafkahq_1          |    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:142)
    kafkahq_1          |    at org.apache.kafka.clients.admin.AdminClientConfig.<init>(AdminClientConfig.java:196)
    kafkahq_1          |    at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:55)
    kafkahq_1          |    at org.kafkahq.modules.KafkaModule.getAdminClient(KafkaModule.java:97)
    kafkahq_1          |    at org.kafkahq.modules.AbstractKafkaWrapper.lambda$listTopics$1(AbstractKafkaWrapper.java:51)
    kafkahq_1          |    at org.kafkahq.utils.Logger.call(Logger.java:19)
    kafkahq_1          |    ... 145 common frames omitted
unixunion commented 4 years ago

In docker-compose-dev.yml I commented out the kafkahq and webpack services and then started KafkaHQ via IntelliJ. I put a breakpoint on line 97 of KafkaModule.java, and the clusterId passed is docker-kafka-server, which is not the cluster I have in the application.yaml. So I don't know where it is getting this clusterId from.

EDIT: I see it got the clusterId from a stale URL call. Actually, it seems to be redirected to the non-existent cluster: I hit http://localhost:8080/ and get redirected to http://localhost:8080/docker-kafka-server/topic.

EDIT: if I hit http://localhost:8080/my-cluster-plain-text/topic directly, I get to a new error regarding my network setup. Will post results soon.

unixunion commented 4 years ago

I think I figured the issue out: the browser cache is the culprit here. Because I had a KafkaHQ deployed in Docker in another experiment, there is some contamination of the cache, so when I access "localhost:8080" I get redirected to the clusterId of the old docker-kafka-server. Nuking the cache resolves it.

tchiotludo commented 4 years ago

I've also had this issue! I didn't think about it, but you are right: the browser caches the redirect and will try to go to the last URL, and if you change the cluster name, this can lead to this error...

unixunion commented 4 years ago

So this is not really a bug, but perhaps a no-cache header should be added to the HTML. I guess @hagay-david-devops has the same cache problem.

hagay-david-devops commented 4 years ago

Hi, I will check from my side as well and update if it's related to the browser cache. Thanks.

tchiotludo commented 4 years ago

I don't really want to add a no-cache header; I'd rather let the browser do its job. And the situation occurs during initial configuration only. Closing for now.

Pixelshifter commented 4 years ago

I'm running into this issue AFTER the initial configuration. I've been running AKHQ for two weeks now; it connects to three 3-node clusters. My users say that AKHQ gets really slow after running for a while. I've just restarted the container after one week of running with no issues. When looking in the log I see the exact same error being logged. Only a restart of the containers flushes things.

Could this be browser related, and if so, what kind of browser tools could I use to help find the root cause?

tchiotludo commented 4 years ago

I don't think it's a browser issue; it must be a server-side issue instead.

Please create a new issue with more logs. Since you say the issue depends on uptime, take a snapshot of the log when the users complain, and another snapshot just after a restart while browsing most of the pages.

adacaccia commented 2 years ago

In my experience (app 0.20 from Helm chart 0.27), the /env endpoint only gets deployed on the "management" port, 28081 by default in the Helm chart. 8080 is the default port for the standard service and will never serve the /env endpoint!
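
For reference, a sketch of how that split could be expressed in Micronaut configuration, assuming the chart relies on `endpoints.all.port` to move management endpoints to 28081 (key names are an assumption, not verified against chart 0.27):

```yaml
# Sketch: assumed Micronaut layout; verify against the chart's defaults.
micronaut:
  server:
    port: 8080        # standard service: UI and API
endpoints:
  all:
    port: 28081       # management endpoints such as /env are served here
  env:
    enabled: true
    sensitive: true
```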