yahoo / CMAK

CMAK is a tool for managing Apache Kafka clusters
Apache License 2.0

Kafka-manager requires manual config to find cluster #244

Open etotheipi opened 8 years ago

etotheipi commented 8 years ago

Running kafka-manager on an existing kafka cluster using the following command:

kafka-manager/bin/kafka-manager -Dconfig.file=../../configs/application.conf

This allows me to access kafka-manager via a web browser, but no clusters are preconfigured even though I gave the manager the ZK host:port in the config file. I have to manually "Add Cluster" from the web interface, providing only a custom name and "localhost:2181" under ZK host.

I need this automated. I'm surprised this isn't done already, since I already provide the zkhost:port in the config file (below). Perhaps it needs a name in order to create the cluster, and that's not provided in the config file?

The application.conf is as follows:

play.crypto.secret=${?APPLICATION_SECRET}
play.i18n.langs=["en"]
play.http.requestHandler = "play.http.DefaultHttpRequestHandler"
play.application.loader=loader.KafkaManagerLoader
kafka-manager.zkhosts="localhost:2181"
kafka-manager.zkhosts=${?ZK_HOSTS}
pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}
basicAuthentication.enabled=false
basicAuthentication.username="admin"
basicAuthentication.password="password"
basicAuthentication.realm="Kafka-Manager"
sheepkiller commented 8 years ago

Hi @etotheipi,

The zookeeper configuration is for kafka-manager's own storage only. If you need to add a cluster automatically, you need to make an HTTP request. A few months ago I wrote a CSD for Cloudera Manager, and to add a cluster to KM I used curl: curl ${KM_URL} --data "name=${KM_CLUSTER_NAME}&zkHosts=${ZK_QUORUM}&kafkaVersion=${KAFKA_VERSION}" -X POST

I think it's still valid.

etotheipi commented 8 years ago

Thanks @sheepkiller, that's exactly what I needed.

Note that I needed to use ${KM_URL}/clusters to submit the POST action, but otherwise it was exactly as you said:

curl localhost:9000/clusters --data "name=primarycluster&zkHosts=localhost:2181&kafkaVersion=0.9.0.1" -X POST
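
For anyone scripting this end to end, here is a minimal sketch that waits for kafka-manager to come up before posting. The host, cluster name, and Kafka version are just the values from my setup; adjust as needed:

#!/usr/bin/env bash
# Register a cluster with kafka-manager once it is reachable.
KM_URL="http://localhost:9000"     # where kafka-manager listens (adjust as needed)
CLUSTER_NAME="primarycluster"      # illustrative cluster name
ZK_HOSTS="localhost:2181"
KAFKA_VERSION="0.9.0.1"

# Wait until kafka-manager answers HTTP requests.
until curl -s -o /dev/null "${KM_URL}"; do
  echo "waiting for kafka-manager at ${KM_URL}..."
  sleep 2
done

# POST the add-cluster form; the response is the cluster-creation HTML page.
curl -s "${KM_URL}/clusters" -X POST \
  --data "name=${CLUSTER_NAME}&zkHosts=${ZK_HOSTS}&kafkaVersion=${KAFKA_VERSION}"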

dovka commented 7 years ago

Are the optional tuning parameters implemented in this REST API? I tried adding &clusterManagerThreadPoolSize=4 and &tuning_clusterManagerThreadPoolSize=4, but they have no effect on the cluster that gets created.

etotheipi commented 7 years ago

This has stopped working for me with the latest kafka-manager. The above curl command simply returns the HTML of a partially-filled-in form (with the values I specified), but it doesn't actually submit it. I can confirm that the HTML suggests /clusters is the correct action and the field names are still valid. Not sure why it doesn't just finish the submission, though.

etotheipi commented 7 years ago

Sorry, I meant to reopen this issue with my last message.

esteban-zenedge commented 7 years ago

Same here; I need an automated way to set up the already existing clusters. Here's the output of what I'm getting: POST.out

The command I'm executing:

curl localhost:9000/clusters --data 'name=Prod2&zkHosts=172.31.46.66:2181,172.31.46.111:2181,172.31.32.9:2181&kafkaVersion=0.10.2.0' -X POST

Thank you in advance.

ksandrmatveyev commented 7 years ago
curl localhost:9000/clusters --data 'name=Prod2&zkHosts=172.31.46.66:2181,172.31.46.111:2181,172.31.32.9:2181&kafkaVersion=0.10.2.0' -X POST

This doesn't work for me. I run kafka-manager as a container on port 8080. I get the response as an HTML page where all the parameters are filled in (name, zkHosts, kafkaVersion), but it creates nothing.

bombompb commented 7 years ago

If you do it manually through the web UI, you can find out that kafkaVersion needs to be set to 0.9.0.1 (this also seems to work against newer versions of Kafka). Besides this, some of the default values are wrong and need to be changed from 1 to 2 (e.g. clusterManagerThreadPoolSize). After these changes the cluster is added successfully. By running Fiddler while submitting the form you can capture the correct command parameters.

The following worked out for me:

curl localhost:8082/clusters --data 'name=MyKafkaCluster&zkHosts=zookeeper%3A2181&kafkaVersion=0.9.0.1&jmxUser=&jmxPass=&tuning.brokerViewUpdatePeriodSeconds=30&tuning.clusterManagerThreadPoolSize=2&tuning.clusterManagerThreadPoolQueueSize=100&tuning.kafkaCommandThreadPoolSize=2&tuning.kafkaCommandThreadPoolQueueSize=100&tuning.logkafkaCommandThreadPoolSize=2&tuning.logkafkaCommandThreadPoolQueueSize=100&tuning.logkafkaUpdatePeriodSeconds=30&tuning.partitionOffsetCacheTimeoutSecs=5&tuning.brokerViewThreadPoolSize=2&tuning.brokerViewThreadPoolQueueSize=1000&tuning.offsetCacheThreadPoolSize=2&tuning.offsetCacheThreadPoolQueueSize=1000&tuning.kafkaAdminClientThreadPoolSize=2&tuning.kafkaAdminClientThreadPoolQueueSize=1000' -X POST
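
The same request is easier to read and tweak when split across multiple --data options, which curl joins with & before sending. A sketch using the same values as above:

curl localhost:8082/clusters -X POST \
  --data 'name=MyKafkaCluster&zkHosts=zookeeper%3A2181&kafkaVersion=0.9.0.1' \
  --data 'jmxUser=&jmxPass=' \
  --data 'tuning.brokerViewUpdatePeriodSeconds=30' \
  --data 'tuning.clusterManagerThreadPoolSize=2&tuning.clusterManagerThreadPoolQueueSize=100' \
  --data 'tuning.kafkaCommandThreadPoolSize=2&tuning.kafkaCommandThreadPoolQueueSize=100' \
  --data 'tuning.logkafkaCommandThreadPoolSize=2&tuning.logkafkaCommandThreadPoolQueueSize=100' \
  --data 'tuning.logkafkaUpdatePeriodSeconds=30&tuning.partitionOffsetCacheTimeoutSecs=5' \
  --data 'tuning.brokerViewThreadPoolSize=2&tuning.brokerViewThreadPoolQueueSize=1000' \
  --data 'tuning.offsetCacheThreadPoolSize=2&tuning.offsetCacheThreadPoolQueueSize=1000' \
  --data 'tuning.kafkaAdminClientThreadPoolSize=2&tuning.kafkaAdminClientThreadPoolQueueSize=1000'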

ksandrmatveyev commented 7 years ago

It doesn't help. I got this response: respond.txt

sir4ur0n commented 6 years ago

In conclusion, the trick of cURLing works fine (even though it may seem not to work, since the response is the HTML page of cluster creation).

However, it would be better if it was actually documented in Kafka Manager.

Should this issue track the documentation update?

sonnyg commented 6 years ago

Just to add to this conversation...

I came here looking for this exact functionality, and while the examples here did not work, I was eventually able to add a cluster entry. I wanted to post my process here for others to try.

I am running Firefox Developer Edition, but this should work with other browsers as well. Also, I am running kafka-manager inside a Docker container at localhost:9000.

1) Browse to localhost:9000/addCluster
2) Open the developer tools
3) Select the network tab (you may need to select "persist logs")
4) Complete the form with your desired values and submit
5) In the developer tools, select the network tab
6) Find the POST call
7) Right click on the entry and select Copy > Copy cURL (or Copy POST DATA)
8) Create a new script (I created a .sh script) and paste (or create) the curl command

If this doesn't work the first time, you can redirect the output to an html file. Open the html file in a browser and you can see which form entries failed.

In my scenario:

./create-cluster.sh > results.html

Then I would open results.html in a browser to see which form field was invalid, looking for entries with "This field is required" next to them.
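
That check is easy to automate; a small sketch using the script and output file names from above (the grep string is the validation message shown on the form):

./create-cluster.sh > results.html
if grep -q "This field is required" results.html; then
  echo "cluster creation failed: the response still contains missing-field errors" >&2
  exit 1
fi
echo "no missing-field errors in the response"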

CROSP commented 5 years ago

I have the same question; according to the answers above, the only way is to execute some commands against a running container.

I solved this problem with a bash-script that is executed after Kafka manager is ready.

You can find full source code here https://gist.github.com/CROSP/0d98ec1d5da389c679025a75260c7599#file-kafka-config-sh

In my docker-compose.yml file it is defined as follows:

  bootrstrap-config:
    image: ellerbrock/alpine-bash-curl-ssl
    restart: "no"
    environment:
      - KAFKA_MANAGER_HOST=kafka-manager:9000
      - KAFKA_CLUSTERS=zookeeper:2181#BusLux,zookeeper-2:2181#BusLux2
    depends_on:
      - kafka-manager
      - kafka
      - zookeeper
    command: /bin/bash /config.sh
    volumes:
      - "./kafka-config.sh:/config.sh"

You can specify multiple clusters in the following format: <zookeeper_host_name>:<zookeeper_port>#<cluster_name>,<zookeeper_host_name>....

If you need specific options for your cluster, you can modify the script, or, for example, define them in a properties file and parse it while running the script.

Basically, this container runs on startup and checks whether each specific cluster is already set up. It's a dirty solution, but in my case it satisfies the requirements.
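
The gist has the full script; a rough sketch of the same idea, using the environment variables from the compose file above (the existence check and error handling of the real script are omitted here, and the Kafka version is just an example):

#!/bin/bash
# Register every cluster listed in KAFKA_CLUSTERS
# (format: zk_host:port#cluster_name,zk_host:port#cluster_name,...).
IFS=',' read -ra CLUSTERS <<< "${KAFKA_CLUSTERS}"
for entry in "${CLUSTERS[@]}"; do
  zk_hosts="${entry%%#*}"   # part before the '#' separator
  name="${entry##*#}"       # part after the '#' separator
  curl -s "http://${KAFKA_MANAGER_HOST}/clusters" -X POST \
    --data "name=${name}&zkHosts=${zk_hosts}&kafkaVersion=0.9.0.1"
done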

dynnamitt commented 5 years ago

Dear Yahoo developers,
1st rule of KISS: "always use a text file for ALL config"
2nd rule of KISS: "NEVER use a database for config"

eshepelyuk commented 3 years ago

Hi all,

I assume this is still an open subject. Let me add some updates to it.

In my CMAK Operator Helm chart, I've developed a tool, cmak2zk, that can be run via Docker to populate CMAK with preconfigured clusters from a YAML configuration file.

You can find information and examples on the cmak2zk homepage.