kafkaesque-io / pulsar-helm-chart

Helm Chart for an Apache Pulsar Cluster
https://helm.kafkaesque.io
Apache License 2.0

High CPU consumption by pulsar-proxy on idle cluster #73

Open yabinmeng opened 4 years ago

yabinmeng commented 4 years ago

After launching a Pulsar cluster with this helm chart, the proxy pod consumes significant CPU resources even though the cluster carries no traffic.
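For reference, `kubectl describe node` reports scheduled CPU *requests* rather than live usage, so actual consumption on an idle cluster can be confirmed separately with `kubectl top` (this assumes metrics-server is installed; the `pulsar` namespace matches the output below):

```
# Live per-container CPU/memory usage from metrics-server, as opposed
# to the static requests shown by "kubectl describe node"
kubectl top pod -n pulsar --containers
```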

The following is copied from the output of `kubectl describe node` for the node where pulsar-proxy runs: pulsar-proxy accounts for 51% of the node's CPU requests, while all other pods combined account for just 12%.

```
Non-terminated Pods:          (6 in total)
  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
  kube-system                 fluentd-gke-5x2rj                                          100m (2%)     1 (25%)     200Mi (1%)       500Mi (3%)     28h
  kube-system                 gke-metrics-agent-2qwh9                                    3m (0%)       0 (0%)      50Mi (0%)        50Mi (0%)      28h
  kube-system                 kube-proxy-gke-ymtest-pulsar-default-pool-dfdb7f7c-h9d8    100m (2%)     0 (0%)      0 (0%)           0 (0%)         28h
  kube-system                 prometheus-to-sd-m6652                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28h
  pulsar                      pulsar-proxy-f4869ddf5-d5phm                               2 (51%)       0 (0%)      2Gi (15%)        0 (0%)         45m
  pulsar                      pulsar-zookeeper-2                                         300m (7%)     0 (0%)      1Gi (7%)         0 (0%)         44m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests      Limits
  --------                   --------      ------
  cpu                        2503m (63%)   1 (25%)
  memory                     3322Mi (24%)  550Mi (4%)
  ephemeral-storage          0 (0%)        0 (0%)
  hugepages-2Mi              0 (0%)        0 (0%)
  attachable-volumes-gce-pd  0             0
Events:                      <none>
```
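Independent of actual usage, the 2-CPU request itself can be lowered with a values override. A minimal sketch, assuming this chart exposes a `proxy.resources` block (the exact key path should be verified against the chart's values.yaml), with `pulsar` used here as an example release name:

```
# Sketch of a values override; proxy.resources.requests is an assumed
# key path -- confirm it against this chart's values.yaml before use.
cat > proxy-resources.yaml <<'EOF'
proxy:
  resources:
    requests:
      cpu: 500m    # down from the 2-CPU request shown above
      memory: 2Gi
EOF

# Apply the override to an existing release (release name "pulsar"
# and chart reference "kafkaesque/pulsar" are examples; substitute
# your own release and chart).
helm upgrade pulsar kafkaesque/pulsar -f proxy-resources.yaml
```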

This is pretty consistent every time I launch a new cluster.