elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

Problem occurs when running kibana-6.2.4 on Kubernetes #19435

Closed kaybinwong closed 3 years ago

kaybinwong commented 6 years ago

Kibana version: 6.2.4

Elasticsearch version: es 6.2.4 with x-pack running on physical host.

[es@unipay-test elasticsearch-6.2.4]$ curl -uelastic:123456 localhost:9200
{
  "name" : "SP_YEWr",
  "cluster_name" : "inf-es",
  "cluster_uuid" : "ePKv9fwxTbO16jrmJ_eh-A",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Server OS version:

[es@unipay-test elasticsearch-6.2.4]$ cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core) 

Browser version: Chrome (latest)

Browser OS version: Windows 7

Describe the bug:

# create endpoint
---
apiVersion: v1
kind: Endpoints
metadata:
  name: elasticsearch-logging
  namespace: kube-system
subsets:
- addresses:
  - ip: "192.168.64.134"  # es cluster url
  ports:
  - port: 9200

# create service
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana:6.2.4
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch-logging:9200
          - name: SERVER_BASEPATH
            value: /api/v1/namespaces/kube-system/services/kibana-logging/proxy
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
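Note that the Service above intentionally has no `selector`, so Kubernetes will not populate its Endpoints automatically; the manually created Endpoints object must match the Service's `metadata.name` and `namespace` exactly for traffic to route to the external Elasticsearch address. A quick way to verify the wiring (a diagnostic sketch; the pod name is taken from the `kubectl describe` output later in this report, and it assumes `curl` is available in the Kibana image):

```shell
# Confirm the Service picked up the manual Endpoints (no selector, so
# Kubernetes won't create these automatically):
kubectl get endpoints -n kube-system elasticsearch-logging

# Check DNS resolution and reachability from inside the Kibana pod:
kubectl exec -n kube-system kibana-logging-65446f989f-qsnhq -- \
  curl -s http://elasticsearch-logging:9200
```

If the first command shows no addresses, the Endpoints name or namespace does not match the Service.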

I have overridden the default values with the environment variables above, but what the container shows is still the defaults.

bash-4.2$ cat config/kibana.yml 
---
# Default Kibana configuration from kibana-docker.

server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
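Seeing the defaults in the file is expected behavior for the official image, as far as I can tell: the container entrypoint (`bin/kibana-docker` in 6.x) translates environment variables into Kibana settings at startup and passes them to the process, so `config/kibana.yml` on disk keeps the baked-in defaults. The naming convention is an all-caps, underscore-separated variable mapping to a lowercase dotted setting name. A minimal sketch of that convention (illustration only; the real entrypoint uses an explicit whitelist of variables, not a mechanical rename):

```shell
# Sketch of the env-var -> setting-name convention used by the Kibana image:
# an all-caps, underscore-separated variable maps to a lowercase dotted
# setting name (illustration only; the actual entrypoint uses a whitelist).
env_to_setting() {
  printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' | tr '_' '.'
}

env_to_setting ELASTICSEARCH_URL       # -> elasticsearch.url
env_to_setting ELASTICSEARCH_USERNAME  # -> elasticsearch.username
```

CamelCase settings such as `server.basePath` cannot be derived by this mechanical rename; the image's whitelist handles those explicitly, which is why only supported variables like `SERVER_BASEPATH` take effect.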

Expected behavior:

The overridden values take effect in the Kibana configuration, and the Kibana dashboard is accessible.

Screenshots (if relevant): Here is the status of the pod:

[root@k8s-node1 efk]# kubectl describe po -n kube-system kibana-logging-65446f989f-qsnhq 
Name:           kibana-logging-65446f989f-qsnhq
Namespace:      kube-system
Node:           k8s-node2/192.168.56.102
Start Time:     Fri, 25 May 2018 15:26:44 +0800
Labels:         k8s-app=kibana-logging
                pod-template-hash=2100295459
Annotations:    <none>
Status:         Running
IP:             10.233.67.13
Controlled By:  ReplicaSet/kibana-logging-65446f989f
Containers:
  kibana-logging:
    Container ID:   docker://6420a388a4073049824c882041e1487a67cc76864e0bcc2af016ae93b369b34f
    Image:          registry.seedland.cc/library/kibana:6.2.4
    Image ID:       docker-pullable://registry.seedland.cc/library/kibana@sha256:6826fbd7975702442f50b03bee8dc12223570fe6dfef121ab77c145ea58ff284
    Port:           5601/TCP
    State:          Running
      Started:      Fri, 25 May 2018 15:26:46 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:  1
    Requests:
      cpu:  100m
    Environment:
      ELASTICSEARCH_URL:  http://elasticsearch-logging:9200
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-w5spx (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-w5spx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w5spx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type    Reason                 Age   From                Message
  ----    ------                 ----  ----                -------
  Normal  Scheduled              1m    default-scheduler   Successfully assigned kibana-logging-65446f989f-qsnhq to k8s-node2
  Normal  SuccessfulMountVolume  1m    kubelet, k8s-node2  MountVolume.SetUp succeeded for volume "default-token-w5spx"
  Normal  Pulled                 1m    kubelet, k8s-node2  Container image "registry.seedland.cc/library/kibana:6.2.4" already present on machine
  Normal  Created                1m    kubelet, k8s-node2  Created container
  Normal  Started                1m    kubelet, k8s-node2  Started container
[root@k8s-node1 efk]# kubectl exec -ti -n kube-system kibana-logging-65446f989f-qsnhq /bin/bash
bash-4.2$ cat config/kibana.yml 
---
# Default Kibana configuration from kibana-docker.

server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
elasticsearch.username: elastic
elasticsearch.password: changeme
xpack.monitoring.ui.container.elasticsearch.enabled: true
bash-4.2$ 
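Because the override is applied to the Kibana process rather than written to the file, one way to confirm it took effect is to inspect the running process's command line instead of `cat`-ing `kibana.yml` (a diagnostic sketch; it assumes the 6.x image passes env-derived settings as command-line flags, and reads `/proc` to avoid depending on `ps` being installed):

```shell
# Inspect the main process's arguments inside the pod; /proc/1/cmdline is
# NUL-separated, so translate the separators to spaces for readability.
kubectl exec -n kube-system kibana-logging-65446f989f-qsnhq -- \
  sh -c "tr '\0' ' ' < /proc/1/cmdline"
```

If the env var was picked up, the output should mention `elasticsearch.url` pointing at `http://elasticsearch-logging:9200` rather than the default.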

What have I done wrong? Please help.

jbudz commented 3 years ago

Hey kaybinwong, sorry for the slow reply on this. Are you still seeing issues with this and do you have kibana logs available?

I'm going to close this out as stale for now; we've had a few changes to how config files are looked up in the interim, and this hasn't seen more traffic, but feel free to ping me and we can reopen. https://discuss.elastic.co may be a better place for feedback on deployments, but if there's a bug here we'll certainly want to look into it.