opensearch-project / opensearch-k8s-operator

OpenSearch Kubernetes Operator

[BUG] Operator cannot reliably bootstrap a cluster #811

Open lpeter91 opened 1 month ago

lpeter91 commented 1 month ago

What is the bug?

The operator sometimes fails to correctly bootstrap/initialize a new cluster; instead it settles into a yellow state, with shards stuck in unassigned and initializing statuses.

How can one reproduce the bug?

Note that this doesn't always happen, so you might have to try multiple times; however, it happens for me more often than not:

Apply the minimal example below. It's basically the first example from the docs, with the now-mandatory TLS added and Dashboards removed. Wait until the bootstrapping finishes.

apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-first-cluster
  namespace: default
spec:
  general:
    serviceName: my-first-cluster
    version: 2.13.0
  security:
    tls:
      transport:
        generate: true
        perNode: true
      http:
        generate: true
  nodePools:
    - component: nodes
      replicas: 3
      diskSize: "5Gi"
      nodeSelector:
      resources:
         requests:
            memory: "2Gi"
            cpu: "500m"
         limits:
            memory: "2Gi"
            cpu: "500m"
      roles:
        - "cluster_manager"
        - "data"

When the setup process finishes, the bootstrap pod is removed. Around this time the operator also sometimes decides to log the event "Starting to rolling restart" and recreates the first node (pod). If this happens, the cluster sometimes ends up in a yellow state that the operator does not resolve. If at this point I manually delete the cluster_manager pod (usually the second node), it is recreated and the issue seems to resolve itself.
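
For completeness, a minimal sketch of the kind of commands that produce the attached cat_shards.txt / allocation_explain.json outputs and apply the manual workaround; the namespace, pod/service names, and admin credentials are assumptions based on the manifest above and may differ in your setup:

# Watch the pods and query the cluster health through a port-forward.
kubectl -n default get pods -o wide
kubectl -n default port-forward svc/my-first-cluster 9200:9200 &
curl -sk -u admin:admin "https://localhost:9200/_cluster/health?pretty"
curl -sk -u admin:admin "https://localhost:9200/_cat/shards?v"
curl -sk -u admin:admin "https://localhost:9200/_cluster/allocation/explain?pretty"

# Workaround described above: delete the current cluster_manager pod and let it be recreated.
kubectl -n default delete pod my-first-cluster-nodes-1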

What is the expected behavior?

A cluster with a green state after setup. Preferably without unnecessary restarts.

What is your host/environment?

minikube v1.33.0 on Opensuse-Tumbleweed 20240511 w/ docker driver

I'm currently evaluating the operator locally. That might be part of the problem, as it forces me to run 3 nodes on a single machine. (The machine does have sufficient resources to accommodate the nodes, however. The issue was also reproduced on a MacBook, albeit also with minikube.)

Do you have any additional context?

See the attached files (some logs are probably missing since a pod was recreated): kubectl_describe.txt, operator.log, node-2.log, node-1.log, node-0.log, allocation_explain.json, cat_shards.txt, cat_nodes.txt

dtaivpp commented 1 month ago

Going to be honest here: not having enough resources to host the cluster is probably where you are running into issues. OpenSearch gets really unstable when there is not enough memory. I've personally experienced this as well when running Docker containers with OpenSearch.

These logs are concerning, but it's hard to say they are unrelated to OOM-type issues.

Node 0 Log: [2024-05-13T17:35:44,103][WARN ][o.o.s.SecurityAnalyticsPlugin] [my-first-cluster-nodes-0] Failed to initialize LogType config index and builtin log types

Node 1 Log:


[2024-05-13T17:35:42,014][INFO ][o.o.i.i.MetadataService  ] [my-first-cluster-nodes-1] ISM config index not exist, so we cancel the metadata migration job.
[2024-05-13T17:35:43,296][ERROR][o.o.s.l.LogTypeService   ] [my-first-cluster-nodes-1] Custom LogType Bulk Index had failures:

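For reference, a minimal sketch (not part of the original comment) of how one might check whether these pods were OOM-killed; the namespace and pod name are assumptions taken from the example manifest above:

# Look for a "Last State: Terminated / Reason: OOMKilled" entry on a node pod.
kubectl -n default describe pod my-first-cluster-nodes-0 | grep -A 5 "Last State"

# Or list the last termination reason of every pod's containers in the namespace.
kubectl -n default get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}'
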
lpeter91 commented 1 month ago

Now I can confirm that this issue also happens on an actual production Kubernetes cluster with plenty of resources. The operator erroneously decides to do a rolling restart and fails to complete it, leaving the cluster in a yellow state. It seems like a concurrency issue, as it doesn't always happen.

jaskeerat789 commented 2 weeks ago

We are facing this issue too. We have given an ample amount of resources to all node groups, but the controller tries a rolling restart and then gets stuck with a yellow cluster state. We are able to use the cluster, but any updates to the manifests are not enforced by the operator due to the yellow cluster state.

prudhvigodithi commented 1 week ago

[Triage] I was able to deploy the cluster successfully with the operator; I also posted the same here: https://github.com/opensearch-project/opensearch-k8s-operator/issues/844#issuecomment-2179085477. @jaskeerat789 @lpeter91 can you please test with the latest version of the operator? Thank you @dtaivpp @get

dtaivpp commented 1 week ago

Okay, this feels very much like a stability issue I was having as well. @prudhvigodithi I have a feeling this is the same issue I had at re:Invent, where roughly 2 out of 10 clusters wouldn't bootstrap correctly.

It might be worth checking with Kyle Davis, who has the code from that, and testing repeatedly. I can test on a local machine to see if I experience a similar issue.

jaskeerat789 commented 1 week ago

@prudhvigodithi Cluster deployment is not an issue for us; we are able to bootstrap a cluster. The problem arises when we try to update something in the cluster manifest and apply it. The operator tries to do a rolling restart of the pods in order to enforce the changes but is unable to trigger the restart for some reason. The operator then proceeds to mark the cluster as being in a yellow state, and any further changes to the manifests are ignored by the operator since the cluster is in a yellow state. Let me know how I can help you understand this issue in more detail.
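
If it helps with triage, a minimal sketch of how one might inspect what the operator currently believes about the cluster; the namespace and cluster name are taken from the example manifest above, and the operator deployment name is an assumption based on a default Helm install, so adjust to your environment:

# The stuck rolling-restart/yellow state should be visible under .status of the custom resource.
kubectl -n default get opensearchclusters my-first-cluster -o yaml

# Operator logs around the "Starting to rolling restart" event
# (deployment name and namespace are assumptions; adjust to your install).
kubectl -n <operator-namespace> logs deploy/opensearch-operator-controller-manager --tail=200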

lpeter91 commented 1 week ago

@lpeter91 can you please test with the latest version of the operator?

@prudhvigodithi Tried it; it's still reproducible. It took me only 4 tries.

Updated versions:

- OpenSearch operator: v2.6.1
- OpenSearch: 2.14.0
- K8s: minikube v1.33.1 on Opensuse-Tumbleweed 20240619; Kubernetes v1.30.0 on Docker 26.1.1
- Helm (only used for installing the operator): v3.15.2