I created a multi-Kubernetes environment and an etcd cluster in it, then issued a restart. The pods were restarted as expected, but the ops status stays `Running` indefinitely with no progress. We should support ops in a multi-k8s environment.
➜ ~ k config use-context k8s-control
Switched to context "k8s-control".
➜ ~ k get ops
NAME                      TYPE      CLUSTER     STATUS    PROGRESS   AGE
etcdtrgbv-restart-q6c2v   Restart   etcdtrgbv   Running   0/3        7m
➜ ~ k config use-context k8s-1
Switched to context "k8s-1".
➜ ~ k get pod |grep etcd
etcdtrgbv-etcd-0   3/3   Running   0   6m55s
etcdtrgbv-etcd-2   3/3   Running   0   6m37s
➜ ~ k config use-context k8s-2
Switched to context "k8s-2".
➜ ~ k get pod |grep etcd
etcdtrgbv-etcd-1   3/3   Running   0   7m23s
➜ ~ k describe ops etcdtrgbv-restart-q6c2v
Name:         etcdtrgbv-restart-q6c2v
Namespace:    default
Labels:       app.kubernetes.io/instance=etcdtrgbv
              app.kubernetes.io/managed-by=kubeblocks
              ops.kubeblocks.io/ops-type=Restart
Annotations:  <none>
API Version:  apps.kubeblocks.io/v1alpha1
Kind:         OpsRequest
Metadata:
  Creation Timestamp:  2024-04-29T04:46:22Z
  Finalizers:
    opsrequest.kubeblocks.io/finalizer
  Generate Name:  etcdtrgbv-restart-
  Generation:     2
  Managed Fields:
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
      f:spec:
        .:
        f:clusterRef:
        f:restart:
          .:
          k:{"componentName":"etcd"}:
            .:
            f:componentName:
        f:ttlSecondsBeforeAbort:
        f:type:
    Manager:      kbcli
    Operation:    Update
    Time:         2024-04-29T04:46:22Z
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"opsrequest.kubeblocks.io/finalizer":
        f:labels:
          f:ops.kubeblocks.io/ops-type:
        f:ownerReferences:
          .:
          k:{"uid":"eaddaf8d-e036-4f72-b90a-06822bbbcbac"}:
    Manager:      manager
    Operation:    Update
    Time:         2024-04-29T04:46:22Z
    API Version:  apps.kubeblocks.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:clusterGeneration:
        f:components:
          .:
          f:etcd:
            .:
            f:phase:
        f:conditions:
          .:
          k:{"type":"Restarting"}:
            .:
            f:lastTransitionTime:
            f:message:
            f:reason:
            f:status:
            f:type:
          k:{"type":"Validated"}:
            .:
            f:lastTransitionTime:
            f:message:
            f:reason:
            f:status:
            f:type:
          k:{"type":"WaitForProgressing"}:
            .:
            f:lastTransitionTime:
            f:message:
            f:reason:
            f:status:
            f:type:
        f:phase:
        f:progress:
        f:startTimestamp:
    Manager:      manager
    Operation:    Update
    Subresource:  status
    Time:         2024-04-29T04:47:17Z
  Owner References:
    API Version:  apps.kubeblocks.io/v1alpha1
    Kind:         Cluster
    Name:         etcdtrgbv
    UID:          eaddaf8d-e036-4f72-b90a-06822bbbcbac
  Resource Version:  19631
  UID:               e376f840-ae45-4ea8-a5c7-8892201d2418
Spec:
  Cluster Ref:  etcdtrgbv
  Restart:
    Component Name:  etcd
  Ttl Seconds Before Abort:  0
  Type:                      Restart
Status:
  Cluster Generation:  3
  Components:
    Etcd:
      Phase:  Running
  Conditions:
    Last Transition Time:  2024-04-29T04:46:22Z
    Message:               wait for the controller to process the OpsRequest: etcdtrgbv-restart-q6c2v in Cluster: etcdtrgbv
    Reason:                WaitForProgressing
    Status:                True
    Type:                  WaitForProgressing
    Last Transition Time:  2024-04-29T04:46:22Z
    Message:               OpsRequest: etcdtrgbv-restart-q6c2v is validated
    Reason:                ValidateOpsRequestPassed
    Status:                True
    Type:                  Validated
    Last Transition Time:  2024-04-29T04:46:22Z
    Message:               Start to restart database in Cluster: etcdtrgbv
    Reason:                RestartStarted
    Status:                True
    Type:                  Restarting
  Phase:            Running
  Progress:         0/3
  Start Timestamp:  2024-04-29T04:46:22Z
Events:
  Type    Reason                    Age                    From                    Message
  ----    ------                    ----                   ----                    -------
  Normal  WaitForProgressing        8m24s                  ops-request-controller  wait for the controller to process the OpsRequest: etcdtrgbv-restart-q6c2v in Cluster: etcdtrgbv
  Normal  ValidateOpsRequestPassed  8m24s (x2 over 8m24s)  ops-request-controller  OpsRequest: etcdtrgbv-restart-q6c2v is validated
  Normal  RestartStarted            8m24s (x2 over 8m24s)  ops-request-controller  Start to restart database in Cluster: etcdtrgbv
➜ ~
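The symptom is consistent with progress being counted against only one cluster: the etcd pods are spread across `k8s-1` and `k8s-2`, yet the OpsRequest never moves past `0/3`. As a rough illustration of what multi-k8s support would need, here is a minimal Go sketch (hypothetical types and names, not actual KubeBlocks code) where restart progress is aggregated over pod snapshots from every member cluster rather than a single one:

```go
package main

import "fmt"

// PodStatus is a simplified, hypothetical view of a pod after the
// restart was issued; a real controller would derive Ready from the
// recreated pod's conditions in each member cluster.
type PodStatus struct {
	Name  string
	Ready bool
}

// progress totals restart progress over pods collected from every
// member cluster, keyed by cluster/context name, instead of only
// the cluster the controller happens to run in.
func progress(clusters map[string][]PodStatus) (done, total int) {
	for _, pods := range clusters {
		for _, p := range pods {
			total++
			if p.Ready {
				done++
			}
		}
	}
	return done, total
}

func main() {
	// Snapshot mirroring this report: two pods in k8s-1, one in k8s-2,
	// all recreated and ready after the restart.
	clusters := map[string][]PodStatus{
		"k8s-1": {
			{Name: "etcdtrgbv-etcd-0", Ready: true},
			{Name: "etcdtrgbv-etcd-2", Ready: true},
		},
		"k8s-2": {
			{Name: "etcdtrgbv-etcd-1", Ready: true},
		},
	}
	done, total := progress(clusters)
	fmt.Printf("%d/%d\n", done, total) // all pods ready, so this prints 3/3
}
```

With all three pods counted across both contexts, the ops would report `3/3` and complete, instead of sitting at `0/3` forever.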