oruchreis opened this issue 6 months ago
Hi,
Would you please share the method/steps used to install the keydb operator in the cluster?
Hi, I installed it as described in the documentation. I've also reinstalled the operator three times, but no luck. The exact steps are:
I am encountering the same issue.
@oruchreis thanks for the info. Does the error also occur in standalone mode?
From the installation method and the shared logs, there is a message that points to a likely cause: Secret creation is failing:
TASK [krestomatio.k8s.keydb : resource definitions for keydb in state: present, kind: Secret] ***
task path: /opt/ansible/.ansible/collections/ansible_collections/krestomatio/k8s/roles/v1alpha1/database/keydb/tasks/common/k8s/object.yml:1
redirecting (type: lookup) ansible.builtin.k8s to kubernetes.core.k8s
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}
The details of the Secret tasks are hidden to prevent sensitive information from leaking into the logs. I would need to recreate a dev environment that is as similar as possible in order to dig deeper.
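One way to surface the error that `no_log: true` is hiding is to test Secret creation with the operator's own credentials. This is a sketch; the namespace and service account names below are assumptions for a default install and must be adjusted to match yours:

```shell
# Assumed names; substitute the namespace and service account of your install.
NS=keydb-operator-system
SA=keydb-operator-controller-manager

# Is the operator's service account allowed to create Secrets in the
# namespace where the Keydb instance lives?
kubectl auth can-i create secrets -n "$NS" \
  --as="system:serviceaccount:${NS}:${SA}"

# Server-side dry-run of a throwaway Secret with the same identity, to get
# the real API error message instead of the censored Ansible output.
kubectl create secret generic debug-secret -n "$NS" \
  --from-literal=key=value --dry-run=server \
  --as="system:serviceaccount:${NS}:${SA}"
```

If `auth can-i` answers `no`, the operator's RBAC is the problem; if the dry-run fails, the API server's error message should say why.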
@jobcespedes yes, I've tried standalone mode as described in the documentation and the error occurs there, too. The CRDs are created, but no objects are created from the custom resources.
> kubectl get keydb
NAME AGE STATUS SINCE MODE SERVICE
keydb-sample 20m Failed 20m standalone
keydb-sample-multimaster 20d Failed 20d multimaster
> kubectl describe keydb keydb-sample
Name: keydb-sample
Namespace: **
Labels: <none>
Annotations: keydb.krestomat.io/observedGeneration: 1
keydb.krestomat.io/ready: False
API Version: keydb.krestomat.io/v1alpha1
Kind: Keydb
Metadata:
Creation Timestamp: 2024-05-27T06:34:41Z
Finalizers:
keydb.krestomat.io/finalizer
Generation: 1
Managed Fields:
API Version: keydb.krestomat.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"keydb.krestomat.io/finalizer":
Manager: ansible-operator
Operation: Update
Time: 2024-05-27T06:34:41Z
API Version: keydb.krestomat.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:spec:
.:
f:keydbExtraConfig:
f:keydbMode:
f:keydbPvcDataSize:
f:keydbResourceLimits:
f:keydbResourceLimitsCpu:
f:keydbResourceLimitsMemory:
Manager: dashboard
Operation: Update
Time: 2024-05-27T06:34:41Z
API Version: keydb.krestomat.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
f:observedGeneration:
f:ready:
f:state:
Manager: OpenAPI-Generator
Operation: Update
Subresource: status
Time: 2024-05-27T06:34:44Z
API Version: keydb.krestomat.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:keydb.krestomat.io/observedGeneration:
f:keydb.krestomat.io/ready:
Manager: OpenAPI-Generator
Operation: Update
Time: 2024-05-27T06:34:45Z
API Version: keydb.krestomat.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
Manager: ansible-operator
Operation: Update
Subresource: status
Time: 2024-05-27T06:46:52Z
Resource Version: 733901325
UID: 341fa506-e309-49ec-a97e-6bfe161eed26
Spec:
Keydb Extra Config: maxmemory 900mb
maxmemory-policy allkeys-lru
Keydb Mode: standalone
Keydb Pvc Data Size: 1Gi
Keydb Resource Limits: true
Keydb Resource Limits Cpu: 1
Keydb Resource Limits Memory: 1Gi
Status:
Conditions:
Last Transition Time: 2024-05-27T06:34:45Z
Message: There was an error before completing all tasks
Reason: Error
Status: False
Type: Ready
Ansible Result:
Changed: 2
Completion: 2024-05-27T06:34:45.323368
Failures: 1
Ok: 9
Skipped: 3
Last Transition Time: 2024-05-27T06:34:45Z
Message: unknown playbook failure
There was an error before completing all tasks
Reason: Failed
Status: True
Type: Failure
Last Transition Time: 2024-05-27T06:34:45Z
Message:
Reason:
Status: False
Type: Successful
Last Transition Time: 2024-05-27T06:46:48Z
Message: Running reconciliation
Reason: Running
Status: False
Type: Running
Observed Generation: 1
Ready: False
State: Failed
Events: <none>
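For reference, the Spec section in the describe output above corresponds to a manifest along these lines. This is a reconstruction, not the exact sample file: the field names and values are taken from the output, while the namespace placeholder and the string quoting of the CPU limit are assumptions:

```shell
# Reconstructed from the Spec fields of the describe output; apply in the
# namespace where the failing keydb-sample instance was created.
kubectl apply -n <your-namespace> -f - <<'EOF'
apiVersion: keydb.krestomat.io/v1alpha1
kind: Keydb
metadata:
  name: keydb-sample
spec:
  keydbMode: standalone
  keydbPvcDataSize: 1Gi
  keydbResourceLimits: true
  keydbResourceLimitsCpu: "1"
  keydbResourceLimitsMemory: 1Gi
  keydbExtraConfig: |
    maxmemory 900mb
    maxmemory-policy allkeys-lru
EOF
```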
@jobcespedes Hello, is there any progress?
Unfortunately, not yet. I haven't found time to create an environment with the same conditions on the Azure provider.
Hi, I'm trying to test the multimaster sample, but I get this error:
Which example are you working with? keydb_v1alpha1_keydb_multimaster.yaml, operator version: 0.3.14
What is the current behavior? No Keydb instances are created; there are no StatefulSets, Pods, etc. after applying the sample.
Please tell us about your environment: Azure AKS (Linux containers). Kubernetes and node pool version: 1.28.3. Default storage class: managed-premium.
Log of keydb operator manager container
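For anyone reproducing this, a manager-container log like the one attached can be collected with something like the following. The deployment name and namespace are assumptions for a default install; adjust to match yours:

```shell
# Tail the operator manager container's log; the failing Ansible task for
# the Secret resource appears there, as quoted earlier in this thread.
kubectl logs -n keydb-operator-system \
  deploy/keydb-operator-controller-manager -c manager --tail=200
```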