jaspersorrio opened 1 year ago
Hi @jaspersorrio, this is the first time I see this. Do you have the logs from the `Completed` Pod?
What is the solution to this problem? I am seeing the same issue.
Hi Mohit,
I managed to solve this temporarily by increasing the CPU & RAM.
What does your workload look like?
Hi @mohit-sarvam, we have noticed this as well, and as @jaspersorrio mentioned, increasing the resources helps. It seems to be an issue when running out of memory: either the pod crashes, or kube-system decides to kill it by sending a signal telling the pod to shut down gracefully. This leaves the Pod in a `Completed` state. We have not yet figured out a nice way to overcome this. You can try setting `requests` and `limits` for the Weaviate Pods so the Pod crashes properly on OOM instead of ending up `Completed`.
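A minimal sketch of what such `requests`/`limits` could look like in the container spec. The container name and the sizes here are assumptions for illustration, not recommendations; tune them to your workload:

```yaml
# Hypothetical fragment of the Weaviate StatefulSet pod template.
# With a memory limit set, exceeding it makes the kubelet terminate the
# container with reason OOMKilled instead of it ending up as Completed.
containers:
  - name: weaviate          # container name is an assumption
    resources:
      requests:
        cpu: "1"
        memory: 4Gi
      limits:
        memory: 4Gi         # memory limit triggers a proper OOM kill
```

Setting `requests.memory` equal to `limits.memory` also makes the Pod less likely to be evicted under node memory pressure.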
Thanks @StefanBogdan @jaspersorrio, I am not seeing the issue after increasing the memory and number of nodes.
Hi Team,
Not sure if you are also observing this behaviour and whether it is normal.
One of the pods randomly went into the `Completed` status, and Kubernetes is not attempting to restart it.
```shell
kubectl version --output=yaml
kubectl get all -n weaviate-prod
kubectl describe pod weaviate-2 -n weaviate-prod
```
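To confirm whether memory was the culprit, a few more commands are useful; this is a sketch against a live cluster, reusing the pod and namespace names from the commands above:

```shell
# Exit reason of the last terminated container (e.g. OOMKilled)
kubectl get pod weaviate-2 -n weaviate-prod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# Logs from the previous container instance, before it stopped
kubectl logs weaviate-2 -n weaviate-prod --previous

# Recent events in the namespace (evictions, OOM kills, failed probes)
kubectl get events -n weaviate-prod --sort-by=.lastTimestamp
```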