simonli866 opened 2 years ago
Instead of `kubectl logs stackstorm-redis-node-0 -c sentinel`, use `kubectl logs --previous stackstorm-redis-node-0 -c sentinel`. I suspect the most important messages weren't included for the failing container.
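For reference, a minimal sketch of the two commands; the pod and container names are the ones from this thread:

```bash
# Logs from the currently running container (may miss the crash messages):
kubectl logs stackstorm-redis-node-0 -c sentinel

# Logs from the *previous* instance of the container, i.e. the one that
# crashed and was restarted -- this usually contains the real failure reason:
kubectl logs --previous stackstorm-redis-node-0 -c sentinel
```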
It's interesting that `redis-node-2` has finally reached its alive and up state, while the others are down.
Can you compare those for any differences and anomalies, including logs from other pods?
Also show the full `kubectl describe` output for the failing pods. `kubectl get pv,pvc,sc` would help too.
Could you describe the resources (memory/cpu/storage) you have on that K8s cluster?
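Something like the following, assuming the failing pods match the `stackstorm-redis-node-*` naming above, would gather the requested information:

```bash
# Describe the failing Redis pods (the Events section at the bottom often
# shows why a container is crash-looping or stuck pending):
kubectl describe pod stackstorm-redis-node-0 stackstorm-redis-node-1

# Persistent volumes, claims, and storage classes used by the cluster:
kubectl get pv,pvc,sc

# Node capacity and current allocation (memory/cpu/storage):
kubectl describe nodes
```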
Sorry, this problem doesn't reproduce every time; I will update the issue with more details the next time it happens.
The Redis issue has come up again
@armab
@ShimingLee is the behavior different/better when Redis is deployed directly from Bitnami, with the bundled Redis disabled in the stackstorm-ha `values.yaml`? You may have to provide the connection string in `st2.conf` (via a ConfigMap).
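A minimal sketch of what that override could look like; the key names (`redis.enabled`, `st2.config`) are assumed to match the stackstorm-ha chart version in use, and the host/password values are placeholders, so verify against your chart's default `values.yaml`:

```yaml
# values.yaml override for stackstorm-ha (key names assumed from the chart;
# check your chart version's defaults before applying):
redis:
  enabled: false          # don't deploy the bundled Redis

st2:
  config: |
    [coordination]
    # Point StackStorm at the externally deployed Bitnami Redis;
    # host, port, and password below are placeholders.
    url = redis://:<password>@<external-redis-host>:6379
```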
@arms11 I use the Bitnami chart directly. Why not use `values.yaml` directly? Why do I need to configure a connection string?
Another advantage of what @arms11 suggested is that trying the Redis chart in isolation could help pinpoint the root cause of the issue, so you don't need to re-deploy the whole st2 cluster every time and can deal with the Redis issue only.
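For example, something along these lines; the chart parameters are illustrative and assume the Bitnami Redis chart with Sentinel enabled, matching the `*-node-*`/`sentinel` naming seen in this issue:

```bash
# Add the Bitnami repo and install Redis on its own, with Sentinel enabled
# as in the stackstorm-ha setup (values are illustrative):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis-test bitnami/redis \
  --set sentinel.enabled=true \
  --set replica.replicaCount=3

# Watch the standalone deployment and check the sentinel container logs:
kubectl get pods -w
kubectl logs redis-test-node-0 -c sentinel
```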
BTW could you provide more info about your K8s environment and resources?
Two pods cannot be started and the web interface cannot be accessed, but the console output shows that the installation succeeded.
The error log is here: