Open abseht opened 10 months ago
Hi @abseht
Hello @tirumaraiselvan !
We have geometry data in our tables. On the other end we have 15-ish services; the regular workload is reading and writing rows with Point, Polygon, and MultiPolygon data types. We also use Subscriptions in some of our web applications.
Please let me know if I can help further.
Thank you!
Hi @abseht, how did you produce those graphs? And how did you raise the HPA limit for the hasura container? I wonder if this might help me debug an issue my team is experiencing.
I'm debugging an issue where the Hasura Admin UI becomes exceedingly slow when run inside Kubernetes on a local development machine (with minikube).
Hello @delaurentis. The top graph is a visualization of basic K8s metrics in Kibana. The bottom graph is copied from the console of our database provider, Crunchy. I'm not sure how to reproduce these metrics in minikube.
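In case it helps: Kibana dashboards like ours are typically fed by Metricbeat's kubernetes module, which scrapes per-pod and per-container CPU/memory from the kubelet. A minimal sketch of that config (the metricsets and endpoint shown are generic assumptions, not our exact setup):

```yaml
# Metricbeat kubernetes module: ships node/pod/container CPU and memory
# metrics from each node's kubelet into Elasticsearch for Kibana.
metricbeat.modules:
  - module: kubernetes
    metricsets:
      - node
      - pod
      - container
    period: 10s
    # Kubelet metrics endpoint on the local node.
    hosts: ["https://${NODE_NAME}:10250"]
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    ssl.verification_mode: "none"
```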
Version Information
Server Version: v2.33.3
Environment
Kubernetes deployment. Nothing special in charts or values...
What is the current behaviour?
While CPU consumption tracks the actual load, RAM consumption grows indefinitely. This behaviour became apparent after the HPA limits were raised; before that, K8s just killed the pods very frequently. The behaviour drives our managed Postgres instance crazy too.
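For reference, "raising the HPA limits" in our setup means bumping the container's memory ceiling and the autoscaler's replica range. A rough sketch of the shape of that config (names and numbers are illustrative, not our production values):

```yaml
# Illustrative only: an HPA scaling the hasura Deployment on CPU.
# The container's memory limit is what the ever-growing RSS eventually
# hits, at which point K8s OOM-kills the pod.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hasura
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hasura
  minReplicas: 2
  maxReplicas: 8               # raised from a lower value
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
# Corresponding fragment of the Deployment's pod template:
#   resources:
#     requests:
#       cpu: 500m
#       memory: 1Gi
#     limits:
#       cpu: "2"
#       memory: 4Gi            # raised so pods stop dying immediately
```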
What is the expected behaviour?
Service resource consumption should rise and fall together with the load.
How to reproduce the issue?
Deploy a Hasura instance in K8s, apply intermittent load, and wait for a couple of days.
Any possible solutions/workarounds you're aware of?
We have been struggling with Hasura stability issues for some time. We tried to adjust the queries and parameters in our services, but it did not change the picture: pods kept dying. After we significantly raised the HPA limits for Hasura, the pods stopped dying, but our managed database started going into 'failover mode' out of the blue, and the graph started showing the steady consumption growth described above. As of now, the workaround is to kill pods as soon as they become 3 days old.
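For anyone who wants to automate that workaround, a sketch of what we mean: a CronJob that rolls the deployment so no pod lives long enough for the memory growth to hit the limit. The names, schedule, and the deployment-restarter ServiceAccount (which needs RBAC permission to get and patch deployments) are assumptions to adapt, not a recommendation:

```yaml
# Illustrative sketch: restart the hasura Deployment roughly every
# third day, before memory growth triggers OOM kills or DB failovers.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hasura-restart
spec:
  schedule: "0 4 */3 * *"      # 04:00 on every third day of the month
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deployment-restarter
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:latest
              command:
                - kubectl
                - rollout
                - restart
                - deployment/hasura
```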
Please consider fixing this. Also, are there reference numbers for average Hasura resource consumption and other metrics?
Thank you!
Keywords
RAM, resource consumption