hollinwilkins opened 6 years ago
I'm seeing that this may be an issue with CPU usage. As I increase the number of Istio-injected sidecars, the istio-proxy sidecars take more and more CPU (25% on average).
Update here, I see there is an issue that is probably the cause of this: https://github.com/istio/istio/issues/1485
Update: even with reduced CPU overhead, TCP services stop working with too many istiofied pods
Seems like 26 istiofied pods is the limit before TCP connections start being refused (presumably by Envoy).
Is this a BUG or FEATURE REQUEST?: This is a bug
Did you review istio.io/help and existing issues to identify if this is already solved or being worked on?: YES
Bug: YES
What Version of Istio and Kubernetes are you using, where did you get Istio from, Installation details
istioctl: 0.4.0
kubectl: 1.8.4.gke0
Is Istio Auth enabled or not? Did you install istio.yaml, istio-auth.yaml...: Istio Auth is not enabled.
What happened: I created 6 Istio-injected pods that connect to TCP services (some RabbitMQ, some Postgres). When creating the 7th one, it is unable to connect to either RabbitMQ or Postgres. RabbitMQ and Postgres are both headless services that are not injected with Istio sidecars.
I can create as many non-injected pods as I want and I don't see any connection issues to rabbitmq or postgres.
What you expected to happen: The TCP connections can be established without issue.
How to reproduce it: Spin up a TCP service (MongoDB, Postgres, or RabbitMQ). Create N injected pods that connect to it and observe that, past a certain number of pods, they can no longer connect.
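A minimal sketch of those reproduction steps, assuming manual sidecar injection via `istioctl kube-inject`, an existing headless `postgres` Service (no sidecar), and hypothetical pod names; this is an illustration of the report, not the reporter's exact setup:

```shell
#!/bin/sh
# Sketch: create 30 Istio-injected pods that dial a headless TCP service,
# then probe TCP connectivity from each to find where connections fail.
# Assumes: a headless Service named "postgres" on port 5432, Istio 0.4.x
# with manual injection. Pod names "tcp-client-N" are hypothetical.

for i in $(seq 1 30); do
  kubectl run "tcp-client-$i" --image=postgres:10 --restart=Never \
    --dry-run -o yaml -- sleep 3600 \
    | istioctl kube-inject -f - \
    | kubectl apply -f -
done

# Once the pods are Running, test a TCP connection from each one.
# Per the report, clients beyond roughly the 26th start being refused.
for i in $(seq 1 30); do
  kubectl exec "tcp-client-$i" -c "tcp-client-$i" -- \
    pg_isready -h postgres -p 5432 >/dev/null 2>&1 \
    || echo "pod $i: connection failed"
done
```

Running the same loop without the `istioctl kube-inject` step should show no failures, matching the observation that non-injected pods connect without issue.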
Feature Request: N
Describe the feature: N/A