alexbrand opened 6 years ago
After further investigation, I realized that what is actually limiting us is the rate at which the API server can create services.
The API server we are using for testing can handle approximately 4 services per second. I arrived at this number using `time` and `kubectl`. This means that discovering 1000 services will take approximately 4 minutes, which is pretty much what we are seeing in our performance tests.
Lowering the priority on this for now.
cc: @stevesloka @rosskukulinski
Are we creating Services one by one, or are we using the `ServiceList` type?
One by one. There is no API endpoint that takes a list of services for creation AFAIK.
@alexbrand is this the same as #150 which was merged?
@rosskukulinski This is different. This issue is about the queue we use to put all the services and endpoints that we have found in the backend cluster, before they are created in the Gimbal cluster. Currently, we are using the default rate limiter for this queue.
Issue #150 was about the Kubernetes client, which has its own built-in rate limiter that controls queries per second (QPS).
Is this a BUG REPORT, PERFORMANCE REPORT or FEATURE REQUEST?: FEATURE REQUEST
What happened: During performance testing, we noticed that the discoverers were taking a long time to drain the discovery queue when testing with a large number of services (>1k). The issue is that we are using the default client-go rate-limited workqueue, which limits us to 10 items per second.
What you expected to happen: Ideally, the rate limit should be configurable so that it can be adjusted according to the environment.