vanvalenlab / kiosk-console

DeepCell Kiosk Distribution for Kubernetes on GKE and AWS
https://deepcell-kiosk.readthedocs.io

1 GPU clusters can get stuck in a DEADLINE_EXCEEDED loop. #338

Open willgraf opened 4 years ago

willgraf commented 4 years ago

Describe the bug
Sometimes a cluster with a single GPU can get stuck with too many consumers sending requests that take too long and are rejected with a DEADLINE_EXCEEDED error. This can happen when the GPU is at high usage, which the Prometheus scaling rule accounts for. However, with only one GPU the cluster can get stuck with the GPU at 0% usage while all requests are still timing out. I have not seen this in any cluster with > 1 GPU.

To Reproduce
I've seen this in 100k benchmarking runs, though it does not happen regularly.

Expected behavior
The consumers should scale down so that the GPU can start processing requests within a reasonable time.

This may be fixed with a better backoff on the consumer side, a more effective GRPC_TIMEOUT setting, or improvements to the scaling rule. It may also be resolved by the improved metrics discussed in #278.
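As a rough illustration of the first two ideas (consumer-side backoff plus the GRPC_TIMEOUT deadline), here is a minimal Python sketch. It is not the kiosk's actual retry code: stub, request, and predict_with_backoff are hypothetical names, and GRPC_TIMEOUT stands in for the kiosk's per-call deadline setting. The point is that a DEADLINE_EXCEEDED response triggers an exponentially growing sleep rather than an immediate retry, so consumers stop hammering a single backed-up tf-serving pod.

import time

import grpc

GRPC_TIMEOUT = 30  # per-call deadline in seconds (stand-in for the kiosk setting)

def predict_with_backoff(stub, request, max_retries=5, base_delay=1.0):
    """Retry a Predict call, backing off exponentially on DEADLINE_EXCEEDED."""
    for attempt in range(max_retries):
        try:
            return stub.Predict(request, timeout=GRPC_TIMEOUT)
        except grpc.RpcError as err:
            if err.code() != grpc.StatusCode.DEADLINE_EXCEEDED:
                raise  # only back off on timeouts
            # sleep 1s, 2s, 4s, ... to give tf-serving room to catch up
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError('Predict timed out after {} retries'.format(max_retries))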

Screenshots
The HPA status where it can get stuck (tf-serving at 0 and segmentation-consumer at 1):

[Screenshot: HPA status, 2020-05-05 at 1:02 PM]

Additional context
The TensorFlow Serving logs had some unusual warnings that may or may not be related:

[evhttp_server.cc : 238] NET_LOG: Entering the event loop ...
2020-05-05 19:25:59.266451: W external/org_tensorflow/tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 268435456 exceeds 10% of system memory.
2020-05-05 19:25:59.386629: W external/org_tensorflow/tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 268435456 exceeds 10% of system memory.
2020-05-05 19:25:59.746314: W external/org_tensorflow/tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 536870912 exceeds 10% of system memory.
2020-05-05 19:26:00.081015: W external/org_tensorflow/tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 536870912 exceeds 10% of system memory.
2020-05-05 19:26:10.632290: W external/org_tensorflow/tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 268435456 exceeds 10% of system memory.
willgraf commented 4 years ago

It seems you can just redeploy tf-serving as a workaround:

helm delete tf-serving --purge ; helmfile -l name=tf-serving sync
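For reference, helm delete tf-serving --purge is Helm 2 syntax that fully removes the release, and helmfile -l name=tf-serving sync then recreates it from the kiosk's helmfile; redeploying the release this way seems to be enough to clear the stuck state.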