opensearch-project / OpenSearch

🔎 Open source distributed and RESTful search engine.
https://opensearch.org/docs/latest/opensearch/index/
Apache License 2.0

[BUG][Search Backpressure] High Heap Usage Cancellation Due to High Node-Level CPU Utilization #13295

Closed ticheng-aws closed 1 month ago

ticheng-aws commented 6 months ago

Describe the bug

With the current search backpressure cancellation logic, we've noticed that some high-CPU search requests, such as multi-term aggregations, may be cancelled more often because of task-level heap usage settings, even though the node still has sufficient heap memory to process the tasks.

Related component

Search:Resiliency

To Reproduce

Use the multi_term_agg operation in the http_logs workload; it is a representative high-CPU search request.

  1. Set up an OpenSearch cluster and an OpenSearch Benchmark client
  2. Run the test with the multi_term_agg operation in the http_logs workload, gradually increasing the number of search clients, using the sample command below
    opensearch-benchmark execute-test --pipeline=benchmark-only --client-options='basic_auth_user:<USER>,basic_auth_password:<PASSWORD>,timeout:300' --target-hosts '<END_POINT>:443' --kill-running-processes --workload=http_logs --workload-param='target_throughput:none,number_of_replicas:0,number_of_shards:1,search_clients:2'
  3. Monitor the CPU utilization and JVM memory pressure of your OpenSearch cluster
  4. Retrieve the cancellation count with the GET _nodes/stats/search_backpressure REST API
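As a sketch of step 4, the per-node cancellation counts can be summed from the stats response. The JSON shape below is an illustrative sample, not the authoritative response schema:

```python
import json

# Illustrative sample of a _nodes/stats/search_backpressure payload;
# the exact field layout here is an assumption, not the official schema.
SAMPLE = json.loads("""
{
  "nodes": {
    "node-1": {
      "search_backpressure": {
        "search_shard_task": {
          "cancellation_stats": {
            "cancellation_count": 12,
            "cancellation_limit_reached_count": 0
          }
        }
      }
    }
  }
}
""")

def total_cancellations(stats: dict) -> int:
    """Sum shard-task cancellation counts across all nodes in the response."""
    total = 0
    for node in stats["nodes"].values():
        sbp = node["search_backpressure"]
        total += sbp["search_shard_task"]["cancellation_stats"]["cancellation_count"]
    return total

print(total_cancellations(SAMPLE))  # -> 12
```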

Expected behavior

We need to adjust the current search backpressure cancellation logic to cancel tasks based on measurements of node-level resources. For example, if a node is under duress due to high CPU utilization, we should only consider canceling tasks based on CPU settings, rather than heap or elapsed time settings at the task level.
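The proposed gating can be sketched as follows. The names and threshold values are hypothetical placeholders, not OpenSearch's actual classes or defaults; only the gating idea comes from this issue: evaluate a task against a given tracker only when the node is in duress for that same resource.

```python
from dataclasses import dataclass

@dataclass
class TaskUsage:
    cpu_nanos: int
    heap_bytes: int
    elapsed_nanos: int

# Hypothetical task-level thresholds (illustrative numbers only).
CPU_NANOS_THRESHOLD = 15_000_000_000       # 15 s of CPU time
HEAP_BYTES_THRESHOLD = 16 * 1024 * 1024    # 16 MB
ELAPSED_NANOS_THRESHOLD = 45_000_000_000   # 45 s wall clock

def should_cancel(task: TaskUsage, node_cpu_duress: bool, node_heap_duress: bool) -> bool:
    """Gate each task-level tracker on the matching node-level duress signal,
    instead of evaluating every tracker whenever the node is in any duress."""
    if node_cpu_duress and task.cpu_nanos > CPU_NANOS_THRESHOLD:
        return True
    if node_heap_duress and task.heap_bytes > HEAP_BYTES_THRESHOLD:
        return True
    return False

# A CPU-hungry but heap-light task is cancelled under CPU duress...
hot = TaskUsage(cpu_nanos=20_000_000_000, heap_bytes=1_000_000, elapsed_nanos=60_000_000_000)
print(should_cancel(hot, node_cpu_duress=True, node_heap_duress=False))   # True
# ...while a task over the heap threshold is NOT cancelled for heap
# when only CPU is high on the node.
heap_only = TaskUsage(cpu_nanos=1_000, heap_bytes=32 * 1024 * 1024, elapsed_nanos=1_000)
print(should_cancel(heap_only, node_cpu_duress=True, node_heap_duress=False))  # False
```

Under this sketch the heap tracker still fires when the node is actually under heap duress, so the protection against genuine memory pressure is preserved.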


jainankitk commented 5 months ago

Assigning to @kaushalmahi12, due to his prior context with query sandboxing and search backpressure

kaushalmahi12 commented 5 months ago

The backpressure mechanism works as follows, where the heap_domination threshold is a mere 0.05 percent of the total JVM memory available to the process. The same flowchart applies to both SearchTasks and SearchShardTasks.
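To put that 0.05 percent figure in perspective, a quick back-of-the-envelope calculation (the heap sizes are illustrative):

```python
# 0.05% of typical JVM heap sizes: the per-task heap budget that, per the
# comment above, is enough to make a task eligible for cancellation.
for heap_gb in (8, 16, 32):
    heap_bytes = heap_gb * 1024**3
    threshold_bytes = heap_bytes * 0.0005  # 0.05%
    print(f"{heap_gb} GB heap -> {threshold_bytes / 1024**2:.1f} MB threshold")
```

Even on a 32 GB heap this works out to roughly 16 MB per task, which a multi-term aggregation can easily exceed while the node as a whole still has plenty of free heap.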

There are essentially three trackers that can potentially cancel a task: CPU usage, heap usage, and elapsed time.
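As a rough structural sketch (the names and thresholds are illustrative, not the actual OpenSearch tracker classes), each tracker independently checks a task, and under the current behavior any one of them can make a task eligible for cancellation once the node is in duress:

```python
from typing import Callable, NamedTuple

class Task(NamedTuple):
    cpu_nanos: int
    heap_bytes: int
    elapsed_nanos: int

# Three independent trackers; thresholds are illustrative placeholders.
trackers: dict[str, Callable[[Task], bool]] = {
    "cpu":     lambda t: t.cpu_nanos > 15_000_000_000,
    "heap":    lambda t: t.heap_bytes > 16 * 1024 * 1024,
    "elapsed": lambda t: t.elapsed_nanos > 45_000_000_000,
}

def eligible_trackers(task: Task) -> list[str]:
    """Current behavior: once the node is in duress, every tracker is
    consulted, so a CPU-heavy task can also be flagged by the heap tracker."""
    return [name for name, check in trackers.items() if check(task)]

cpu_heavy = Task(cpu_nanos=20_000_000_000, heap_bytes=20 * 1024 * 1024, elapsed_nanos=0)
print(eligible_trackers(cpu_heavy))  # -> ['cpu', 'heap']
```

This is exactly the interaction the issue describes: the node is in duress because of CPU, but the heap tracker ends up driving cancellations.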

sohami commented 5 months ago

@kaushalmahi12 Agreed, and that is what this issue is trying to explain. I think we should check the duress condition for each tracker as well. For example: if the node is under heap duress, then only evaluate the task for heap-based cancellation.

kaushalmahi12 commented 5 months ago

That's right, @sohami. The weird thing is that tasks are taken up for cancellation even when the total JVM allocation by coordinator/shard-level tasks is only 0.05%. Time-based cancellation is also not justified when the cluster has very light search traffic and the user is fine with higher latencies for those queries (given that only CPU is high, and admission control is already in place to safeguard against new incoming requests).

I think we should increase this threshold for search workload JVM usage (or remove it altogether) and separate out the corresponding trackers.

peternied commented 5 months ago

[Triage] @ticheng-aws Thanks for creating this issue; looking forward to seeing this resolved.

jed326 commented 1 month ago

Looks like this was fixed and released in 2.15, @kaushalmahi12 please re-open if that's not correct. Thanks!