elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

[Research] Possible performance improvements in Lens expressions #182151

Closed · markov00 closed 1 week ago

markov00 commented 2 weeks ago

I will collect here some findings related to performance bottlenecks I've found in the current Lens/SearchStrategy architecture, along with possible improvements.

Unnecessary multiple requests with the same esagg query

When a search is sent to ES and a response is received, the search service checks whether the request needs a post-flight request. https://github.com/elastic/kibana/blob/74fdd1b5f25c783ef95b3fddb2eaef03ff597345/src/plugins/data/common/search/search_source/search_source.ts#L539-L548 If needed, it transforms the response into a partial response and updates the body with the post-flight request. This works correctly when the post-flight is actually necessary, but due to the current implementation the post-flight request is always "applied" even when not needed, causing a subsequent request to be sent to ES. This results in an increased number of requests to ES carrying the same esagg query.

Analysis

The current method that checks whether a request needs a subsequent post-flight request relies on a loose check in the function hasPostFlightRequests, which only verifies that the agg property type.postFlightRequest is a function.

https://github.com/elastic/kibana/blob/74fdd1b5f25c783ef95b3fddb2eaef03ff597345/src/plugins/data/common/search/search_source/search_source.ts#L474-L483

This function is defined even when it is not required. For example, in a terms aggregation without the other bucket, the function is still there but just returns its input unchanged: https://github.com/elastic/kibana/blob/74fdd1b5f25c783ef95b3fddb2eaef03ff597345/src/plugins/data/common/search/aggs/buckets/terms.ts#L93 In all the other cases it defaults to an identity function, so hasPostFlightRequests will always return true. https://github.com/elastic/kibana/blob/74fdd1b5f25c783ef95b3fddb2eaef03ff597345/src/plugins/data/common/search/aggs/agg_type.ts#L311
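To illustrate the point, here is a minimal TypeScript sketch of the loose check described above and one possible stricter alternative. The shapes are illustrative assumptions, not the actual Kibana types: only the postFlightRequest property and its identity default mirror the linked code, while AggConfigLike and the requiresPostFlight flag are hypothetical.

```ts
// Simplified sketch, not the actual Kibana implementation.
interface AggConfigLike {
  enabled: boolean;
  type: {
    // In the current code this defaults to an identity function,
    // so it is effectively always defined.
    postFlightRequest?: (resp: unknown, ...rest: unknown[]) => unknown;
  };
  // Hypothetical: an explicit flag set only when a follow-up request
  // is really needed (e.g. a terms agg with the "other bucket" enabled).
  requiresPostFlight?: boolean;
}

// Current behaviour: any enabled agg with a postFlightRequest function
// (i.e. every agg, because of the identity default) triggers a second
// round-trip to ES.
function hasPostFlightRequestsLoose(aggs: AggConfigLike[]): boolean {
  return aggs.some(
    (agg) => agg.enabled && typeof agg.type.postFlightRequest === 'function'
  );
}

// Possible stricter check: only aggs that explicitly opt in cause the
// partial-response / post-flight path to run.
function hasPostFlightRequestsStrict(aggs: AggConfigLike[]): boolean {
  return aggs.some((agg) => agg.enabled && agg.requiresPostFlight === true);
}
```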

wait_for_completion_timeout value is too low, so a full response cannot be processed without delays

This parameter, used in async search, sets the timeout after which the async search returns with a partial result. It is currently set to 200ms. https://github.com/elastic/kibana/blob/b8d8c737e6cc7889c19a6e7984d618bf378ee617/src/plugins/data/config.ts#L58

After this 200ms interval the polling mechanism kicks in, and the results are then delayed each time by at least ~300ms https://github.com/elastic/kibana/blob/b8d8c737e6cc7889c19a6e7984d618bf378ee617/src/plugins/data/common/search/poll_search.ts#L20-L35

I may be missing some context here, but I don't see any major drawback in increasing this value to at least 1s, as proposed in https://github.com/elastic/kibana/issues/157837#issuecomment-1663100478, or even more. The main cost would be a connection between ES and Kibana held open for ~1 second, instead of opening and closing a new one about 5 times in the same interval.
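To make the latency cost concrete, here is a rough sketch of the polling pattern described above, with the interval as a parameter. It is not the real poll_search.ts implementation; the search callback and the isPartial field are placeholders standing in for the async-search client.

```ts
// Rough sketch of the polling pattern (not the actual Kibana code).
// With wait_for_completion_timeout = 200ms and a ~300ms polling interval,
// a query that takes ~1s needs several extra round-trips before the full
// result is available; with a ~1s timeout the first response would often
// already be complete.
interface AsyncSearchResponse<T> {
  isPartial: boolean;
  result: T;
}

async function pollUntilComplete<T>(
  search: () => Promise<AsyncSearchResponse<T>>,
  pollIntervalMs = 300
): Promise<T> {
  let response = await search();
  while (response.isPartial) {
    // Each extra iteration adds at least pollIntervalMs of latency on top
    // of the query time itself.
    await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
    response = await search();
  }
  return response.result;
}
```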

getXDomain can be sped up

When using cartesian charts, we compute the x domain. If that domain is big, the computation time becomes significant: for a 50k data point dataset it takes ~40ms. This can probably be cut in half with a better data-processing strategy, avoiding multiple array scans to sort, filter, and map values and instead looping over the data once with a reduce (see the sketch below).

[Screenshot: profiling capture, 2024-04-30 at 17:24:52]
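A sketch of the single-pass idea, assuming the x domain reduces to a numeric [min, max] over the x values; the real getXDomain handles more cases (ordinal scales, custom domains), so this only illustrates the "one scan instead of sort/filter/map" point. The DataPoint shape is hypothetical.

```ts
// Illustrative only: compute a numeric [min, max] x extent in a single pass
// over the data, instead of filtering, mapping and sorting in separate scans.
interface DataPoint {
  x: number | null;
}

function computeXExtent(data: DataPoint[]): [number, number] | undefined {
  let min = Infinity;
  let max = -Infinity;
  // Single scan (whether written as a plain loop or a reduce).
  for (const { x } of data) {
    if (x == null || Number.isNaN(x)) continue; // skip missing values
    if (x < min) min = x;
    if (x > max) max = x;
  }
  return min <= max ? [min, max] : undefined;
}
```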
elasticmachine commented 2 weeks ago

Pinging @elastic/kibana-visualizations (Team:Visualizations)

dej611 commented 2 weeks ago

I may be missing some context here, but I don't see any major drawback in increasing this value to at least 1s, as proposed in https://github.com/elastic/kibana/issues/157837#issuecomment-1663100478, or even more.

I think it makes sense to increase it. From some experiments in the past we saw that "quick" responses from ES were within 150ms; the test didn't have any particular statistical significance, but it was enough to justify raising the value from 100ms to 200ms. I think pushing it to 400/500ms might be worth it. wdyt @ppisljar ?