
[RFC] Proposal for a Disk-based Tiered Caching Mechanism in OpenSearch #9001

Open kiranprakash154 opened 1 year ago

kiranprakash154 commented 1 year ago

I'm writing to propose a new caching approach for OpenSearch that could significantly enhance its performance.

OpenSearch is used primarily for two purposes:

  1. Search: OpenSearch provides robust support for text-based searches, such as when a user searches for an item on an e-commerce platform like Amazon.com.
  2. Log Analytics: It enables the indexing of logs and other time-series data, allowing users to create comprehensive analytical dashboards using either OpenSearch Dashboards or other proprietary software.

When dealing with log analytics, there's a consistent pattern where the indexed documents are time bound and progress into the future. For instance, if a query is generated to find the count of 4xx errors between two timestamps (T1 and T2, with T2 in the past), the result will invariably remain the same. This attribute presents an opportunity to cache the computed result, allowing for faster retrieval and a reduced processing load.
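To make this concrete, here is a minimal Java sketch of the cacheability check implied above (the `isCacheable` helper and its arguments are illustrative, not an OpenSearch API): a range query whose upper bound T2 is already in the past can never produce a different result, so its computed answer is safe to cache.

```java
import java.time.Instant;

public class CacheabilityCheck {
    // A range query over [t1, t2] on append-only time-series data is safe to
    // cache when t2 is in the past: no new documents can fall into the window,
    // so the computed result (e.g. a count of 4xx errors) can never change.
    static boolean isCacheable(Instant t2, Instant now) {
        return t2.isBefore(now);
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        Instant t2 = now.minusSeconds(3600); // window ended an hour ago
        System.out.println(isCacheable(t2, now)); // true -> cache the result
    }
}
```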

Presently, OpenSearch incorporates three types of in-memory, bounded caches:

  1. Shard Request Cache: When a search request runs, each involved shard executes it locally and returns its local results to the coordinating node, which compiles these shard-level results into a global result set. This cache stores those shard-level local results so they can be reused without re-executing the query on every shard.
  2. Node Query Cache: Caches the results of queries used in the filter context, facilitating quick lookups. The cache, shared by all shards on a node, uses an LRU eviction policy.
  3. Field Data Cache: Stores field data and global ordinals that support aggregations on certain field types. As these are on-heap data structures, monitoring the cache's use is crucial.

These caches are bounded in size, and entries are subject to eviction as new or more frequent search requests demand cache space. However, this eviction mechanism may force certain queries to be recomputed, adding to the overall system's overhead.
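As a toy illustration of that eviction behavior (not OpenSearch's actual implementation), a size-bounded LRU cache can be built on Java's `LinkedHashMap`; once the bound is exceeded, the least recently used entry is silently dropped and must be recomputed on its next request:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache: accessOrder=true keeps entries ordered by recency of access,
// and removeEldestEntry evicts the least recently used one past the bound.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evicted results must be recomputed later
    }
}
```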

Given the limitations of the current caching strategy, I propose implementing an optional disk-based caching tier. This tier could leverage either a remote data store (such as Amazon S3, Azure Blob Storage etc.), the disk on the node where the shard lives, or a combination of both.
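One possible shape for such a tier, as a hedged sketch (`TieredCache` and `DiskStore` are hypothetical names, not an existing OpenSearch API): lookups try the bounded heap tier first and fall back to the disk tier, and writes spill to disk once the heap tier is full instead of dropping results outright.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a two-tier cache: a bounded heap tier over a disk tier.
public class TieredCache<K, V> {
    // Disk tier abstraction: could be local disk, a remote store like S3, or both.
    public interface DiskStore<K, V> {
        Optional<V> read(K key);
        void write(K key, V value);
    }

    private final Map<K, V> heapTier = new ConcurrentHashMap<>();
    private final DiskStore<K, V> diskTier;
    private final int maxHeapEntries;

    public TieredCache(DiskStore<K, V> diskTier, int maxHeapEntries) {
        this.diskTier = diskTier;
        this.maxHeapEntries = maxHeapEntries;
    }

    public Optional<V> get(K key) {
        V onHeap = heapTier.get(key);
        if (onHeap != null) {
            return Optional.of(onHeap); // fastest path: on-heap hit
        }
        // Heap miss: a disk read is still expected to be far cheaper than
        // re-executing the query across shards.
        return diskTier.read(key);
    }

    public void put(K key, V value) {
        if (heapTier.size() >= maxHeapEntries) {
            diskTier.write(key, value); // spill instead of dropping the result
        } else {
            heapTier.put(key, value);
        }
    }
}
```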

The rationale for this proposal stems from our hypothesis that the cost of recomputation exceeds the expense incurred during a disk seek operation or making a call to an external storage. Introducing a disk-based cache tier would significantly reduce the need for such recomputations, leading to more efficient query processing and improved system performance.

Kindly review the proposal and provide feedback. We believe that this approach to caching would enhance OpenSearch's performance, especially in scenarios where high data throughput and fast query processing are of paramount importance.

Bukhtawar commented 1 year ago

> Given the limitations of the current caching strategy, I propose implementing an optional disk-based caching tier. This tier could leverage either a remote data store (such as Amazon S3, Azure Blob Storage etc.), the disk on the node where the shard lives, or a combination of both.

Thanks for the proposal! Curious why not leverage mmap or off-heap based caches, since they provide faster data access than performing slower disk seeks. This comes with the caveat that, for safety constraints, contents need to be immutable, so if anything changes we have to rebuild the cache. This will help unleash unused system memory as long as we can keep it bounded. With the tiered file cache proposal in https://github.com/opensearch-project/OpenSearch/issues/8891, the cache can also be automatically tiered across local and remote storage, albeit the cache and the data may have to be handled through different policies?
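For reference, memory-mapping an immutable cache file in Java looks roughly like the sketch below (the file name and layout are illustrative). The mapped region lives outside the JVM heap, and repeat reads are served from the OS page cache rather than via explicit disk seeks, which is the access-speed advantage being suggested here.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapRead {
    public static void main(String[] args) throws IOException {
        Path cacheFile = Path.of("query_cache.bin"); // illustrative file name
        try (FileChannel channel = FileChannel.open(cacheFile, StandardOpenOption.READ)) {
            // Map the immutable file read-only: the buffer is off-heap and
            // repeat accesses hit the OS page cache, not the disk.
            MappedByteBuffer buffer =
                channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            byte[] entry = new byte[(int) Math.min(64, channel.size())];
            buffer.get(entry); // read the first bytes of a cached entry
            System.out.println("read " + entry.length + " bytes via mmap");
        }
    }
}
```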

sgup432 commented 1 year ago

@Bukhtawar An off-heap tier does make sense, but it is still constrained by memory for larger datasets, whereas a disk tier is not, the tradeoff being latency. That said, as part of this we are also considering offering an off-heap tier as an option. A disk tier will provide substantial latency improvements for most types of queries (any that can't fit in memory), since it acts as a simple key-value store and returns results in a few ms. Here we can consider leveraging mmap for better performance and exploiting system memory internally. It can also be used to warm up the cache, either keeping entries on disk or promoting them to memory accordingly.
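The warm-up/promotion idea could look roughly like this sketch (all names hypothetical): a hit in the disk tier copies the entry into the in-memory tier, so the next lookup for that key skips the disk entirely.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical promotion-on-access: a disk hit warms the in-memory tier.
public class PromotingLookup<K, V> {
    private final Map<K, V> heapTier = new ConcurrentHashMap<>();
    private final Function<K, Optional<V>> diskRead; // stands in for the disk tier

    public PromotingLookup(Function<K, Optional<V>> diskRead) {
        this.diskRead = diskRead;
    }

    public Optional<V> get(K key) {
        V onHeap = heapTier.get(key);
        if (onHeap != null) {
            return Optional.of(onHeap); // warm path: pure in-memory hit
        }
        Optional<V> fromDisk = diskRead.apply(key);
        // Promote on a disk hit so subsequent lookups avoid the disk seek.
        fromDisk.ifPresent(value -> heapTier.put(key, value));
        return fromDisk;
    }
}
```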

Bukhtawar commented 1 year ago

I am not opposed to the disk tier; the point I am suggesting is that we use a tiered approach (heap -> off-heap -> disk) based on access patterns and space/memory constraints.
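As a sketch of that ordering (all names hypothetical), the tiers can be arranged in a chain tried fastest-first, with a miss falling through to the next, slower tier:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical tier chain: heap -> off-heap -> disk, consulted fastest first.
public class TierChain<K, V> {
    public interface Tier<K, V> {
        Optional<V> get(K key);
    }

    private final List<Tier<K, V>> tiers; // ordered fastest to slowest

    public TierChain(List<Tier<K, V>> tiers) {
        this.tiers = tiers;
    }

    public Optional<V> get(K key) {
        for (Tier<K, V> tier : tiers) {
            Optional<V> hit = tier.get(key);
            if (hit.isPresent()) {
                return hit; // first (fastest) tier holding the key wins
            }
        }
        return Optional.empty(); // full miss: caller recomputes the query
    }
}
```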

sgup432 commented 1 year ago

@Bukhtawar Makes sense!