
[Tiered Caching] [META] Performance benchmark plan #11464

Open · kiranprakash154 opened 10 months ago

kiranprakash154 commented 10 months ago

This issue captures the different performance benchmarks we plan to run as part of evaluating Tiered Caching, which is tracked here. We will gather feedback from the community to make sure we are not missing anything that needs to be considered.

Tiered Caching is described here.

Goals of Benchmarking

Scenarios

1. Read-only vs. changing data

Mimic log-analytics behavior by using an index rollover policy and performing searches on both the actively written index and the rolled-over read-only indices.

2. Concurrent queries caching into the disk-based cache

Stress test the tiered cache with continuous search traffic by forcing requests to be cached. This will help us understand whether there is a limit we have to be aware of, or a sweet spot beyond which tiered caching might not make much sense.

Concurrency is limited by the number of threads in the search threadpool, so we should test instance sizes (xl/2xl/8xl/16xl) that increase the number of concurrent queries running on a node, and observe how the latency profile changes as concurrency grows.
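
As a reference point for that ceiling, the search threadpool in OpenSearch defaults to a fixed size computed from the allocated processor count. A minimal sketch of the expected ceiling per instance size (the vCPU figures are assumptions for illustration):

    // The default search threadpool in OpenSearch is a fixed pool of
    // ((allocated processors * 3) / 2) + 1 threads, so the number of queries
    // a node can run concurrently scales with its core count.
    public class SearchThreadpoolCeiling {
        static int searchThreads(int allocatedProcessors) {
            return (allocatedProcessors * 3) / 2 + 1;
        }

        public static void main(String[] args) {
            // Assumed vCPU counts for illustration: xl=4, 2xl=8, 8xl=32, 16xl=64.
            int[] vcpus = { 4, 8, 32, 64 };
            for (int v : vcpus) {
                System.out.println(v + " vCPUs -> " + searchThreads(v) + " search threads");
            }
        }
    }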

3. Long Running Performance Test

Run a workload for a long duration.

4. Tune parameters of Ehcache

We use Ehcache as the disk cache, and it exposes many parameters for tuning it to specific use cases. We can experiment with these to see how they impact overall performance, for example the parameters below.

    // Ehcache disk write minimum threads for its pool
    public final Setting<Integer> DISK_WRITE_MINIMUM_THREADS;

    // Ehcache disk write maximum threads for its pool
    public final Setting<Integer> DISK_WRITE_MAXIMUM_THREADS;

    // Not to be confused with the number of disk segments; this is different. Defines the number of
    // distinct write queues created for the disk store, where a group of segments shares a write queue. This is
    // implemented in Ehcache using a partitioned thread pool executor. By default all segments share a single write
    // queue, i.e. write concurrency is 1. Check OffHeapDiskStoreConfiguration and DiskWriteThreadPool.
    public final Setting<Integer> DISK_WRITE_CONCURRENCY;

    // Defines how many segments the disk cache is separated into. Higher number achieves greater concurrency but
    // will hold that many file pointers.
    public final Setting<Integer> DISK_SEGMENTS;
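
For reference, a minimal sketch of how such settings could be declared with OpenSearch's Setting API. The setting keys and default values here are illustrative assumptions, not the actual defaults shipped with the disk cache plugin:

    import org.opensearch.common.settings.Setting;
    import org.opensearch.common.settings.Setting.Property;

    // Sketch of declaring the Ehcache disk-tier tunables as node-scoped settings.
    // Keys and defaults below are placeholders for illustration only.
    public class EhcacheDiskSettingsSketch {
        public static final Setting<Integer> DISK_WRITE_MINIMUM_THREADS =
            Setting.intSetting("cache.disk.ehcache.min_threads", 2, 1, Property.NodeScope);
        public static final Setting<Integer> DISK_WRITE_MAXIMUM_THREADS =
            Setting.intSetting("cache.disk.ehcache.max_threads", 2, 1, Property.NodeScope);
        public static final Setting<Integer> DISK_WRITE_CONCURRENCY =
            Setting.intSetting("cache.disk.ehcache.concurrency", 1, 1, Property.NodeScope);
        public static final Setting<Integer> DISK_SEGMENTS =
            Setting.intSetting("cache.disk.ehcache.segments", 16, 1, Property.NodeScope);
    }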

5. Invalidate while adding to the disk-based cache (high disk I/O)

Drive a high throughput of search queries that are forced to be cached while, in parallel, frequent invalidations occur due to refreshes, with bulk writes in the mix.
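
A minimal sketch of the access pattern this scenario exercises, using a plain ConcurrentHashMap as a stand-in for the tiered cache (thread counts and key space are arbitrary assumptions):

    import java.util.concurrent.*;

    // Sketch of scenario 5: many "search" threads forcing entries into the cache
    // while "refresh" threads invalidate entries in parallel. ConcurrentHashMap
    // stands in for the tiered cache; it does not model disk I/O itself.
    public class InvalidateWhileCachingSketch {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentHashMap<Integer, String> cache = new ConcurrentHashMap<>();
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int t = 0; t < 6; t++) {          // 6 writers forcing cache fills
                pool.submit(() -> {
                    for (int i = 0; i < 100_000; i++) {
                        cache.put(ThreadLocalRandom.current().nextInt(10_000), "result");
                    }
                });
            }
            for (int t = 0; t < 2; t++) {          // 2 invalidators simulating refreshes
                pool.submit(() -> {
                    for (int i = 0; i < 100_000; i++) {
                        cache.remove(ThreadLocalRandom.current().nextInt(10_000));
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("entries remaining: " + cache.size());
        }
    }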

6. Varying clean-up intervals

Mimic a use case with varying refresh intervals; currently the on-heap caches are cleaned up every minute. What is the behavior when cleanup runs too often vs. not often enough?
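
A sketch of how the cleanup interval could be exposed as a node setting for this experiment. The key name is an assumed placeholder, not a confirmed setting; the 1-minute default mirrors the current on-heap behavior described above:

    import org.opensearch.common.settings.Setting;
    import org.opensearch.common.settings.Setting.Property;
    import org.opensearch.common.unit.TimeValue;

    // Hypothetical setting for experimenting with cache cleanup frequency.
    public class CacheCleanupIntervalSketch {
        public static final Setting<TimeValue> CACHE_CLEANUP_INTERVAL =
            Setting.positiveTimeSetting(
                "indices.requests.cache.cleanup.interval",  // assumed key, for illustration
                TimeValue.timeValueMinutes(1),
                Property.NodeScope
            );
    }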

7. Varying disk threshold

Mimic a use case with a varying disk threshold. We have considered 50% as the default, so the cache cleaner only cleans the disk cache if the stale keys account for at least 50% of the keys (by count, not space) on disk.
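
A sketch of the cleanup decision that threshold implies (names are illustrative; the real cleaner tracks stale keys internally):

    // The 50%-stale-keys gate described above: cleanup only runs when stale keys
    // make up at least `threshold` of all keys on disk, counted by key, not size.
    public class DiskCleanupThresholdSketch {
        static boolean shouldCleanDiskCache(long staleKeyCount, long totalKeyCount, double threshold) {
            if (totalKeyCount == 0) return false;
            return (double) staleKeyCount / totalKeyCount >= threshold;
        }

        public static void main(String[] args) {
            System.out.println(shouldCleanDiskCache(400, 1000, 0.5));  // false: only 40% stale
            System.out.println(shouldCleanDiskCache(600, 1000, 0.5));  // true: 60% stale
        }
    }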

Test Dimensions

  1. Workload: Concurrent segment search was benchmarked with http_logs and nyc_taxis. We need to generate unique queries to create many cache entries; we will use the existing search queries and introduce randomness into them (see the sketch after this list).

  2. Search clients: OSB lets you control the number of clients through a parameter.

  3. Shard size: following our recommendation of shards ranging between 20 and 50 GB.

  4. Instance types: EBS vs. SSD. EBS-based instance types: r5.large, r5.2xlarge, r5.8xlarge. SSD-based instance types: i3.large, i3.2xlarge, i3.8xlarge. Number of cores: few to many. Graviton vs. Intel.
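
A minimal sketch of how existing queries could be randomized to produce unique cache entries, here by jittering the bounds of an http_logs-style range query (the "@timestamp" field and the epoch bounds are assumptions for illustration):

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch: derive unique queries from a fixed template by randomizing the
    // range bounds, so each request produces a distinct cache entry.
    public class RandomizedQuerySketch {
        static String randomRangeQuery() {
            long start = ThreadLocalRandom.current().nextLong(0, 1_000_000);
            long end = start + ThreadLocalRandom.current().nextLong(1, 10_000);
            return "{ \"query\": { \"range\": { \"@timestamp\": "
                + "{ \"gte\": " + start + ", \"lt\": " + end + " } } } }";
        }

        public static void main(String[] args) {
            System.out.println(randomRangeQuery());
            System.out.println(randomRangeQuery());  // a different cache key each call
        }
    }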

kiranprakash154 commented 10 months ago

@reta @anasalkouz @andrross would like to get your feedback!

anasalkouz commented 10 months ago

Thanks @kiranprakash154 for the performance benchmark plan. Here are some things you need to consider for a disk-based cache.

kiranprakash154 commented 9 months ago

Thanks for your comments @anasalkouz

Make sure disk based cache will not overwhelm cluster and won't compete with indexed data.

Yes, we are limiting the size of the disk cache, beyond which it starts evicting. So in terms of space we should not have any issues unless the customer sets the config very high. Scenario 5 (Invalidate while adding to the disk-based cache, high disk I/O) will help us gather more data on how Ehcache behaves in a high-throughput environment.

Is caching per node? then how to route same queries to same node for better cache hit.

We don't have to handle this separately. The way it works now, the in-memory cache is already at the node level, and when it fills up, instead of just evicting from the heap we evict entries into the disk tier. So the routing that brought a query to this node does not change.
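
A minimal sketch of that spillover behavior, using an LRU map whose evictions are written to a disk-tier stand-in (the interfaces are simplified placeholders, not the actual tiered-cache API):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch of heap-to-disk spillover: when the on-heap LRU evicts an entry,
    // it is moved to the disk tier instead of being dropped. The Map-based
    // "disk tier" is a simplified stand-in for the real disk cache.
    public class SpilloverCacheSketch<K, V> {
        private final Map<K, V> diskTier = new java.util.HashMap<>();
        private final LinkedHashMap<K, V> onHeap;

        SpilloverCacheSketch(int maxHeapEntries) {
            onHeap = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    if (size() > maxHeapEntries) {
                        diskTier.put(eldest.getKey(), eldest.getValue());  // spill, don't drop
                        return true;
                    }
                    return false;
                }
            };
        }

        void put(K key, V value) { onHeap.put(key, value); }

        V get(K key) {
            V v = onHeap.get(key);
            return v != null ? v : diskTier.get(key);  // heap miss falls through to disk
        }
    }

Since a heap miss falls through to the disk tier on the same node, a query keeps hitting the node it was routed to, which is why no separate routing logic is needed.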

May be try to benchmark the caching feature while you are doing force merges or using searchable snapshots.

I think scenario 5 should cover the behavior during high disk I/O regardless of what is causing it, but I get your point. I will also make sure this does not regress the segrep feature.