Is your proposal related to a problem?
Store Gateway fetches series per block in batches, with a default batch size of 10,000. That means for the same request and the same block, the set of series to fetch in each batch is known. This gives us a chance to cache a whole batch of series as a single cache item rather than 10,000 items, reducing the number of requests sent to the cache server.
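To illustrate the difference in cache traffic, here is a minimal Go sketch. The key layouts and function names below are hypothetical and do not reflect the actual index cache key format; they only show that one batch currently costs up to 10,000 key lookups, while a batch-level item costs one.

```go
package main

import "fmt"

// seriesBatchSize mirrors the Store Gateway default of 10,000 series per batch.
const seriesBatchSize = 10000

// perSeriesKeys sketches today's behaviour: one cache item per series,
// so a full batch requests up to 10,000 keys from the cache backend.
func perSeriesKeys(blockID string, refs []uint64) []string {
	keys := make([]string, 0, len(refs))
	for _, ref := range refs {
		keys = append(keys, fmt.Sprintf("%s:%d", blockID, ref)) // block ID + series ID
	}
	return keys
}

// perBatchKey sketches the proposal: the whole batch lives under one key,
// so a single cache request replaces up to 10,000.
func perBatchKey(blockID string, batchIndex int) string {
	return fmt.Sprintf("%s:batch=%d:size=%d", blockID, batchIndex, seriesBatchSize)
}

func main() {
	refs := []uint64{101, 102, 103} // series refs of one batch (illustrative)
	fmt.Println(len(perSeriesKeys("01ARZ3NDEKTSV4RRFFQ69G5FAV", refs)), "cache keys today")
	fmt.Println(perBatchKey("01ARZ3NDEKTSV4RRFFQ69G5FAV", 0), "<- one key with batching")
}
```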
Describe the solution you'd like
Currently, the index cache caches series using block ID + series ID as the cache key.
If we want to cache batched series, the cache key could be block ID + matchers + batch size + batch index (starting from 0) + vertical shard size + vertical shard ID.
Vertical sharding might matter here because it filters which series (and labels) each shard fetches.
We should probably also take into account whether lazy postings is enabled, because it also filters out series.
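As a rough sketch of what such a composite key could look like, here is a hypothetical Go type that folds block ID, matchers, batch size, batch index, vertical shard info, and the lazy postings flag into one deterministic string. The type name, field names, and key layout are made up for illustration and are not existing Thanos code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"

	"github.com/oklog/ulid"
	"github.com/prometheus/prometheus/model/labels"
)

// batchSeriesCacheKey is a hypothetical key for one batch of series,
// mirroring the proposal above.
type batchSeriesCacheKey struct {
	blockID      ulid.ULID
	matchers     []*labels.Matcher
	batchSize    int
	batchIndex   int  // 0-based position of the batch within the request
	shardCount   int  // vertical shard size
	shardIndex   int  // vertical shard ID
	lazyPostings bool // whether lazy postings is enabled
}

// String renders the key deterministically: matchers are sorted so the same
// matcher set always yields the same key regardless of input order.
func (k batchSeriesCacheKey) String() string {
	ms := make([]string, 0, len(k.matchers))
	for _, m := range k.matchers {
		ms = append(ms, m.String())
	}
	sort.Strings(ms)
	return fmt.Sprintf("B:%s:%s:%d:%d:%d:%d:%t",
		k.blockID, strings.Join(ms, ","),
		k.batchSize, k.batchIndex,
		k.shardCount, k.shardIndex,
		k.lazyPostings)
}

func main() {
	key := batchSeriesCacheKey{
		blockID:      ulid.MustParse("01ARZ3NDEKTSV4RRFFQ69G5FAV"),
		matchers:     []*labels.Matcher{labels.MustNewMatcher(labels.MatchEqual, "job", "api")},
		batchSize:    10000,
		batchIndex:   0,
		shardCount:   2,
		shardIndex:   1,
		lazyPostings: true,
	}
	fmt.Println(key) // B:01ARZ3NDEKTSV4RRFFQ69G5FAV:job="api":10000:0:2:1:true
}
```

One design consideration: matcher strings can get long, so a real implementation might hash the matcher set (or the whole key) to stay within cache backend key-length limits, e.g. memcached's 250-byte key limit.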