opensearch-project / OpenSearch

🔎 Open source distributed and RESTful search engine.
https://opensearch.org/docs/latest/opensearch/index/
Apache License 2.0

[Feature Request] Make batch ingestion automatic, not a parameter on _bulk #14283

Open andrross opened 1 month ago

andrross commented 1 month ago

Is your feature request related to a problem? Please describe

A new batch method was added to the o.o.ingest.Processor interface in #12457 that allows ingest processors to operate on multiple documents simultaneously, instead of one by one. For certain processors, this can allow for much faster and more efficient processing. However, a new batch_size parameter was added to the _bulk API with a default value of 1. This means that in order to benefit from batch processing in any of my ingest processors, I have to do at minimum two things: determine how many documents to include in my _bulk request, and determine the optimal value to set for this batch_size parameter. I must also change all my ingestion tooling to support this new batch_size parameter and to specify it.
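
For context, a rough sketch of the kind of batch hook described above. The method is referred to as batchExecute later in this thread, but the exact signature added in #12457 may differ, so the interface name, types, and error handling here are illustrative only.

```java
import java.util.List;
import java.util.function.Consumer;

// Illustrative only: a processor hook that receives a whole batch of documents
// at once instead of being called once per document. The real method added in
// #12457 may use different parameter and callback types.
interface BatchCapableProcessor {

    // Per-document execution, as processors have always worked.
    IngestDocument execute(IngestDocument document) throws Exception;

    // Batch execution: process all documents together and hand the results back
    // through the callback. A naive default just loops over execute().
    default void batchExecute(List<IngestDocument> documents, Consumer<List<IngestDocument>> handler) {
        try {
            for (IngestDocument document : documents) {
                execute(document);
            }
            handler.accept(documents);
        } catch (Exception e) {
            handler.accept(List.of()); // simplified error handling for this sketch
        }
    }
}

// Stand-in for the real org.opensearch.ingest.IngestDocument type.
class IngestDocument {}
```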

Describe the solution you'd like

I want the developers of my ingest processors to determine good defaults for how they handle batches of documents, so that I see increased performance with no change to my ingestion tooling simply by updating to the latest software. I acknowledge I may still have to experiment to find the optimal number of documents to include in each _bulk request (that is the status quo for this API and not specific to ingest processors). Certain ingest processors may also define expert-level configuration options for further tuning, but I expect the defaults to work well most of the time and to almost always be better than the performance I saw before batching was implemented.

Related component

Indexing

Additional context

I believe this can be implemented by moving the sub-batching logic out of the IngestService and into the processors themselves: the full list of documents from a _bulk request would be passed to each processor, which could then split it into smaller batches according to its own defaults or configuration (a rough sketch of this follows below).

Additional discussion exists on the original RFC starting here: https://github.com/opensearch-project/OpenSearch/issues/12457#issuecomment-2146287178
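
To make the proposal concrete, here is a hedged sketch contrasting the current and proposed division of responsibility; the class and method names are hypothetical and do not correspond to the actual IngestService code.

```java
import java.util.List;

// Hypothetical sketch contrasting current and proposed behavior; names are
// illustrative and do not match the actual IngestService implementation.
class IngestServiceSketch {

    // Current behavior (simplified): the service splits the _bulk request into
    // chunks of the user-supplied batch_size before invoking the processor.
    void executeWithBulkBatchSize(BatchProcessor processor, List<Object> bulkDocs, int batchSize) {
        for (int i = 0; i < bulkDocs.size(); i += batchSize) {
            processor.batchExecute(bulkDocs.subList(i, Math.min(i + batchSize, bulkDocs.size())));
        }
    }

    // Proposed behavior: hand the entire _bulk payload to the processor and let
    // the processor decide how (and whether) to sub-batch internally.
    void executeProposed(BatchProcessor processor, List<Object> bulkDocs) {
        processor.batchExecute(bulkDocs);
    }

    interface BatchProcessor {
        void batchExecute(List<Object> documents);
    }
}
```

In the proposed flow, any sub-batching (for example, to respect a downstream service's maximum batch size) becomes an internal concern of the processor rather than something every _bulk client has to configure.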

dbwiddis commented 1 month ago

Makes a lot of sense, @andrross! I support this.

navneet1v commented 1 month ago

+1 on the feature request.

dblock commented 1 month ago

Semantically speaking, because it's not a max, I think batch_size should be unspecified (null) by default, not Integer.MAX_VALUE. If a user specifies it in the API call, then use that.

andrross commented 1 month ago

> I think batch_size should be unspecified (null) by default

@dblock Agreed, Integer.MAX_VALUE is just a work-around as it is practically the same as "unspecified", but you're right.


I want to work through a use case to illustrate why I think this needs to change:

Let's say I'm the OpenSearch admin for an organization that uses OpenSearch for log analytics. I have about a dozen different teams writing logs into the cluster I manage. I'm using an ingestion feature that can benefit from batching. The experience I want is that I can test out the new changes, run some performance tests, and then set a "batch_size" parameter in my system configuration that optimizes throughput and latency for my use case. Then I can deploy these changes and all my users see a performance improvement.

I do not want to have to chase down a dozen teams, force them to deploy an update to the tool they use to send _bulk requests to the cluster, and get them to configure a good "batch_size". I do not want to repeat this exercise if I change the host type or anything else that results in a different optimal setting for "batch_size".

dblock commented 1 month ago

@andrross A different default batch_size per analyzer makes sense. It's also possible that you'll want a max value per analyzer as well.

chishui commented 1 month ago

Regarding a default batch size other than 1: different ML servers have different maximum batch size limitations. For example, Cohere requires the batch size to be less than 96 and OpenAI requires it to be less than 2048. Exceeding the limit could lead to data loss. I'm afraid a default batch size other than 1 can't suit every user's situation. Additionally, we already added batchExecute support in two neural-search processors, text_embedding and sparse_encoding, so changing the default batch_size value alone could impact users immediately.

Based on your suggestion, I think we could do as such:

  1. Add a batch_size parameter in the text_embedding and sparse_encoding processors, and set it to 1 by default (it seems we can't make it a required parameter, as that would be backward incompatible); a rough sketch of such a parameter follows at the end of this comment. Another option is to set the parameter in AbstractProcessor, similar to tag and description; it would be ignored by most processors, but we could provide the batch-splitting logic as default behavior for any ingest processor that has the batch_size parameter set.
  2. Change the default (or unspecified) behavior of the _bulk API's batch_size to include all bulk documents.
  3. Mark the batch_size parameter of the _bulk API as deprecated and start the deprecation process.

Due to the limitations mentioned in the first paragraph, this approach won't give users a performance gain automatically either. Users who haven't started using _bulk with batch_size will see no change. Users who have already started using _bulk with batch_size will see a performance degradation (falling back to the behavior without batch_size), but there should be no data loss; once they set batch_size on their processors, they will see the performance improvement again.
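
To make option 1 above concrete, here is a minimal sketch of a processor that reads an optional batch_size from its configuration with a backward-compatible default of 1. The class name, the plain Map-based config reading, and the helper method are hypothetical, not the actual text_embedding or sparse_encoding code.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of option 1: a processor that reads an optional batch_size
// from its pipeline configuration, defaulting to 1 so existing pipelines keep
// their current one-document-at-a-time behavior.
class BatchSizeAwareProcessorSketch {

    static final int DEFAULT_BATCH_SIZE = 1; // backward-compatible default

    private final int batchSize;

    BatchSizeAwareProcessorSketch(Map<String, Object> config) {
        // Pipeline authors can opt in to larger batches; unset means 1.
        Object value = config.get("batch_size");
        this.batchSize = value == null ? DEFAULT_BATCH_SIZE : ((Number) value).intValue();
        if (this.batchSize < 1) {
            throw new IllegalArgumentException("batch_size must be >= 1");
        }
    }

    int subBatchCount(List<Object> documents) {
        // How many downstream (e.g. ML service) calls this request would take.
        return (documents.size() + batchSize - 1) / batchSize;
    }
}
```

A pipeline author could then set batch_size once on the processor definition, and every client sending _bulk requests would benefit without changing their requests.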

andrross commented 1 month ago

Thanks @chishui. Your suggestions above are exactly what I had in mind here. I realize some of what I suggested above may have been unclear. I never intended to suggest that the effective behavior of the sub-batching in the neural-search processors should change. The key point is that the sub-batching of documents into batches smaller than what was provided in the original _bulk request should happen inside the processors and not at the IngestService layer.

> Another option is to set the parameter in AbstractProcessor

If this sub-batching behavior is a common pattern, we could certainly create an AbstractBatchingProcessor that defines a batch size parameter and even implements the logic to break the list from _bulk into smaller batches. I'd recommend starting with this pattern inside the neural-search plugin to be shared across its processors, and then we can promote the base class up to a common location if it proves useful and reusable.
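
If that route is taken, here is a hedged sketch of what such a base class might look like. The "Sketch" suffix, the cutBatches helper, and the synchronous flow are illustrative only; a real implementation would also need asynchronous completion tracking and error propagation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hedged sketch of a shared base class along the lines described above: it owns
// the batch_size parameter and the list-splitting logic, and subclasses only
// implement how one sub-batch is processed.
abstract class AbstractBatchingProcessorSketch {

    private final int batchSize;

    protected AbstractBatchingProcessorSketch(int batchSize) {
        this.batchSize = batchSize;
    }

    // Entry point called with the full document list from the _bulk request.
    public final void batchExecute(List<Object> documents, Consumer<List<Object>> handler) {
        List<Object> results = new ArrayList<>(documents.size());
        for (List<Object> subBatch : cutBatches(documents)) {
            results.addAll(subBatchExecute(subBatch));
        }
        handler.accept(results);
    }

    // Split the incoming list into batch_size-sized chunks (e.g. to respect a
    // downstream model provider's maximum batch size).
    private List<List<Object>> cutBatches(List<Object> documents) {
        List<List<Object>> batches = new ArrayList<>();
        for (int i = 0; i < documents.size(); i += batchSize) {
            batches.add(documents.subList(i, Math.min(i + batchSize, documents.size())));
        }
        return batches;
    }

    // Subclasses (e.g. a text_embedding-style processor) implement the per-batch work.
    protected abstract List<Object> subBatchExecute(List<Object> subBatch);
}
```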