[ ] Report a technical problem with the documentation
[ ] Other
Tell us about your request. Provide a summary of the request and all versions that are affected.
Background: https://github.com/opensearch-project/ml-commons/issues/2438
Some OpenSearch users deploy their ML models on SageMaker to encode documents. However, users sometimes get throttling exceptions, especially during ingestion, even with a bulk size of 100. The problem is hard to diagnose, and the failed documents are not ingested, which may lead to data loss.
In 2.15 we implemented a retry policy for this problem. The retry policy is bound to the connector's `client_config`. We want to update the documentation to cover setting the policy, as well as reducing `max_connections` to lower the maximum concurrency.
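For illustration, a hedged sketch of what the documented connector `client_config` might look like. The field names below (`max_connections`, `max_retry_times`, `retry_backoff_millis`, `retry_timeout_seconds`, `retry_backoff_policy`) are assumptions based on the 2.15 client_config options and should be verified against the released docs; the values are example placeholders, not recommendations:

```json
{
  "name": "sagemaker-embedding-connector",
  "description": "Example connector fragment showing retry settings",
  "protocol": "aws_sigv4",
  "client_config": {
    "max_connections": 10,
    "connection_timeout": 3000,
    "read_timeout": 30000,
    "max_retry_times": 3,
    "retry_backoff_millis": 200,
    "retry_timeout_seconds": 30,
    "retry_backoff_policy": "exponential_full_jitter"
  }
}
```

Lowering `max_connections` reduces the number of concurrent requests sent to the SageMaker endpoint, while the retry fields control how throttled requests are retried instead of failing the bulk ingestion outright.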
What other resources are available? Provide links to related issues, POCs, steps for testing, etc.
What do you want to do?