There is a problem with the current approach to loading and deploying local ML models for integration tests: one model is deployed per test method and undeployed when that method finishes. This approach leads to multiple deploy/undeploy calls on a single test cluster.
Currently we're using ml-commons to deploy the model. According to the ml-commons team, the engine they use (PyTorch) is not optimized for recurring model redeployments. In environments with limited memory this may lead to high memory consumption, in which case the Native Memory Circuit Breaker in ml-commons opens, no new model deployment is possible, and a CB exception is returned.
The suggested approach is to change the paradigm from one model per test case to shared models for the whole test suite. This way models can be deployed once during cluster setup, used by the tests, and then undeployed in the tear-down phase. This seems feasible because the models are used in read-only mode and there is a limited number of different local models. Currently three different models are used in the integ tests (https://github.com/opensearch-project/neural-search/tree/main/src/test/resources/processor).
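The shared-model idea could be sketched as a suite-level registry that deploys each model at most once and hands the cached model id to every test method that asks for it. The snippet below is a hypothetical illustration, not the actual neural-search implementation: the class name, the `getOrDeploy` helper, and the simulated deploy step are all assumptions; in the real suite the deploy step would call the ml-commons deploy API from the cluster setup phase.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a suite-scoped model registry: each distinct model
// is deployed at most once per test run, no matter how many test methods use it.
public class SharedModelRegistry {
    private static final Map<String, String> DEPLOYED = new ConcurrentHashMap<>();
    private static final AtomicInteger DEPLOY_CALLS = new AtomicInteger();

    // In the real suite this would invoke the ml-commons deploy API; here the
    // deploy is simulated so the deploy-once behaviour is observable.
    public static String getOrDeploy(String modelName) {
        return DEPLOYED.computeIfAbsent(modelName, name -> {
            DEPLOY_CALLS.incrementAndGet();
            return "model-id-" + name; // placeholder for the id ml-commons returns
        });
    }

    // Called once from the suite tear-down phase.
    public static void undeployAll() {
        DEPLOYED.clear();
    }

    public static int deployCalls() {
        return DEPLOY_CALLS.get();
    }

    public static void main(String[] args) {
        // Three "test methods" request models; the repeated request is served
        // from the cache instead of triggering a redeployment.
        System.out.println(getOrDeploy("text-embedding"));
        System.out.println(getOrDeploy("text-embedding"));
        System.out.println(getOrDeploy("sparse-encoding"));
        undeployAll();
    }
}
```

With this pattern the number of deploy calls is bounded by the number of distinct models (three today) instead of the number of test methods, which keeps the PyTorch engine and the native memory circuit breaker out of trouble.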
Ref: neural-search https://github.com/opensearch-project/neural-search/pull/683