davidkyle opened this issue 2 months ago
Pinging @elastic/ml-core (Team:ML)
`xpack.ml.use_auto_machine_memory_percent = true` should be added to the node settings in the docker compose file linked from the instructions: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-compose-file
Elasticsearch Version
Tested on 8.15 and 8.14
Installed Plugins
No response
Java Version
bundled
OS Version
any
Problem Description
Following the instructions for running Elasticsearch in Docker, I found that deploying the ELSER model failed with the message:
`Could not start deployment because no ML nodes with sufficient capacity were found`
The error comes from the model assignment code, which concludes there is not enough memory to deploy the model even though the Docker container has 4GB of memory, which is plenty.
The fix is to set `xpack.ml.use_auto_machine_memory_percent` to `true`. If `xpack.ml.use_auto_machine_memory_percent == false`, then the maximum amount of memory that ML can use is governed by `xpack.ml.max_machine_memory_percent`, which defaults to 30% of the available memory. 30% of a 4GB node is only about 1.2GB, which is not enough to deploy the model. When running in a container all of the memory should be available to ML, which is what happens when `xpack.ml.use_auto_machine_memory_percent == true`. In Elastic Cloud, `xpack.ml.use_auto_machine_memory_percent` is set to `true`. The ML settings are documented at https://www.elastic.co/guide/en/elasticsearch/reference/current/ml-settings.html
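A minimal sketch of the proposed change, assuming the multi-node `docker-compose.yml` from the linked guide (service names such as `es01` and the `environment` list style it already uses):

```yaml
services:
  es01:
    environment:
      # ...existing entries from the linked docker-compose.yml kept as-is...
      # Let ML size itself from the container's memory limit instead of the
      # default 30% cap imposed by xpack.ml.max_machine_memory_percent.
      - xpack.ml.use_auto_machine_memory_percent=true
```

The same environment line would be repeated for each Elasticsearch node service in the file.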
Steps to Reproduce
Follow the instructions to run Elasticsearch in Docker (https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html), download the `.elser_model_2` model, and try to deploy it.
Logs (if relevant)
No response
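For reference, the reproduction steps map onto the documented trained model APIs roughly as follows (a sketch in Kibana Dev Tools Console syntax; the failure occurs on the `_start` call):

```
# Create the .elser_model_2 configuration, which triggers the model download
PUT _ml/trained_models/.elser_model_2
{
  "input": {
    "field_names": ["text_field"]
  }
}

# Attempt to deploy the model; without the fix this fails with
# "Could not start deployment because no ML nodes with sufficient capacity were found"
POST _ml/trained_models/.elser_model_2/deployment/_start
```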