I received access to Elasticsearch Serverless and would like to move over, but I am unable to get the elasticsearch-spark connector to work. I am using Databricks with the 13.3 LTS Runtime (Scala 2.12, Spark 3.4.1). I chose org.elasticsearch:elasticsearch-spark-30_2.12:8.11.0 because the elasticsearch-serverless client reported 8.11.0 as the cluster version.
Py4JJavaError: An error occurred while calling o609.save.
: org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
Changing es.nodes.wan.only to false does not change the outcome.
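Since the error message points directly at es.nodes.wan.only, here is a minimal sketch of the write options typically used for a WAN/cloud Elasticsearch endpoint with this connector. The endpoint URL, credentials, and index name are placeholders, not taken from this issue; note that the es-hadoop docs call for wan.only to be true for cloud targets (false is the default, so setting it to false is a no-op):

```python
# Hedged sketch of elasticsearch-spark write options for a WAN/cloud
# endpoint. The URL and credentials below are placeholders, not values
# from this issue.
es_write_conf = {
    "es.nodes": "https://my-deployment.es.us-east-1.aws.elastic.cloud",  # placeholder endpoint
    "es.port": "443",
    "es.nodes.wan.only": "true",   # cloud/WAN targets generally need true, not false
    "es.net.ssl": "true",
    "es.net.http.auth.user": "elastic",     # or API-key auth, depending on the setup
    "es.net.http.auth.pass": "<password>",  # placeholder credential
}

# Applied to a DataFrame write (requires a running Spark session and the
# connector jar on the classpath):
# df.write.format("org.elasticsearch.spark.sql") \
#     .options(**es_write_conf) \
#     .mode("append") \
#     .save("my-index")
```

Even with wan.only set to true, the version-detection step can still fail against Serverless, which matches the behavior reported here.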
Version Info
OS: Databricks with 13.3 LTS Runtime, Scala 2.12 and Spark 3.4.1
Hadoop/Spark: org.elasticsearch:elasticsearch-spark-30_2.12:8.11.0
ES: Elasticsearch Serverless
Hi @RalphSchuurman. Serverless Elasticsearch currently only supports a subset of full Elasticsearch functionality. Es-hadoop/spark is not supported, and there are no immediate plans to support it.