Azure / spark-cdm-connector

MIT License
76 stars 33 forks

Is Azure Databricks mount point supported in spark-cdm-connector? #58

Closed: subashsivaji closed this issue 4 years ago

subashsivaji commented 4 years ago

Is Azure Databricks mount point supported in spark-cdm-connector?

I am using com.microsoft.azure:spark-cdm-connector:0.18.1 on Databricks Runtime 6.6.

When I provide an explicit storage account and container name, this works:

readDf = spark.read.format("com.microsoft.cdm")\
.option("storage", "mystorageaccount.dfs.core.windows.net")\
.option("manifestPath", "/commondataservice-xxxx-orgxxxxx/model.json")\
.option("entity", "businessunit")\
.option("appId","xxxxxxxxx-8286f4564568")\
.option("appKey", "xxxxxxxxxx_MX3U_uETaXMaererccccc~")\
.option("tenantId", "67878678-vvvvv-vvvv-vvvv-8347ddb3daa7")\
.load()


However, if I create a mount point in Databricks for the ADLS Gen2 container, this DOES NOT work.

I created the mount point "/mnt/cds/" for this location: https://mystorageaccount.blob.core.windows.net/commondataservice-xxxx-orgxxxxx

readDf = spark.read.format("com.microsoft.cdm")\
.option("storage", "/mnt/cds/")\
.option("manifestPath", "model.json")\
.option("entity", "businessunit")\
.option("appId","xxxxxxxxx-8286f4564568")\
.option("appKey", "xxxxxxxxxx_MX3U_uETaXMaererccccc~")\
.option("tenantId", "67878678-vvvvv-vvvv-vvvv-8347ddb3daa7")\
.load()

This fails with the following error:

java.lang.Exception: Container is not specified in the manifestPath

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<command-3904640576930790> in <module>
      6 .option("appId","xxxxxxxxx-8286f4564568")\
      7 .option("appKey", "xxxxxxxxxx_MX3U_uETaXMaererccccc~")\
----> 8 .option("tenantId", "67878678-vvvvv-vvvv-vvvv-8347ddb3daa7")\
      9 .load()
     10 

/databricks/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
    170             return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
    171         else:
--> 172             return self._df(self._jreader.load())
    173 
    174     @since(1.4)

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

/databricks/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o1537.load.
: java.lang.Exception: Container is not specified in the manifestPath
    at com.microsoft.cdm.DefaultSource.getContainerManifestPathAndFile(DefaultSource.scala:166)
    at com.microsoft.cdm.DefaultSource.createReader(DefaultSource.scala:88)
    at com.microsoft.cdm.DefaultSource.createReader(DefaultSource.scala:24)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation$SourceHelpers.createReader(DataSourceV2Relation.scala:155)
    at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Relation$.create(DataSourceV2Relation.scala:172)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:290)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:203)
    at sun.reflect.GeneratedMethodAccessor440.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
    at py4j.Gateway.invoke(Gateway.java:295)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:251)
    at java.lang.Thread.run(Thread.java:748)
srichetar commented 4 years ago

Azure Databricks mount points are not supported in spark-cdm-connector. The manifestPath should instead be provided as <container>/{<folderPath>/}<manifestFileName>
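For what it's worth, the error message suggests the connector parses the container name out of the manifestPath string itself, which is why a bare file name (as you get when relying on a mount point) is rejected. The helper below is only an illustration of that parsing rule under the `<container>/{<folderPath>/}<manifestFileName>` convention, not the connector's actual code:

```python
def split_manifest_path(manifest_path: str) -> tuple:
    """Illustrative parse of <container>/{<folderPath>/}<manifestFileName>.

    Mirrors the behaviour implied by the error message; this is NOT the
    connector's actual implementation.
    """
    # A leading "/" is tolerated, as in the working example above.
    path = manifest_path.lstrip("/")
    container, sep, rest = path.partition("/")
    if not sep or not rest:
        # No "/" means no container segment before the manifest file.
        raise ValueError("Container is not specified in the manifestPath")
    return container, rest

# A container-qualified path parses cleanly...
print(split_manifest_path("/commondataservice-xxxx-orgxxxxx/model.json"))
# ...while a bare "model.json" raises the error you saw.
```

So with a mount point, the connector only sees "model.json", finds no container segment, and throws the exception from your traceback.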

subashsivaji commented 4 years ago

Thank you for the clarification. It might be worth adding a line to the documentation, under unsupported scenarios, stating that Azure Databricks mount points are not supported in spark-cdm-connector.

billgib commented 4 years ago

Hi subashsivaji, thanks for the suggestion. I have updated the docs as you suggested.