Krukosz opened 7 months ago
Oh, I tested it on a "legacy" Databricks cluster and it works.
My code:
reader = spark._jvm.com.crealytics.spark.excel.WorkbookReader.apply({"path": 'my_file.xlsx'}, spark.sparkContext._jsc.hadoopConfiguration())
d = reader.sheetNames()
print(d)
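(If it helps others: presumably this works because WorkbookReader is a trait whose companion object's apply takes the parameters map, so there is no public constructor to call directly, and py4j converts the Python dict to a java.util.HashMap on the way in.)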
In a Unity Catalog environment I'm getting the error below (it's directly related to the cluster access mode, which cannot be changed in my case):
py4j.security.Py4JSecurityException: Method public org.apache.hadoop.conf.Configuration org.apache.spark.api.java.JavaSparkContext.hadoopConfiguration() is not whitelisted on class class org.apache.spark.api.java.JavaSparkContext
Is there any other way to get the sheet names without the WorkbookReader constructor? I'd rather not mix crealytics spark-excel code with pandas or any other library.
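If mixing in one more library turns out to be acceptable, a pure-Python fallback sidesteps the blocked JVM call entirely. A minimal sketch, assuming openpyxl is available on the cluster and the file is reachable from the driver (the path below is just a placeholder):

from openpyxl import load_workbook

# read_only avoids loading cell contents; we only need the sheet names
wb = load_workbook("/dbfs/path/to/my_file.xlsx", read_only=True)
print(wb.sheetnames)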
We are having the same issue with our Scala code in Unity Catalog (DBR 14.3 LTS).
As per this documentation: https://learn.microsoft.com/en-us/azure/databricks/compute/access-mode-limitations#spark-api-limitations-and-requirements-for-unity-catalog-shared-access-mode, sparkContext (and therefore hadoopConfiguration) can't be accessed in shared access mode on DBR 14.0 and newer.
So, even if there's a workaround for 13.3 for now, newer runtimes won't be able to support it.
Hmm, that would require a bigger refactoring then, because we also need a Hadoop Configuration in the standard use case (even without reading sheet names): https://github.com/crealytics/spark-excel/blob/main/src/main/scala/com/crealytics/spark/excel/DefaultSource.scala#L38
Am I using the newest version of the library?
Is there an existing issue for this?
Current Behavior
I have a problem with the WorkbookReader class. My Python code looks like this:
reader = spark._jvm.com.crealytics.spark.excel.WorkbookReader({"path": "Worktime.xlsx"}, spark.sparkContext._jsc.hadoopConfiguration())
sheetnames = reader.sheetNames()
My problems:
py4j.Py4JException: Constructor com.crealytics.spark.excel.WorkbookReader([class java.util.HashMap]) does not exist
In PR #196 there's a discussion about using the apply method, but I don't know how to call it.
Has anyone gotten this working in PySpark? I can't use Scala, because it is blocked by the administrator in my environment.
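For later readers: per the comment at the top of this thread, the call that works on a legacy cluster goes through the companion object's apply rather than a constructor. A minimal sketch of that shape (unverified here, and note that hadoopConfiguration() is exactly the method Unity Catalog shared access mode blocks):

# py4j converts this Python dict to a java.util.HashMap for the JVM side
params = {"path": "Worktime.xlsx"}
# whitelisted on legacy clusters, blocked under Unity Catalog shared access mode
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
reader = spark._jvm.com.crealytics.spark.excel.WorkbookReader.apply(params, hadoop_conf)
print(reader.sheetNames())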
Expected Behavior
No response
Steps To Reproduce
No response
Environment
Anything else?
No response