Open · b1ackout opened this issue 6 months ago
import pyspark

try:
    return handle_spark_dataframe(self._connection.sql(query))
except pyspark.sql.utils.AnalysisException as e:
    # AnalysisException already carries Spark's concise error message,
    # so print it instead of letting the full stack trace propagate.
    print(e)
except Exception as e:
    # For anything else, print and re-raise so the failure isn't swallowed.
    print(e)
    raise
Would this be a solution?
Did you try the short_errors option? I remember the Spark compatibility came from an external contributor, so I'm unsure whether the short_errors option will work, but if it doesn't, feel free to open a PR.
@edublancas I tested it; it doesn't work on Spark Connect. I'm going to open a PR.
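For context: Spark Connect raises its own exception types from pyspark.errors.exceptions.connect rather than pyspark.sql.utils, so the except clause in the snippet above never matches there. A minimal sketch of a catch that should cover both modes, assuming PySpark >= 3.4 (where the shared base class pyspark.errors.AnalysisException exists):

from pyspark.errors import AnalysisException  # shared base class in PySpark >= 3.4

try:
    return handle_spark_dataframe(self._connection.sql(query))
except AnalysisException as e:
    # Catches both the classic session's exception and Spark Connect's
    # subclass, since both inherit from this base class.
    print(e)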
Can anyone review the PR?
What happens?
There are a lot of complaints that the stack trace is really long and doesn't help identify the error. The solution would be to print just the error message that Spark SQL provides.
E.g., running this query:
The stack trace should be:
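The concrete query and desired trace weren't captured in this thread. As a purely hypothetical illustration, with a made-up table name and an approximate message:

%sql SELECT * FROM nonexistent_table

Desired output: just Spark's one-line error, something like
AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `nonexistent_table` cannot be found.
instead of the hundreds of Py4J/driver frames that currently follow it.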
To Reproduce
After connecting to a Spark cluster, run the code below:
This should output something like this:
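The snippet and its output were also lost from the original post. A minimal reproduction sketch, assuming a SparkSession named spark and JupySQL's Spark connection support (the exact connection syntax may differ):

from pyspark.sql import SparkSession

# Hypothetical local session; the original report used a Spark cluster.
spark = SparkSession.builder.getOrCreate()

%load_ext sql
%sql spark  # hand the existing SparkSession to JupySQL

# Querying a table that doesn't exist raises AnalysisException; with
# JupySQL 0.10.10 this prints a very long stack trace rather than the
# short Spark SQL error message.
%sql SELECT * FROM nonexistent_table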
OS: Linux
JupySQL Version: 0.10.10
Full Name: Athanasios Keramas
Affiliation: -