Closed by flatballer, 5 months ago
Hi!
This problem might be related to this issue: https://github.com/exasol/spark-connector/issues/218
Could you please check your DB configuration? If multiple data nodes are used, make sure the firewall is open between Databricks (both the driver and the executors) and the DB cluster.
Hi! I tried the code with max_nodes=1, with the same result. Network connectivity also looks fine; a port scan against the Exasol host shows:
8563/tcp open unknown
20000/tcp filtered dnp
20001/tcp filtered microsan
20002/tcp filtered commtact-http
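A quick way to double-check reachability from the Spark side is a plain TCP connect test. The sketch below is an illustration only: the hostname and the port list are placeholders, not values from this thread (8563 is the standard Exasol JDBC port; 20000-20002 are included because the scan above shows them filtered).

```python
import socket

def check_tcp(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False

# Placeholder values -- substitute your own Exasol address.
EXASOL_HOST = "exasol.example.internal"
PORTS = [8563, 20000, 20001, 20002]

if __name__ == "__main__":
    for p in PORTS:
        state = "open" if check_tcp(EXASOL_HOST, p, timeout=3.0) else "unreachable"
        print(f"{EXASOL_HOST}:{p} {state}")
```

Since a firewall may treat the Databricks driver and executors differently, the same function can be mapped over a small RDD so it runs on the executors, e.g. `sc.parallelize(PORTS, len(PORTS)).map(lambda p: (p, check_tcp(EXASOL_HOST, p))).collect()`.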
Yes, looks good to me. Thanks for the reply!
Could you please create a support ticket? We have a dedicated process for handling customer issues, and the support team may have better ideas about the cause.
Thanks. I'm trying my luck there. I'll post the solution here if they find one.
Hi team, I can connect to our Exasol from Databricks and get the schema of the table I am trying to read, but whenever I try to read actual records from the table, I get this error:

"org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (172.16.12.159 executor 0): com.exasol.jdbc.ConnectFailed: java.net.SocketTimeoutException: connect timed out"

My query is a simple "SELECT * FROM table".
Is this a bug or user error?
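For context, here is a minimal sketch of the kind of read described above, using Spark's generic JDBC source with the Exasol driver. The host, credentials, and option values are placeholders, not taken from this thread, and the reporter may instead be using the exasol spark-connector, whose options differ:

```python
# Placeholder connection details -- replace with your own.
host = "exasol.example.internal"
port = 8563

# Exasol JDBC URLs take the form jdbc:exa:<host>:<port>.
jdbc_url = f"jdbc:exa:{host}:{port}"

# The read itself needs a live SparkSession with the Exasol JDBC driver
# (com.exasol.jdbc.EXADriver) on the classpath, so it is shown commented out:
# df = (spark.read.format("jdbc")
#       .option("url", jdbc_url)
#       .option("driver", "com.exasol.jdbc.EXADriver")
#       .option("query", "SELECT * FROM table")
#       .option("user", "sys")
#       .option("password", "<password>")
#       .load())
print(jdbc_url)
```

One possibly relevant difference, consistent with the linked issue about firewalls: a single-partition JDBC read goes only through the JDBC port, while the spark-connector's parallel reads open additional connections from each executor to the Exasol data nodes. If a firewall only allows the JDBC port from the driver, the schema fetch can succeed while the executor-side data read times out, which matches the symptom reported here.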