Closed melin closed 1 month ago
@melin, the connector does not log this information because it could contain sensitive customer information. However, there is a way to enable the JDBC driver's own logging through Spark, which will show the generated SQL statements (and other JDBC information) with sensitive values masked:
val df = sqlContext.read
  .format("io.github.spark_redshift_community.spark.redshift")
  .options(rsOptions)                     // your existing Redshift connection options
  .option("jdbc.LogLevel", "6")          // 6 = most verbose driver log level
  .option("jdbc.LogPath", "/dev/stdout") // write the driver log to stdout
  .option("dbtable", "MyTable")
  .load()
Sometimes I need to analyze job performance and see the SQL pushdown conditions the connector generated; they are not easy to find in the Redshift console.
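If the console is too cumbersome, another option is to query Redshift's own query history via the connector's "query" option, since the STL_QUERY system table records the text of recently executed statements. This is only a hedged sketch: it assumes `sqlContext` and `rsOptions` are already set up as in the snippet above, and that your Redshift user has permission to read STL_QUERY (it cannot run without a live cluster, so treat it as illustrative):

```scala
// Sketch: pull the 20 most recent statements Redshift actually executed,
// which includes the SQL that the connector pushed down.
// Assumes sqlContext and rsOptions from the earlier example.
val recentQueries = sqlContext.read
  .format("io.github.spark_redshift_community.spark.redshift")
  .options(rsOptions)
  .option("query",
    "SELECT query, starttime, TRIM(querytxt) AS sql " +
    "FROM stl_query ORDER BY starttime DESC LIMIT 20")
  .load()

recentQueries.show(truncate = false)
```

Filtering this result on a distinctive table or column name from your job is usually enough to spot the pushed-down statement without digging through the console.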