sodadata / soda-core

:zap: Data quality testing for the modern data stack (SQL, Spark, and Pandas) https://www.soda.io
https://go.soda.io/core-docs
Apache License 2.0

Spark partitioned tables #2083

Open tombaeyens opened 4 months ago

tombaeyens commented 4 months ago

We were testing schema validation with the Databricks connection and found a problem with partitioned tables. Soda picks up the rows `# Partition Information` and `# col_name` as column names during the validation (see the first screenshot). We think this happens because of the output of the table's DESCRIBE statement (second screenshot). Is there anything we can change on our side, such as a setting? Or is it a bug on the Soda side that needs to be fixed?

(Screenshots attached: spark-partition-1, spark-partition-2)

The partition information is redundant for schema validation: each partition column already appears in the regular column list and is then repeated under `# Partition Information`.
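For illustration, this is roughly what DESCRIBE returns for a partitioned table; the table and column names below are made up, but the extra `#` rows are what end up being treated as columns:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative only; table and column names are hypothetical. For a table
# partitioned on event_date, DESCRIBE returns something like:
#
#   col_name                 data_type
#   ----------------------   ---------
#   id                       int
#   event_date               date
#   # Partition Information
#   # col_name               data_type
#   event_date               date
#
# A parser that treats every row as a column picks up "# Partition Information"
# and "# col_name" as column names and sees event_date twice.
spark.sql("DESCRIBE schema_name.table_name").show(truncate=False)
```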

tools-soda commented 4 months ago

SAS-3465

tombaeyens commented 4 months ago

Potential fix:

In PySpark, the partition and non-partition columns can be listed directly from the catalog:

```python
partition_columns = [
    col.name
    for col in spark.catalog.listColumns("schema_name.table_name")
    if col.isPartition
]
non_partition_columns = [
    col.name
    for col in spark.catalog.listColumns("schema_name.table_name")
    if not col.isPartition
]
```

(source: https://stackoverflow.com/questions/51540906/how-to-get-the-hive-partition-column-name-using-spark )

It may suffice to apply the fix here:

https://github.com/sodadata/soda-core/blob/main/soda/spark/soda/data_sources/spark_data_source.py#L213

and

https://github.com/sodadata/soda-core/blob/main/soda/spark/soda/data_sources/spark_data_source.py#L355
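As a reference point, here is a minimal sketch of how the partition-metadata rows could be skipped when parsing DESCRIBE output. The helper name, table name, and return shape are assumptions for illustration only and do not reflect the actual functions in spark_data_source.py:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def describe_columns(qualified_table_name: str) -> dict[str, str]:
    """Hypothetical helper: return {column_name: data_type} from DESCRIBE,
    dropping the partition-metadata section that partitioned tables add."""
    columns: dict[str, str] = {}
    for row in spark.sql(f"DESCRIBE {qualified_table_name}").collect():
        name = (row["col_name"] or "").strip()
        # Everything from "# Partition Information" onward repeats columns
        # that are already in the list above, so stop there.
        if name == "# Partition Information":
            break
        if not name or name.startswith("#"):
            continue
        columns[name] = row["data_type"]
    return columns

# Alternatively, the catalog API from the comment above avoids parsing
# DESCRIBE output entirely, since listColumns exposes col.isPartition.
```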