crealytics / spark-excel

A Spark plugin for reading and writing Excel files
Apache License 2.0

[BUG] Filters on partition columns don't work | Spark 3.3.1 | com.crealytics:spark-excel_2.12:3.3.1_0.18.5 #727

Open · gaya3dk2490 opened 1 year ago

gaya3dk2490 commented 1 year ago

Is there an existing issue for this?

Current Behavior

There is some odd behaviour when filtering on partition columns of a DataFrame produced by the Excel reader.

I have some Excel files partitioned in an Azure Storage account, and I am running a simple read from Databricks (Runtime 12.1, Spark 3.3.1).

Example path on the storage account: /landing/excel/version=x/day=x, where version and day become partition columns on read.

I have version=1, version=2, and day=1 as sample partitions.
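In other words, the layout looks roughly like this (the file names are illustrative):

/landing/excel/version=1/day=1/data.xlsx
/landing/excel/version=2/day=1/data.xlsx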

The read below loads 2 rows into the DataFrame df:

val df = spark.read
      .format("excel")              
      .option("dataAddress", dataAddress) 
      .option("header", "true")       
      .option("inferSchema", true)   
      .load(myExcelPath)

Inferred schema:


root
 |-- int_col: integer (nullable = true)
 |-- string_col: string (nullable = true)
 |-- version: integer (nullable = true)
 |-- day: integer (nullable = true)

Now, if you filter the resulting df on version=1, it always returns all rows.

df.filter(col("version") === 1) returns 2 rows (both version=1 and version=2)

I also tried the following variants:

df.filter(col("version") === lit(1)) and df.filter($"version" === 1)

Filtering on a version value that doesn't exist also returns all rows:

df.filter(col("version") === 100) returns 2 rows

Note: filters on regular (non-partition) columns work fine, so the problem appears to be in predicate pushdown for partition columns.
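For reference, a minimal sketch of the observed vs. expected counts (using the df from the read above; the "foo" comparison value is just a placeholder):

import org.apache.spark.sql.functions.col

df.filter(col("version") === 1).count()        // expected: 1, actual: 2
df.filter(col("version") === 100).count()      // expected: 0, actual: 2
df.filter(col("string_col") === "foo").count() // non-partition column: filters correctly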

Expected Behavior

Filtering on a DataFrame partition column should return only the rows from the matching partition.

Steps To Reproduce

Environment

- Spark version: 3.3.1
- Spark-Excel version: 0.18.5
- OS: Mac / Databricks
- Cluster environment: Databricks 12.1 runtime

Anything else?

No response

nightscape commented 1 year ago

Not sure if this is a typo, but AFAIK you need to use === instead of == when comparing columns. The value might also need to be wrapped in lit.
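For reference, the Column-based comparison variants in Scala:

import org.apache.spark.sql.functions.{col, lit}
import spark.implicits._ // needed for the $"..." column syntax

df.filter(col("version") === 1)      // Column === Any lifts 1 to a literal
df.filter(col("version") === lit(1)) // explicit literal
df.filter($"version" === 1)          // $ string interpolator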

gaya3dk2490 commented 1 year ago

@nightscape apologies, that was a typo :) I've edited the original question.

gaya3dk2490 commented 1 year ago

Update:

I downgraded the library to com.crealytics:spark-excel_2.12:3.2.2_0.18.5 and that version has no problems with filters on partition columns!

This is definitely a bug in the latest version for Spark 3.3.1.
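For anyone hitting the same issue, pinning the older version looks something like this when launching a shell (on Databricks, install the same Maven coordinate as a cluster library instead):

spark-shell --packages com.crealytics:spark-excel_2.12:3.2.2_0.18.5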

nightscape commented 1 year ago

Ok, interesting! There might be a change in the API that we need to account for. @gaya3dk2490 if you don't mind, could you skim the Spark changelogs to see if there's something in there regarding predicate push-down? Maybe you can also find a corresponding change in the CSV reader (from which a lot of the code was taken).
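One quick way to see where the filter gets lost is the physical plan; for file-based sources the scan node lists the pushed partition filters:

df.filter(col("version") === 1).explain()
// On a healthy file scan you'd expect something like
//   PartitionFilters: [isnotnull(version#2), (version#2 = 1)]
// (column ids are illustrative); if PartitionFilters is empty,
// the filter is not reaching the source.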

intelligencecompany commented 1 year ago

I used a temporary workaround: save the DataFrame as Parquet and reload it whenever I want to apply a filter:

df.Write().Mode("overwrite").Parquet($"xxx");
df.Unpersist();
df = spark.Read().Parquet($"xxx");
df = df.Filter("condition");
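A rough Scala equivalent of the same round-trip (the "/tmp/xxx" path is a placeholder):

import org.apache.spark.sql.functions.col

// Materialize to Parquet so partition filtering works on the re-read
df.write.mode("overwrite").parquet("/tmp/xxx")

val reloaded = spark.read.parquet("/tmp/xxx")
val filtered = reloaded.filter(col("version") === 1)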