Which issue does this PR close?
When pushdown filters are enabled, the planner pushes a volatile filter such as random() down to the table source, so the filter executes both in the scan (for example, in the Parquet reader) and again in the query engine, which produces incorrect results.
Closes #13268.
Rationale for this change
A volatile filter such as random() returns a different value on every evaluation, so it cannot be evaluated correctly in two different layers: each layer sees different results, and rows are effectively filtered twice.
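The effect can be illustrated with a small simulation (a sketch, not DataFusion code): applying the same volatile predicate in two layers filters the rows twice, so a filter meant to keep about 10% of rows keeps only about 1%.

```python
import random

random.seed(42)
n = 100_000
p = 0.1

# Filter applied once (correct): each row passes with probability p.
once = sum(1 for _ in range(n) if random.random() < p)

# Filter applied in two layers (the bug): random() is re-evaluated
# in each layer, so a row must pass twice, probability p * p.
twice = sum(1 for _ in range(n)
            if random.random() < p and random.random() < p)

print(once / n)   # close to 0.10
print(twice / n)  # close to 0.01
```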
What changes are included in this PR?
An optimizer-rule improvement that prevents volatile filters from being pushed down to the scan
A unit test covering the new behavior
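The core idea of the rule change can be sketched as follows (a hypothetical illustration, not DataFusion's actual API; the names is_volatile and split_pushdown are made up): predicates are partitioned by volatility, only the deterministic ones are handed to the scan, and volatile ones stay in a filter node above it.

```python
# Hypothetical sketch: split predicates by volatility and push only
# the deterministic ones into the scan.
VOLATILE_FUNCS = {"random", "uuid"}  # illustrative set

def is_volatile(expr: str) -> bool:
    # Toy string check; a real optimizer walks the expression tree.
    return any(f + "(" in expr for f in VOLATILE_FUNCS)

def split_pushdown(predicates):
    pushable, kept = [], []
    for p in predicates:
        (kept if is_volatile(p) else pushable).append(p)
    return pushable, kept

pushable, kept = split_pushdown(["id > 5", "random() < 0.1"])
print(pushable)  # ['id > 5']        -> safe to push into the scan
print(kept)      # ['random() < 0.1'] -> evaluated once, above the scan
```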
Are these changes tested?
Unit tests
Regression tests
Manual test
As proposed in the original issue, I tried the alltypes_tiny_pages_plain.parquet sample file, which contains 7300 rows:
set datafusion.execution.parquet.pushdown_filters=true;
create external table data stored as parquet location 'alltypes_tiny_pages_plain.parquet';
Running the query
select COUNT(*) from data WHERE RANDOM() < 0.1;
with datafusion-cli returns 726, which is close to the expected 730 (10% of 7300 rows).
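As a sanity check on that number (my own arithmetic, not from the PR): with 7300 rows each kept with probability 0.1, the count is binomial with mean 730 and standard deviation about 25.6, so 726 is well within one standard deviation of the expectation.

```python
import math

n, p = 7300, 0.1
mean = n * p                        # expected count
std = math.sqrt(n * p * (1 - p))    # binomial standard deviation

observed = 726
z = abs(observed - mean) / std      # distance in standard deviations

print(mean)           # 730.0
print(round(std, 1))  # 25.6
print(round(z, 2))    # 0.16
```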
Are there any user-facing changes?
No breaking changes.