Open jarno-r opened 2 years ago
I'm not sure the integration test is meaningful, since the test seed does not contain the partition column. That also applies to the previous people_csv_partitioned_using test. If 'section' were added to people.csv, both tests could be updated to actually exercise the partitioning.
Thanks @jarno-r!
If 'section' was added to the people.csv, the two tests could be updated to include the partitioning.
Could you say slightly more about that? Would `section` need to be included in the contents of the CSVs themselves? It's already included in the file paths, using Hive-style formatting: https://dbt-external-tables-testing.s3.us-east-2.amazonaws.com/
If people.csv also had a 'section' column, the two tests could be changed to include 'section' in 'compare_columns'. Then both tests would show that the partitions are read correctly.
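A sketch of the change being suggested, assuming the integration tests use dbt_utils' `equality` test with a `compare_columns` argument (the model and column names below are illustrative, not taken from the repo):

```yaml
# schema.yml — hypothetical test config sketch.
# Assumes dbt_utils.equality is the test macro in use; model names
# and the non-partition columns are placeholders.
version: 2

models:
  - name: people_csv_partitioned
    tests:
      - dbt_utils.equality:
          compare_model: ref('people_csv_unpartitioned')
          compare_columns:
            - id
            - first_name
            - last_name
            - section   # the partition column — including it here is what
                        # would verify that partitions are read correctly
```

With 'section' in `compare_columns`, a broken partition read would fail the equality check instead of passing silently.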
@jtcohen6 Could this be merged? Adding the 'section' column to people.csv would somewhat improve the test coverage, but it is not necessary.
This PR has been marked as Stale because it has been open with no activity as of late. If you would like the PR to remain open, please comment on the PR or else it will be closed in 7 days.
This is still relevant. FYI, the integration-snowflake test is failing for reasons that have nothing to do with this PR. It would probably succeed if retriggered.
@jeremyyeo Could this be merged?
Description & motivation
Resolves #126: Running RECOVER PARTITIONS without defining partitions
Adds a 'recover_partitions' option for Spark that runs ALTER TABLE ... RECOVER PARTITIONS even when partitions are not explicitly specified. This makes it possible to use inferred partitions.
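A minimal sketch of how the option might be used in a source definition, assuming it sits under the `external:` block alongside the usual dbt-external-tables properties (the source name, location, and file format here are illustrative assumptions, not taken from the PR diff):

```yaml
# sources.yml — hypothetical usage sketch for the new option.
version: 2

sources:
  - name: raw_events
    schema: analytics
    tables:
      - name: events_external
        external:
          location: "s3://my-bucket/events/"   # hypothetical path
          using: parquet
          # No `partitions:` block — partition columns are inferred from
          # the Hive-style directory layout (e.g. section=a/).
          recover_partitions: true  # triggers ALTER TABLE ... RECOVER PARTITIONS
```

The point of the flag is that, previously, RECOVER PARTITIONS only ran when a `partitions:` config was present; with `recover_partitions: true` it runs regardless, so Spark can discover the inferred partitions on its own.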
Checklist