Is this a new bug in dbt-spark?
[X] I have searched the existing issues, and I could not find an existing issue for this bug
Current Behavior
When called with a part index that is out of bounds and ANSI mode enabled (spark.sql.ansi.enabled=true), the split_part macro raises an exception.
Expected Behavior
Per the tests in BaseSplitPart in the adapter tests, the expectation is that this macro can be invoked with part indexes greater than the number of parts generated without throwing an exception; specifically, this row in the seed:
,|,,,,
We can accommodate this behavior by using get() rather than indexing the array, but only in Spark 3.4.0 or later.
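As a hedged sketch of the difference (the actual macro SQL may differ, and the string and delimiter here are illustrative), get(), added in Spark 3.4.0, returns NULL for an out-of-bounds index where bracket indexing would throw under ANSI mode:

```sql
-- get() (Spark 3.4.0+) returns NULL for out-of-bounds indexes instead of
-- raising, matching the behavior the BaseSplitPart tests expect.
SELECT get(split('a|b', '[|]'), 4);  -- NULL, even with ANSI mode on
```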
Steps To Reproduce
1. Set spark.sql.ansi.enabled=true.
2. Invoke split_part with an out-of-bounds part index.
3. Observe the exception (a minimal reproduction is sketched below).
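A minimal reproduction in the spark-sql shell, assuming the macro's generated SQL indexes the result of split() (the literal and delimiter here are illustrative):

```sql
SET spark.sql.ansi.enabled = true;

-- 'a|b' splits into two parts; indexing part 5 is out of bounds and, with
-- ANSI mode on, raises an out-of-bounds error rather than returning NULL.
SELECT split('a|b', '[|]')[4];
```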
Relevant log output
No response
Environment
This issue has been present for a while, but I'm only hitting it now due to new defaults in a Databricks environment I was asked to test against.
Additional Context
No response