Closed — evantahler closed this issue 1 year ago
In https://github.com/airbytehq/airbyte/pull/25446/ I tried to see if we could just bump the driver version and everything would be fine... and it wasn't :(
The bug might be in our test harness, when we are reading the data back to confirm what we wrote.
Decision - Let's remove normalization + snowflake and revisit.
This issue is back on the table due to potential bugs with file uploading. See https://github.com/airbytehq/oncall/issues/1493 and #bug-snowflake-driver for details.
From @ryankfu
Hypothesizing here that our retry logic may have a :bug: https://github.com/airbytehq/airbyte/blob/0f39af75b63d54beed9cfbeec2c12f7caa40d2e6[…]estination/snowflake/SnowflakeInternalStagingSqlOperations.java. Another option that could be tested first is to set a queryTimeout parameter in the JDBC params for Snowflake (this is probably the fastest option, although reproducing the failure would be hard since what triggers this infinite retry isn't abundantly clear).
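A minimal sketch of the queryTimeout idea, assuming the Snowflake JDBC driver accepts a `queryTimeout` connection property in seconds (0 = no timeout); the class and helper names here are hypothetical, not the actual destination code:

```java
import java.util.Properties;

public class SnowflakeJdbcParams {
    // Hypothetical helper: builds JDBC connection properties, adding a
    // queryTimeout so a hung statement errors out instead of retrying forever.
    public static Properties buildProperties(String user, String warehouse,
                                             int queryTimeoutSeconds) {
        Properties props = new Properties();
        props.put("user", user);
        props.put("warehouse", warehouse);
        // Abort any query that runs longer than this many seconds.
        props.put("queryTimeout", String.valueOf(queryTimeoutSeconds));
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProperties("AIRBYTE_USER", "AIRBYTE_WH", 3600);
        System.out.println(props.getProperty("queryTimeout"));
        // A real connection would then be opened with something like:
        // DriverManager.getConnection(
        //     "jdbc:snowflake://<account>.snowflakecomputing.com/", props);
    }
}
```

The same effect could also be reached per-statement via the standard `java.sql.Statement#setQueryTimeout(int)`, if a connection-level parameter turns out not to be honored.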
Just for posterity, here is a stack trace from one of the failed test cases in https://github.com/airbytehq/airbyte/pull/25446
Note: we are currently running a fork of the Snowflake JDBC driver which includes a fix they have yet to merge. We will likely need to do the same for later versions. @edgao will move this fork to the airbyte repo shortly.
https://github.com/airbytehq/snowflake-jdbc now exists! For reference:
Issue opened with snowflake: https://github.com/snowflakedb/snowflake-jdbc/issues/1431
Hey Evan, I have opened an issue in the snowflake-jdbc repository with my findings. I have tried debugging the JDBC driver, but the error doesn't really make sense since nothing substantial was changed between 3.13.19 and 3.13.20, where the error starts occurring. I will wait for their input to see if this is expected behaviour going forward and whether we will be required to implement our own mappers, and will act accordingly.
@itaseskii looks like Snowflake responded!
@itaseskii - it's been a minute! Any update?
We need to get back to working on this. Un-assigning from @itaseskii as we haven't heard back from him, and moving back to our backlog.