Closed by willdonnelly 1 year ago
Since we don't (to the best of my knowledge) use temp-data-plane in this way anymore, and we now understand that BigQuery concurrent update errors are something we have to deal with and retry (see https://github.com/estuary/connectors/pull/742), I'm going to close this out, since I don't think it's currently an issue.
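For context, a minimal sketch of the retry-on-conflict approach that PR #742 implies, using the cloud.google.com/go/bigquery client. The error matching, attempt count, backoff, and statement are illustrative assumptions, not the connector's actual code:

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	"cloud.google.com/go/bigquery"
)

// runWithRetry runs a statement and retries when BigQuery reports a
// concurrent-update conflict against the target table. The string match,
// attempt count, and backoff here are placeholders for illustration only.
func runWithRetry(ctx context.Context, client *bigquery.Client, sql string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		job, err := client.Query(sql).Run(ctx)
		if err == nil {
			status, waitErr := job.Wait(ctx)
			if waitErr != nil {
				err = waitErr
			} else {
				err = status.Err()
			}
		}
		if err == nil {
			return nil
		}
		// Raced transactions against the same table surface as a
		// "concurrent update" error, which is expected and retryable.
		if !strings.Contains(err.Error(), "concurrent update") {
			return err
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * time.Second)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	ctx := context.Background()
	// "my-project", "my_dataset", and "my_table" are hypothetical names.
	client, err := bigquery.NewClient(ctx, "my-project")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	if err := runWithRetry(ctx, client, "DELETE FROM my_dataset.my_table WHERE true", 5); err != nil {
		panic(err)
	}
}
```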
We also now have tests for the BigQuery materialization which can be run locally, use flowctl to drive the connector, and clean up after themselves: https://github.com/estuary/connectors/blob/main/tests/materialize/materialize-bigquery/cleanup.sh
But LMK if I missed anything and we can re-open.
When deploying with --wait-and-cleanup to a local temp-data-plane, the BigQuery materialization should delete the tables it created on exit. But, at least in my specific situation, it isn't doing this and the tables just linger. This in turn means that subsequent attempts to run the same catalog will silently fail to materialize any new data. The deploy exits with the following error message (edited lightly for readability):
This error message appears to be coming from the code responsible for deleting the materialized tables on exit, which would explain why those tables aren't deleted as expected. However, according to Johnny: "this smells like a bigger problem [...] because raced transactions against the table are expected to happen".
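To make the expected exit behavior concrete, here is a rough sketch of a cleanup pass that drops the created tables via the BigQuery Go client. The project, dataset, and table names are hypothetical, and this is only an illustration of the teardown step that appears to be failing, not the connector's actual implementation:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/bigquery"
)

// cleanupTables drops every table the materialization created during the run.
// In this issue, the real teardown hits an error (possibly a raced
// transaction) and the tables are left behind.
func cleanupTables(ctx context.Context, client *bigquery.Client, dataset string, tables []string) {
	for _, name := range tables {
		if err := client.Dataset(dataset).Table(name).Delete(ctx); err != nil {
			// A failure here is exactly the symptom described above: the
			// table lingers and a later run of the same catalog reuses it.
			log.Printf("failed to drop %s.%s: %v", dataset, name, err)
		}
	}
}

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Hypothetical table names standing in for whatever the catalog created.
	cleanupTables(ctx, client, "my_dataset", []string{"example_collection", "example_checkpoints"})
}
```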
I have not dug any further into the issue for now, just filing a tracking bug so it doesn't get forgotten.