Before this PR, the BigLake integration did not work at all because of a mistake in how we layered the sbt dependencies. This PR fixes that mistake.
The loader will no longer create the BigLake database during initialisation. It is a better separation of concerns for the loader to require that the database already exists before it first runs. A user can create the BigLake database in advance, e.g. with Terraform, as sketched below. The loader still creates the table, but not the database.
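For example, here is a minimal Terraform sketch for creating the database ahead of the loader's first run. It assumes the google provider's google_biglake_catalog and google_biglake_database resources; the resource labels, the GCS bucket, and every <placeholder> are illustrative, not values taken from this project:

# Illustrative sketch: replace each <placeholder> with your own values,
# and assume the google provider is already configured.
resource "google_biglake_catalog" "catalog" {
  name     = "<biglake_catalog_name>"
  location = "<region_name>"
}

resource "google_biglake_database" "database" {
  name    = "<biglake_database_name>"
  catalog = google_biglake_catalog.catalog.id
  type    = "HIVE"

  hive_options {
    # GCS prefix under which the database's metadata is stored
    location_uri = "gs://<bucket_name>/<biglake_database_name>"
  }
}

The catalog and database names used here must match the ones in the loader's configuration, since the loader still creates its table inside this database.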
The loader will no longer create the BigQuery external table. It is better if the loader interacts only with BigLake and does not require any permissions to interact with BigQuery. A user can create the BigQuery external table manually by running:
CREATE EXTERNAL TABLE `<bq_dataset_name>.<bq_table_name>`
WITH CONNECTION `<region_name>.<connection_name>`
OPTIONS (
  format = 'ICEBERG',
  uris = ['blms://projects/<project_name>/locations/<region_name>/catalogs/<biglake_catalog_name>/databases/<biglake_database_name>/tables/<biglake_table_name>']
)