databrickslabs / ucx

Automated migrations to Unity Catalog

Test failure: `test_table_migration_job` #1336

Closed: github-actions[bot] closed this issue 6 months ago

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: AssertionError: ucx_tzyz8 and ucx_tn4m1 not found in ucx_clxzs.migrate_0qnbp (15m48.791s)

```
AssertionError: ucx_tzyz8 and ucx_tn4m1 not found in ucx_clxzs.migrate_0qnbp
assert False
[gw8] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python
07:03 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_0qnbp: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_0qnbp
07:03 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_0qnbp', metastore_id=None, name='migrate_0qnbp', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:03 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_0qnbp.ucx_tzyz8: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_0qnbp/ucx_tzyz8
07:03 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_0qnbp.ucx_tzyz8', metastore_id=None, name='ucx_tzyz8', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_0qnbp', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_0qnbp/ucx_tzyz8', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:03 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/o8hn
07:03 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_0qnbp.ucx_tn4m1: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_0qnbp/ucx_tn4m1
07:03 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_0qnbp.ucx_tn4m1', metastore_id=None, name='ucx_tn4m1', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_0qnbp', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/o8hn', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:03 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712646231665, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clxzs', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clxzs', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712646231665, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d')
07:03 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_clxzs.migrate_0qnbp: https://DATABRICKS_HOST/explore/data/ucx_clxzs/migrate_0qnbp
07:03 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_clxzs', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clxzs.migrate_0qnbp', metastore_id=None, name='migrate_0qnbp', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:10 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist.
07:10 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration
07:10 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data.
07:10 INFO [databricks.labs.ucx.install] Fetching installations...
07:10 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy.
07:10 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+3720240409071043
07:10 INFO [databricks.labs.ucx.install] Creating dashboards...
07:10 INFO [databricks.labs.ucx.installer.mixins] Fetching warehouse_id from a config
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Reading step folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/views...
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Reading step folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment...
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/estimates...
07:10 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Estimates)...
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 01_0_group_migration.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 01_0_group_migration.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 00_0_metastore_assignment.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 00_0_metastore_assignment.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 02_0_data_modeling.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 02_0_data_modeling.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 03_0_data_migration.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 03_0_data_migration.md because it's a text widget
07:10 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/main...
07:10 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Main)...
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 10___data_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 10___data_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 15___storage_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 15___storage_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 40___last_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 40___last_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 20___compute_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 20___compute_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 00___assessment_overview.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 00___assessment_overview.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 05___findings_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 05___findings_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 30_0_job_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 30_0_job_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/CLOUD_ENV...
07:11 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Azure)...
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/interactive...
07:11 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Interactive)...
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 00_0_interactive.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 00_0_interactive.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 02_0_cluster_summary.md because it's a text widget
07:11 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 02_0_cluster_summary.md because it's a text widget
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups
07:11 INFO [databricks.labs.ucx.installer.mixins] Fetching warehouse_id from a config
07:11 INFO [databricks.labs.ucx.installer.mixins] Fetching warehouse_id from a config
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental
07:11 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=099-destroy-schema
07:11 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps.
07:12 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/472536245561955
07:19 DEBUG [databricks.labs.ucx.installer.workflows] Validating migrate-tables workflow: https://DATABRICKS_HOST#job/472536245561955
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 make_dbfs_data_copy fixtures
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] removing make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/o8hn
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 table fixtures
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_0qnbp.ucx_tzyz8', metastore_id=None, name='ucx_tzyz8', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_0qnbp', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_0qnbp/ucx_tzyz8', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_0qnbp.ucx_tn4m1', metastore_id=None, name='ucx_tn4m1', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_0qnbp', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/o8hn', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 schema fixtures
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_0qnbp', metastore_id=None, name='migrate_0qnbp', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_clxzs', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clxzs.migrate_0qnbp', metastore_id=None, name='migrate_0qnbp', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712646231665, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clxzs', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clxzs', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712646231665, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d')
07:19 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while catalog CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712646231665, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clxzs', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clxzs', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712646231665, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') teardown: Catalog 'ucx_clxzs' does not exist.
07:19 INFO [databricks.labs.ucx.install] Deleting UCX v0.21.1+3720240409071931 from https://DATABRICKS_HOST
07:19 INFO [databricks.labs.ucx.install] Deleting inventory database ucx_SjrBT_migrate_inventory
07:19 INFO [databricks.labs.ucx.install] Deleting jobs
07:19 INFO [databricks.labs.ucx.install] Deleting validate-groups-permissions job_id=898398201557537.
07:19 INFO [databricks.labs.ucx.install] Deleting migrate-tables job_id=472536245561955.
07:19 INFO [databricks.labs.ucx.install] Deleting migrate-groups job_id=664429489893723.
07:19 INFO [databricks.labs.ucx.install] Deleting assessment job_id=89361315925817.
07:19 INFO [databricks.labs.ucx.install] Deleting remove-workspace-local-backup-groups job_id=253602479288181.
07:19 INFO [databricks.labs.ucx.install] Deleting migrate-tables-in-mounts-experimental job_id=229136167664020.
07:19 INFO [databricks.labs.ucx.install] Deleting migrate-groups-experimental job_id=1083786513789162.
07:19 INFO [databricks.labs.ucx.install] Deleting 099-destroy-schema job_id=458666621610067.
07:19 INFO [databricks.labs.ucx.install] Deleting cluster policy
07:19 INFO [databricks.labs.ucx.install] Deleting secret scope
07:19 INFO [databricks.labs.ucx.install] UnInstalling UCX complete
[gw8] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python
```

Running from nightly #22
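
The assertion above means the migrate-tables workflow finished without the two source tables appearing in the target Unity Catalog schema. The failing check looks like a plain membership test over the target schema. Below is a minimal sketch of that kind of verification using the databricks-sdk; the helper name and wiring are illustrative, not the actual test code:

```python
from databricks.sdk import WorkspaceClient


def migrated_tables_present(ws: WorkspaceClient, catalog: str, schema: str, expected: set[str]) -> bool:
    """Return True when every expected table name shows up in the UC target schema."""
    # List whatever landed in the target schema after the migration job ran.
    found = {table.name for table in ws.tables.list(catalog_name=catalog, schema_name=schema)}
    return expected <= found


# Hypothetical reproduction of the failing assertion from the log above:
# ws = WorkspaceClient()
# assert migrated_tables_present(ws, "ucx_clxzs", "migrate_0qnbp", {"ucx_tzyz8", "ucx_tn4m1"})
```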

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: databricks.sdk.errors.platform.InvalidParameterValue: Job cluster 'main' is not defined in field 'job_clusters'. (1m34.29s)

```
databricks.sdk.errors.platform.InvalidParameterValue: Job cluster 'main' is not defined in field 'job_clusters'.
[gw4] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_lhots: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_lhots
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_lhots', metastore_id=None, name='migrate_lhots', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_lhots.ucx_tqpo6: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_lhots/ucx_tqpo6
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_lhots.ucx_tqpo6', metastore_id=None, name='ucx_tqpo6', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_lhots', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_lhots/ucx_tqpo6', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/rua8
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_lhots.ucx_tdqkz: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_lhots/ucx_tdqkz
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_lhots.ucx_tdqkz', metastore_id=None, name='ucx_tdqkz', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_lhots', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/rua8', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712819130185, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cxnyh', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cxnyh', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712819130185, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d')
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cxnyh.migrate_lhots: https://DATABRICKS_HOST/explore/data/ucx_cxnyh/migrate_lhots
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cxnyh', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cxnyh.migrate_lhots', metastore_id=None, name='migrate_lhots', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:05 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist.
07:05 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration
07:05 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data.
07:05 INFO [databricks.labs.ucx.install] Fetching installations...
07:05 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy.
07:05 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+4220240411070535
07:05 INFO [databricks.labs.ucx.install] Creating dashboards...
07:05 INFO [databricks.labs.ucx.installer.mixins] Fetching warehouse_id from a config
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Reading step folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/views...
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Reading step folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment...
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/estimates...
07:05 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Estimates)...
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 01_0_group_migration.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 01_0_group_migration.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 00_0_metastore_assignment.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 00_0_metastore_assignment.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 02_0_data_modeling.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 02_0_data_modeling.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 03_0_data_migration.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 03_0_data_migration.md because it's a text widget
07:05 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/main...
07:05 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Main)...
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 10___data_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 10___data_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 15___storage_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 15___storage_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 40___last_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 40___last_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 20___compute_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 20___compute_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 00___assessment_overview.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 00___assessment_overview.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 05___findings_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 05___findings_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 30_0_job_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 30_0_job_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/CLOUD_ENV...
07:06 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Azure)...
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Reading dashboard folder /home/runner/work/ucx/ucx/src/databricks/labs/ucx/queries/assessment/interactive...
07:06 INFO [databricks.labs.ucx.framework.dashboards] Creating dashboard [UCX] UCX Assessment (Interactive)...
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 00_0_interactive.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 00_0_interactive.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping query 02_0_cluster_summary.md because it's a text widget
07:06 DEBUG [databricks.labs.ucx.framework.dashboards] Skipping viz 02_0_cluster_summary.md because it's a text widget
07:06 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups
07:06 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups
07:06 INFO [databricks.labs.ucx.installer.mixins] Fetching warehouse_id from a config
07:06 INFO [databricks.labs.ucx.installer.mixins] Fetching warehouse_id from a config
07:06 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment
07:06 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental
07:06 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 make_dbfs_data_copy fixtures
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] removing make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/rua8
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 table fixtures
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_lhots.ucx_tqpo6', metastore_id=None, name='ucx_tqpo6', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_lhots', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_lhots/ucx_tqpo6', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_lhots.ucx_tdqkz', metastore_id=None, name='ucx_tdqkz', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_lhots', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/rua8', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 schema fixtures
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_lhots', metastore_id=None, name='migrate_lhots', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cxnyh', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cxnyh.migrate_lhots', metastore_id=None, name='migrate_lhots', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures
07:06 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712819130185, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cxnyh', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cxnyh', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712819130185, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d')
[gw4] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python
```

Running from nightly #24
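
This run failed earlier, while submitting the workflow: the Jobs API rejects any job whose tasks reference a `job_cluster_key` with no matching entry in the job-level `job_clusters` list, which is exactly what the `InvalidParameterValue` above reports for the 'main' cluster. A hedged sketch of the required pairing with the databricks-sdk; the job name, cluster sizing, and notebook path are placeholders:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

ws = WorkspaceClient()
job = ws.jobs.create(
    name="ucx-example-job",  # placeholder name
    # Every job_cluster_key referenced by a task must be declared here;
    # dropping this list (or the 'main' entry) yields InvalidParameterValue.
    job_clusters=[
        jobs.JobCluster(
            job_cluster_key="main",
            new_cluster=compute.ClusterSpec(
                spark_version="14.3.x-scala2.12",  # placeholder runtime
                node_type_id="Standard_DS3_v2",  # placeholder node type
                num_workers=1,
            ),
        ),
    ],
    tasks=[
        jobs.Task(
            task_key="migrate_tables",
            job_cluster_key="main",  # must match a JobCluster declared above
            notebook_task=jobs.NotebookTask(notebook_path="/Workspace/example"),  # placeholder
        ),
    ],
)
```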

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: databricks.labs.blueprint.parallel.ManyError: Detected 3 failures: NotFound: The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. (20m56.946s)
```
databricks.labs.blueprint.parallel.ManyError: Detected 3 failures: NotFound: The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog.
[gw6] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_iqycf: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_iqycf', metastore_id=None, name='migrate_iqycf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_tfckx: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_tfckx
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tfckx', metastore_id=None, name='ucx_tfckx', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_iqycf/ucx_tfckx', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_tcefi: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_tcefi
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tcefi', metastore_id=None, name='ucx_tcefi', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None)
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_tnxmo: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_tnxmo
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tnxmo', metastore_id=None, name='ucx_tnxmo', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_iqycf.ucx_tfckx', view_dependencies=None)
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_t47z5: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_t47z5
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_t47z5', metastore_id=None, name='ucx_t47z5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_iqycf.ucx_tnxmo', view_dependencies=None)
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712991941731, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cjb4e', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712991941731, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d')
07:05 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cjb4e.migrate_iqycf: https://DATABRICKS_HOST/explore/data/ucx_cjb4e/migrate_iqycf
07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cjb4e', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e.migrate_iqycf', metastore_id=None, name='migrate_iqycf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None)
07:05 DEBUG [tests.integration.test_installation] Creating new installation...
07:05 DEBUG [tests.integration.test_installation] Waiting for clusters to start...
07:11 DEBUG [tests.integration.test_installation] Waiting for clusters to start...
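# --- editor's note, not part of the captured log: the fixture entries above
# set up a throwaway HMS schema holding one managed table (under
# dbfs:/user/hive/warehouse), one external table on the test mount, and two
# views. A minimal sketch of that setup, assuming only a `sql` callable that
# executes a statement against the workspace; every name below is illustrative
# and not the actual databricks.labs.ucx.mixins.fixtures API. ---
import random
import string

def _random_name(prefix: str) -> str:
    # mirrors the random suffixes seen above (migrate_iqycf, ucx_tfckx, ...)
    return prefix + "".join(random.choices(string.ascii_lowercase + string.digits, k=5))

def make_migration_fixtures(sql):
    schema = f"hive_metastore.{_random_name('migrate_')}"
    managed = f"{schema}.{_random_name('ucx_t')}"
    external = f"{schema}.{_random_name('ucx_t')}"
    view = f"{schema}.{_random_name('ucx_t')}"
    sql(f"CREATE SCHEMA {schema}")
    sql(f"CREATE TABLE {managed} (id INT) USING DELTA")  # lands under dbfs:/user/hive/warehouse
    sql(f"CREATE TABLE {external} (id INT) USING DELTA LOCATION 'dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro'")
    sql(f"CREATE VIEW {view} AS SELECT * FROM {managed}")
    return schema, managed, external, view
# --- end of editor's note ---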
07:11 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist.
07:11 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration
07:11 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data.
07:11 INFO [databricks.labs.ucx.install] Fetching installations...
07:11 INFO [databricks.labs.ucx.installer.policy] Setting up an external metastore
07:11 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy.
07:11 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6120240413071147
07:11 INFO [databricks.labs.ucx.install] Creating ucx schemas...
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=failing
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups
07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment
07:12 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps.
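# --- editor's note, not part of the captured log: "Creating ucx schemas..."
# above is the step where the installer materialises the inventory database
# that every workflow run below reads from and writes to. A sketch of the
# idea, assuming a SparkSession `spark`; the installer's actual code path may
# differ, and the default name below is simply the one this run used. ---
def ensure_inventory_schema(spark, inventory_database: str = "ucx_shucy_migrate_inventory") -> None:
    # idempotent on purpose: the migrate-* tasks below fail with
    # [SCHEMA_NOT_FOUND] because this schema is missing when they crawl
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS hive_metastore.{inventory_database}")
# --- end of editor's note ---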
07:12 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/200542435545718
07:26 INFO [databricks.labs.ucx.installer.workflows] ---------- REMOTE LOGS --------------
07:26 INFO [databricks.labs.ucx:migrate_external_tables_sync] UCX v0.21.1+6120240413071215 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_external_tables_sync.log
07:26 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "TEST_DEFAULT_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "TEST_DEFAULT_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.15", < "instance_id": "ea792f3a215c45ebb43a12403106a88c", < "node_attributes": { < "is_spot": false < }, < "node_id": "250d38974776499ab48674ba6cab3312", < "private_ip": "10.139.64.15", < "public_dns": "20.36.172.63", < "start_timestamp": 1712991965051 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1712991999372, < "last_restarted_time": 1712992044424, < "last_state_loss_time": 1712992044397, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 7105520327459125546, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < }
07:26 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... (3 additional elements)" < ] < }
07:26 DEBUG [databricks.labs.blueprint.installation:migrate_external_tables_sync] Loading list from CLOUD_ENV_storage_account_info.csv
07:26 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream]
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] fetching grants inventory
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.grants
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] crawling new batch for grants
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES FROM hive_metastore
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] fetching tables inventory
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.tables
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] crawling new batch for tables
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.TEST_SCHEMA] listing tables
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext] listing tables
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext
07:26 DEBUG [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Starting 5 tasks in 8 threads
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.rectangles] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_comment] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_copy] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_partitioned] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned
07:26 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] listing tables in hive_metastore 5/5, rps: 0.164/sec
07:26 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:30.562036
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] found 5 new records for tables
07:26 ERROR [databricks.labs.ucx:migrate_external_tables_sync] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_external_tables_sync.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at 
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339)
07:26 DEBUG [databricks:migrate_external_tables_sync] Task crash details
Traceback (most recent call last):
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger
    current_task(ctx)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 18, in migrate_external_tables_sync
    ctx.tables_migrator.migrate_tables(what=What.EXTERNAL_SYNC, acl_strategy=[AclMigrationWhat.LEGACY_TACL])
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables
    all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot()
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot
    loaded_records = list(loader())
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl
    for table in self._tc.snapshot():
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot
    self._append_records(loaded_records)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records
    self._backend.save_table(self.full_name, items, self._klass, mode="append")
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table
    df.write.saveAsTable(full_name, mode=mode)
  File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable
    self._spark.client.execute_command(
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command
    data, _, _, _, properties = self._execute_and_fetch(req, observations or {})
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch
    for response in self._execute_and_fetch_as_iterator(req, observations):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator
    self._handle_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error
    self._handle_rpc_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error
    raise convert_exception(
pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SQLSTATE: 42704
JVM stacktrace: (identical to the JVM stacktrace above; elided here)
07:26 INFO [databricks.labs.ucx:migrate_dbfs_root_delta_tables] UCX v0.21.1+6120240413071215 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_dbfs_root_delta_tables.log
07:26 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/clusters/list < 200 OK (response body identical to the clusters/list response above; elided here)
07:26 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.1/unity-catalog/external-locations < 200 OK (response body identical to the external-locations response above; elided here)
07:26 DEBUG [databricks.labs.blueprint.installation:migrate_dbfs_root_delta_tables] Loading list from CLOUD_ENV_storage_account_info.csv
07:26 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream]
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] fetching grants inventory
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.grants
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] crawling new batch for grants
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES FROM hive_metastore
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] fetching tables inventory
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.tables
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] crawling new batch for tables
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.TEST_SCHEMA] listing tables
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext] listing tables
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext
07:26 DEBUG [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Starting 5 tasks in 8 threads
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.rectangles] fetching table metadata
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_comment] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_copy] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy
07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_partitioned] fetching table metadata
07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned
07:26 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] listing tables in hive_metastore 5/5, rps: 0.165/sec
07:26 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:30.393317
07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] found 5 new records for tables
07:26 ERROR [databricks.labs.ucx:migrate_dbfs_root_delta_tables] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_dbfs_root_delta_tables.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SQLSTATE: 42704
JVM stacktrace: (identical to the JVM stacktrace above; elided here)
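# --- editor's note, not part of the captured log: both task failures above
# reduce to the same pattern: the crawler lists the 5 tables fine, then tries
# to append its snapshot with DataFrameWriter.saveAsTable, which raises
# AnalysisException([SCHEMA_NOT_FOUND]) because the inventory schema no longer
# exists. The crawl SQL says `ucx_SHUCY_migrate_inventory` while the error says
# `ucx_shucy_migrate_inventory`; Hive metastore schema names are
# case-insensitive (stored lower-cased), so that difference alone is expected.
# Below is a sketch of the failing write plus a defensive retry, assuming a
# SparkSession `spark`; function and column names are illustrative. ---
from pyspark.errors import AnalysisException

def append_snapshot(spark, rows, full_name: str) -> None:
    df = spark.createDataFrame(rows, "catalog STRING, database STRING, name STRING")
    try:
        df.write.saveAsTable(full_name, mode="append")  # the call that fails above
    except AnalysisException:
        # recreate the missing inventory schema once, then retry the append
        spark.sql(f"CREATE SCHEMA IF NOT EXISTS {full_name.rsplit('.', 1)[0]}")
        df.write.saveAsTable(full_name, mode="append")

# e.g. append_snapshot(spark, [("hive_metastore", "test_ext", "student")],
#                      "hive_metastore.ucx_SHUCY_migrate_inventory.tables")
# --- end of editor's note ---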
07:26 DEBUG [databricks:migrate_dbfs_root_delta_tables] Task crash details
Traceback (most recent call last):
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger
    current_task(ctx)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 27, in migrate_dbfs_root_delta_tables
    ctx.tables_migrator.migrate_tables(what=What.DBFS_ROOT_DELTA, acl_strategy=[AclMigrationWhat.LEGACY_TACL])
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables
    all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot()
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot
    loaded_records = list(loader())
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl
    for table in self._tc.snapshot():
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot
    self._append_records(loaded_records)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records
    self._backend.save_table(self.full_name, items, self._klass, mode="append")
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table
    df.write.saveAsTable(full_name, mode=mode)
  File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable
    self._spark.client.execute_command(
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command
    data, _, _, _, properties = self._execute_and_fetch(req, observations or {})
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch
    for response in self._execute_and_fetch_as_iterator(req, observations):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator
    self._handle_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error
    self._handle_rpc_error(error)
  File
"/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at 
org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:26 INFO [databricks.labs.ucx.installer.workflows] ---------- END REMOTE LOGS ---------- 07:05 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_iqycf: 
https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_iqycf', metastore_id=None, name='migrate_iqycf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_tfckx: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_tfckx 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tfckx', metastore_id=None, name='ucx_tfckx', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_iqycf/ucx_tfckx', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro 07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_tcefi: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_tcefi 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tcefi', metastore_id=None, name='ucx_tcefi', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_tnxmo: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_tnxmo 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tnxmo', metastore_id=None, name='ucx_tnxmo', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location=None, 
table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_iqycf.ucx_tfckx', view_dependencies=None) 07:05 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_iqycf.ucx_t47z5: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_iqycf/ucx_t47z5 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_t47z5', metastore_id=None, name='ucx_t47z5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_iqycf.ucx_tnxmo', view_dependencies=None) 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712991941731, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cjb4e', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712991941731, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:05 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cjb4e.migrate_iqycf: https://DATABRICKS_HOST/explore/data/ucx_cjb4e/migrate_iqycf 07:05 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cjb4e', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e.migrate_iqycf', metastore_id=None, name='migrate_iqycf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:05 DEBUG [tests.integration.test_installation] Creating new installation... 07:05 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:11 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:11 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist. 07:11 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:11 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:11 INFO [databricks.labs.ucx.install] Fetching installations... 07:11 INFO [databricks.labs.ucx.installer.policy] Setting up an external metastore 07:11 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 
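# ---------------------------------------------------------------------------
# Note: a minimal sketch (not ucx source code) of the failure mode in the
# remote logs above, assuming a live Spark session; the schema name is taken
# from the error message and the sample row is illustrative. Per the
# traceback, the crawlers snapshot into
# hive_metastore.ucx_<suffix>_migrate_inventory via
# df.write.saveAsTable(..., mode="append") (lsql backends.py, line 223), so
# the append raises AnalysisException [SCHEMA_NOT_FOUND] whenever that
# inventory schema was dropped or never created.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
inventory = "hive_metastore.ucx_shucy_migrate_inventory"  # name from the error above
rows = [("hive_metastore", "migrate_0qnbp", "ucx_tzyz8")]  # illustrative record
df = spark.createDataFrame(rows, "catalog string, database string, name string")
# One possible guard: without the schema in place, the append below fails
# with the same [SCHEMA_NOT_FOUND] seen in both workflow tasks.
spark.sql(f"CREATE SCHEMA IF NOT EXISTS {inventory}")
df.write.saveAsTable(f"{inventory}.tables", mode="append")
# ---------------------------------------------------------------------------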
07:11 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6120240413071147 07:11 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=failing 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment 07:12 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps. 07:12 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/200542435545718 07:26 INFO [databricks.labs.ucx.installer.workflows] ---------- REMOTE LOGS -------------- 07:26 INFO [databricks.labs.ucx:migrate_external_tables_sync] UCX v0.21.1+6120240413071215 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_external_tables_sync.log 07:26 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "TEST_DEFAULT_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "TEST_DEFAULT_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.15", < "instance_id": "ea792f3a215c45ebb43a12403106a88c", < "node_attributes": { < "is_spot": false < }, < "node_id": "250d38974776499ab48674ba6cab3312", < "private_ip": "10.139.64.15", < "public_dns": "20.36.172.63", < "start_timestamp": 1712991965051 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { 
< "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1712991999372, < "last_restarted_time": 1712992044424, < "last_state_loss_time": 1712992044397, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 7105520327459125546, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < } 07:26 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... (3 additional elements)" < ] < } 07:26 DEBUG [databricks.labs.blueprint.installation:migrate_external_tables_sync] Loading list from CLOUD_ENV_storage_account_info.csv 07:26 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] fetching grants inventory 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.grants 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] crawling new batch for grants 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES FROM hive_metastore 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] fetching tables inventory 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.tables 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] crawling new batch for tables 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.TEST_SCHEMA] listing tables 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext] listing tables 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 07:26 DEBUG 
[databricks.labs.blueprint.parallel:migrate_external_tables_sync] Starting 5 tasks in 8 threads 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.rectangles] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_comment] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_copy] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_partitioned] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 07:26 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] listing tables in hive_metastore 5/5, rps: 0.164/sec 07:26 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:30.562036 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] found 5 new records for tables 07:26 ERROR [databricks.labs.ucx:migrate_external_tables_sync] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_external_tables_sync.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at ... (stack trace identical to the one above) 07:26 DEBUG [databricks:migrate_external_tables_sync] Task crash details Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger current_task(ctx) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 18, in migrate_external_tables_sync ctx.tables_migrator.migrate_tables(what=What.EXTERNAL_SYNC, acl_strategy=[AclMigrationWhat.LEGACY_TACL]) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in
migrate_tables all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot() File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot loaded_records = list(loader()) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl for table in self._tc.snapshot(): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot self._append_records(loaded_records) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records self._backend.save_table(self.full_name, items, self._klass, mode="append") File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table df.write.saveAsTable(full_name, mode=mode) File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable self._spark.client.execute_command( File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command data, _, _, _, properties = self._execute_and_fetch(req, observations or {}) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch for response in self._execute_and_fetch_as_iterator(req, observations): File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator self._handle_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error self._handle_rpc_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at ... (stack trace identical to the one above) 07:26 INFO [databricks.labs.ucx:migrate_dbfs_root_delta_tables] UCX v0.21.1+6120240413071215 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_dbfs_root_delta_tables.log 07:26 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "TEST_DEFAULT_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No
Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "TEST_DEFAULT_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.15", < "instance_id": "ea792f3a215c45ebb43a12403106a88c", < "node_attributes": { < "is_spot": false < }, < "node_id": "250d38974776499ab48674ba6cab3312", < "private_ip": "10.139.64.15", < "public_dns": "20.36.172.63", < "start_timestamp": 1712991965051 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1712991999372, < "last_restarted_time": 1712992044424, < "last_state_loss_time": 1712992044397, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 7105520327459125546, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < } 07:26 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... 
(3 additional elements)" < ] < } 07:26 DEBUG [databricks.labs.blueprint.installation:migrate_dbfs_root_delta_tables] Loading list from CLOUD_ENV_storage_account_info.csv 07:26 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] fetching grants inventory 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.grants 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.grants] crawling new batch for grants 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES FROM hive_metastore 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] fetching tables inventory 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SHUCY_migrate_inventory.tables 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] crawling new batch for tables 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.TEST_SCHEMA] listing tables 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext] listing tables 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 07:26 DEBUG [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Starting 5 tasks in 8 threads 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.rectangles] fetching table metadata 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_comment] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_copy] fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 07:26 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_partitioned] 
fetching table metadata 07:26 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 07:26 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] listing tables in hive_metastore 5/5, rps: 0.165/sec 07:26 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:30.393317 07:26 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SHUCY_migrate_inventory.tables] found 5 new records for tables 07:26 ERROR [databricks.labs.ucx:migrate_dbfs_root_delta_tables] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-622117178196563-0/migrate_dbfs_root_delta_tables.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at 
org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:26 DEBUG [databricks:migrate_dbfs_root_delta_tables] Task crash details Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger current_task(ctx) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 27, in migrate_dbfs_root_delta_tables ctx.tables_migrator.migrate_tables(what=What.DBFS_ROOT_DELTA, acl_strategy=[AclMigrationWhat.LEGACY_TACL]) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot() File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot loaded_records = list(loader()) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl for table in self._tc.snapshot(): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot self._append_records(loaded_records) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records self._backend.save_table(self.full_name, items, self._klass, mode="append") File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table df.write.saveAsTable(full_name, mode=mode) File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable self._spark.client.execute_command( File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command data, _, _, _, properties = self._execute_and_fetch(req, observations or {}) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch for response in self._execute_and_fetch_as_iterator(req, observations): File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator self._handle_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error self._handle_rpc_error(error) File 
"/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_shucy_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at 
org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:26 INFO [databricks.labs.ucx.installer.workflows] ---------- END REMOTE LOGS ---------- 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 make_dbfs_data_copy fixtures 07:26 
DEBUG [databricks.labs.ucx.mixins.fixtures] removing make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 4 table fixtures 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tfckx', metastore_id=None, name='ucx_tfckx', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_iqycf/ucx_tfckx', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tcefi', metastore_id=None, name='ucx_tcefi', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/Nhro', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_tnxmo', metastore_id=None, name='ucx_tnxmo', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_iqycf.ucx_tfckx', view_dependencies=None) 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_iqycf.ucx_t47z5', metastore_id=None, name='ucx_t47z5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_iqycf', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, 
view_definition='SELECT * FROM hive_metastore.migrate_iqycf.ucx_tnxmo', view_dependencies=None) 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 schema fixtures 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_iqycf', metastore_id=None, name='migrate_iqycf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cjb4e', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e.migrate_iqycf', metastore_id=None, name='migrate_iqycf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712991941731, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cjb4e', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712991941731, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:26 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while catalog CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1712991941731, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cjb4e', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cjb4e', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1712991941731, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') teardown: Catalog 'ucx_cjb4e' does not exist. 07:26 INFO [databricks.labs.ucx.install] Deleting UCX v0.21.1+6120240413072626 from https://DATABRICKS_HOST 07:26 INFO [databricks.labs.ucx.install] Deleting inventory database ucx_SHUCY_migrate_inventory 07:26 INFO [databricks.labs.ucx.install] Deleting jobs 07:26 INFO [databricks.labs.ucx.install] Deleting migrate-groups-experimental job_id=430291167166001. 07:26 INFO [databricks.labs.ucx.install] Deleting failing job_id=603515887891668. 07:26 INFO [databricks.labs.ucx.install] Deleting validate-groups-permissions job_id=755826045116893. 07:26 INFO [databricks.labs.ucx.install] Deleting migrate-tables job_id=200542435545718. 07:26 INFO [databricks.labs.ucx.install] Deleting migrate-groups job_id=841244627565018. 07:26 INFO [databricks.labs.ucx.install] Deleting migrate-tables-in-mounts-experimental job_id=310259823304975. 
07:26 INFO [databricks.labs.ucx.install] Deleting remove-workspace-local-backup-groups job_id=1111680659122794. 07:26 INFO [databricks.labs.ucx.install] Deleting assessment job_id=140337534126761. 07:26 INFO [databricks.labs.ucx.install] Deleting cluster policy 07:26 INFO [databricks.labs.ucx.install] Deleting secret scope 07:26 INFO [databricks.labs.ucx.install] UnInstalling UCX complete [gw6] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python ```

Running from nightly #26
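
Both nightly runs fail at the same point in `migrate_dbfs_root_delta_tables`: the crawler lists the hive_metastore tables successfully, but appending the first inventory batch (`_snapshot` -> `_append_records` -> `save_table` -> `df.write.saveAsTable(full_name, mode="append")` per the traceback) is rejected with `[SCHEMA_NOT_FOUND]` for the inventory schema. The error reports the name lowercased (`ucx_shucy_migrate_inventory` vs. the configured `ucx_SHUCY_migrate_inventory`), but identifiers are case-insensitive in the Hive metastore, so the case difference alone should not matter: the schema is genuinely absent when the job writes, even though the installer logged `Creating ucx schemas...` earlier. The sketch below is not the ucx code; it is a minimal reproduction of the failing write path plus the obvious guard, and the Spark session, schema name, and record layout are all assumptions for illustration.

```
# Minimal sketch of the failing write path plus a defensive guard.
# Not the ucx implementation: the schema name and record layout are
# made up for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

inventory_schema = "hive_metastore.ucx_example_inventory"  # hypothetical
inventory_table = f"{inventory_schema}.tables"

# Without this guard, the first crawler batch fails exactly as in the
# logs above: DataFrameWriter.saveAsTable raises AnalysisException
# [SCHEMA_NOT_FOUND] when the target schema is missing or was dropped
# between installation and job execution.
spark.sql(f"CREATE SCHEMA IF NOT EXISTS {inventory_schema}")

records = spark.createDataFrame(
    [("hive_metastore", "test_ext", "student", "EXTERNAL", "DELTA")],
    "catalog string, database string, name string, object_type string, table_format string",
)
records.write.saveAsTable(inventory_table, mode="append")
```

Re-creating the schema on every append would mask whatever removes it, so treat the guard only as a way to localize the failure; the second run's logs show the schemas being created at 07:52 and the write failing at 08:09, so the interesting question is what happens to the inventory schema in between.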

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: databricks.labs.blueprint.parallel.ManyError: Detected 3 failures: NotFound: The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. (26m20.713s) ``` databricks.labs.blueprint.parallel.ManyError: Detected 3 failures: NotFound: The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. [gw5] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_gxrlo: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_gxrlo', metastore_id=None, name='migrate_gxrlo', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_tzayt: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_tzayt 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_tzayt', metastore_id=None, name='ucx_tzayt', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_gxrlo/ucx_tzayt', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/qYIF 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_t6pbs: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_t6pbs 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_t6pbs', metastore_id=None, name='ucx_t6pbs', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/qYIF', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_tttvy: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_tttvy 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: 
TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_tttvy', metastore_id=None, name='ucx_tttvy', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_gxrlo.ucx_tzayt', view_dependencies=None) 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_t9cqk: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_t9cqk 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_t9cqk', metastore_id=None, name='ucx_t9cqk', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_gxrlo.ucx_tttvy', view_dependencies=None) 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713080586569, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clr0s', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713080586569, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_clr0s.migrate_gxrlo: https://DATABRICKS_HOST/explore/data/ucx_clr0s/migrate_gxrlo 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_clr0s', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s.migrate_gxrlo', metastore_id=None, name='migrate_gxrlo', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:43 DEBUG [tests.integration.test_installation] Creating new installation... 07:43 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:52 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 
07:52 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist. 07:52 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:52 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:52 INFO [databricks.labs.ucx.install] Fetching installations... 07:52 INFO [databricks.labs.ucx.installer.policy] Setting up an external metastore 07:52 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 07:52 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6120240414075227 07:52 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:52 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:52 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=failing 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions 07:53 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps. 
07:53 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/238051370661469 08:09 INFO [databricks.labs.ucx.installer.workflows] ---------- REMOTE LOGS -------------- 08:09 INFO [databricks.labs.ucx:migrate_dbfs_root_delta_tables] UCX v0.21.1+6120240414075254 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_dbfs_root_delta_tables.log 08:09 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "DATABRICKS_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "DATABRICKS_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.10", < "instance_id": "1d41eee280144b86ab6692322d1f30e2", < "node_attributes": { < "is_spot": false < }, < "node_id": "fb2940a7ba1f444198eaae7b46b52ab8", < "private_ip": "10.139.64.10", < "public_dns": "52.177.202.90", < "start_timestamp": 1713080597746 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1713080703049, < "last_restarted_time": 1713080699963, < "last_state_loss_time": 1713080699934, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 6707330719408299063, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... 
(8 additional elements)" < ] < } 08:09 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... (3 additional elements)" < ] < } 08:09 DEBUG [databricks.labs.blueprint.installation:migrate_dbfs_root_delta_tables] Loading list from CLOUD_ENV_storage_account_info.csv 08:09 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] fetching grants inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.grants 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] crawling new batch for grants 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES FROM hive_metastore 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] fetching tables inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.tables 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] crawling new batch for tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.TEST_SCHEMA] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 08:09 DEBUG [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Starting 5 tasks in 8 threads 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.rectangles] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student] fetching table metadata 08:09 DEBUG 
[databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_comment] fetching table metadata 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_copy] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_partitioned] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 08:09 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] listing tables in hive_metastore 5/5, rps: 0.182/sec 08:09 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:27.493255 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] found 5 new records for tables 08:09 ERROR [databricks.labs.ucx:migrate_dbfs_root_delta_tables] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_dbfs_root_delta_tables.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at 
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 08:09 DEBUG [databricks:migrate_dbfs_root_delta_tables] Task crash details Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger current_task(ctx) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 27, in migrate_dbfs_root_delta_tables ctx.tables_migrator.migrate_tables(what=What.DBFS_ROOT_DELTA, acl_strategy=[AclMigrationWhat.LEGACY_TACL]) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in 
migrate_tables all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot() File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot loaded_records = list(loader()) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl for table in self._tc.snapshot(): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot self._append_records(loaded_records) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records self._backend.save_table(self.full_name, items, self._klass, mode="append") File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table df.write.saveAsTable(full_name, mode=mode) File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable self._spark.client.execute_command( File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command data, _, _, _, properties = self._execute_and_fetch(req, observations or {}) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch for response in self._execute_and_fetch_as_iterator(req, observations): File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator self._handle_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error self._handle_rpc_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException [... JVM stack trace identical to the one above elided ...] at
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 08:09 INFO [databricks.labs.ucx:migrate_external_tables_sync] UCX v0.21.1+6120240414075254 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_external_tables_sync.log 08:09 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "DATABRICKS_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No 
Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "DATABRICKS_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.10", < "instance_id": "1d41eee280144b86ab6692322d1f30e2", < "node_attributes": { < "is_spot": false < }, < "node_id": "fb2940a7ba1f444198eaae7b46b52ab8", < "private_ip": "10.139.64.10", < "public_dns": "52.177.202.90", < "start_timestamp": 1713080597746 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1713080703049, < "last_restarted_time": 1713080699963, < "last_state_loss_time": 1713080699934, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 6707330719408299063, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < } 08:09 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... 
(3 additional elements)" < ] < } 08:09 DEBUG [databricks.labs.blueprint.installation:migrate_external_tables_sync] Loading list from CLOUD_ENV_storage_account_info.csv 08:09 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] fetching grants inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.grants 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] crawling new batch for grants 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES FROM hive_metastore 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] fetching tables inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.tables 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] crawling new batch for tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.TEST_SCHEMA] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 08:09 DEBUG [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Starting 5 tasks in 8 threads 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.rectangles] fetching table metadata 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_comment] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_copy] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_partitioned] fetching table metadata 08:09 DEBUG 
[databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 08:09 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] listing tables in hive_metastore 5/5, rps: 0.182/sec 08:09 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:27.520902 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] found 5 new records for tables 08:09 ERROR [databricks.labs.ucx:migrate_external_tables_sync] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_external_tables_sync.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at 
[... JVM stack trace identical to the one above elided ...] at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 08:09 DEBUG [databricks:migrate_external_tables_sync] Task crash details Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger current_task(ctx) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 18, in migrate_external_tables_sync ctx.tables_migrator.migrate_tables(what=What.EXTERNAL_SYNC, acl_strategy=[AclMigrationWhat.LEGACY_TACL]) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot() File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot loaded_records = list(loader()) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl for table in self._tc.snapshot(): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot self._append_records(loaded_records) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records self._backend.save_table(self.full_name, items, self._klass, mode="append") File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table df.write.saveAsTable(full_name, mode=mode) File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable self._spark.client.execute_command( File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command data, _, _, _, properties = self._execute_and_fetch(req, observations or {}) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch for response in self._execute_and_fetch_as_iterator(req, observations): File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator self._handle_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error self._handle_rpc_error(error) File 
"/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at 
[... JVM stack trace identical to the one above elided ...] at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 08:09 INFO [databricks.labs.ucx.installer.workflows] ---------- END REMOTE LOGS ---------- 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_gxrlo: 
https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_gxrlo', metastore_id=None, name='migrate_gxrlo', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_tzayt: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_tzayt 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_tzayt', metastore_id=None, name='ucx_tzayt', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_gxrlo/ucx_tzayt', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/qYIF 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_t6pbs: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_t6pbs 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_t6pbs', metastore_id=None, name='ucx_t6pbs', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/qYIF', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_tttvy: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_tttvy 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_tttvy', metastore_id=None, name='ucx_tttvy', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location=None, 
table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_gxrlo.ucx_tzayt', view_dependencies=None) 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_gxrlo.ucx_t9cqk: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_gxrlo/ucx_t9cqk 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_t9cqk', metastore_id=None, name='ucx_t9cqk', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_gxrlo.ucx_tttvy', view_dependencies=None) 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713080586569, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clr0s', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713080586569, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:43 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_clr0s.migrate_gxrlo: https://DATABRICKS_HOST/explore/data/ucx_clr0s/migrate_gxrlo 07:43 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_clr0s', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s.migrate_gxrlo', metastore_id=None, name='migrate_gxrlo', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:43 DEBUG [tests.integration.test_installation] Creating new installation... 07:43 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:52 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:52 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist. 07:52 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:52 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:52 INFO [databricks.labs.ucx.install] Fetching installations... 07:52 INFO [databricks.labs.ucx.installer.policy] Setting up an external metastore 07:52 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 
07:52 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6120240414075227 07:52 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:52 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:52 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=failing 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental 07:53 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions 07:53 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps. 07:53 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/238051370661469 08:09 INFO [databricks.labs.ucx.installer.workflows] ---------- REMOTE LOGS -------------- 08:09 INFO [databricks.labs.ucx:migrate_dbfs_root_delta_tables] UCX v0.21.1+6120240414075254 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_dbfs_root_delta_tables.log 08:09 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/clusters/list < 200 OK < [... identical to the clusters/list response shown above; duplicate payload elided ...] 08:09 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.1/unity-catalog/external-locations < 200 OK < [... identical to the external-locations response shown above; duplicate payload elided ...] 08:09 DEBUG [databricks.labs.blueprint.installation:migrate_dbfs_root_delta_tables] Loading list from CLOUD_ENV_storage_account_info.csv 08:09 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] fetching grants inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.grants 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] crawling new batch for grants 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES FROM hive_metastore 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] fetching tables inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.tables 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] crawling new batch for tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.TEST_SCHEMA] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 08:09 DEBUG
[databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Starting 5 tasks in 8 threads 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.rectangles] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student] fetching table metadata 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_comment] fetching table metadata 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_copy] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_partitioned] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 08:09 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 08:09 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] listing tables in hive_metastore 5/5, rps: 0.182/sec 08:09 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:27.493255 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] found 5 new records for tables 08:09 ERROR [databricks.labs.ucx:migrate_dbfs_root_delta_tables] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_dbfs_root_delta_tables.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704
JVM stacktrace:
org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException
  at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223)
  at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003)
  at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230)
  at scala.Option.getOrElse(Option.scala:189)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230)
  at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
  at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268)
  at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116)
  at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
  at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268)
  at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116)
  at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075)
  at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674)
  at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546)
  at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661)
  at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
  at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679)
  at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655)
  at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108)
  at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48)
  at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178)
  at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48)
  at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45)
  at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654)
  at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345)
  at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341)
  at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338)
  at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400)
  at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259)
  at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256)
  at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677)
  at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005)
  at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161)
  at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175)
  at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301)
  at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97)
  at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84)
  at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234)
  at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83)
  at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342)
  at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
  at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45)
  at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103)
  at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108)
  at scala.util.Using$.resource(Using.scala:269)
  at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107)
  at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339)
08:09 DEBUG [databricks:migrate_dbfs_root_delta_tables] Task crash details
Traceback (most recent call last):
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger
    current_task(ctx)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 27, in migrate_dbfs_root_delta_tables
    ctx.tables_migrator.migrate_tables(what=What.DBFS_ROOT_DELTA, acl_strategy=[AclMigrationWhat.LEGACY_TACL])
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables
    all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot()
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot
    loaded_records = list(loader())
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl
    for table in self._tc.snapshot():
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot
    self._append_records(loaded_records)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records
    self._backend.save_table(self.full_name, items, self._klass, mode="append")
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table
    df.write.saveAsTable(full_name, mode=mode)
  File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable
    self._spark.client.execute_command(
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command
    data, _, _, _, properties = self._execute_and_fetch(req, observations or {})
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch
    for response in self._execute_and_fetch_as_iterator(req, observations):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator
    self._handle_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error
    self._handle_rpc_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error
    raise convert_exception(
pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SQLSTATE: 42704
JVM stacktrace:
... (identical to the JVM stacktrace above)
08:09 INFO [databricks.labs.ucx:migrate_external_tables_sync] UCX v0.21.1+6120240414075254 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_external_tables_sync.log
08:09 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/clusters/list
< 200 OK
< ... (response identical to the clusters/list response above)
Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "DATABRICKS_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.10", < "instance_id": "1d41eee280144b86ab6692322d1f30e2", < "node_attributes": { < "is_spot": false < }, < "node_id": "fb2940a7ba1f444198eaae7b46b52ab8", < "private_ip": "10.139.64.10", < "public_dns": "52.177.202.90", < "start_timestamp": 1713080597746 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1713080703049, < "last_restarted_time": 1713080699963, < "last_state_loss_time": 1713080699934, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 6707330719408299063, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < } 08:09 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... 
(3 additional elements)" < ] < } 08:09 DEBUG [databricks.labs.blueprint.installation:migrate_external_tables_sync] Loading list from CLOUD_ENV_storage_account_info.csv 08:09 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] fetching grants inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.grants 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.grants] crawling new batch for grants 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES FROM hive_metastore 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] fetching tables inventory 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SdLHB_migrate_inventory.tables 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] crawling new batch for tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.TEST_SCHEMA] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext] listing tables 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 08:09 DEBUG [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Starting 5 tasks in 8 threads 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.rectangles] fetching table metadata 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_comment] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_copy] fetching table metadata 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 08:09 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 08:09 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_partitioned] fetching table metadata 08:09 DEBUG 
[databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 08:09 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] listing tables in hive_metastore 5/5, rps: 0.182/sec 08:09 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:27.520902 08:09 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SdLHB_migrate_inventory.tables] found 5 new records for tables 08:09 ERROR [databricks.labs.ucx:migrate_external_tables_sync] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-143750417495878-0/migrate_external_tables_sync.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at 
08:09 DEBUG [databricks:migrate_external_tables_sync] Task crash details
Traceback (most recent call last):
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger
    current_task(ctx)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 18, in migrate_external_tables_sync
    ctx.tables_migrator.migrate_tables(what=What.EXTERNAL_SYNC, acl_strategy=[AclMigrationWhat.LEGACY_TACL])
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables
    all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot()
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot
    loaded_records = list(loader())
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl
    for table in self._tc.snapshot():
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot
    self._append_records(loaded_records)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records
    self._backend.save_table(self.full_name, items, self._klass, mode="append")
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table
    df.write.saveAsTable(full_name, mode=mode)
  File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable
    self._spark.client.execute_command(
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command
    data, _, _, _, properties = self._execute_and_fetch(req, observations or {})
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch
    for response in self._execute_and_fetch_as_iterator(req, observations):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator
    self._handle_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error
    self._handle_rpc_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error
    raise convert_exception(
pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_sdlhb_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SQLSTATE: 42704
JVM stacktrace:
... (identical to the JVM stacktrace above)
08:09 INFO [databricks.labs.ucx.installer.workflows] ---------- END REMOTE LOGS ----------
08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 make_dbfs_data_copy fixtures
DEBUG [databricks.labs.ucx.mixins.fixtures] removing make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/qYIF 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 4 table fixtures 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_tzayt', metastore_id=None, name='ucx_tzayt', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_gxrlo/ucx_tzayt', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_t6pbs', metastore_id=None, name='ucx_t6pbs', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/qYIF', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_tttvy', metastore_id=None, name='ucx_tttvy', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_gxrlo.ucx_tzayt', view_dependencies=None) 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_gxrlo.ucx_t9cqk', metastore_id=None, name='ucx_t9cqk', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_gxrlo', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, 
view_definition='SELECT * FROM hive_metastore.migrate_gxrlo.ucx_tttvy', view_dependencies=None) 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 schema fixtures 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_gxrlo', metastore_id=None, name='migrate_gxrlo', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_clr0s', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s.migrate_gxrlo', metastore_id=None, name='migrate_gxrlo', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713080586569, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clr0s', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713080586569, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 08:09 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while catalog CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713080586569, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_clr0s', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_clr0s', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713080586569, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') teardown: Catalog 'ucx_clr0s' does not exist. 08:09 INFO [databricks.labs.ucx.install] Deleting UCX v0.21.1+6120240414080915 from https://DATABRICKS_HOST 08:09 INFO [databricks.labs.ucx.install] Deleting inventory database ucx_SdLHB_migrate_inventory 08:09 INFO [databricks.labs.ucx.install] Deleting jobs 08:09 INFO [databricks.labs.ucx.install] Deleting remove-workspace-local-backup-groups job_id=694568484866629. 08:09 INFO [databricks.labs.ucx.install] Deleting migrate-groups job_id=775333200715919. 08:09 INFO [databricks.labs.ucx.install] Deleting migrate-tables job_id=238051370661469. 08:09 INFO [databricks.labs.ucx.install] Deleting assessment job_id=713037663901748. 08:09 INFO [databricks.labs.ucx.install] Deleting failing job_id=739737347766339. 08:09 INFO [databricks.labs.ucx.install] Deleting migrate-tables-in-mounts-experimental job_id=918072840478652. 
08:09 INFO [databricks.labs.ucx.install] Deleting migrate-groups-experimental job_id=674553216712181. 08:09 INFO [databricks.labs.ucx.install] Deleting validate-groups-permissions job_id=949157142890511. 08:09 INFO [databricks.labs.ucx.install] Deleting cluster policy 08:09 INFO [databricks.labs.ucx.install] Deleting secret scope 08:09 INFO [databricks.labs.ucx.install] UnInstalling UCX complete [gw5] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python ```

Running from nightly #27
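
For context on what the assertion at the top of this run is checking: the integration test drives the `migrate-tables` workflow and then verifies that each HMS source table reappears under the destination UC catalog/schema (here `ucx_clxzs.migrate_0qnbp`). Below is a minimal sketch of that kind of existence check, assuming a Databricks Connect session; `migrated_tables_present` is a hypothetical helper for illustration, not the actual code from `tests/integration`:

```python
# Minimal sketch of a post-migration existence check, using names from this
# run's logs. Assumes databricks-connect; not the real test implementation.
from databricks.connect import DatabricksSession

spark = DatabricksSession.builder.getOrCreate()

def migrated_tables_present(catalog: str, schema: str, expected: set[str]) -> bool:
    # SHOW TABLES yields one row per table; `tableName` holds the table name.
    rows = spark.sql(f"SHOW TABLES IN {catalog}.{schema}").collect()
    found = {row.tableName for row in rows}
    return expected <= found  # every expected table must be present

assert migrated_tables_present(
    "ucx_clxzs", "migrate_0qnbp", {"ucx_tzyz8", "ucx_tn4m1"}
), "ucx_tzyz8 and ucx_tn4m1 not found in ucx_clxzs.migrate_0qnbp"
```

A check like this only proves the copy landed; in the remote logs above the workflow itself died inside `DataFrameWriter.saveAsTable` (the `AtomicCreateTableAsSelectExec` stack trace), which is consistent with the assertion failing afterwards.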

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: databricks.labs.blueprint.parallel.ManyError: Detected 3 failures: NotFound: The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. (24m6.4s) ``` databricks.labs.blueprint.parallel.ManyError: Detected 3 failures: NotFound: The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. [gw4] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_9e5ty: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_9e5ty', metastore_id=None, name='migrate_9e5ty', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_tmccv: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_tmccv 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tmccv', metastore_id=None, name='ucx_tmccv', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_9e5ty/ucx_tmccv', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/ie03 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_tut22: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_tut22 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tut22', metastore_id=None, name='ucx_tut22', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/ie03', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_t5tsr: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_t5tsr 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: 
TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_t5tsr', metastore_id=None, name='ucx_t5tsr', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_9e5ty.ucx_tmccv', view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_tiwsg: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_tiwsg 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tiwsg', metastore_id=None, name='ucx_tiwsg', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_9e5ty.ucx_t5tsr', view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713164677253, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_c6ysb', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713164677253, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_c6ysb.migrate_9e5ty: https://DATABRICKS_HOST/explore/data/ucx_c6ysb/migrate_9e5ty 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_c6ysb', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb.migrate_9e5ty', metastore_id=None, name='migrate_9e5ty', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 DEBUG [tests.integration.test_installation] Creating new installation... 07:04 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:13 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 
07:13 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist. 07:13 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:13 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:13 INFO [databricks.labs.ucx.install] Fetching installations... 07:13 INFO [databricks.labs.ucx.installer.policy] Setting up an external metastore 07:13 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 07:13 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6220240415071319 07:13 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=failing 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables 07:13 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps. 
07:14 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/148700357619491 07:28 INFO [databricks.labs.ucx.installer.workflows] ---------- REMOTE LOGS -------------- 07:28 INFO [databricks.labs.ucx:migrate_external_tables_sync] UCX v0.21.1+6220240415071347 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_external_tables_sync.log 07:28 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "DATABRICKS_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "DATABRICKS_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.15", < "instance_id": "691616e3d8674f848cf370d8ddf99720", < "node_attributes": { < "is_spot": false < }, < "node_id": "1c8eb39191c74e2f91ac9b2816a4a138", < "private_ip": "10.139.64.15", < "public_dns": "52.252.7.114", < "start_timestamp": 1713164805077 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1713164860916, < "last_restarted_time": 1713164912670, < "last_state_loss_time": 1713164912642, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 7907915319392029151, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... 
(8 additional elements)" < ] < } 07:28 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... (3 additional elements)" < ] < } 07:28 DEBUG [databricks.labs.blueprint.installation:migrate_external_tables_sync] Loading list from CLOUD_ENV_storage_account_info.csv 07:28 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] fetching grants inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.grants 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] crawling new batch for grants 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES FROM hive_metastore 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] fetching tables inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.tables 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] crawling new batch for tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.TEST_SCHEMA] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 07:28 DEBUG [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Starting 5 tasks in 8 threads 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.rectangles] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student] fetching table metadata 07:28 DEBUG 
[databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_comment] fetching table metadata 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_copy] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_partitioned] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 07:28 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] listing tables in hive_metastore 5/5, rps: 0.154/sec 07:28 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:32.420576 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] found 5 new records for tables 07:28 ERROR [databricks.labs.ucx:migrate_external_tables_sync] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_external_tables_sync.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at 
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:28 DEBUG [databricks:migrate_external_tables_sync] Task crash details Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger current_task(ctx) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 18, in migrate_external_tables_sync ctx.tables_migrator.migrate_tables(what=What.EXTERNAL_SYNC, acl_strategy=[AclMigrationWhat.LEGACY_TACL]) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in 
migrate_tables all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot() File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot loaded_records = list(loader()) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl for table in self._tc.snapshot(): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot self._append_records(loaded_records) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records self._backend.save_table(self.full_name, items, self._klass, mode="append") File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table df.write.saveAsTable(full_name, mode=mode) File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable self._spark.client.execute_command( File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command data, _, _, _, properties = self._execute_and_fetch(req, observations or {}) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch for response in self._execute_and_fetch_as_iterator(req, observations): File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator self._handle_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error self._handle_rpc_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. 
SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at 
org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at 
org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:28 INFO [databricks.labs.ucx:migrate_dbfs_root_delta_tables] UCX v0.21.1+6220240415071347 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_dbfs_root_delta_tables.log 07:28 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "DATABRICKS_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No 
Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "DATABRICKS_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.15", < "instance_id": "691616e3d8674f848cf370d8ddf99720", < "node_attributes": { < "is_spot": false < }, < "node_id": "1c8eb39191c74e2f91ac9b2816a4a138", < "private_ip": "10.139.64.15", < "public_dns": "52.252.7.114", < "start_timestamp": 1713164805077 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1713164860916, < "last_restarted_time": 1713164912670, < "last_state_loss_time": 1713164912642, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 7907915319392029151, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < } 07:28 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... 
(3 additional elements)" < ] < } 07:28 DEBUG [databricks.labs.blueprint.installation:migrate_dbfs_root_delta_tables] Loading list from CLOUD_ENV_storage_account_info.csv 07:28 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] fetching grants inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.grants 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] crawling new batch for grants 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES FROM hive_metastore 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] fetching tables inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.tables 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] crawling new batch for tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.TEST_SCHEMA] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 07:28 DEBUG [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Starting 5 tasks in 8 threads 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.rectangles] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_comment] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_copy] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_partitioned] 
fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 07:28 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] listing tables in hive_metastore 5/5, rps: 0.152/sec 07:28 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:32.859968 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] found 5 new records for tables 07:28 ERROR [databricks.labs.ucx:migrate_dbfs_root_delta_tables] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_dbfs_root_delta_tables.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at 
org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.writeToTable(WriteToDataSourceV2Exec.scala:108) at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:145) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$2(V2CommandExec.scala:48) at org.apache.spark.sql.execution.SparkPlan.runCommandWithAetherOff(SparkPlan.scala:178) at org.apache.spark.sql.execution.SparkPlan.runCommandInAetherOrSpark(SparkPlan.scala:189) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.$anonfun$result$1(V2CommandExec.scala:48) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:47) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:45) at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:56) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$4(QueryExecution.scala:346) at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:166) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$3(QueryExecution.scala:346) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$9(SQLExecution.scala:376) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:654) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId0$1(SQLExecution.scala:265) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId0(SQLExecution.scala:162) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:596) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$2(QueryExecution.scala:345) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1070) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.$anonfun$applyOrElse$1(QueryExecution.scala:341) at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$withMVTagsIfNecessary(QueryExecution.scala:300) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:338) at org.apache.spark.sql.execution.QueryExecution$$anonfun$$nestedInanonfun$eagerlyExecuteCommands$1$1.applyOrElse(QueryExecution.scala:322) at 
org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:477) at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:83) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:477) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:343) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:339) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339)
07:28 DEBUG [databricks:migrate_dbfs_root_delta_tables] Task crash details
Traceback (most recent call last):
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger
    current_task(ctx)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 27, in migrate_dbfs_root_delta_tables
    ctx.tables_migrator.migrate_tables(what=What.DBFS_ROOT_DELTA, acl_strategy=[AclMigrationWhat.LEGACY_TACL])
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables
    all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot()
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot
    loaded_records = list(loader())
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl
    for table in self._tc.snapshot():
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot
    self._append_records(loaded_records)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records
    self._backend.save_table(self.full_name, items, self._klass, mode="append")
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table
    df.write.saveAsTable(full_name, mode=mode)
  File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable
    self._spark.client.execute_command(
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command
    data, _, _, _, properties = self._execute_and_fetch(req, observations or {})
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch
    for response in self._execute_and_fetch_as_iterator(req, observations):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator
    self._handle_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error
    self._handle_rpc_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error
    raise convert_exception(
pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704
JVM stacktrace: ... (identical to the JVM stacktrace above)
07:28 INFO [databricks.labs.ucx.installer.workflows] ---------- END REMOTE LOGS ----------
07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_9e5ty:
https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_9e5ty', metastore_id=None, name='migrate_9e5ty', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_tmccv: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_tmccv 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tmccv', metastore_id=None, name='ucx_tmccv', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_9e5ty/ucx_tmccv', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/ie03 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_tut22: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_tut22 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tut22', metastore_id=None, name='ucx_tut22', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/ie03', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_t5tsr: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_t5tsr 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_t5tsr', metastore_id=None, name='ucx_t5tsr', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location=None, 
table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_9e5ty.ucx_tmccv', view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_9e5ty.ucx_tiwsg: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_9e5ty/ucx_tiwsg 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tiwsg', metastore_id=None, name='ucx_tiwsg', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_9e5ty.ucx_t5tsr', view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713164677253, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_c6ysb', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713164677253, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_c6ysb.migrate_9e5ty: https://DATABRICKS_HOST/explore/data/ucx_c6ysb/migrate_9e5ty 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_c6ysb', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb.migrate_9e5ty', metastore_id=None, name='migrate_9e5ty', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 DEBUG [tests.integration.test_installation] Creating new installation... 07:04 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:13 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:13 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml) doesn't exist. 07:13 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:13 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:13 INFO [databricks.labs.ucx.install] Fetching installations... 07:13 INFO [databricks.labs.ucx.installer.policy] Setting up an external metastore 07:13 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 
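
# A sketch (assuming databricks-sdk is installed and credentials are configured) of the
# existence check behind the "Cannot find previous installation" message above: the
# installer treats a missing config.yml as the signal to create a fresh installation and
# ask the configuration questions.
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import NotFound

w = WorkspaceClient()
config_path = "/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/config.yml"  # path from the log
try:
    w.workspace.get_status(config_path)  # raises NotFound if the file is absent
    print("previous UCX installation found")
except NotFound:
    print("no previous installation; a new one will be configured")
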
07:13 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6220240415071319 07:13 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables-in-mounts-experimental 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups-experimental 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-groups 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=validate-groups-permissions 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=assessment 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=failing 07:13 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=migrate-tables 07:13 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps. 07:14 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/148700357619491 07:28 INFO [databricks.labs.ucx.installer.workflows] ---------- REMOTE LOGS -------------- 07:28 INFO [databricks.labs.ucx:migrate_external_tables_sync] UCX v0.21.1+6220240415071347 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_external_tables_sync.log 07:28 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/clusters/list < 200 OK < { < "clusters": [ < { < "autotermination_minutes": 60, < "CLOUD_ENV_attributes": { < "availability": "SPOT_WITH_FALLBACK_AZURE", < "first_on_demand": 2147483647, < "spot_bid_max_price": -1.0 < }, < "cluster_cores": 4.0, < "cluster_id": "DATABRICKS_CLUSTER_ID", < "cluster_memory_mb": 16384, < "cluster_name": "DEFAULT Test Cluster (Single Node, No Isolation)", < "cluster_source": "UI", < "creator_user_name": "serge.smertin@databricks.com", < "custom_tags": { < "ResourceClass": "SingleNode" < }, < "TEST_SCHEMA_tags": { < "Budget": "opex.sales.labs", < "ClusterId": "DATABRICKS_CLUSTER_ID", < "ClusterName": "DEFAULT Test Cluster (Single Node, No Isolation)", < "Creator": "serge.smertin@databricks.com", < "DatabricksInstanceGroupId": "-9023997030385719732", < "DatabricksInstancePoolCreatorId": "4183391249163402", < "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID", < "Owner": "labs-oss@databricks.com", < "Vendor": "Databricks" < }, < "disk_spec": {}, < "driver": { < "host_private_ip": "10.139.0.15", < "instance_id": "691616e3d8674f848cf370d8ddf99720", < "node_attributes": { < "is_spot": false < }, < "node_id": "1c8eb39191c74e2f91ac9b2816a4a138", < "private_ip": "10.139.64.15", < "public_dns": "52.252.7.114", < "start_timestamp": 1713164805077 < }, < "driver_healthy": true, < "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID", < "driver_instance_source": { < "instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "driver_node_type_id": "Standard_D4s_v3", < "effective_spark_version": "15.0.x-scala2.12", < "enable_elastic_disk": true, < "enable_local_disk_encryption": false, < "init_scripts_safe_mode": false, < "instance_pool_id": "TEST_INSTANCE_POOL_ID", < "instance_source": { < 
"instance_pool_id": "TEST_INSTANCE_POOL_ID" < }, < "jdbc_port": 10000, < "last_activity_time": 1713164860916, < "last_restarted_time": 1713164912670, < "last_state_loss_time": 1713164912642, < "node_type_id": "Standard_D4s_v3", < "num_workers": 0, < "pinned_by_user_name": "4183391249163402", < "spark_conf": { < "spark.databricks.cluster.profile": "singleNode", < "spark.master": "local[*]" < }, < "spark_context_id": 7907915319392029151, < "spark_version": "15.0.x-scala2.12", < "start_time": 1711403415696, < "state": "RUNNING", < "state_message": "" < }, < "... (8 additional elements)" < ] < } 07:28 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.1/unity-catalog/external-locations < 200 OK < { < "external_locations": [ < { < "created_at": 1712317369786, < "created_by": "serge.smertin@databricks.com", < "credential_id": "462bd121-a3ff-4f51-899f-236868f3d2ab", < "credential_name": "TEST_STORAGE_CREDENTIAL", < "full_name": "TEST_A_LOCATION", < "id": "98c25265-fb0f-4b15-a727-63855b7f78a7", < "isolation_mode": "ISOLATION_MODE_OPEN", < "metastore_id": "8952c1e3-b265-4adf-98c3-6f755e2e1453", < "name": "TEST_A_LOCATION", < "owner": "labs.scope.account-admin", < "read_only": false, < "securable_kind": "EXTERNAL_LOCATION_STANDARD", < "securable_type": "EXTERNAL_LOCATION", < "updated_at": 1712566812808, < "updated_by": "serge.smertin@databricks.com", < "url": "TEST_MOUNT_CONTAINER/a" < }, < "... (3 additional elements)" < ] < } 07:28 DEBUG [databricks.labs.blueprint.installation:migrate_external_tables_sync] Loading list from CLOUD_ENV_storage_account_info.csv 07:28 DEBUG [databricks.sdk:migrate_external_tables_sync] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] fetching grants inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.grants 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] crawling new batch for grants 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES FROM hive_metastore 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] fetching tables inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.tables 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] crawling new batch for tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW DATABASES 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.TEST_SCHEMA] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 07:28 DEBUG 
[databricks.labs.blueprint.parallel:migrate_external_tables_sync] Starting 5 tasks in 8 threads
07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.rectangles] fetching table metadata
07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles
07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student] fetching table metadata
07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student
07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_comment] fetching table metadata
07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_copy] fetching table metadata
07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy
07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment
07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_external_tables_sync] [hive_metastore.test_ext.student_partitioned] fetching table metadata
07:28 DEBUG [databricks.labs.lsql.backends:migrate_external_tables_sync] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned
07:28 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] listing tables in hive_metastore 5/5, rps: 0.154/sec
07:28 INFO [databricks.labs.blueprint.parallel:migrate_external_tables_sync] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:32.420576
07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_external_tables_sync] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] found 5 new records for tables
07:28 ERROR [databricks.labs.ucx:migrate_external_tables_sync] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_external_tables_sync.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704
JVM stacktrace: ... (identical to the JVM stacktrace above)
07:28 DEBUG [databricks:migrate_external_tables_sync] Task crash details
Traceback (most recent call last):
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger
    current_task(ctx)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 18, in migrate_external_tables_sync
    ctx.tables_migrator.migrate_tables(what=What.EXTERNAL_SYNC, acl_strategy=[AclMigrationWhat.LEGACY_TACL])
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables
    all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot()
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot
    loaded_records = list(loader())
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl
    for table in self._tc.snapshot():
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot
    return self._snapshot(partial(self._try_load), partial(self._crawl))
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot
    self._append_records(loaded_records)
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records
    self._backend.save_table(self.full_name, items, self._klass, mode="append")
  File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table
    df.write.saveAsTable(full_name, mode=mode)
  File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable
    self._spark.client.execute_command(
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command
    data, _, _, _, properties = self._execute_and_fetch(req, observations or {})
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch
    for response in self._execute_and_fetch_as_iterator(req, observations):
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator
    self._handle_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error
    self._handle_rpc_error(error)
  File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error
    raise convert_exception(
pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
SQLSTATE: 42704
JVM stacktrace: ... (identical to the JVM stacktrace above)
07:28 INFO [databricks.labs.ucx:migrate_dbfs_root_delta_tables] UCX v0.21.1+6220240415071347 After job finishes, see debug logs at /Workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_dbfs_root_delta_tables.log
07:28 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/clusters/list < 200 OK < ... (response identical to the clusters/list response shown above for migrate_external_tables_sync)
07:28 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.1/unity-catalog/external-locations < 200 OK < ... (response identical to the external-locations response shown above for migrate_external_tables_sync)
(3 additional elements)" < ] < } 07:28 DEBUG [databricks.labs.blueprint.installation:migrate_dbfs_root_delta_tables] Loading list from CLOUD_ENV_storage_account_info.csv 07:28 DEBUG [databricks.sdk:migrate_dbfs_root_delta_tables] GET /api/2.0/workspace/export?path=/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/CLOUD_ENV_storage_account_info.csv&direct_download=true < 200 OK < [raw stream] 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] fetching grants inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.grants 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.grants] crawling new batch for grants 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES FROM hive_metastore 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] fetching tables inventory 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SELECT * FROM hive_metastore.ucx_SOEqx_migrate_inventory.tables 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] crawling new batch for tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW DATABASES 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.TEST_SCHEMA] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.TEST_SCHEMA 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext] listing tables 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] SHOW TABLES FROM hive_metastore.test_ext 07:28 DEBUG [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Starting 5 tasks in 8 threads 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.rectangles] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.rectangles 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_comment] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_comment 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_copy] fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_copy 07:28 DEBUG [databricks.labs.ucx.hive_metastore.tables:migrate_dbfs_root_delta_tables] [hive_metastore.test_ext.student_partitioned] 
fetching table metadata 07:28 DEBUG [databricks.labs.lsql.backends:migrate_dbfs_root_delta_tables] [spark][fetch] DESCRIBE TABLE EXTENDED hive_metastore.test_ext.student_partitioned 07:28 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] listing tables in hive_metastore 5/5, rps: 0.152/sec 07:28 INFO [databricks.labs.blueprint.parallel:migrate_dbfs_root_delta_tables] Finished 'listing tables in hive_metastore' tasks: 100% results available (5/5). Took 0:00:32.859968 07:28 DEBUG [databricks.labs.ucx.framework.crawlers:migrate_dbfs_root_delta_tables] [hive_metastore.ucx_SOEqx_migrate_inventory.tables] found 5 new records for tables 07:28 ERROR [databricks.labs.ucx:migrate_dbfs_root_delta_tables] Execute `databricks workspace export //Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/logs/migrate-tables/run-697385663699722-0/migrate_dbfs_root_delta_tables.log` locally to troubleshoot with more details. [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at 
[... intermediate JVM stack frames elided; identical to the first NoSuchDatabaseException trace above ...]
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:28 DEBUG [databricks:migrate_dbfs_root_delta_tables] Task crash details Traceback (most recent call last): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/runtime.py", line 86, in trigger current_task(ctx) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/workflows.py", line 27, in migrate_dbfs_root_delta_tables ctx.tables_migrator.migrate_tables(what=What.DBFS_ROOT_DELTA, acl_strategy=[AclMigrationWhat.LEGACY_TACL]) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/table_migrate.py", line 64, in migrate_tables all_grants_to_migrate = None if acl_strategy is None else self._gc.snapshot() File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 185, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 116, in _snapshot loaded_records = list(loader()) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/grants.py", line 228, in _crawl for table in self._tc.snapshot(): File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/hive_metastore/tables.py", line 202, in snapshot return self._snapshot(partial(self._try_load), partial(self._crawl)) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 117, in _snapshot self._append_records(loaded_records) File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/ucx/framework/crawlers.py", line 122, in _append_records self._backend.save_table(self.full_name, items, self._klass, mode="append") File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/databricks/labs/lsql/backends.py", line 223, in save_table df.write.saveAsTable(full_name, mode=mode) File "/databricks/spark/python/pyspark/sql/connect/readwriter.py", line 702, in saveAsTable self._spark.client.execute_command( File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1139, in execute_command data, _, _, _, properties = self._execute_and_fetch(req, observations or {}) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1515, in _execute_and_fetch for response in self._execute_and_fetch_as_iterator(req, observations): File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1493, in _execute_and_fetch_as_iterator self._handle_error(error) File "/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1799, in _handle_error self._handle_rpc_error(error) File 
"/databricks/spark/python/pyspark/sql/connect/client/core.py", line 1874, in _handle_rpc_error raise convert_exception( pyspark.errors.exceptions.connect.AnalysisException: [SCHEMA_NOT_FOUND] The schema `ucx_soeqx_migrate_inventory` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS. SQLSTATE: 42704 JVM stacktrace: org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.requireDbExists(SessionCatalog.scala:763) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.getDatabaseMetadata(SessionCatalog.scala:836) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePathImpl(SessionCatalog.scala:1210) at org.apache.spark.sql.catalyst.catalog.SessionCatalogImpl.TEST_SCHEMATablePath(SessionCatalog.scala:1223) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePathImpl(ManagedCatalogSessionCatalog.scala:1003) at com.databricks.sql.managedcatalog.ManagedCatalogSessionCatalog.TEST_SCHEMATablePath(ManagedCatalogSessionCatalog.scala:1012) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$11(DeltaCatalog.scala:230) at scala.Option.getOrElse(Option.scala:189) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$createDeltaTable$1(DeltaCatalog.scala:230) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.com$databricks$sql$transaction$tahoe$catalog$DeltaCatalog$$createDeltaTable(DeltaCatalog.scala:157) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.$anonfun$commitStagedChanges$1(DeltaCatalog.scala:1116) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:268) at com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:266) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:116) at com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog$StagedDeltaTableV2.commitStagedChanges(DeltaCatalog.scala:1075) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$2(WriteToDataSourceV2Exec.scala:674) at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1546) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.$anonfun$writeToTable$1(WriteToDataSourceV2Exec.scala:661) at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable(WriteToDataSourceV2Exec.scala:679) at org.apache.spark.sql.execution.datasources.v2.V2CreateTableAsSelectBaseExec.writeToTable$(WriteToDataSourceV2Exec.scala:655) at 
[... intermediate JVM stack frames elided; identical to the first NoSuchDatabaseException trace above ...]
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:39) at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:453) at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$1(QueryExecution.scala:322) at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:400) at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:322) at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:259) at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:256) at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:411) at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:1040) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:746) at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:677) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.handleWriteOperation(SparkConnectPlanner.scala:3005) at org.apache.spark.sql.connect.planner.SparkConnectPlanner.process(SparkConnectPlanner.scala:2668) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.handleCommand(ExecuteThreadRunner.scala:279) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1(ExecuteThreadRunner.scala:223) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.$anonfun$executeInternal$1$adapted(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$2(SessionHolder.scala:301) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1175) at org.apache.spark.sql.connect.service.SessionHolder.$anonfun$withSession$1(SessionHolder.scala:301) at org.apache.spark.JobArtifactSet$.withActiveJobArtifactState(JobArtifactSet.scala:97) at org.apache.spark.sql.artifact.ArtifactManager.$anonfun$withResources$1(ArtifactManager.scala:84) at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:234) at org.apache.spark.sql.artifact.ArtifactManager.withResources(ArtifactManager.scala:83) at org.apache.spark.sql.connect.service.SessionHolder.withSession(SessionHolder.scala:300) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.executeInternal(ExecuteThreadRunner.scala:161) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner.org$apache$spark$sql$connect$execution$ExecuteThreadRunner$$execute(ExecuteThreadRunner.scala:112) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.$anonfun$run$1(ExecuteThreadRunner.scala:342) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:103) at com.databricks.unity.HandleImpl.$anonfun$runWithAndClose$1(UCSHandle.scala:108) at scala.util.Using$.resource(Using.scala:269) at com.databricks.unity.HandleImpl.runWithAndClose(UCSHandle.scala:107) at org.apache.spark.sql.connect.execution.ExecuteThreadRunner$ExecutionThread.run(ExecuteThreadRunner.scala:339) 07:28 INFO [databricks.labs.ucx.installer.workflows] ---------- END REMOTE LOGS ---------- 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 make_dbfs_data_copy fixtures 07:28 
DEBUG [databricks.labs.ucx.mixins.fixtures] removing make_dbfs_data_copy fixture: dbfs:/mnt/TEST_MOUNT_NAME/a/b/ie03 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 4 table fixtures 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tmccv', metastore_id=None, name='ucx_tmccv', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_9e5ty/ucx_tmccv', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tut22', metastore_id=None, name='ucx_tut22', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/ie03', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_t5tsr', metastore_id=None, name='ucx_t5tsr', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_9e5ty.ucx_tmccv', view_dependencies=None) 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_9e5ty.ucx_tiwsg', metastore_id=None, name='ucx_tiwsg', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_9e5ty', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, 
view_definition='SELECT * FROM hive_metastore.migrate_9e5ty.ucx_t5tsr', view_dependencies=None) 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 schema fixtures 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_9e5ty', metastore_id=None, name='migrate_9e5ty', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_c6ysb', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb.migrate_9e5ty', metastore_id=None, name='migrate_9e5ty', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713164677253, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_c6ysb', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713164677253, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:28 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while catalog CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713164677253, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_c6ysb', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_c6ysb', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713164677253, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') teardown: Catalog 'ucx_c6ysb' does not exist. 07:28 INFO [databricks.labs.ucx.install] Deleting UCX v0.21.1+6220240415072831 from https://DATABRICKS_HOST 07:28 INFO [databricks.labs.ucx.install] Deleting inventory database ucx_SOEqx_migrate_inventory 07:28 INFO [databricks.labs.ucx.install] Deleting jobs 07:28 INFO [databricks.labs.ucx.install] Deleting migrate-tables-in-mounts-experimental job_id=833432791351905. 07:28 INFO [databricks.labs.ucx.install] Deleting migrate-groups-experimental job_id=852772018287313. 07:28 INFO [databricks.labs.ucx.install] Deleting migrate-groups job_id=779763030266678. 07:28 INFO [databricks.labs.ucx.install] Deleting validate-groups-permissions job_id=64327024068937. 07:28 INFO [databricks.labs.ucx.install] Deleting assessment job_id=886518640039428. 07:28 INFO [databricks.labs.ucx.install] Deleting remove-workspace-local-backup-groups job_id=137715092539621. 
07:28 INFO [databricks.labs.ucx.install] Deleting failing job_id=1003417593124718. 07:28 INFO [databricks.labs.ucx.install] Deleting migrate-tables job_id=148700357619491. 07:28 INFO [databricks.labs.ucx.install] Deleting cluster policy 07:28 INFO [databricks.labs.ucx.install] Deleting secret scope 07:28 INFO [databricks.labs.ucx.install] UnInstalling UCX complete [gw4] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python ```
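The run above fails inside the `migrate_dbfs_root_delta_tables` task: the grants/tables crawler snapshot ends in `df.write.saveAsTable(...)` against the inventory schema `ucx_soeqx_migrate_inventory`, which no longer exists by the time the workflow executes, so Spark raises `[SCHEMA_NOT_FOUND]` (SQLSTATE 42704). The teardown at the end of the log (`Deleting inventory database ucx_SOEqx_migrate_inventory`) suggests the schema can be dropped while a job is still in flight. Below is a minimal defensive sketch, not the actual ucx code: the `save_snapshot` helper is hypothetical, and it simply recreates the schema before appending, which would tolerate such a race.

```python
# Hypothetical workaround sketch, not the ucx implementation.
# Recreate the inventory schema before appending crawler records, so a
# concurrently deleted schema does not abort the whole task with
# [SCHEMA_NOT_FOUND] / SQLSTATE 42704.
from pyspark.sql import DataFrame, SparkSession


def save_snapshot(spark: SparkSession, df: DataFrame, full_table_name: str) -> None:
    # full_table_name looks like
    # "hive_metastore.ucx_soeqx_migrate_inventory.grants"
    catalog, schema, _table = full_table_name.split(".")
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.{schema}")
    df.write.saveAsTable(full_table_name, mode="append")
```

Whether the schema vanished because a parallel test worker tore down a shared installation or because uninstall overlapped the running job cannot be determined from this log alone; the guard above only masks the race, it does not remove it.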

Running from nightly #28
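The second failure below (same test, next nightly run) dies earlier, with the `migrate_external_tables_sync` task ending in `RunLifeCycleState.INTERNAL_ERROR`; its fixture teardown additionally logs `[WRONG_COMMAND_FOR_OBJECT_TYPE]` because `DROP TABLE` is issued against views. A minimal sketch of a type-aware drop follows, assuming a `TableInfo` with the `full_name` and `view_definition` fields shown in the log (the helper name is hypothetical, not the ucx fixture code):

```python
# Hypothetical teardown helper, not the ucx fixture code: Databricks
# rejects DROP TABLE on a view with [WRONG_COMMAND_FOR_OBJECT_TYPE],
# so choose the statement from the object's type.
from databricks.sdk.service.catalog import TableInfo
from pyspark.sql import SparkSession


def drop_table_or_view(spark: SparkSession, info: TableInfo) -> None:
    kind = "VIEW" if info.view_definition else "TABLE"
    spark.sql(f"DROP {kind} IF EXISTS {info.full_name}")
```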

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: databricks.sdk.errors.sdk.OperationFailed: failed to reach TERMINATED or SKIPPED, got RunLifeCycleState.INTERNAL_ERROR: Task migrate_external_tables_sync failed with message: Workload failed, see run output for details. (3m11.013s) ``` databricks.sdk.errors.sdk.OperationFailed: failed to reach TERMINATED or SKIPPED, got RunLifeCycleState.INTERNAL_ERROR: Task migrate_external_tables_sync failed with message: Workload failed, see run output for details. [gw3] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_dwvvi: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_dwvvi', metastore_id=None, name='migrate_dwvvi', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_dwvvi.ucx_thvqc: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi/ucx_thvqc 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_thvqc', metastore_id=None, name='ucx_thvqc', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_dwvvi/ucx_thvqc', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_dwvvi.ucx_tj3eo: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi/ucx_tj3eo 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_tj3eo', metastore_id=None, name='ucx_tj3eo', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/c', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_dwvvi.ucx_tgy3n: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi/ucx_tgy3n 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, 
comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_tgy3n', metastore_id=None, name='ucx_tgy3n', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_dwvvi.ucx_thvqc', view_dependencies=None) 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_dwvvi.ucx_t6tds: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi/ucx_t6tds 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_t6tds', metastore_id=None, name='ucx_t6tds', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_dwvvi.ucx_tgy3n', view_dependencies=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713251529910, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cudh6', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cudh6', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713251529910, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cudh6.migrate_dwvvi: https://DATABRICKS_HOST/explore/data/ucx_cudh6/migrate_dwvvi 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cudh6', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cudh6.migrate_dwvvi', metastore_id=None, name='migrate_dwvvi', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:12 DEBUG [tests.integration.test_installation] Creating new installation... 07:12 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 07:12 DEBUG [tests.integration.test_installation] Waiting for clusters to start... 
07:12 INFO [databricks.labs.ucx.install] UCX v0.21.1+6920240416071211 is already installed on this workspace 07:12 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+6920240416071212 07:12 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=migrate-groups job_id=916999018485455 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=assessment job_id=793276919754211 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=failing job_id=443716258337992 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=remove-workspace-local-backup-groups job_id=958979456753225 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=migrate-tables-in-mounts-experimental job_id=854622517752989 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=migrate-groups-experimental job_id=910790788242368 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=migrate-tables job_id=571334734085549 07:12 INFO [databricks.labs.ucx.installer.workflows] Updating configuration for step=validate-groups-permissions job_id=913364402183663 07:12 INFO [databricks.labs.ucx.install] Installation completed successfully! Please refer to the https://DATABRICKS_HOST/#workspace/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx/README for the next steps. 07:12 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/571334734085549 07:12 DEBUG [databricks.labs.ucx.installer.workflows] Validating migrate-tables workflow: https://DATABRICKS_HOST#job/571334734085549 07:12 INFO [databricks.labs.ucx.installer.workflows] Identified a run in progress waiting for run completion 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_dwvvi: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_dwvvi', metastore_id=None, name='migrate_dwvvi', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:12 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_dwvvi.ucx_thvqc: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_dwvvi/ucx_thvqc 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_thvqc', metastore_id=None, name='ucx_thvqc', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_dwvvi/ucx_thvqc', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:12 INFO 
[... duplicated fixture and installer log block elided; identical to the setup log above ...]
07:12 DEBUG [databricks.labs.ucx.installer.workflows] starting migrate-tables job: https://DATABRICKS_HOST#job/571334734085549 07:12 DEBUG [databricks.labs.ucx.installer.workflows] Validating migrate-tables workflow: https://DATABRICKS_HOST#job/571334734085549 07:12 INFO [databricks.labs.ucx.installer.workflows] Identified a run in progress waiting for run completion 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 make_dbfs_data_copy fixtures 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 4 table fixtures 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_thvqc', metastore_id=None, name='ucx_thvqc', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_dwvvi/ucx_thvqc', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_tj3eo', metastore_id=None, name='ucx_tj3eo', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/c', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_tgy3n', metastore_id=None, name='ucx_tgy3n', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_dwvvi.ucx_thvqc', view_dependencies=None) 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while table TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, 
full_name='hive_metastore.migrate_dwvvi.ucx_tgy3n', metastore_id=None, name='ucx_tgy3n', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_dwvvi.ucx_thvqc', view_dependencies=None) teardown: [WRONG_COMMAND_FOR_OBJECT_TYPE] The operation DROP TABLE requires a EXTERNAL or MANAGED. But hive_metastore.migrate_dwvvi.ucx_tgy3n is a VIEW. Use DROP VIEW instead. 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_t6tds', metastore_id=None, name='ucx_t6tds', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_dwvvi.ucx_tgy3n', view_dependencies=None) 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while table TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_dwvvi.ucx_t6tds', metastore_id=None, name='ucx_t6tds', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_dwvvi', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_dwvvi.ucx_tgy3n', view_dependencies=None) teardown: [WRONG_COMMAND_FOR_OBJECT_TYPE] The operation DROP TABLE requires a EXTERNAL or MANAGED. But hive_metastore.migrate_dwvvi.ucx_t6tds is a VIEW. Use DROP VIEW instead. 
07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 2 schema fixtures 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_dwvvi', metastore_id=None, name='migrate_dwvvi', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cudh6', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cudh6.migrate_dwvvi', metastore_id=None, name='migrate_dwvvi', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures 07:14 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713251529910, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cudh6', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cudh6', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713251529910, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:14 INFO [databricks.labs.ucx.install] Deleting UCX v0.21.1+6920240416071449 from https://DATABRICKS_HOST 07:14 ERROR [databricks.labs.ucx.install] Check if /Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.ucx is present [gw3] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python ```
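A side note on the teardown noise in the log above: the table fixture issues `DROP TABLE` for every `TableInfo`, which fails with `[WRONG_COMMAND_FOR_OBJECT_TYPE]` for the two views (`ucx_tgy3n`, `ucx_t6tds`). A minimal sketch of a type-aware cleanup, assuming a `sql_backend` object with an `execute(sql)` method — the names here are illustrative, not the actual fixture internals:

```python
from databricks.sdk.service.catalog import TableInfo, TableType


def drop_table_or_view(sql_backend, table: TableInfo) -> None:
    """Pick DROP VIEW or DROP TABLE based on the object type, so fixture
    teardown does not trip over [WRONG_COMMAND_FOR_OBJECT_TYPE]."""
    # The captured TableInfo has table_type unset in these logs, so fall
    # back on view_definition, which is populated for the two views above.
    if table.table_type == TableType.VIEW or table.view_definition is not None:
        sql_backend.execute(f"DROP VIEW IF EXISTS {table.full_name}")
    else:
        sql_backend.execute(f"DROP TABLE IF EXISTS {table.full_name}")
```

The fixtures already ignore these errors, so this is cosmetic, but it would keep real teardown failures from being drowned out by expected ones.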

Running from nightly #29

github-actions[bot] commented 6 months ago
❌ test_table_migration_job: databricks.sdk.errors.platform.InvalidParameterValue: Cluster validation error: Can't find a cluster policy with id: 001CAB9D99B92AB1. (8m3.857s) ``` databricks.sdk.errors.platform.InvalidParameterValue: Cluster validation error: Can't find a cluster policy with id: 001CAB9D99B92AB1. 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_yjbk2: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tso4u: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tso4u 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tso4u', metastore_id=None, name='ucx_tso4u', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_yjbk2/ucx_tso4u', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tgdw5: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tgdw5 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tgdw5', metastore_id=None, name='ucx_tgdw5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/c', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_txyya: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_txyya 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, 
enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_txyya', metastore_id=None, name='ucx_txyya', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_tso4u', view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tdwvq: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tdwvq 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tdwvq', metastore_id=None, name='ucx_tdwvq', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_txyya', view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713337487811, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cqkwt', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713337487811, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cqkwt.migrate_yjbk2: https://DATABRICKS_HOST/explore/data/ucx_cqkwt/migrate_yjbk2 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cqkwt', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.ucx_sf7bf: https://DATABRICKS_HOST/explore/data/hive_metastore/ucx_sf7bf 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.ucx_sf7bf', metastore_id=None, name='ucx_sf7bf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) [gw8] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python 07:04 INFO 
[databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_yjbk2: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tso4u: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tso4u 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tso4u', metastore_id=None, name='ucx_tso4u', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_yjbk2/ucx_tso4u', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tgdw5: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tgdw5 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tgdw5', metastore_id=None, name='ucx_tgdw5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/c', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_txyya: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_txyya 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_txyya', metastore_id=None, name='ucx_txyya', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, 
table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_tso4u', view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tdwvq: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tdwvq 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tdwvq', metastore_id=None, name='ucx_tdwvq', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_txyya', view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713337487811, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cqkwt', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713337487811, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cqkwt.migrate_yjbk2: https://DATABRICKS_HOST/explore/data/ucx_cqkwt/migrate_yjbk2 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cqkwt', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.ucx_sf7bf: https://DATABRICKS_HOST/explore/data/hive_metastore/ucx_sf7bf 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.ucx_sf7bf', metastore_id=None, name='ucx_sf7bf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:05 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.EJ7Z/config.yml) doesn't exist. 
07:05 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:05 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:05 INFO [databricks.labs.ucx.install] Fetching installations... 07:05 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 07:05 DEBUG [tests.integration.conftest] Waiting for clusters to start... 07:12 DEBUG [tests.integration.conftest] Waiting for clusters to start... 07:12 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+7720240417071206 07:12 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.migrate_yjbk2: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tso4u: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tso4u 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tso4u', metastore_id=None, name='ucx_tso4u', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_yjbk2/ucx_tso4u', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tgdw5: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tgdw5 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tgdw5', metastore_id=None, name='ucx_tgdw5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/c', table_constraints=None, table_id=None, table_type=, updated_at=None, 
updated_by=None, view_definition=None, view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_txyya: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_txyya 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_txyya', metastore_id=None, name='ucx_txyya', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_tso4u', view_dependencies=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Table hive_metastore.migrate_yjbk2.ucx_tdwvq: https://DATABRICKS_HOST/explore/data/hive_metastore/migrate_yjbk2/ucx_tdwvq 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tdwvq', metastore_id=None, name='ucx_tdwvq', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_txyya', view_dependencies=None) 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713337487811, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cqkwt', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713337487811, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema ucx_cqkwt.migrate_yjbk2: https://DATABRICKS_HOST/explore/data/ucx_cqkwt/migrate_yjbk2 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cqkwt', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:04 INFO [databricks.labs.ucx.mixins.fixtures] Schema hive_metastore.ucx_sf7bf: 
https://DATABRICKS_HOST/explore/data/hive_metastore/ucx_sf7bf 07:04 DEBUG [databricks.labs.ucx.mixins.fixtures] added schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.ucx_sf7bf', metastore_id=None, name='ucx_sf7bf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:05 DEBUG [databricks.labs.ucx.install] Cannot find previous installation: Path (/Users/0a330eb5-dd51-4d97-b6e4-c474356b1d5d/.EJ7Z/config.yml) doesn't exist. 07:05 INFO [databricks.labs.ucx.install] Please answer a couple of questions to configure Unity Catalog migration 07:05 INFO [databricks.labs.ucx.installer.hms_lineage] HMS Lineage feature creates one system table named system.hms_to_uc_migration.table_access and helps in your migration process from HMS to UC by allowing you to programmatically query HMS lineage data. 07:05 INFO [databricks.labs.ucx.install] Fetching installations... 07:05 INFO [databricks.labs.ucx.installer.policy] Creating UCX cluster policy. 07:05 DEBUG [tests.integration.conftest] Waiting for clusters to start... 07:12 DEBUG [tests.integration.conftest] Waiting for clusters to start... 07:12 INFO [databricks.labs.ucx.install] Installing UCX v0.21.1+7720240417071206 07:12 INFO [databricks.labs.ucx.install] Creating ucx schemas... 07:12 INFO [databricks.labs.ucx.installer.workflows] Creating new job configuration for step=remove-workspace-local-backup-groups 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 make_dbfs_data_copy fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 cluster fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 1 catalog fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing catalog fixture: CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713337487811, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cqkwt', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713337487811, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] ignoring error while catalog CatalogInfo(browse_only=False, catalog_type=, comment='', connection_name=None, created_at=1713337487811, created_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt', isolation_mode=, metastore_id='8952c1e3-b265-4adf-98c3-6f755e2e1453', name='ucx_cqkwt', options=None, owner='0a330eb5-dd51-4d97-b6e4-c474356b1d5d', properties=None, provider_name=None, provisioning_info=None, securable_kind=, securable_type='CATALOG', share_name=None, storage_location=None, storage_root=None, updated_at=1713337487811, updated_by='0a330eb5-dd51-4d97-b6e4-c474356b1d5d') teardown: Catalog 'ucx_cqkwt' does not exist. 
07:12 INFO [databricks.labs.ucx.install] Deleting UCX v0.21.1+7720240417071239 from https://DATABRICKS_HOST 07:12 INFO [databricks.labs.ucx.install] Deleting inventory database ucx_sf7bf 07:12 INFO [databricks.labs.ucx.install] Deleting jobs 07:12 ERROR [databricks.labs.ucx.install] No jobs present or jobs already deleted 07:12 INFO [databricks.labs.ucx.install] Deleting cluster policy 07:12 ERROR [databricks.labs.ucx.install] UCX Policy already deleted 07:12 INFO [databricks.labs.ucx.install] Deleting secret scope 07:12 INFO [databricks.labs.ucx.install] UnInstalling UCX complete 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 workspace user fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 account group fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 workspace group fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 0 table fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 4 table fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tso4u', metastore_id=None, name='ucx_tso4u', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/user/hive/warehouse/migrate_yjbk2/ucx_tso4u', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tgdw5', metastore_id=None, name='ucx_tgdw5', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location='dbfs:/mnt/TEST_MOUNT_NAME/a/b/c', table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition=None, view_dependencies=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_txyya', metastore_id=None, name='ucx_txyya', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_tso4u', 
view_dependencies=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing table fixture: TableInfo(access_point=None, browse_only=None, catalog_name='hive_metastore', columns=None, comment=None, created_at=None, created_by=None, data_access_configuration_id=None, data_source_format=None, deleted_at=None, delta_runtime_properties_kvpairs=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, encryption_details=None, full_name='hive_metastore.migrate_yjbk2.ucx_tdwvq', metastore_id=None, name='ucx_tdwvq', owner=None, pipeline_id=None, properties=None, row_filter=None, schema_name='migrate_yjbk2', sql_path=None, storage_credential_name=None, storage_location=None, table_constraints=None, table_id=None, table_type=, updated_at=None, updated_by=None, view_definition='SELECT * FROM hive_metastore.migrate_yjbk2.ucx_txyya', view_dependencies=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] clearing 3 schema fixtures 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='ucx_cqkwt', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='ucx_cqkwt.migrate_yjbk2', metastore_id=None, name='migrate_yjbk2', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) 07:12 DEBUG [databricks.labs.ucx.mixins.fixtures] removing schema fixture: SchemaInfo(browse_only=None, catalog_name='hive_metastore', catalog_type=None, comment=None, created_at=None, created_by=None, effective_predictive_optimization_flag=None, enable_predictive_optimization=None, full_name='hive_metastore.ucx_sf7bf', metastore_id=None, name='ucx_sf7bf', owner=None, properties=None, storage_location=None, storage_root=None, updated_at=None, updated_by=None) [gw8] linux -- Python 3.10.14 /home/runner/work/ucx/ucx/.venv/bin/python ```
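The `InvalidParameterValue` in this run points at a race between parallel nightly workers: the teardown in this very log prints `UCX Policy already deleted`, so the shared cluster policy can disappear between installation and the job run. A minimal pre-flight check, assuming the policy id is read from the installer state — `validate_policy` is a hypothetical helper, not an existing ucx function:

```python
from datetime import timedelta

from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import InvalidParameterValue, NotFound
from databricks.sdk.retries import retried


@retried(on=[InvalidParameterValue, NotFound], timeout=timedelta(minutes=2))
def validate_policy(ws: WorkspaceClient, policy_id: str) -> None:
    # Raises until the policy is visible again; if it never reappears,
    # the test fails here with a clear message instead of mid-workflow.
    ws.cluster_policies.get(policy_id)
```

If the policy was deleted for good rather than being slow to propagate, recreating it through the installer before starting the workflow would be the sturdier fix.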

Running from nightly #31

JCZuurmond commented 6 months ago

This looks like a flaky test: when rerunning it locally, either as a single test or as part of the whole suite, it did not fail. In this morning's nightly run the test failed on the first attempt and was rerun; no "flaky test" message was given and no comment was posted here, but the second "migrate-tables" job run was successful. Let's keep an eye on this one.
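If it keeps recurring, one option is to make the rerun explicit rather than relying on the nightly job's implicit retry. A minimal sketch, assuming the suite could take a `pytest-rerunfailures` dependency; the fixture names are placeholders:

```python
import pytest


@pytest.mark.flaky(reruns=1, reruns_delay=60)  # rerun once, a minute apart
def test_table_migration_job(ws, installation_ctx):  # placeholder fixtures
    ...
```

An explicit marker would also make the flakiness visible in the test report instead of silently passing on the second attempt.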