dbt-labs / dbt-spark

dbt-spark contains all of the code enabling dbt to work with Apache Spark and Databricks
https://getdbt.com
Apache License 2.0

[ADAP 619][Bug] Unable to materialize table in anything else than delta #803

Closed — irsath closed this issue 10 months ago

irsath commented 1 year ago

Is this a new bug in dbt-spark?

Current Behavior

The file_format config is not taken into account: table materialization is hard-coded to delta. https://github.com/dbt-labs/dbt-spark/blob/e741034160444eb7aa06aef7550a366cdcacc913/dbt/include/spark/macros/materializations/table.sql#L101

My use case is to use AWS Glue as my global metastore. My Databricks clusters are linked to the Glue metastore, and I can successfully read and write tables in it. Unfortunately, I'm not able to write anything other than Delta tables, because the format is hard-coded.

Expected Behavior

The table is materialized in the format specified in the file_format config.
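
For comparison, the same file_format config can be set project-wide in dbt_project.yml, and the expectation is the same there; a minimal sketch (the project name my_project is a placeholder):

```yaml
models:
  my_project:
    +file_format: iceberg
```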

Steps To Reproduce

Execute this model:

from pyspark.sql import SparkSession

def model(dbt, session: SparkSession):
    dbt.config(
        file_format="iceberg"
    )
    df = ...  # your Spark DataFrame
    return df
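
To illustrate the expected behavior, here is a hypothetical sketch (not the actual dbt-spark source) of how the materialization could resolve the write format from the model config instead of hard-coding "delta"; resolve_file_format and its dict argument are illustrative stand-ins for the real config lookup:

```python
# Hypothetical sketch: pick the table format from the model config,
# falling back to delta only when no file_format is set.
DEFAULT_FILE_FORMAT = "delta"

def resolve_file_format(model_config: dict) -> str:
    """Return file_format from the model config, defaulting to delta."""
    fmt = model_config.get("file_format")
    return fmt if fmt else DEFAULT_FILE_FORMAT

# The writer call would then use the resolved format, e.g.
# df.write.mode("overwrite").format(resolve_file_format(cfg)).saveAsTable(name)
```

With this shape, the model above would materialize as iceberg, and models with no explicit file_format would keep the current delta default.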

Relevant log output

No response

Environment

- OS: macOS
- Python: Python 3.10.3
- dbt-core: 1.5.1
- dbt-spark: 1.5.0

Additional Context

No response

github-actions[bot] commented 11 months ago

This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please comment on the issue or else it will be closed in 7 days.

github-actions[bot] commented 10 months ago

Although we are closing this issue as stale, it's not gone forever. Issues can be reopened if there is renewed community interest. Just add a comment to notify the maintainers.