dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
The dbt-spark package contains all of the code enabling dbt to work with Apache Spark and Databricks. For more information, consult the docs.
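dbt-spark is published on PyPI, so the adapter can be installed with pip; a minimal sketch:

```sh
# Install the dbt-spark adapter (pulls in dbt-core as a dependency)
pip install dbt-spark
```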
A docker-compose environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend.
Note: dbt-spark now supports Spark 3.3.2.
The following command starts two docker containers:
```sh
docker-compose up -d
```
It will take a bit of time for the instance to start; you can check the logs of the two containers. If the instance doesn't start correctly, try the complete reset commands listed below and then start again.
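For example, to watch the startup progress you can stream the logs from both containers:

```sh
# Follow logs for all services defined in docker-compose.yml
docker-compose logs -f
```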
Create a profile like this one:
```yaml
spark_testing:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: 127.0.0.1
      port: 10000
      user: dbt
      schema: analytics
      connect_retries: 5
      connect_timeout: 60
      retry_all: true
```
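With this profile saved (by default in ~/.dbt/profiles.yml), you can verify the connection before running any models; dbt's built-in debug command checks the profile and connectivity:

```sh
# Validate the profile and test the connection to the Thrift server
dbt debug --profile spark_testing --target local
```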
Connecting to the local Spark instance: the endpoint is at http://localhost:10000 and can be referenced with the Hive or Spark JDBC drivers using the connection string jdbc:hive2://localhost:10000 and default credentials dbt:dbt.
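As a quick smoke test, any Hive JDBC client can use this endpoint; for example with Beeline (shipped with Hive and Spark distributions), assuming it is on your PATH:

```sh
# Open an interactive SQL session against the local Thrift server
beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt
```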
Note that the Hive metastore data is persisted under ./.hive-metastore/, and the Spark-produced data under ./.spark-warehouse/. To completely reset your environment, run the following:
```sh
docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/
```
If installing on macOS, use Homebrew to install the required dependencies.
```sh
brew install unixodbc
```
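unixODBC is only needed for the ODBC connection method; if you go that route, the adapter's optional extras (names here as documented for dbt-spark, worth double-checking for your version) pull in the matching Python driver dependencies:

```sh
# ODBC connection method (pyodbc requires unixODBC)
pip install "dbt-spark[ODBC]"

# Thrift connection method, as used in the profile above
pip install "dbt-spark[PyHive]"
```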
Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the dbt Code of Conduct.