vbatychko-modeln opened this issue 11 months ago
Hello!
We kinda have the same issue:
We are currently experimenting with Cube Cloud after a month or so testing things out with Cube Core. We plan on using dbt
to generate our models.
We recently updated our process and now include each column's `data_type` in the generated YAML model files. We use dbt-codegen's `generate_model_yaml` macro to generate these files.
We use Databricks SQL Serverless as our data warehouse. The resulting manifest.json mentions data types that are not supported by cube-dbt:
"store_indice": {
"name": "store_indice",
"description": "",
"meta": {},
"data_type": "double",
"constraints": [],
"quote": null,
"tags": []
},
Here's a list of data types that we might encounter in the manifest.
@igorlukanin Would you agree that adding those type aliases to cube-dbt is the way to go? If so, I'd be happy to create a PR.
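For illustration, the aliases could simply extend the mapping dict, along these lines (the dict name, the existing entries, and the `string` fallback are my assumptions, not a copy of what `column.py` actually contains):

```python
# Illustrative sketch only: names and entries here are assumptions, not
# cube_dbt's actual column.py. The idea is to alias common Databricks /
# warehouse type names onto the small set of types that Cube accepts.
CUBE_TYPE_ALIASES = {
    # entries in the spirit of what the package already maps
    "string": "string",
    "boolean": "boolean",
    "timestamp": "time",
    # proposed aliases for types we see in our Databricks manifest.json
    "double": "number",
    "float": "number",
    "decimal": "number",
    "bigint": "number",
    "int": "number",
    "smallint": "number",
    "tinyint": "number",
    "date": "time",
}

def cube_type(data_type: str) -> str:
    """Map a dbt data_type onto a Cube type, falling back to 'string'."""
    return CUBE_TYPE_ALIASES.get(data_type.lower(), "string")
```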
Thanks!
I'm not sure that this qualifies as an issue with the cube_dbt codebase. Cube clarifies what data types are allowed, and the list is abysmally small. So, all that cube_dbt is doing is supporting that list.
You should open an issue on https://github.com/cube-js/cube if you want them to support more data types.
Looking at this, I wonder about the mapping dict in `column.py`. It is currently hardcoded. Could this be exposed as a configuration file?
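For the sake of discussion, a rough sketch of what that could look like, assuming a user-supplied YAML file of overrides (the file name, defaults, and helper below are hypothetical, not part of cube_dbt):

```python
# Hypothetical sketch: none of these names come from cube_dbt itself.
# The built-in mapping stays as a default, and a user-supplied YAML file
# can add or override entries, e.g. cube_dbt_types.yml containing:
#   double: number
#   varchar: string
from pathlib import Path

import yaml  # PyYAML

DEFAULT_TYPE_MAP = {
    "boolean": "boolean",
    "timestamp": "time",
    "double": "number",
}

def load_type_map(config_path: str = "cube_dbt_types.yml") -> dict:
    """Return the default mapping, overlaid with entries from the config file, if any."""
    type_map = dict(DEFAULT_TYPE_MAP)
    path = Path(config_path)
    if path.exists():
        with path.open() as f:
            overrides = yaml.safe_load(f) or {}
        type_map.update({str(k).lower(): v for k, v in overrides.items()})
    return type_map
```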
dbt's `data_type` can carry a much broader set of types than what I currently see in the package's source. The type can be any type supported by the underlying database, for example PostgreSQL's varchar type.
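Parameterized forms such as varchar(255) or decimal(10,2) also show up in manifests; a normalization step could run before the mapping lookup. This is only a sketch, and the function below is hypothetical, not something cube_dbt provides:

```python
import re

# Hypothetical normalization step: reduce database-specific types like
# "varchar(255)" or "decimal(10,2)" to their base name before looking
# them up in the type mapping.
def normalize_data_type(data_type: str) -> str:
    return re.sub(r"\(.*\)", "", data_type).strip().lower()

# normalize_data_type("varchar(255)")          -> "varchar"
# normalize_data_type("DECIMAL(10,2)")         -> "decimal"
# normalize_data_type("character varying(64)") -> "character varying"
```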
Package version: 0.6.0