Closed brandonpeebles closed 2 weeks ago
I checked the dbt-adapters and dbt-osmosis code, and it looks like the fix is quite simple:
In `DbtYamlManager.get_columns_meta`, `c.dtype` needs to be replaced with `c.data_type`:
```python
for c in self.adapter.get_columns_in_relation(table):
    if any(re.match(pattern, c.name) for pattern in blacklist):
        continue
    columns[self.column_casing(c.name)] = ColumnMetadata(
        name=self.column_casing(c.name),
        type=c.data_type,  # was: c.dtype
        index=None,
        comment=getattr(c, "comment", None),
    )
```
`data_type` is a property that includes precision and scale when available:
```python
@property
def data_type(self) -> str:
    if self.is_string():
        return self.string_type(self.string_size())
    elif self.is_numeric():
        return self.numeric_type(self.dtype, self.numeric_precision, self.numeric_scale)
    else:
        return self.dtype

@classmethod
def numeric_type(cls, dtype: str, precision: Any, scale: Any) -> str:
    # This could be decimal(...), numeric(...), number(...)
    # Just use whatever was fed in here -- don't try to get too clever
    if precision is None or scale is None:
        return dtype
    else:
        return "{}({},{})".format(dtype, precision, scale)
```
Fixed in 0.13.0
Issue:
If I set the data type precision/scale in YAML (for example, `NUMBER(38, 0)`), it gets overridden back to the default `NUMBER` when I run dbt-osmosis.

Impact:
This makes it difficult to comply with the new requirements of dbt's model contracts feature, which warns you to specify precision/scale for numeric columns.
Their docs here: https://docs.getdbt.com/docs/collaborate/govern/model-contracts
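For reference, a schema YAML entry like the following (model and column names are hypothetical) is the kind that gets clobbered: after a dbt-osmosis run, the `data_type` reverts to plain `NUMBER`:

```yaml
models:
  - name: my_model          # hypothetical model name
    config:
      contract:
        enforced: true
    columns:
      - name: amount        # hypothetical column name
        data_type: NUMBER(38, 0)   # reverts to NUMBER before the fix
```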