Follow-up to #924. Once we have a second draft of a DAG, it's time to implement! This has two main components:
Port over SQL. This should happen in parallel: we can keep the `sql` folder as is, and move over and adjust the relevant code and queries to create the new models as specified in our DAG. This really has two parts, because one is much more complex than the other.
- [x] Create staging tables. Again, reference our dbt practices section, but this is fairly straightforward: any table that's really just a slight tweak of a source table should be a staging table. So if a column is simply renamed, there should probably be a `stg__dcp_zoningdistricts` that selects from `dcp_zoningdistricts`. Use `products/green_fast_track` as an example here.
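A staging model of this shape might look like the following minimal sketch. The column names, rename, and source name are assumptions for illustration, not the actual schema:

```sql
-- models/staging/stg__dcp_zoningdistricts.sql
-- Hypothetical sketch: a staging model that only renames columns
-- from the source table. Source/column names are assumptions.
select
    zonedist as zoning_district,
    wkb_geometry as geom
from {{ source('recipe_sources', 'dcp_zoningdistricts') }}
```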
- [x] Implement `int__zoningdistricts`. This model should be equivalent to `lotzoneperorder` in `area_zoningdistrict.sql`. It may take an upstream intermediate model or two as well.
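Without reproducing the exact `lotzoneperorder` logic, the general shape is likely a long-format lot/district pairing ranked by overlap. A hedged sketch, where every table, column, and `ref` name is a hypothetical stand-in:

```sql
-- models/intermediate/int__zoningdistricts.sql
-- Hypothetical sketch only: pairs each tax lot with the zoning districts
-- it intersects, ordered by overlap area. All names are assumptions.
select
    lots.bbl,
    districts.zoning_district,
    st_area(st_intersection(lots.geom, districts.geom)) as overlap_area,
    row_number() over (
        partition by lots.bbl
        order by st_area(st_intersection(lots.geom, districts.geom)) desc
    ) as district_order
from {{ ref('stg__dcp_taxlots') }} as lots               -- hypothetical upstream model
join {{ ref('stg__dcp_zoningdistricts') }} as districts
    on st_intersects(lots.geom, districts.geom)
```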
- [x] Implement the remaining "target" int tables (the highest-numbered model of each chain in your flow chart) one at a time. These tables can maybe stay in long format, without worrying about ordering and updating `dcp_zoningtaxlot`.
- [ ] Validate. Does the output match up with the output of the old SQL? This is also a somewhat complex question to answer. Luckily, we have some queries built into ZTL QA to address these sorts of changes.
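One simple spot check is a set difference between the new model and the legacy output. A minimal sketch, assuming hypothetical table and column names on both sides:

```sql
-- Hypothetical validation sketch: rows in the new model that are
-- missing from the legacy output (run the reverse direction too).
-- Table and column names are assumptions.
select bbl, zoning_district
from {{ ref('int__zoningdistricts') }}
except
select bbl, zoningdistrict1
from legacy.dcp_zoningtaxlot
```

An empty result in both directions suggests the port is faithful; the existing ZTL QA queries cover the more nuanced comparisons.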