Describe the task
Note: Deferring for now... going to take the prototype a bit further.
As we've prototyped solutions for different features, we've learned a lot about what's possible with GIS tools such as GDAL and PostGIS. It has also left us with disparate data sources whose boundaries and purposes are not explicitly defined.
For different use-cases we've stored data in different places and in different ways, currently:
- S3 hosts the generated SFMS raster data that we read on the fly to display raw HFI, slope, aspect, and elevation via a raster server
- Our tileserver database/API has tables for fire zone and fire centre polygons that we serve for displaying/analyzing fire zones and centres across the province, as well as an HFI table that stores calculated HFI areas
- Our wps API database stores advisory metadata based on offline advisory calculations that depend on lookup intersections against the HFI table in the tileserver
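For context, the cross-database dependency in the last point could look roughly like the sketch below. All table and column names (`hfi_areas`, `fire_zones`, `geom`) are illustrative assumptions, not the actual schema:

```python
# Hypothetical sketch of the "lookup intersection" the wps API performs
# against the tileserver's HFI table when calculating advisory metadata.
# Table/column names are assumptions for illustration only.

def advisory_hfi_query() -> str:
    """Build a PostGIS query that intersects a fire zone polygon with
    stored HFI areas. Uses a %s placeholder for the zone id so it can
    be passed as a bound parameter to a driver such as psycopg."""
    return (
        "SELECT hfi.id, ST_Area(ST_Intersection(hfi.geom, fz.geom)) AS hfi_area "
        "FROM hfi_areas AS hfi "
        "JOIN fire_zones AS fz ON ST_Intersects(hfi.geom, fz.geom) "
        "WHERE fz.id = %s"
    )

print(advisory_hfi_query())
```

Because this query runs against tables owned by the tileserver while the results land in the wps API database, the advisory workload is coupled to both data stores, which is part of what the boundary discussion below needs to untangle.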
Acceptance Criteria
[ ] Schedule Architectural discussion
[ ] Determine data source boundaries for the different types of workloads (analysis database, raster database, S3 object store) and document them in our data architecture diagram and design doc
[ ] Agree on whether we need more explicit naming in our wps API database, and whether we should pull the desired analysis tables into their own database
[ ] Address any unclear conceptual understanding among developers so that we get ahead of mistakes that could affect velocity and understanding as advisory work moves forward
[ ] Discuss the idea of creating a separate DB for ASA to increase resiliency
[ ] Rethink how/where data for ASA gets stored in our regular (non-tileserv) DB
[ ] Is there anything else we could/should have done to prevent this failure from happening?
Additional context