Closed by kgeographer 5 months ago
I think a more efficient answer will be to:
- ~Add a `close_matches` @property to the Place model.~
- ~Extend the `fetch_mapdata_ds` function in `datasets.utils` so that, if passed a `close_matches=True` parameter, each feature it returns (i.e. each place) will include the pids of its close matches.~
- ~Adapt the `mapAndTable.js` code to perform the grouping, headword computation, and aggregation currently envisaged for the various functions in `collection.views.py`.~
- ~Write a separate `fetch_mapdata` function for Dataset Collections, which aggregates the features into a single FeatureCollection, each feature having a `matchset` UUID and its dataset `id`.~
- ~Adapt the `mapAndTable.js` code to group matching matchsets and to perform the headword computation and aggregation currently envisaged for the various functions in `collection.views.py`.~
This will greatly simplify dealing with geometry visualisation and the events associated with the Datasets selector.
EDIT: no, instead make a single feature for each matchset, including a GeometryCollection (with a centroid representative point), a headword, and aggregated types.
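The revised per-matchset feature could be assembled roughly as below. This is a minimal sketch, not the actual WHG code: the `build_matchset_feature` helper, its input shape ('pid', 'title', 'types', 'geoms' keys), and the property names are all assumptions.

```python
import uuid
from collections import Counter

def build_matchset_feature(places):
    """Build one GeoJSON feature summarizing a set of matched Place records.

    `places` is a list of dicts with hypothetical keys: 'pid', 'title',
    'types' (list of strings), and 'geoms' (list of GeoJSON geometry dicts).
    """
    geoms = [g for p in places for g in p["geoms"]]
    # crude representative point: mean of any Point coordinates present
    pts = [g["coordinates"] for g in geoms if g["type"] == "Point"]
    centroid = None
    if pts:
        centroid = [sum(c[0] for c in pts) / len(pts),
                    sum(c[1] for c in pts) / len(pts)]
    # headword: the most frequent title across the matched places
    headword = Counter(p["title"] for p in places).most_common(1)[0][0]
    return {
        "type": "Feature",
        "geometry": {"type": "GeometryCollection", "geometries": geoms},
        "properties": {
            "matchset": str(uuid.uuid4()),       # matchset UUID, per the EDIT above
            "pids": [p["pid"] for p in places],
            "headword": headword,
            "types": sorted({t for p in places for t in p["types"]}),  # aggregated types
            "centroid": centroid,
        },
    }
```

A real implementation would need a sturdier representative point (e.g. from PostGIS), but the shape of the output is the point here: one feature per matchset, carrying everything the table row and map need.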
While designing a major modification to the existing 'dataset collection' (DC) feature, I began walking through its implementation and bogged down at the required changes in mapAndTable.js. The implied backend and frontend changes are closely intertwined; I am unable to determine what the front end plausibly needs, nor am I able to refactor the frontend JavaScript.
The scenario being supported:
**Use case steps**
1. create a new empty DC (dc1), with a name and minimal metadata
2. add an indexed dataset (ds1) to dc1: a record is created in `collection_colldataset` (id, collection_id, dataset_id, date_added)
3. at this stage, ds_collection_browse (dc_browse) displays only the place records from ds1
4. add a 2nd dataset (ds2)
5. the `connected_components()` function finds any groups of records in ds1 and ds2 that were matched during their prior indexing step (i.e. they appear as an edge in `close_matches`); now the table in dc1's browse screen must follow one of the two options pictured below
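The grouping in step 5 can be sketched as a union-find over the `close_matches` edges. This is a toy illustration only; the real `connected_components()` implementation, and how it reads the edges from the database, are not shown here.

```python
def connected_components(pids, edges):
    """Group place ids into matchsets given close_matches edges.

    `pids` is an iterable of all place ids in the collection's datasets;
    `edges` is an iterable of (pid_a, pid_b) pairs from close_matches.
    Returns a list of sets, one per component; unmatched places end up
    as singleton sets.
    """
    parent = {p: p for p in pids}

    def find(x):
        # follow parent pointers to the root, halving the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb  # merge the two components

    groups = {}
    for p in pids:
        groups.setdefault(find(p), set()).add(p)
    return list(groups.values())
```

For example, with edges (1234, 5678) and (5678, 7654), the set {1234, 5678} grows to {1234, 5678, 7654}, which is exactly the regeneration scenario described further below.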
As it stands, dc_browse receives a `ds_list` context object containing only metadata for each dataset in the collection: `ds_list.keys() = ['id', 'label', 'bounds', 'title', 'dl_est', 'numrows', 'modified']`. Code in `mapAndTable.js` constructs a 'data' object for each dataset in the collection; these are aggregated into `allFeatures[]`, which is used to (a) render the table rows and (b) interact with the collection's map tileset features.
**Challenges**
The refactor bogs down there: `allFeatures[]` is generated from lists of individual features (Place records) in datasets. Now each item in `allFeatures[]` needs to comprise a summary of data from a set of one or more Place instances, not from a single Place instance.
On the backend, the addition of a dataset to a DC must generate a new list of that collection's place sets. With the addition of a dataset, what was {1234, 5678} may now be {1234, 5678, 7654}, and the summary of that "AggPlace" needs to be reflected in the collection's dc_browse screen map and table.
If there is to be an AggPlace model/table, it would have at minimum the fields (id, collection_id, pid_set), and before regenerating records in a dataset_add() function, the previous records for that collection would be deleted.
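The delete-then-regenerate step could look roughly like this. The sketch below uses a plain list of dicts as a stand-in for the AggPlace table rather than Django ORM calls, and every name in it (`regenerate_agg_places`, the AggPlace field names) is an assumption, not existing code.

```python
def regenerate_agg_places(store, collection_id, components):
    """Rebuild AggPlace rows for one collection.

    `store` is a stand-in for the AggPlace table: a list of dicts with
    keys (id, collection_id, pid_set). `components` is the output of
    connected_components() for the collection's current datasets.
    All previous rows for the collection are deleted, then one row is
    written per matchset.
    """
    # drop stale rows for this collection only, leaving other collections intact
    store[:] = [row for row in store if row["collection_id"] != collection_id]
    next_id = max((row["id"] for row in store), default=0) + 1
    for pid_set in components:
        store.append({
            "id": next_id,
            "collection_id": collection_id,
            "pid_set": sorted(pid_set),
        })
        next_id += 1
    return store
```

With the ORM, the same pattern would be a filtered `delete()` followed by a `bulk_create()`, ideally inside a transaction so dc_browse never sees a half-rebuilt collection.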
Option 1: the dc_browse template would have to fetch a `collection.agg_places()` context object (instead of `ds_list`) and deliver it to `mapAndTable.js`, which would construct `allFeatures[]` from it.
Option 2: the generation of new AggPlace rows could itself include building the composite/summary info needed for dc_browse.
Simple, eh? (Not.) Figures below illustrate options 1 and 2 (so far the team has favored #1).