Closed · billdirks closed this 4 years ago
I've been making progress on this front and have a semi-presentable version working.
I have added a `summary` option to the config with a list of human-readable titles as attributes. These each default to `true`, but you can swap them over to `false`.
```json
...
"summary": {
  "Unique Vehicles": true,
  "Active Vehicles": true,
  "Total Trips": true,
  "Total Trip Distance": true,
  "Distance Per Vehicle": true,
  "Vehicle Utilization": true,
  "Trips Per Active Vehicle": true,
  "Avg Trip Distance": true,
  "Avg Trip Duration": true
},
...
```
This object is now rendered to the report template, just like we do for the report's data blob. I have this working by mapping the human-readable names to div ids, then finding the respective divs and deleting those elements at render time. This works fine, and you can see an example with 3 disabled fields here:
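The render-time removal described above can be sketched roughly like this. The id mapping and function name are illustrative, not the actual mobility-metrics identifiers:

```javascript
// Hypothetical mapping from the human-readable config titles to the
// div ids used in the report template (names are assumptions).
const idForMetric = {
  "Unique Vehicles": "unique-vehicles",
  "Active Vehicles": "active-vehicles"
  // ...one entry per summary metric
};

// Walk the summary config and remove the div for every disabled metric.
function pruneDisabledMetrics(summaryConfig, document) {
  for (const [name, enabled] of Object.entries(summaryConfig)) {
    if (enabled) continue;
    const el = document.getElementById(idForMetric[name]);
    if (el) el.remove();
  }
}
```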
I'm happy with the general config flow and how we are rendering this in the UI, except for one issue. If you delete metric elements across rows, it can lead to some funny layouts. The grid system adapts nicely horizontally but does not adapt vertically. Here's an example of that behavior with more of the metrics disabled:
What I am planning to try next is to walk through the metrics list and render the layout like this:
```js
const rows = [[]]
let row = 0
for (const [metric, enabled] of Object.entries(metricConfig)) {
  if (!enabled) continue
  if (rows[row].length < 3) {
    rows[row].push(metric)
  } else {
    rows.push([])
    row++
    rows[row].push(metric)
  }
}
```
This would mean that the rows will "fill" with a max of 3 metrics per row, until all the enabled metrics have been rendered. The last row, wherever it lands, will center the metrics horizontally as seen in the existing implementation above.
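As a sanity check on the filling behavior, here is a self-contained sketch of the same walk (the `packRows` helper name and the sample config are mine, not from the project):

```javascript
// Pack the enabled metrics into rows of at most `perRow` entries.
function packRows(metricConfig, perRow = 3) {
  const rows = [[]];
  for (const [metric, enabled] of Object.entries(metricConfig)) {
    if (!enabled) continue;
    if (rows[rows.length - 1].length >= perRow) rows.push([]);
    rows[rows.length - 1].push(metric);
  }
  return rows;
}

// Five enabled metrics pack into one full row of three plus a final
// row of two, which the grid then centers horizontally.
packRows({
  "Unique Vehicles": true,
  "Active Vehicles": true,
  "Total Trips": false,
  "Total Trip Distance": true,
  "Distance Per Vehicle": true,
  "Avg Trip Distance": true
});
// → [["Unique Vehicles", "Active Vehicles", "Total Trip Distance"],
//    ["Distance Per Vehicle", "Avg Trip Distance"]]
```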
Shipped in v4.0
When using mobility-metrics we often want to configure which summary statistics to show. That is, we may be interested in seeing `active vehicles` but not `unique vehicles`. The motivation behind this request is that we compute some metrics outside of mobility-metrics with similar names and meanings, derived from the raw data but with a different algorithm, and we don't want to present two versions of almost the same metric.