Open jpmckinney opened 1 year ago
An alternative is to list which fields each indicator uses. (This would make it more an exercise for the reader.)
If we do list each field, then maybe #42 is also relevant.
After discussing with @camilamila: we should report the application count, pass count, fail count and total (like Pelican does), so that users have an idea of the indicator's coverage.
Note: Whereas Pelican excludes no data, indicators do exclude some contracting processes (e.g. direct procedures, if the indicator is specific to open procedures). So, we need to count exclusions separately.
For example, Pelican reports "pass", "fail" and "not applicable" for quality checks.
Cardinal presently only reports "fail" for red flags.
It might be useful to be able to review the N/A results.
This would involve, at minimum, storing an `Option<bool>`.
Pelican also stores other metadata, like `application_count` and `pass_count` for checks that operate on arrays, as well as easily accessible metadata to understand why the check failed (e.g. the paths to the fields that caused the failure).