Open · godfoder opened this issue 6 years ago
TestField | Value |
---|---|
GUID | 45fb49eb-4a1b-4b49-876f-15d5034dfc73 |
Label | MEASURE_VALIDATIONTESTS_COMPLIANT |
Description | Measures the number of distinct VALIDATION tests that have a Response.status="COMPLIANT" for a given record. |
TestType | Measure |
Darwin Core Class | bdq:Validation |
Information Elements ActedUpon | |
Information Elements Consulted | bdq:AllValidationTestsRunOnSingleRecord |
Expected Response | INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were attempted to be run; Report the number of tests of output type VALIDATION run against the record that were COMPLIANT (passed) |
Data Quality Dimension | Reliability |
Term-Actions | VALIDATIONTESTS_COMPLIANT |
Parameter(s) | |
Source Authority | |
Specification Last Updated | 2024-08-18 |
Examples | [Response.status=RUN_HAS_RESULT, Response.result="7", Response.comment="7 VALIDATION tests had Response.status=COMPLIANT"] |
Source | TG2-Gainesville |
References | |
Example Implementations (Mechanisms) | |
Link to Specification Source Code | |
Notes | We have three individual measures for pass (MEASURE_VALIDATIONTESTS_COMPLIANT (45fb49eb-4a1b-4b49-876f-15d5034dfc73)), fail (MEASURE_VALIDATIONTESTS_NOTCOMPLIANT (453844ae-9df4-439f-8e24-c52498eca84a)), and PREREQUISITES_NOT_MET (49a94636-a562-4e6b-803c-665c80628a3d). To get the total number of tests that were attempted, add all three measures. To get the total number of tests that ran, add NOT_COMPLIANT (fail) and COMPLIANT (pass). |
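To make the Expected Response above concrete, here is a minimal sketch of the measure logic. This is not an official implementation: the function name and the shape of the Response dictionaries are assumptions made for illustration. Note that it checks Response.result for COMPLIANT (as in the workflow comment further down, which uses Response.result=COMPLIANT), while the Description above speaks of Response.status="COMPLIANT".

```python
# Minimal sketch of MEASURE_VALIDATIONTESTS_COMPLIANT for a single record.
# Names and data shapes are illustrative, not from any official library.
def measure_validationtests_compliant(validation_responses):
    """validation_responses: dict mapping a VALIDATION test label to its Response dict."""
    if not validation_responses:
        # No tests of type VALIDATION were attempted to be run on this record.
        return {"status": "INTERNAL_PREREQUISITES_NOT_MET",
                "result": None,
                "comment": "No VALIDATION tests were attempted on this record."}
    compliant = sum(1 for r in validation_responses.values()
                    if r.get("result") == "COMPLIANT")
    return {"status": "RUN_HAS_RESULT",
            "result": str(compliant),
            "comment": f"{compliant} VALIDATION test(s) had a COMPLIANT Response."}
```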
I suggest the Description:
'The number of distinct VALIDATION tests that have a Response.status="COMPLIANT" for a given record.'
in place of:
'The number of VALIDATION type tests run on a record that have a Response.status="COMPLIANT".'
From Zoom meeting 30th May 2022, changed Expected Response
INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were attempted to be run; REPORT of the number of tests of output type VALIDATION run against the record that were COMPLIANT (passed); otherwise NOT_REPORTED
to
INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were attempted to be run; Report of the number of tests of output type VALIDATION run against the record that were COMPLIANT (passed)
That should have been
INTERNAL_PREREQUISITES_NOT_MET if no tests of type VALIDATION were attempted to be run; Report the number of tests of output type VALIDATION run against the record that were COMPLIANT (passed)
Changed Notes to remove internal GitHub References from
We have three individual measures for COMPLIANT (pass #135), NOT_COMPLIANT (fail #31), and PREREQUISITES_NOT_MET #134). To get the total number of tests that were attempted, add all three measures. To get the total number of tests that ran, add NOT_COMPLIANT (fail) and COMPLIANT (pass).
to
We have three individual measures for COMPLIANT (pass MEASURE_VALIDATIONTESTS_COMPLIANT (45fb49eb-4a1b-4b49-876f-15d5034dfc73)), NOT_COMPLIANT (fail MEASURE_VALIDATIONTESTS_NOTCOMPLIANT (453844ae-9df4-439f-8e24-c52498eca84a)), and PREREQUISITES_NOT_MET (49a94636-a562-4e6b-803c-665c80628a3d). To get the total number of tests that were attempted, add all three measures. To get the total number of tests that ran, add NOT_COMPLIANT (fail) and COMPLIANT (pass).
Updated wording of Notes to be consistent with wording of #134
Splitting bdqffdq:Information Elements into "Information Elements ActedUpon" and "Information Elements Consulted". I am unsure about this MEASURE: I opted for "Consulted".
Also changed "Field" to "TestField", "Output Type" to "TestType" and updated "Specification Last Updated"
AllDarwinCoreTerms needs to be replaced by a list of the relevant validations, in the form used in the MultiRecord measures (e.g. bdq:VALIDATION_BASISOFRECORD_NOTEMPTY.Response), as it is the results of the validations run on the single record, not the Darwin Core terms, that are the information elements for this test (see the sketch below).
We should probably also split this test into one test for each use case, with information elements matching the validations found in that use case.
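As a rough illustration of that suggestion, the information elements consulted could be the Responses of the VALIDATION tests already run on the single record, keyed by test label. The test names and Response values below are illustrative only.

```python
# Hypothetical information elements for this measure: the Responses of the
# VALIDATION tests already run on the single record, not the Darwin Core terms.
# Test labels and values are illustrative.
single_record_validation_responses = {
    "VALIDATION_BASISOFRECORD_NOTEMPTY": {"status": "RUN_HAS_RESULT", "result": "COMPLIANT"},
    "VALIDATION_COUNTRYCODE_STANDARD": {"status": "RUN_HAS_RESULT", "result": "NOT_COMPLIANT"},
    "VALIDATION_EVENTDATE_NOTEMPTY": {"status": "INTERNAL_PREREQUISITES_NOT_MET", "result": None},
}

# Passing these to the sketch above would report one COMPLIANT VALIDATION test.
```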
Good pickup @chicoreus, but I wonder about splitting on use case. The Test to Use Case relationship is many-to-many.
We need to get back to basics on what these few tests are for. They are different to the MultiRecord tests. Basically, we are looking at someone testing their database, where they may run 50 tests (regardless of Use Case, although a Use Case such as "Data Management" applies). So I have run 50 tests on my data and I want to document that I ran 50 VALIDATION tests and 48 had a Response.result=COMPLIANT, etc. (a worked example follows below). I then check why two failed, correct those, and run this test again. I see them as largely internal management tests.
Perhaps this needs discussing further in Seattle next week.
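A minimal worked example of the counting described in the Notes and in the workflow above; all numbers are illustrative.

```python
# 50 VALIDATION tests are attempted on one record (illustrative numbers).
compliant = 48              # MEASURE_VALIDATIONTESTS_COMPLIANT
not_compliant = 2           # MEASURE_VALIDATIONTESTS_NOTCOMPLIANT
prerequisites_not_met = 0   # the PREREQUISITES_NOT_MET measure

tests_attempted = compliant + not_compliant + prerequisites_not_met  # 50
tests_that_ran = compliant + not_compliant   # 50 here; fewer if any prerequisites were not met
```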
I agree @chicoreus, but there are two issues. First up, we probably should not list CORE VALIDATION tests, as these may change depending on context. Second, we don't agree about splitting on use case.
I have changed Information Elements Consulted from "All DarwinCoreTerms" to "All CORE tests of type VALIDATION that were run"
Changed Information Elements Consulted to "bdq:AllValidationTestsRunOnSingleRecord"