Note that I am not using HAPI at all, just the hl7validator CLI, standalone :)
for the record:
Just to chime in on the discussion: we're using the validator in the QA workflow for developing new profiles, and we solved this issue by creating a wrapper and documenting each "known issue" in a YAML file that the wrapper reads (the wrapper serves several other purposes as well, btw). There's a hard requirement that each suppressed issue is accompanied by a reason, and there's a check for issues that are documented but don't actually occur. We chose YAML because it's easy for humans to read, so we can deliver it as part of our publication of FHIR profiles (although ultimately this should probably be part of an IG, but we're not there yet).
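A minimal sketch of what such a known-issues file could look like. The field names here (`resource`, `message`, `reason`) are illustrative assumptions, not the actual schema used by the Nictiz wrapper:

```yaml
# Hypothetical known-issues file; the schema is assumed for illustration.
known-issues:
  - resource: Observation-example-01      # instance the issue occurs in
    message: "does not match any slice"   # (sub)string of the validator message
    reason: >
      The slice is actually present; the validator only thinks it's missing
      because it doesn't know the Dutch SNOMED extension. Verified manually.
```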
Side note: filtering out errors in production is unfortunately needed because the errors reported by the validator are not always reliable, especially when it comes to terminology. Sometimes we just need to say: "Yeah, we know you think a required slice is missing, but that's only because you think this SNOMED code is wrong, and that's only because you don't know the Dutch SNOMED extensions. Please ignore this, we know this is actually correct."
Thank you @pieter-edelman-nictiz for chiming in! Will have a look at your approach - sounds like it solves our need.
Hi @david-simons, not sure if you're talking about the approach or the wrapper itself, but if you mean the latter, you can use the file analyze_results.py as a standalone wrapper.
So I hope we know the Dutch SNOMED codes now. Make a separate issue if there's still a problem there. But is this still a feature request? It's pretty... specific... and we don't usually do stuff like this (at least gratis). I'm... bothered... by the idea of suppressing warnings with XML comments. (No JSON?)
> So I hope we know the Dutch SNOMED codes now. Make a separate issue if there's still a problem there.
This was just an example of a false error that I happened to encounter. The point is that there will probably always remain some errors that you know to be false and that you just want to suppress if you're using the Validator in an automated fashion (but you're right that I should file a ticket about this particular topic).
> I'm... bothered... by the idea of suppressing warnings with XML comments. (No JSON?)
Personally, I'm not in favor of using XML comments to suppress messages either, as it's an unstructured way to document this: it's invisible to implementers and, like you say, it doesn't work for JSON. A separate YAML file like we use is of course also sub-optimal, as it's an out-of-band solution -- we have a policy of documenting these things in the profile comments as well (when the check is on a profile, that is).
We will try the approach of doing this in a wrapper first.
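For anyone going down the same path, here is a minimal sketch of such a wrapper in Python, assuming the illustrative YAML schema sketched earlier and the standard `validator_cli.jar` invocation with its `-output` option (which writes the result as an OperationOutcome resource):

```python
#!/usr/bin/env python3
"""Sketch of a validator wrapper that filters known issues (YAML schema assumed above)."""
import json
import subprocess
import sys

import yaml  # PyYAML


def run_validator(source):
    # -output makes the CLI write its result as an OperationOutcome resource.
    subprocess.run(
        ["java", "-jar", "validator_cli.jar", source, "-output", "outcome.json"],
        check=False,  # the validator exits non-zero when it finds errors
    )
    with open("outcome.json") as f:
        return json.load(f)


def filter_issues(outcome, known):
    """Split issues into (not suppressed, stale suppressions)."""
    remaining, matched = [], set()
    for issue in outcome.get("issue", []):
        text = issue.get("details", {}).get("text", "")
        hit = next((i for i, k in enumerate(known) if k["message"] in text), None)
        if hit is None:
            remaining.append(issue)
        else:
            matched.add(hit)
    # Suppressions that no longer match anything must be cleaned up.
    stale = [k for i, k in enumerate(known) if i not in matched]
    return remaining, stale


if __name__ == "__main__":
    with open("known-issues.yml") as f:
        known = yaml.safe_load(f).get("known-issues", [])
    remaining, stale = filter_issues(run_validator(sys.argv[1]), known)
    for issue in remaining:
        print(issue["severity"], issue.get("details", {}).get("text", ""))
    for k in stale:
        print("STALE suppression:", k["message"], "--", k["reason"])
    sys.exit(1 if stale or any(i["severity"] in ("fatal", "error") for i in remaining) else 0)
```

Matching on a message substring alone is simplistic (a real wrapper would presumably also match on location/resource), but the shape of the check is the point: every suppression carries a reason, and suppressions that no longer fire are reported as stale.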
In many cases, we review hl7validator warnings and want to mark them for suppression in subsequent runs, together with some rationale. We do not want to suppress all warnings of that type, since every occurrence requires its own justification.
Request to be able to review/track and suppress these warnings individually, via some kind of tag/pragma in the FHIR artifact source itself, rather than wrappers or out-of-band lists. For example, add a
<!-- hl7validator:warning:suppress rationale goes here -->
annotation on the line that triggers a warning, which is then picked up by the hl7validator on a per-warning basis. Our interest is to suppress Warnings and Information; maybe others also want to suppress Errors. Also, this may be the start of a more generic mechanism for extensible tooling annotations in FHIR artifacts.
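To make the proposal concrete, here is how such an annotation might look inside a (hypothetical) instance, placed on the element that triggers the warning; the resource content and placeholder code are purely illustrative:

```xml
<Observation xmlns="http://hl7.org/fhir">
  <code>
    <coding>
      <system value="http://snomed.info/sct"/>
      <!-- hl7validator:warning:suppress code comes from the Dutch SNOMED
           extension, which the validator's terminology service doesn't know -->
      <code value="[dutch-extension-code]"/>
    </coding>
  </code>
</Observation>
```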
Reference: https://chat.fhir.org/#narrow/stream/179166-implementers/topic/Expression.20Constraint.20Language.20.28ECL.29-.20available.20elsewhere.3F/near/248348941