Open dubgeis opened 2 months ago
Incorporating conditional review logic and data parameterization in the Hedera Guardian is a positive step, as it allows for flexibility in emissions tracking and reporting. Given the diverse and evolving nature of industry and regulatory standards, it’s essential that auditors have the ability to set specific parameters for auditing. While the system should assist in flagging data that falls outside these parameters, the final validation should rest with the auditors to ensure accuracy and compliance.
This would certainly be helpful. Some input, drawing on an example from my work digitising GS' metered energy methodology -
Future work could include ML models that predict consumption patterns and automatically identify more complex anomalies.
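As a starting point before full ML models, a simple statistical check can already surface suspect consumption values. Below is a minimal sketch, assuming monthly kWh readings per device — the field names and threshold are illustrative, not part of any Guardian API:

```python
# Sketch: flag metered-energy readings that deviate strongly from the
# historical baseline using a simple z-score. Illustrative only.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard
    deviations from the mean of the series."""
    if len(readings) < 2:
        return []
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

monthly_kwh = [120, 118, 125, 122, 119, 121, 950]  # last value looks wrong
print(flag_anomalies(monthly_kwh))  # -> [6]
```

Flagged readings would go to the VVB for manual review rather than being auto-rejected, consistent with keeping final validation with the auditors.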
FWIW, here's the static & dynamic MRV schema I'm planning to use for the metered energy policy.
Problem description
Currently, Guardian data entered in the form of Verifiable Credentials requires review by independent auditors or VVBs. When structuring a policy/methodology, it should be possible to define the normal values, or allowable variances, for answers as permitted by Standards and/or Auditors. In other settings this would typically be treated as anomaly detection.
It is unclear whether this requires a conditional workflow, which is already possible in the Guardian via a policy, or conditional review logic based on a specific answer. Ideally, Machine Learning models would parse the data, accept answers within the norm range, and flag the rest for additional review or rejection.
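The conditional review logic described above could be sketched as a simple routing step: answers inside an auditor-configured norm range pass automatically, and everything else is flagged. The rule structure and field names here are hypothetical, not Guardian's actual policy engine:

```python
# Sketch of conditional review routing. Answers within the configured
# norm range are auto-accepted; everything else is flagged for manual
# review. Rule structure and names are hypothetical.
def route_for_review(field, value, norms):
    low, high = norms[field]      # allowable range set by the auditor
    if low <= value <= high:
        return "auto-accept"
    return "flag-for-review"      # final decision stays with the VVB

norms = {"monthly_kwh": (0, 500)}
print(route_for_review("monthly_kwh", 120, norms))  # auto-accept
print(route_for_review("monthly_kwh", 950, norms))  # flag-for-review
```

Note that "flag-for-review" routes to a human reviewer rather than rejecting outright, matching the requirement that auditors retain final validation.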
Requirements
The ability to set parameters, which may not be public, on Verifiable Credential based answers within a schema.
Definition of done
The ability for an auditor or standards body to enable conditional logic; this should be adjustable even after a policy is published, without requiring migrations.
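One way to make the logic adjustable without migrations is to keep the review rules as a separate data document alongside the policy, rather than baking them into the published schema. A minimal sketch, assuming a JSON rule document (the structure is an assumption, not Guardian's format):

```python
import json

# Review rules stored as standalone JSON the auditor can update at any
# time; the published schema itself never changes. Hypothetical format.
rules_json = '{"monthly_kwh": {"min": 0, "max": 500}}'

def load_rules(raw):
    return json.loads(raw)

def within_rules(field, value, rules):
    rule = rules.get(field)
    if rule is None:
        return True                 # no rule configured -> nothing to enforce
    return rule["min"] <= value <= rule["max"]

rules = load_rules(rules_json)
print(within_rules("monthly_kwh", 950, rules))  # False
# Tightening the range later is a data update, not a schema migration:
rules["monthly_kwh"]["max"] = 300
print(within_rules("monthly_kwh", 400, rules))  # False after the update
```

Because the rules live outside the schema, updating them is an ordinary data write and existing Verifiable Credentials remain valid against the unchanged schema.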
Acceptance criteria
Auditors, VVBs, and Standards can submit ranges, accepted responses, and data formats that would be allowable for an answer.
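The three constraint kinds listed above — numeric ranges, enumerated accepted responses, and data formats — could be expressed per field and checked uniformly. A hedged sketch; the constraint table and field names are hypothetical, not Guardian's actual API:

```python
import re

# Hypothetical per-field constraints covering the three kinds of
# allowable-answer checks: a numeric range, an enumerated set of
# accepted responses, and a regex-described data format.
CONSTRAINTS = {
    "monthly_kwh":  {"range": (0, 500)},
    "fuel_type":    {"enum": {"electric", "lpg", "biomass"}},
    "meter_serial": {"format": r"[A-Z]{2}-\d{6}"},
}

def validate(field, value):
    c = CONSTRAINTS.get(field, {})
    if "range" in c:
        low, high = c["range"]
        return low <= value <= high
    if "enum" in c:
        return value in c["enum"]
    if "format" in c:
        return re.fullmatch(c["format"], value) is not None
    return True          # no constraint registered for this field

print(validate("monthly_kwh", 120))          # True
print(validate("fuel_type", "diesel"))       # False
print(validate("meter_serial", "AB-123456")) # True
```

In practice these constraints would be authored by the Standard or auditor per schema field, and a failed check would flag the credential for review rather than silently rejecting it.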