Something I've seen in a lot of the W3C repos is test vectors for the models they define.
Test vectors help greatly when implementing a spec, since a spec is not always explicit about what is and isn't valid, or about how something should look exactly.
For example, when implementing the did:peer spec I had a hard time because there is no test vector showing how a DID document should be encoded in a did:peer:1, which resulted in different frameworks (e.g. AFGO and AFJ) producing different output for the same input.
I think we should add test vectors for all models defined in this specification, and treat them as leading. I'd then like to run tests in the framework against those test vectors, so we stay aligned with what this specification defines.
The models we should cover (I may be missing some):
- schema
- credential definition
- revocation registry definition
- revocation status list
- credential offer
- credential request
- credential
- proof request
- presentation
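To make the idea concrete, here is a minimal sketch of what a test vector for one of these models could look like, using the schema model as an example. The vector layout (`description` / `expected`) and the field names are assumptions for illustration; the actual format would need to be agreed upon in the specification.

```python
# Hypothetical test vector for the "schema" model. The top-level field
# names mirror the AnonCreds schema structure; the surrounding vector
# format is an assumption, not something defined by the spec.
schema_vector = {
    "description": "Basic schema with three attributes",
    "expected": {
        "issuerId": "did:example:issuer",
        "name": "Example Schema",
        "version": "1.0",
        "attrNames": ["name", "age", "height"],
    },
}


def check_schema_vector(vector: dict) -> bool:
    """Minimal structural check: the expected schema must carry all
    required top-level fields."""
    required = {"issuerId", "name", "version", "attrNames"}
    missing = required - vector["expected"].keys()
    if missing:
        raise ValueError(f"schema vector missing fields: {sorted(missing)}")
    return True


print(check_schema_vector(schema_vector))  # True
```

A vector file per model, each containing a list of such entries, would let every framework pull the same fixtures into its own test suite.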
We should also cover different variations, such as:
- credential definition with and without revocation
- proof request with and without a non-revocation interval
- proof with and without a non-revocation proof
- credential with and without revocation
I'm also wondering whether we need test vectors for the private parts. Although those models aren't defined explicitly, we do have a set of output values (e.g. the private keys generated for a credential definition). I'm not sure what the best way would be to add test vectors for those, but it is important nevertheless.