Some concerns I've seen in other test architectures that we need to address, along with some opinionated comments:
The W3C test suites use the VC-API endpoints as test endpoints. Some implementers will not want to expose these endpoints and therefore won't be able to leverage those test suites.
The vc-api group already has a test-suite model and infrastructure; it might be worthwhile to delegate some of the technical testing to the vc-api work, asking implementers to submit their implementation to that group and publish their reports on canivc.com. This is a win/win: it brings more adoption to the VC-API and avoids duplicating work.
A model we are currently designing on the traceability call is to take issued VC fixtures from issuers, send them to every verifier implementation, and assert on the responses. I think this model fits our case well, since the VCs will already be published.
For technical interoperability, what really matters is testing "has this implementer correctly implemented the specification?". This applies to specific cryptosuites, BitstringStatusList, the data model, etc.
Since this doesn't require dynamic testing, a test client can send a published VC (DPP, CC, TE) to a verification endpoint and make assertions on the response (e.g. this test VC is a revoked Conformity Credential, so I expect verification to return a warning). These wouldn't be production endpoints, but backchannel endpoints for testing the underlying issuance/verification libraries.
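To make the idea concrete, here is a minimal sketch of such a test client in Python. Everything here is an assumption for illustration: the response shape (`verified`/`warnings` keys), the request payload field `verifiableCredential`, the `ConformityCredential` type name, and the endpoint URL are all placeholders, not taken from any finalized spec.

```python
# Sketch of a fixture-driven verifier check. The response shape
# {"verified": bool, "warnings": [...]} and all field names below are
# HYPOTHETICAL placeholders, not from any finalized specification.
import json
import urllib.request

# A revoked Conformity Credential fixture (illustrative content only).
REVOKED_CC_FIXTURE = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "ConformityCredential"],
    "credentialStatus": {
        "type": "BitstringStatusListEntry",
        "statusPurpose": "revocation",
        "statusListIndex": "94567",
        "statusListCredential": "https://example.com/status/3",
    },
}

def assert_revoked_warning(response: dict) -> None:
    """Assert that the verifier flagged the revoked fixture with a warning."""
    warnings = response.get("warnings", [])
    assert any("revok" in str(w).lower() for w in warnings), (
        "expected a revocation warning, got: %r" % (warnings,)
    )

def check_verifier(endpoint: str) -> None:
    """POST the fixture to an implementer's backchannel verify endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"verifiableCredential": REVOKED_CC_FIXTURE}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        assert_revoked_warning(json.load(resp))

# With a canned response instead of a live endpoint:
mock_response = {"verified": True, "warnings": ["credential status: revoked"]}
assert_revoked_warning(mock_response)
```

The same pattern extends to other fixtures (expired credentials, broken proofs, unknown cryptosuites): one published fixture per expected outcome, one assertion per response.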
The second part of testing is the traceability-graph. This can be divided into two test cases:
1. Is your VC "graphable"?
2. Can your implementation "graph" a conformant VC?
For test case 1 we will need a reference implementation of a graph resolver to test an issued DPP. For test case 2, I'm unsure how this can be tested.
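For test case 1, the core check a reference graph resolver would perform might look like the sketch below. The linking convention (a `links` array of credential ids) is entirely hypothetical, since the traceability-graph rules are still being worked out; the point is only that "graphable" reduces to "every reference resolves within the published set".

```python
# Minimal "graphability" check. The `links` field and urn-style ids are
# HYPOTHETICAL placeholders for whatever linking convention the
# traceability-graph work settles on.

def is_graphable(credentials: list[dict]) -> bool:
    """Return True if every link in the credential set resolves to
    another credential in the set, i.e. the set forms a closed graph."""
    ids = {vc.get("id") for vc in credentials}
    for vc in credentials:
        for link in vc.get("links", []):
            if link not in ids:
                return False  # dangling reference: not graphable
    return True

# A DPP linking to a Conformity Credential resolves; alone it does not.
dpp = {"id": "urn:vc:dpp:1", "links": ["urn:vc:cc:1"]}
cc = {"id": "urn:vc:cc:1", "links": []}
assert is_graphable([dpp, cc])
assert not is_graphable([dpp])  # the linked CC fixture is missing
```

A real resolver would also dereference remote ids and validate each linked credential, but even this closed-set check would catch fixtures whose references can never resolve.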
Some of this is already outlined in the related PR.