swcurran opened 4 years ago
It will be trivial to generalize those step definitions. Question is, will all agents expose did, schema, and credential definition operations? Will we assume the test harness calls to these operations will be taken care of in some way in the backchannels? My assumption is yes, there needs to be a way to do these things, and the test harness needs to control when and what is done through these operations, ie. when, what schemas, how many, etc. If my assumption is wrong then I will need to know now, because this will directly affect what I am implementing to support multiple credentials in a single proof request.
In regards to the parameters/externalization of the schemas/creds/proofs, you will see this fully realized in the proof tests. Biggest bang for the buck was to get the end to end tests to work with this parameterization since proofs are the most likely to need differing schemas etc. Right now the issue credential tests just use the default schema and cred data, which is used when no parameters are given. Keep in mind the steps of the issue cred tests are directly used in the proof tests, so, if the issue cred tests followed the parameterization pattern using tags and examples data tables in the feature files as the proof tests do, they should work with the externalized schemas and cred data (I haven't tried it though).
If my assumption is incorrect on the did, schema, and credential definition operations, then the test harness/backchannels will have to use/define has-public-did, issue-new-cred-type, and ready-to-issue operations that the backchannel takes care of in some way. Right now the tests call GET did, POST schema, POST credential-definition, which are directly related to calls in the ACA-Py admin API.
Question is, will all agents expose did, schema, and credential definition operations?
All Indy implementations will have endpoints for those, but non-Indy systems will not. Schema and CredDef objects are Indy-only. For example, with BBS+ (which is most likely what we will be using Real Soon Now), a JSON-LD context replaces the schema; it generally exists at a public URL (not on the ledger), and there is no CredDef. Thus, the "prepares to issue a credential type" step becomes more or less a no-op, although there may be some things needed --- perhaps preparing a revocation registry. We need a place for it, but it may not be exactly "schema" and "creddef".
Will we assume the test harness calls to these operations will be taken care of in some way in the backchannels?
Yes. So if the reference to the credential type is externalized as a "schema", or the use of Indy is implied, then the backchannel will take care of taking the Indy steps for "preparing to issue a credential type".
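To make the idea concrete, here is a minimal sketch of how a backchannel might fold the three calls the tests currently make (GET did, POST schema, POST credential-definition, per the thread above) into one "ready to issue" operation. The `ready_to_issue` function, the injected `client`, and the response field names are all assumptions for discussion, not actual AATH or ACA-Py code.

```python
# Hypothetical backchannel helper: the harness asks for "ready to issue",
# and the Indy-specific steps happen here instead of in the tests.
def ready_to_issue(client, schema_body):
    """Prepare an Indy issuer: public DID, schema, cred def."""
    did = client.get("did")                          # issuer needs a public DID
    schema = client.post("schema", schema_body)      # Indy: write schema to ledger
    cred_def = client.post(                          # Indy: write cred def to ledger
        "credential-definition", {"schema_id": schema["schema_id"]}
    )
    return {
        "did": did,
        "schema_id": schema["schema_id"],
        "cred_def_id": cred_def["credential_definition_id"],
    }
```

A non-Indy backchannel could implement the same operation as a near no-op, which is exactly the point of moving these calls behind the backchannel boundary.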
It sounds like "ready to issue" could be the only prerequisite step/operation for any agent, with everything done in that operation. That would also make the reference to the externalized schemas an Indy-ism as well, no? If so, should I rename schema to cred_type, so that for Indy it contains schema and cred def info, and for the other agents it contains whatever is needed, if anything?
I think it should make reference to an externalized cred-type, which in Indy would point to an indy schema, but may be different in non-Indy (or future Indy) systems.
Yes, I think the rename to "cred_type" and the elimination of "cred_def" references would be good.
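For discussion, here is one illustrative shape a renamed, externalized cred_type entry could take, with Indy-only details demoted to a per-format section. Every field name here is an assumption, not the harness's actual file format.

```python
# Illustrative only: a "cred_type" entry where Indy schema info lives under
# its own format key, and non-Indy formats carry whatever they need instead.
cred_type = {
    "cred_type_name": "driverslicense",
    "indy": {  # Indy-only: drives schema + cred def creation in the backchannel
        "schema_name": "driverslicense",
        "schema_version": "1.0.0",
        "attributes": ["name", "address", "dob"],
    },
    "json-ld": {  # non-Indy: no schema/cred-def, just a public context URL
        "context": "https://example.org/driverslicense/v1",
    },
}
```

A backchannel would read only the section matching the format it is testing, so adding a new format means adding a key, not a new file layout.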
So to be clear, this effort will remove Indy-isms from the test feature files and the test code, change the calls to did, schema, & cred def to a single "ready to issue" call, move those calls into the ACA-Py backchannel, and rename the externalized schema info to cred_type.
The feature files will change to look like this;
Background: Prepare the issuer agent to be able to issue credentials
Given "Acme" is ready to issue credentials
@T001-API10-RFC0036 @AcceptanceTest @P1 @Indy
Scenario: Issue a credential with the Holder beginning with a proposal
Given "2" agents
| name | role |
| Acme | issuer |
| Bob | holder |
And "Acme" and "Bob" have an existing connection
When "Bob" proposes a credential to "Acme"
And "Acme" offers a credential
And "Bob" ...
or, to simplify things since it is a one-liner that is more general, it can be integrated directly into each scenario like so;
@T001-API10-RFC0036 @AcceptanceTest @P1 @Indy
Scenario: Issue a credential with the Holder beginning with a proposal
Given "2" agents
| name | role |
| Acme | issuer |
| Bob | holder |
And "Acme" is ready to issue credentials
And "Acme" and "Bob" have an existing connection
When "Bob" proposes a credential to "Acme"
And "Acme" offers a credential
And "Bob" ...
There are two issues here that might make this task a slightly harder job than trivial.
What is the priority on this? I will finish the support for multiple cred_types in proofs first.
And if someone can think of another more descriptive name for that @Indy tag in the tests, that might be good.
I think the goal is to have the tests work across implementations (DID ledgers and VC models), so there is not a need for an @Indy tag.
> I think the goal is to have the tests work across implementations (DID ledgers and VC models), so there is not a need for an @Indy tag.
... but you need to specify the type of credential you are testing, so you need to pass something like "indy/driverslicense" to specify the type of credential you are issuing ...
That makes sense. I was thinking you could just pass "driverslicense" and have the backchannel decide what to do, but you are right, you have to specify the format and cred-type since both backchannels (issuer, holder) need to know.
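The "format plus cred-type" identifier discussed here ("indy/driverslicense") could be parsed with something as simple as the following sketch. The function name and the convention of splitting on the first "/" are assumptions; the thread only establishes that both pieces of information are needed.

```python
# Hypothetical helper: split "indy/driverslicense" into the credential
# format and the cred-type name, so both backchannels know what to do.
def parse_cred_type(identifier: str):
    fmt, _, name = identifier.partition("/")  # split on the FIRST slash only
    if not name:
        raise ValueError(f"expected '<format>/<cred-type>', got {identifier!r}")
    return fmt, name
```

Splitting on only the first slash matters because format identifiers like "dif/credential-manifest@v1.0" would otherwise be mangled.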
Sorry, the @Indy tag has nothing to do with cred_types; I shouldn't have mentioned it. It was just another Indy-ism that needed to be removed. It is used to differentiate agents that can't start at the credential send request.
As far as cred_types in tags, it looks like this;
@T001.2-API10-RFC0037 @P1 @AcceptanceTest @Schema_DriversLicense @Indy
Scenario Outline: Present Proof of specific types and proof is acknowledged
Given "2" agents
| name | role |
| Faber | verifier |
| Bob | prover |
And "Faber" and "Bob" have an existing connection
And "Bob" has an issued credential from <issuer> with <credential_data>
When "Faber" sends a <request_for_proof> presentation to "Bob"
And "Bob" makes the <presentation> of the ...
Examples:
| issuer | credential_data | request_for_proof | presentation |
| Acme | Data_DL_MaxValues | proof_request_DL_address | presentation_DL_address |
| Faber | ...
So the above would be changed to Cred_Type_DriversLicense. If the format is important for the cred_type, indy or otherwise, I would leave that in the actual external json file for Cred_Type_DriversLicense.
For me this raises the question: how are different types of credentials / ledgers tested?
Say one agent supports both Indy and another method: how is it decided which method to run the tests for? Should that be a startup parameter of the AATH? Or should it run the tests for all methods supported?
For the '@Indy' problem: what if backchannels specify what they support? E.g. a test could have the '@StartFromRequest' tag. The client will resolve this using an API call on startup and only run the test if the feature is present in the backchannel. Kinda like the discover features protocol.
I like the discover features concept. Telling Behave what to run is done at the entry point to the tests, so there would need to be a routine that asks the backchannels what they support, then constructs the Behave command, listing the tags that are needed to meet those supported features.
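A minimal sketch of that routine: map unsupported features to negated Behave tags. How the backchannels report support is still open, so `build_behave_args` and the feature names are hypothetical; the only real piece here is that behave accepts repeated `--tags` options, which are ANDed, and a `-` prefix negates a tag in behave's old-style tag expressions.

```python
# Hypothetical sketch: given the full list of feature tags the suite uses
# and the set a backchannel reported supporting, build the behave command
# that excludes everything unsupported.
def build_behave_args(all_features, supported):
    args = ["behave"]
    for feature in all_features:
        if feature not in supported:
            # "-@Tag" negates the tag; repeated --tags options are ANDed.
            args += ["--tags", f"-@{feature}"]
    return args
```

The same function could be called once per agent/role pair and the results merged, so a test runs only when every participant supports its tags.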
That's a pretty cool concept. Let me repeat it back in details to see if I have it right:
1. Run behave with the /noop option to get a list of the requested tests to be run (`tests-requested`).
2. Query the backchannels with `tests-requested` to get back the list of supported tests, each a `test-to-run`, and report what will be skipped (`tests-requested` - `tests-to-run`).
3. Run each `test-to-run`, probably by writing out a `temp-behave.ini` file or just a long command line.
There should be an option to `./manage` that allows just running everything (`--force`).
Thoughts?
Yeah that sounds right @swcurran
Queries the agents with the role they are playing and a list of tests to be run, getting back a list of supported tests.
Only question for me is to what level of detail this should go. This includes the protocol and role. However, some tests could be run with exactly the same steps but with some different parameters, e.g. the issue credential 2.0 Credential Format:
Scenario: Issue a credential with the Holder beginning with a proposal
...
And "Acme" offers a "hlindy-zkp-v1.0" credential
And "Bob" ...
Scenario: Issue a credential with the Holder beginning with a proposal
...
And "Acme" offers a "dif/credential-manifest@v1.0" credential
And "Bob" ...
Any thoughts on that?
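Since the two scenarios above differ only in the format string, a single parameterized step definition would cover both. This is a sketch: the step wording matches the scenarios above, but `context.backchannel` and its `offer_credential` method are stand-ins for however the harness actually reaches the agent under test.

```python
# Fall back to a stub decorator so the sketch runs even without behave
# installed; with behave present, this registers a real step.
try:
    from behave import when
except ImportError:
    def when(pattern):
        def deco(fn):
            fn.pattern = pattern  # remember the step text on the stub
            return fn
        return deco

@when('"{agent}" offers a "{cred_format}" credential')
def step_offer_credential(context, agent, cred_format):
    # One generic step routes every format ("hlindy-zkp-v1.0",
    # "dif/credential-manifest@v1.0", ...) through the backchannel.
    context.backchannel.offer_credential(agent, cred_format=cred_format)
```

The format string then becomes just another column in the Examples table, which is the same move later proposed for did_method and signature_suite.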
I think it is a good idea to pick up this issue again.
Now that we are adding multiple attachment formats, multiple signature suites and multiple did methods the matrix of what an agent can support is getting bigger and bigger.
The way we currently deal with the features an agent supports is using tags. This works pretty well, but I think over time this can give us some trouble.
E.g. let's look at the following test. The tags indicate three things:
@CredFormat_JSON-LD
@ProofType_Ed25519Signature2018
@DidMethod_Key
If we want to test combinations, we need to keep adding new tests that combine different sets of those tags. As the list of signature suites and did methods grows, so does the number of tests.
@T001.1-RFC0453 @RFC0593 @critical @wip @AcceptanceTest @DIDExchangeConnection @CredFormat_JSON-LD @Schema_DriversLicense_v2 @ProofType_Ed25519Signature2018 @DidMethod_Key
Scenario Outline: Issue a JSON-LD Ed25519Signature2018 credential with the Holder beginning with a proposal
Given "2" agents
| name | role |
| Acme | issuer |
| Bob | holder |
And "Acme" is ready to issue a "json-ld" credential
And "Acme" and "Bob" have an existing connection
When "Bob" proposes a "json-ld" credential to "Acme" with <credential_data>
And "Acme" offers the "json-ld" credential
And "Bob" requests the "json-ld" credential
And "Acme" issues the "json-ld" credential
And "Bob" acknowledges the "json-ld" credential issue
Then "Bob" has the "json-ld" credential issued
Examples:
| credential_data |
| Data_DL_MaxValues |
If we could somehow rewrite the above test to something like:
@T001.1-RFC0453 @RFC0593 @critical @wip @AcceptanceTest @DIDExchangeConnection @CredFormat_JSON-LD @Schema_DriversLicense_v2
Scenario Outline: Issue a JSON-LD Ed25519Signature2018 credential with the Holder beginning with a proposal
Given "2" agents
| name | role |
| Acme | issuer |
| Bob | holder |
And "Acme" is ready to issue a "json-ld" credential with method <did_method> and signature suite <signature_suite>
And "Acme" and "Bob" have an existing <connection_rfc> connection
When "Bob" proposes a "json-ld" credential to "Acme" with <credential_data>
And "Acme" offers the "json-ld" credential
And "Bob" requests the "json-ld" credential
And "Acme" issues the "json-ld" credential
And "Bob" acknowledges the "json-ld" credential issue
Then "Bob" has the "json-ld" credential issued
Examples:
| credential_data | did_method | signature_suite | connection_rfc |
| Data_DL_MaxValues | sov | Ed25519Signature2018 | RFC0023 |
| Data_DL_MaxValues | key | BbsBlsSignature2020 | RFC0023 |
| Data_DL_MaxValues | key | Ed25519Signature2018 | RFC0160 |
The difference is that all the tag combinations are now described in the examples, making combinations a lot easier. We can then dynamically skip tests using scenario.mark_skipped() if a backchannel / agent doesn't support this exact combination.
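The dynamic-skip idea could look roughly like this in environment.py. `scenario.mark_skipped()` is behave's real skip API (mentioned above); everything else here, including the `example_row` accessor and the shape of the capability matrix, is an assumption about the not-yet-designed capabilities data.

```python
# Hypothetical capability check: capabilities maps did_method -> the set of
# signature suites the agent supports for it.
def supports(capabilities, did_method, signature_suite):
    return signature_suite in capabilities.get(did_method, set())

# behave environment.py hook (sketch): skip the generated scenario when the
# example row asks for a combination the agents can't do.
def before_scenario(context, scenario):
    row = getattr(scenario, "example_row", {})  # hypothetical row accessor
    if row and not supports(
        context.capabilities, row.get("did_method"), row.get("signature_suite")
    ):
        scenario.mark_skipped()  # behave's skip API
```

A fuller version would check the matrix of every agent in the role it plays for this scenario, per the comment below about skipping based only on the agents involved.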
I think we could solve this "easily" by adding a single endpoint that returns all supported capabilities of an agent. It could include stuff like:
It should probably be in some matrix as supported capabilities can differ based on did method, signature suite, etc...
Even if we don't start work on this in the near future, thinking about the API and requirements would be a good idea I think.
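As a starting point for that API discussion, here is one possible shape for a single capabilities response, expressed as a matrix keyed by credential format so that did methods and signature suites can differ per format. All field names are illustrative, not an agreed API.

```python
import json

# Illustrative only: a capabilities matrix a backchannel endpoint might
# return, keyed by credential format so support can differ per format.
capabilities = {
    "credential_formats": {
        "indy": {"did_methods": ["sov"], "signature_suites": ["CL"]},
        "json-ld": {
            "did_methods": ["key", "sov"],
            "signature_suites": ["Ed25519Signature2018", "BbsBlsSignature2020"],
        },
    },
    "protocols": ["RFC0023", "RFC0036", "RFC0160", "RFC0453"],
}
print(json.dumps(capabilities, indent=2))
```

The harness side of the skip logic would then just intersect the rows of each participating agent's matrix.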
I agree with this. I think the dynamic skipping of tests is going to be key. For example, AIP 2.0 has "sub-targets" like Indy and BBS, where some agents will support one or the other but not both. What we want are tests that do both, but where tests are skipped when some of the agents don't support some of the options.
More precise would be to have the scenario skipped depending only on those involved in the test (e.g. Bob, Acme) in the example above, and in the role they are playing for the test.
Managing complexity...
I really like the idea of putting the did_method and signature in the example data tables and then excluding that scenario in the before_scenario with scenario.mark_skipped(). I'm not completely convinced about including the connection type in the data tables. I lean toward having RFC level items as top level tags so they are explicit and obvious at the command line for a test run. I guess I could be convinced otherwise, but that's where I am right now.
In an effort to remove the Indy-specific features in the test cases, I'm thinking that we should replace the current test case flow from:
To be something like the following. For Indy, the "prepares to issue a new credential type" would be to create a new schema and create a new cred def, and likewise for the "is ready to issue an existing credential type".
I was also expecting that the above would have parameters indicating what credential type (e.g. driverslicense) to create/check for. Is that not the way it works?
What do you think @nodlesh @TimoGlastra @ianco ?